Compare commits

..

81 Commits

Author SHA1 Message Date
Ed Hennis
69dda6b34c Set version to 3.1.0-rc2 2026-01-22 18:13:41 -05:00
Ed Hennis
e3644f265c fix: Remove DEFAULT fields that change to the default in associateAsset (#6259)
- Add Vault creation tests for showing valid range for AssetsMaximum
2026-01-22 16:52:57 -05:00
Ayaz Salikhov
8c573cd0bb Fix dependencies so clio can use libxrpl (#6251)
- Include <functional> header in Number.h
2026-01-22 11:30:54 -05:00
Ed Hennis
c894cd2b5f Revert "Fix dependencies so clio can use libxrpl (#6251)"
This reverts commit ecfe43ece7.
2026-01-22 11:30:39 -05:00
Ayaz Salikhov
ecfe43ece7 Fix dependencies so clio can use libxrpl (#6251)
- Include <functional> header in Number.h
2026-01-21 14:11:21 -05:00
Bart
564a35175e Set version to 3.1.0-rc1 2026-01-12 20:44:28 -05:00
Ed Hennis
919ded6694 Change LendingProtocol feature and dependencies to supported (#5632) 2026-01-12 20:44:08 -05:00
Ed Hennis
1ead5a7ac1 Expand Number to support full integer range (#6192)
- Refactor Number internals away from int64 to a uint64 & a sign flag (see
  the sketch below)
  - ctors and accessors return `rep`. Very few things expose
    `internalrep`.
  - Exceptions are "unchecked" and the new "normalized", which explicitly
    take an internalrep. With those special control flags, it's easier
    to distinguish and control when they are used.
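  As a rough illustration of the sign-and-magnitude idea (the names
  `SignedRep`, `TinyNumber`, and the tag type below are hypothetical, not
  the actual libxrpl interfaces):

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical sketch: an unsigned magnitude plus a sign flag covers the
// full uint64 range in either direction, unlike a plain int64_t mantissa.
struct SignedRep
{
    std::uint64_t magnitude = 0;
    bool negative = false;
};

class TinyNumber
{
    SignedRep rep_;

public:
    // Public ctor takes a plain signed value (the "rep" in the notes above).
    explicit TinyNumber(std::int64_t value)
        : rep_{value < 0 ? ~static_cast<std::uint64_t>(value) + 1
                         : static_cast<std::uint64_t>(value),
               value < 0}
    {
    }

    // Tagged ctor for the few callers that need the internal representation,
    // mirroring the "unchecked"/"normalized" control flags.
    struct unchecked_t {};
    TinyNumber(SignedRep rep, unchecked_t) : rep_(rep) {}

    std::int64_t signedValue() const
    {
        auto const limit =
            static_cast<std::uint64_t>(INT64_MAX) + (rep_.negative ? 1 : 0);
        if (rep_.magnitude > limit)
            throw std::overflow_error("Number out of int64 range");
        return rep_.negative
            ? -static_cast<std::int64_t>(rep_.magnitude - 1) - 1
            : static_cast<std::int64_t>(rep_.magnitude);
    }
};
```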

- For now, skip the larger mantissas in AMM transactions and tests

- Remove trailing zeros from scientific notation Number strings
  - Update tests. This has the happy side effect of making some of the string
    representations _more_ consistent between the small and large
    mantissa ranges.

- Add semi-automatic rounding of STNumbers based on Asset types (a loose
  sketch follows this list)
  - Create a new SField metadata enum, sMD_NeedsAsset, which indicates
    the field should be associated with an Asset so it can be rounded.
  - Add a new STTakesAsset intermediate class to handle the Asset
    association to a derived ST class. Currently only used in STNumber,
    but could be used by other types in the future.
  - Add "associateAsset" which takes an SLE and an Asset, finds the
    sMD_NeedsAsset fields, and associates the Asset to them. In the case
    of STNumber, that both stores the Asset, and rounds the value
    immediately.
  - Transactors only need to add a call to associateAsset _after_ all of
    the STNumbers have been set. Unfortunately, the inner workings of
    STObject do not do the association correctly with uninitialized
    fields.
  - When serializing an STNumber that has an Asset, round it before
    serializing.
  - Add an override of roundToAsset, which rounds a Number value in place
    to an Asset, but without any additional scale.
  - Update and fix a bunch of Loan-related tests to accommodate the
    expanded Number class.
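  A loose sketch of the association-then-round flow, with `Asset` and the
  field tagging reduced to toy stand-ins for the real SField/STNumber
  machinery:

```cpp
#include <cmath>
#include <vector>

struct Asset { int decimals; };  // toy stand-in for the real Asset type

struct NumberField
{
    double value;
    bool needsAsset;              // plays the role of the sMD_NeedsAsset bit
    Asset const* asset = nullptr;

    void roundToAsset()           // round in place, no additional scale
    {
        double const scale = std::pow(10.0, asset->decimals);
        value = std::round(value * scale) / scale;
    }
};

// Called by a transactor *after* all the fields have been set: every tagged
// field both remembers the Asset and is rounded immediately.
void associateAsset(std::vector<NumberField>& sle, Asset const& asset)
{
    for (auto& field : sle)
        if (field.needsAsset)
        {
            field.asset = &asset;
            field.roundToAsset();
        }
}
```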

---------

Co-authored-by: Vito <5780819+Tapanito@users.noreply.github.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
2026-01-13 01:04:22 +00:00
Ed Hennis
91d239f10b Improve and fix bugs in Lending Protocol - XLS-66 (#6156)
- Fix overpayment asserts (#6084)

- MPTTester::operator() parameter should be std::int64_t
	- Originally defined as uint64_t, but the testIssuerLoan() test called
	  it with a negative number, which wrapped around to a very large
	  unsigned value that in some circumstances could be silently converted
	  back to an int64_t, but is not guaranteed to be. I believe this is UB,
	  and we don't want to rely on it (see the snippet below).
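  The hazard is easy to reproduce in isolation; a standalone snippet (not
  the actual test code):

```cpp
#include <cstdint>
#include <iostream>

void takesUnsigned(std::uint64_t v)
{
    // A negative argument wraps on conversion (well-defined), arriving here
    // as a huge value; converting that back to int64_t was implementation-
    // defined before C++20, so nothing guarantees the original -100.
    std::cout << v << '\n';  // prints 18446744073709551516
}

int main()
{
    takesUnsigned(-100);
}
```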

- Fix Overpayment Calculation (#6087)
	- Adds additional unit tests to cover math calculations.
	- Removes unused methods.
- Fix LoanBrokerSet debtMaximum limits (#6116)
- Fix some minor bugs in Lending Protocol (#6101)
	- add nodiscard to unimpairLoan, and check result in LoanPay
	- add a check to verify that issuer exists
	- improve LoanManage error code for dust amounts

- In overpayment results, the management fee was being counted twice:
  once as part of the value change, and again as part of the fees paid.
  Exclude it from the value change (illustrated below).
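  With hypothetical figures, the double count looks like this:

```cpp
#include <cassert>

int main()
{
    // Hypothetical figures, purely to illustrate the double count.
    double const overpayment = 100.0;
    double const managementFee = 5.0;

    // Before the fix: the fee stayed inside the value change *and* was
    // reported again under fees paid, counting it twice overall.
    double const valueChangeOld = overpayment;  // still contains the fee
    double const feesPaid = managementFee;
    assert(valueChangeOld + feesPaid > overpayment);  // overstated by 5

    // After the fix: the fee is excluded from the value change.
    double const valueChangeNew = overpayment - managementFee;
    assert(valueChangeNew + feesPaid == overpayment);  // counted once
}
```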

- Round the initial total value computation upward, unless there is
  0-interest.
- Rename getVaultScale to getAssetsTotalScale, and convert one incorrect
  computation to use it.
- Use adjustImpreciseNumber for LossUnrealized.
- Add some logging to computeLoanProperties.

- Check permissions in LoanSet and LoanPay (#6108)

- Disallow pseudo accounts to be Destination for LoanBrokerCoverWithdraw (#6106)

- Ensure vault asset cap is not exceeded (#6124)

- Fix Overpayment ValueChange calculation in Lending Protocol (#6114)
	- Adds loan state to LoanProperties.
	- Cleans up computeLoanProperties.
	- Fixes missing management fee from overpayment.

- fix: Enable LP Deposits when the broker is the asset issuer (#6119)

- Add a few minor changes (#6158)
	- Updates or fixes a couple of things I noticed while reviewing changes
	  to the spec.
	- Rename sfPreviousPaymentDate to sfPreviousPaymentDueDate.
	- Make the vault asset cap check added in #6124 a little more robust:
	  1. Check in preflight if the vault is _already_ over the limit.
	  2. Prevent overflow when checking with the loan value. (Subtract
	     instead of adding, in case the values are near maxint; both return
	     the same result; see the sketch below.) Also add a unit test so
	     each case is covered.
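  The subtraction trick in point 2 is the usual overflow-safe comparison;
  a sketch with plain unsigned integers (the real code works with the
  Number class):

```cpp
#include <cstdint>

// `assetsTotal + loanValue > assetsMaximum` can wrap when the operands are
// near the maximum representable value; rewriting it as a subtraction keeps
// every intermediate value in range while giving the same answer.
bool exceedsCap(
    std::uint64_t assetsTotal,
    std::uint64_t loanValue,
    std::uint64_t assetsMaximum)
{
    if (assetsTotal > assetsMaximum)  // already over the limit (preflight)
        return true;
    return loanValue > assetsMaximum - assetsTotal;
}
```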

- Add minimum grace period validation (#6133)

- Fix bugs: frozen pseudo-account, and FLC cutoff (#6170)
	- Fixes the LoanManage tefBAD_LEDGER case by capping the amount of FLC
	  used to cover a loss at the amount of cover available.
	- Check if the Vault pseudo-account is frozen in LoanBrokerSet

- refactor: Rename raw state to theoretical state (#6187)

- Check if a withdrawal amount exceeds any applicable receiving limit. (#6117)
	- Check that the trust line limit is not exceeded for a withdrawal to a
	  third-party Destination account.

- Fix test failures from interaction between #6120 and #6133
	- LoanSet transaction added in #6120 failed the minimum grace period
	  added by #6133.

- Fix overpayment result calculation (#6195)

- Address review feedback from Lending Protocol re-review (#6161)
	- Reduce code duplication in LoanBrokerDelete
	- Reorder "canWithdraw" parameters to put "view" first
	- Combine accountSpendable into accountHolds
	- Avoid copies by taking a reference for the claw amount
	- Return function results directly
	- Fix typo for "parseLoan" in ledger_entry RPC
	- Improve some comments and unused variables
	- No need for late payment components lambda
	- Add explanatory comment for computeLoanProperties
	- Add comment linking computeRawLoanState to spec
	- Fix typo: TrueTotalPrincipalOutstanding
	- Fix missed ripple -> xrpl update
	- Remove unnecessary "else"s.
	- Clean up std::visit in accountHolds.
	
---------

Co-authored-by: Vito Tumas <5780819+Tapanito@users.noreply.github.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
Co-authored-by: Jingchen <a1q123456@users.noreply.github.com>
Co-authored-by: Gregory Tsipenyuk <gregtatcam@users.noreply.github.com>
2026-01-12 18:08:35 -05:00
Vito Tumas
418ce68302 VaultClawback: Burn shares of an empty vault (#6120)
- Adds a mechanism for the vault owner to burn user shares when the vault is
  stuck. If the Vault has 0 AssetsAvailable and Total, the owner may submit a
  VaultClawback to reclaim the worthless fees, and thus allow the Vault to be
  deleted. The Amount must be left off (unless the owner is the asset issuer),
  specified as 0 Shares, or specified as the number of Shares held (one
  reading of this rule is sketched below).
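  One reading of that Amount rule, reduced to a toy check (the real logic
  lives in the VaultClawback transactor and uses STAmount):

```cpp
#include <cstdint>
#include <optional>

bool amountAllowedForBurn(
    std::optional<std::uint64_t> amount,  // omitted, 0, or the shares held
    std::uint64_t sharesHeld,
    bool ownerIsIssuer)
{
    if (!amount)
        return !ownerIsIssuer;  // may be left off unless owner is the issuer
    return *amount == 0 || *amount == sharesHeld;
}
```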
2026-01-10 01:41:22 -05:00
Ed Hennis
f17e476f7a fix: Inner batch transactions never have valid signatures (#6069)
- Introduces amendment `fixBatchInnerSigs`
- Update Batch unit tests
  - Fix all the Env instantiations to _use_ the "features" parameter.
  - testInnerSubmitRPC runs with Batch enabled and disabled.
  - Add a test to testInnerSubmitRPC for a correctly signed tx incorrectly
    using the tfInnerBatchTxn flag.
  - Generalize the submitAndValidate lambda in testInnerSubmitRPC.
  - With the fix amendment, such a transaction never reaches the transaction
    engine (Transactor and derived classes).
  - Test submitting a pseudo-transaction. Stopped before reaching the
    transaction engine, but with different errors.
- The tests verify that even without the amendment, a transaction with
  tfInnerBatchTxn is immediately rejected, so things are already safe; the
  amendment just makes them safer and more future-proof (see the sketch
  below).
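  Illustratively, the difference the amendment makes (tfInnerBatchTxn is the
  real flag; everything else here is a simplified stand-in):

```cpp
#include <cstdint>

constexpr std::uint32_t tfInnerBatchTxn = 0x40000000;  // illustrative value

enum class Outcome { accepted, rejectedAtSubmission, rejectedInEngine };

Outcome submitStandalone(std::uint32_t flags, bool fixBatchInnerSigs)
{
    if (flags & tfInnerBatchTxn)
    {
        // Rejected either way; with the amendment the rejection happens
        // before the transaction engine (Transactor and derived classes)
        // is ever reached.
        return fixBatchInnerSigs ? Outcome::rejectedAtSubmission
                                 : Outcome::rejectedInEngine;
    }
    return Outcome::accepted;
}
```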
2026-01-10 00:05:31 -05:00
Denis Angell
d25915ca1d fix: Reorder Batch Preflight Errors (#6176)
This change fixes https://github.com/XRPLF/rippled/issues/6058.
2026-01-09 21:59:23 -05:00
Bart
8990c45c40 Set version to 3.1.0-b1 (#6152) 2025-12-17 18:38:32 -05:00
Ed Hennis
7527e35379 Set version to 3.0.0 2025-12-09 12:11:16 -05:00
Vladislav Vysokikh
e19b2c55c2 Version 3.0.0-rc2 2025-12-02 15:11:44 -05:00
Ed Hennis
138d6e751b Implement Lending Protocol (unsupported) (#5270)
- Spec: XLS-66
- Introduces amendment "LendingProtocol", but leaves it UNSUPPORTED to
  allow for standalone testing, future development work, and potential
  bug fixes.
- AccountInfo RPC will indicate the type of pseudo-account when
  appropriate.
- Refactors and improves several existing classes and functional areas,
  including Number, STAmount, STObject, json_value, Asset, directory
  handling, View helper functions, and unit test helpers.
2025-12-02 15:11:43 -05:00
Bronek Kozicki
b195011eff fix: Apply object reserve for Vault pseudo-account (#5954) 2025-11-28 11:58:16 +00:00
Bart
cd00aa591f ci: Clean workspace on Windows self-hosted runners (#6024)
This change updates the `cleanup-workspace` action to its latest version, which added support for Windows.
2025-11-28 11:58:15 +00:00
hustrust
d3466de16c docs: fix spelling in comments (#6002) 2025-11-28 11:58:15 +00:00
Ayaz Salikhov
90894ec6c1 ci: Update Conan to 2.22.2 (#6019)
This updates the CI image hashes after the following change: https://github.com/XRPLF/ci/pull/81. And since we use the latest Conan, we can have `conan.lock` end with a newline, and we no longer need to exclude it from `pre-commit` hooks.
2025-11-28 11:20:31 +00:00
Bronek Kozicki
6b55db490e fix: JSON parsing of negative STNumber and STAmount (#5990)
This change fixes JSON parsing of negative `int` input in `STNumber` and `STAmount`. The conversion of JSON to `STNumber` or `STAmount` could trigger a condition where we negate the smallest possible `int` value, which is undefined behaviour. We use temporary storage as `int64_t` to avoid this bug (see the snippet below). Note that this only affects RPC, because we do not parse JSON in the protocol layer, and hence no amendment is needed.
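A minimal reproduction of the idea behind the fix (widen before negating):

```cpp
#include <cstdint>

// Parsing "-2147483648": the magnitude 2147483648 does not fit in int, and
// negating INT_MIN is undefined behaviour. Widening to int64_t first, as the
// fix does, keeps every intermediate value representable.
std::int64_t negateSafely(int parsed)
{
    std::int64_t const wide = parsed;  // temporary storage as int64_t
    return -wide;                      // well-defined for every int input
}
```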
2025-11-28 11:20:31 +00:00
Bart
2237644ec5 ci: Trigger clio pipeline on PRs targeting release branches (#6080)
This change triggers the Clio pipeline on PRs that target any of the `release*` branches (in addition to the `master` branch), as opposed to only the `release` branch.
2025-11-26 11:01:22 +00:00
Ed Hennis
587d4ac5cc refactor: Add support for extra transaction signature validation (#5851)
- Restructures the `STTx` signature checking code to handle a `sigObject`,
  which may be the full transaction, or may be an object field containing
  a separate signature. Either way, the `sigObject` can be a single- or
  multi-sign signature (a simplified sketch follows below).
- This is distinct from 550f90a75e (#5594), which changed the check in
  Transactor, which validates whether a given account is allowed to sign
  for the given transaction. This cryptographically checks the signature
  validity.
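  Roughly, the restructuring lets a single checker accept either shape; a
  simplified sketch, not the actual STTx interface:

```cpp
#include <string>

// A sigObject is either the full transaction or an object field holding a
// separate signature; both carry the same signing fields. Following the
// convention that an empty SigningPubKey means multi-signed, one routine
// can verify either kind.
struct SigObject
{
    std::string signingPubKey;  // empty => multi-signed
    std::string signature;      // TxnSignature or the per-signer signatures
};

bool checkSingleSign(SigObject const&);  // cryptographic checks elsewhere
bool checkMultiSign(SigObject const&);

bool checkSign(SigObject const& sig)
{
    return sig.signingPubKey.empty() ? checkMultiSign(sig)
                                     : checkSingleSign(sig);
}
```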
2025-11-25 18:30:44 +00:00
Bart
13b169ffcb ci: Remove missing commits check (#6077)
This change removes the CI check for missing commits, as well as a stray path to the publish-docs workflow that isn't used in the on-trigger workflow.
2025-11-25 17:29:07 +00:00
Jingchen
d42f8e0bda chore: Clean up comment in NetworkOps_test.cpp (#6066)
This change removes a copyright notice that was accidentally copied over from another file.
2025-11-25 17:29:07 +00:00
Vito Tumas
0276a6b6bd docs: Improve VaultWithdraw documentation (#6068) 2025-11-25 17:29:07 +00:00
Ayaz Salikhov
234dc6bdca ci: Only upload artifacts in XRPLF repo owner (#6060)
This change prevents uploading too many artifacts in non-public repositories.
2025-11-25 17:29:06 +00:00
sunnyraindy
82c2bf7144 chore: Fix some typos in comments (#6040) 2025-11-25 17:29:06 +00:00
Bart
210d49df44 ci: Fix filtering out of Clang 20+ on ARM (#6046)
This change fixes the strategy matrix check to filter out Clang 20+ on ARM builds, which still fail due to problems with Boost.
2025-11-25 17:29:06 +00:00
Bart
3ffa30bf24 ci: Use new Debian Trixie images (#6034)
This change uses the new Debian Trixie CI images added by XRPLF/ci#83.
2025-11-25 17:29:05 +00:00
Bronek Kozicki
e7e4d52e38 Version 3.0.0-rc1 2025-11-12 09:30:43 +00:00
Bronek Kozicki
4135d56aa0 fix: floating point representation errors in vault (#5997)
This change fixes floating point errors in the conversion of shares to assets, and the other way around, as used in `VaultDeposit`, `VaultWithdraw` and `VaultClawback`. In floating point calculations, division introduces a larger error than multiplication. If we divide first, the error introduced by the division is then amplified by the multiplication that follows, which is therefore the wrong order to perform these two operations. This change flips the order of the arithmetic operations, which minimizes the error (demonstrated below).
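The general point: for `a * b / c`, multiply before dividing so the lone division comes last. A small demonstration in the shares-to-assets shape:

```cpp
#include <cstdio>

int main()
{
    // shares-to-assets conversion: assets = shares * totalAssets / totalShares
    double const shares = 1.0, totalAssets = 10.0, totalShares = 3.0;

    double const divideFirst = (shares / totalShares) * totalAssets;
    double const multiplyFirst = (shares * totalAssets) / totalShares;

    // The two results can differ in the last bits; the multiply-first form,
    // which the fix adopts, carries less accumulated rounding error.
    std::printf("%.17g\n%.17g\n", divideFirst, multiplyFirst);
}
```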
2025-11-12 09:30:42 +00:00
Ayaz Salikhov
865557024e ci: Specify bash as the default shell in workflows (#6021) 2025-11-12 09:30:36 +00:00
Bronek Kozicki
91b96d6386 chore: Move running of unit tests out of coverage target (#6018)
This change makes the progress of unit tests visible and also gives more flexibility when running them.
2025-11-11 15:42:14 +00:00
Bart
f1dbb20d7b chore: Make CMake improvements (#6010)
This change removes unused definitions from the CMake files, moves variable definitions from `XrplSanity` to `XrplSettings` where they better belong, and updates the minimum GCC and Clang versions to match what we actually minimally support.
2025-11-11 12:13:12 +00:00
Bronek Kozicki
2a2881ee53 chore: Unify build & test, add ctest to coverage (#6013)
This change unifies the build and test jobs into a single job, and adds `ctest` to coverage reporting.

The mechanics of coverage reporting is slightly complex and most of it is encapsulated in the `coverage` target. The status quo way of preparing coverage reports involves running a single target `cmake --build . --target coverage`, which does three things:
* Build the `rippled` binary (via target dependency)
* Prepare coverage reports:
  * Run `./rippled -u` unit tests.
  * Gather test output and build reports.

This makes it awkward to add an additional `ctest` step between the build and coverage reporting steps. The better solution is to split the `coverage` target into a separate build, followed by `ctest`, followed by report generation. Luckily, the `coverage` target has been designed specifically to support such a case: it does not need to build `rippled` itself, which is just a dependency, and it allows additional tests to be run before gathering test outputs; in principle we could even strip it of running tests and run them separately instead.

This means we can keep the build, `ctest`, and generation of coverage reports as separate steps, as long as the state of the build directory is fully preserved between the steps (including file timestamps, additional coverage files, etc.). In order to run `ctest` for coverage reporting, we therefore need to integrate build and test into a single job, which this change does.
2025-11-11 12:13:11 +00:00
Shawn Xie
ee2dff337d fix: domain order book insertion #5998 2025-11-11 12:13:11 +00:00
Ed Hennis
102a89f351 test: Count crashed test suites (#5924)
When outputting the unit test summary, this change counts crashed tests as failures.
2025-11-11 12:13:09 +00:00
Bart
9add957962 ci: Update CI image hashes to use netstat (#5987)
To debug test failures we would like to use `netstat`, but that package wasn't installed yet in the CI images. This change uses the new CI images created by https://github.com/XRPLF/ci/pull/79.
2025-11-11 12:06:55 +00:00
Vlad
0f1b607bb4 refactor: Improve txset handling (#5951) 2025-11-03 11:36:41 +00:00
Bronek Kozicki
4425f84c1f Remove directory size limit (#5935)
This change introduces the `fixDirectoryLimit` amendment to remove the directory pages limit. We found that the directory size limit is easier to hit than originally assumed, and there is no good reason to keep this limit, since the object reserve provides the necessary incentive to avoid creating unnecessary objects on the ledger.
2025-10-31 14:26:44 +00:00
Bronek Kozicki
8a6cc3ded8 fix: Change Credential sfSubjectNode to optional (#5936)
Field `sfSubjectNode` is not populated by `CredentialCreate` in self-issued credentials. Rather than fix up the Credentials already on the ledger, we can in this case safely change the object template for this field from `soeREQUIRED` to `soeOPTIONAL` (illustrated below).
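In rippled's object templates, field presence is declared per entry; a self-contained illustration of the kind of change (not the exact LedgerFormats.cpp contents):

```cpp
#include <vector>

enum SOEStyle { soeREQUIRED, soeOPTIONAL };
struct SField { char const* name; };
SField const sfSubjectNode{"SubjectNode"};

struct TemplateEntry { SField field; SOEStyle style; };

// Relaxing the entry accepts the self-issued Credentials that
// CredentialCreate wrote without the field, with no on-ledger fixup needed.
std::vector<TemplateEntry> const credentialTemplate = {
    {sfSubjectNode, soeOPTIONAL},  // was soeREQUIRED
};
```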
2025-10-31 14:26:44 +00:00
Bart
3eec6ffcd7 ci: Check whether test failures are caused by port exhaustion (#5938)
This change adds an extra step to the CI test job that outputs network info, which may allow us to confirm whether random test failures are caused by port exhaustion.
2025-10-31 14:26:43 +00:00
Ayaz Salikhov
6b56c805dd chore: Use new prepare-runner (#5970)
See: XRPLF/actions#19.
2025-10-31 14:26:43 +00:00
Bart
da0eff9c1b ci: Use nproc-2 to set parallelism for builds and tests (#5939)
This change reduces the number of cores used to build and test, as using all cores may be contributing to occasional build and test failures.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:42 +00:00
Bart
6e326e6c11 ci: Use commit hash so workflows are not canceled when merging multiple PRs (#5950)
This change updates the CI concurrency group for pushes to the `develop` branch to use the commit hash instead of the target branch.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:41 +00:00
Bart
405575fd53 ci: Only upload codecov reports in the original repo, not in forks (#5953)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:41 +00:00
Bart
f38f299a86 ci: Only log into Conan when uploading packages (#5952)
There are separate steps for logging into Conan and uploading packages. However, at the moment the login step is sometimes executed even though no packages will be uploaded. The condition for performing both steps should be the same.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:40 +00:00
Bronek Kozicki
8951419dbe fix: invariant error in fee-sized VaultWithdraw (#5876)
This change fixes an invariant error that occurs when the amount withdrawn is equal to the transaction fee.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:39 +00:00
Ayaz Salikhov
a8b1a01d9e ci: Only run .exe files during test phase on Windows (#5947) 2025-10-31 14:26:39 +00:00
Jingchen
994b490db5 refactor: Migrate json unit tests to use doctest (#5533)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:38 +00:00
Shawn Xie
7c8b16797f Change fixMPTDeliveredAmount to Supported::yes (#5833)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:37 +00:00
Ayaz Salikhov
9507d9c276 fix: Upload all test binaries (#5932) 2025-10-31 14:26:37 +00:00
Ayaz Salikhov
8ca21406e6 chore: Better pre-commit failure message (#5940) 2025-10-31 14:26:36 +00:00
Ayaz Salikhov
178f4248e4 fix: Clean up build profile options (#5934)
The `-Wno-missing-template-arg-list-after-template-kw` flag is only needed for the grpc library. Use `+=` for the default build flags to make it easier to extend in the future.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:36 +00:00
Ayaz Salikhov
e8069a40f2 Use "${ENVVAR}" instead of ${{ env.ENVVAR }} syntax in GitHub Actions (#5923) 2025-10-31 14:26:35 +00:00
Bronek Kozicki
5ebc29c481 fix: Enforce reserve when creating trust line or MPToken in VaultWithdraw (#5857)
Similarly to other transaction types that can create a trust line or MPToken for the transaction submitter (e.g. CashCheck #5285, EscrowFinish #5185), VaultWithdraw should enforce the reserve before creating a new object. Additionally, the lsfRequireDestTag account flag should be enforced for the transaction submitter.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:35 +00:00
Mayukha Vadari
224b055124 chore: remove unnecessary LCOV_EXCL_LINE (#5913) 2025-10-31 14:26:34 +00:00
Bart
f99c1158d5 chore: Set explicit timeouts for build and test jobs (#5912)
The default job timeout is 5 hours, while build times are anywhere between 4-20 mins and test times between 2-10 mins. As a runner occasionally gets stuck, we should fail much more quickly.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:33 +00:00
Bart
a2594b6fe0 chore: Set fail fast to false, except for when the merge group is used (#5897)
This PR sets the fail-fast strategy option to false (it defaults to true), unless it is run by a merge group.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:33 +00:00
Bart
12fb54c66e chore: Clean up Conan variables in CI (#5903)
This change sanitizes inputs by setting them as environment variables, and adjusts the number of CPUs used for building. Namely, GitHub inputs should be sanitized, per a recommendation by Semgrep, as using them directly poses a security risk. A recent change further overrode the global configuration by having builds use all cores, but as we have noticed an increased number of job cancelations, this change updates it to use all cores less one.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:32 +00:00
Bart
51917be96d chore: Add support for RHEL 8 (#5880)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:32 +00:00
Ayaz Salikhov
97b8f5c4b3 refactor: Update pre-commit workflow to latest version (#5902)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:31 +00:00
Mayukha Vadari
a127314a89 refactor: replace string JSONs with Json::Value (#5886)
Some tests write out JSON as strings instead of using the Json::Value library; this change cleans them up.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:31 +00:00
Mayukha Vadari
0754cca98c refactor: replace boost::lexical_cast<std::string> with to_string (#5883)
This change replaces boost::lexical_cast<std::string> with to_string in some of the tests to make them more readable.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:30 +00:00
Mayukha Vadari
eb66ae1bd4 refactor: replace JSON LastLedgerSequence with last_ledger_seq (#5884)
This change replaces instances of JSON LastLedgerSequence with last_ledger_seq, which makes the tests a bit simpler and easier to read.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:29 +00:00
Jingchen
5c3b44d1af chore: Reduce build log verbosity on Windows (#5865)
Windows is extremely chatty and generates tons of logs when building, making it practically impossible to use the build logs to debug issues. This change sets the verbosity to 'quiet' on Windows.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:29 +00:00
Bart
e4b334faba fix: Update tools image shas (#5896)
This change updates the Docker image hashes of the tools-rippled images to fix a missing dependency.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:28 +00:00
zingero
adad20b862 docs: Fix typo in JSON writer documentation (#5881)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:28 +00:00
tequ
9907fa07a9 refactor: Add paychan namespace and update related tests (#5840)
This change adds a paychan namespace to the TestHelpers and implementation files, improving organization and clarity. Additionally, it updates the AMM test to use the new `paychan::create` function for payment channel creation.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:27 +00:00
Ayaz Salikhov
d032bd681a chore: Support CMake 4 without workarounds (#5866) 2025-10-31 14:26:26 +00:00
Mayukha Vadari
b444457c19 chore: Exclude code/unreachable transaction code from Codecov (#5847)
This change excludes from Codecov unreachable or difficult-to-test transaction code (such as `tecINTERNAL` returns) and old code from amendments that have been enabled for a long time and are only kept around for ledger replay reasons (see the example below). This removes about 200 lines of misses and increases the Codecov coverage by 0.3% (from 79.2% to 79.5%).
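The exclusions use the standard LCOV comment markers; a simplified, hypothetical fragment:

```cpp
#include <optional>

enum TER { tesSUCCESS, tecINTERNAL };

// Hypothetical fragment: LCOV_EXCL_* comments exclude defensive or legacy
// branches from the coverage report.
TER process(std::optional<int> entry)
{
    if (!entry)
        return tecINTERNAL;  // LCOV_EXCL_LINE - defensive, never expected

    // LCOV_EXCL_START - old pre-amendment path kept only for ledger replay
    if (*entry < 0)
        return tecINTERNAL;
    // LCOV_EXCL_STOP

    return tesSUCCESS;
}
```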
2025-10-31 14:26:26 +00:00
Bart
4f076cb955 chore: Add wildcard to support triggering for release pipelines (#5879)
This change adds a wildcard to the release branch in the CI pipeline spec. Namely, after adopting an improved release process, with release branches that now look like `release-X.Y`, the trigger pipeline was no longer running, as it only searched for an exact match to `release`.
2025-10-31 14:26:25 +00:00
Bronek Kozicki
220ab26225 Add vault invariants (#5518)
This change adds invariants for SingleAssetVault #5224 (XLS-065), which had been intentionally skipped earlier to keep the SAV PR size manageable.
2025-10-31 14:26:25 +00:00
tequ
89d81655c6 test: Add more tests for Simulate RPC metadata (#5827) 2025-10-31 14:26:24 +00:00
Bronek Kozicki
7bc2d5cba4 chore: Fix release build error (#5864)
This change fixes a release build error with GCC 15.2.

The `fields` variable is only used in `XRPL_ASSERT`, which evaluates to nothing in a Release build, leaving the variable unused. This change silences the build warning.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:23 +00:00
Bart
a34b36e021 refactor: Update CI strategy matrix to use new RHEL 9 and RHEL 10 images (#5856)
This change uses the new RHEL 9 and 10 images to build and test the binary, and adds support for having different Docker image SHAs per distro-compiler combination.

Instead of supporting each RHEL minor version, we are simplifying our pipelines by only supporting RHEL major versions. Our CI Docker images have already been updated accordingly, and we recently added support for RHEL 10 as well. Up until now, the CI Docker images had all been rebuilt at the same time, but as the most recent push to the CI repo has shown, that is no longer guaranteed: the RHEL images now have a different SHA than the Debian and Ubuntu ones.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:22 +00:00
Mayukha Vadari
5b2ab905c0 chore: exclude all UNREACHABLE blocks from codecov (#5846) 2025-10-31 14:26:16 +00:00
Bart
5e43e91d4a Revert "refactor: Update to Boost 1.88 and 1.86" (#5872)
This change reverts #5570, and then also makes the same changes as were made in #5759 to revert to Boost 1.83. We would like to expose Boost 1.88 to more in-depth testing, but with release 3.0.0 coming out soon there is insufficient time to do so, hence the reversion to Boost 1.83 (skipping Boost 1.86 as it has a bug in the executors).
2025-10-10 11:32:13 -04:00
Bart
5bac21c05b Revert "fix: FD/handle guarding + exponential backoff (#5823)" (#5869)
This reverts commit 330a3215bc.
2025-10-08 16:01:29 -04:00
Vito Tumas
1643d22103 Revert "Bugfix: Adds graceful peer disconnection (#5669)" (#5855)
This reverts commit 17a2606591.
2025-10-08 19:16:47 +01:00
1761 changed files with 196954 additions and 53913 deletions

View File

@@ -37,7 +37,7 @@ BinPackParameters: false
BreakBeforeBinaryOperators: false
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 120
ColumnLimit: 80
CommentPragmas: "^ IWYU pragma:"
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4

View File

@@ -1,247 +0,0 @@
_help_parse: Options affecting listfile parsing
parse:
_help_additional_commands:
- Specify structure for custom cmake functions
additional_commands:
target_protobuf_sources:
pargs:
- target
- prefix
kwargs:
PROTOS: "*"
LANGUAGE: cpp
IMPORT_DIRS: "*"
GENERATE_EXTENSIONS: "*"
PLUGIN: "*"
_help_override_spec:
- Override configurations per-command where available
override_spec: {}
_help_vartags:
- Specify variable tags.
vartags: []
_help_proptags:
- Specify property tags.
proptags: []
_help_format: Options affecting formatting.
format:
_help_disable:
- Disable formatting entirely, making cmake-format a no-op
disable: false
_help_line_width:
- How wide to allow formatted cmake files
line_width: 120
_help_tab_size:
- How many spaces to tab for indent
tab_size: 4
_help_use_tabchars:
- If true, lines are indented using tab characters (utf-8
- 0x09) instead of <tab_size> space characters (utf-8 0x20).
- In cases where the layout would require a fractional tab
- character, the behavior of the fractional indentation is
- governed by <fractional_tab_policy>
use_tabchars: false
_help_fractional_tab_policy:
- If <use_tabchars> is True, then the value of this variable
- indicates how fractional indentions are handled during
- whitespace replacement. If set to 'use-space', fractional
- indentation is left as spaces (utf-8 0x20). If set to
- "`round-up` fractional indentation is replaced with a single"
- tab character (utf-8 0x09) effectively shifting the column
- to the next tabstop
fractional_tab_policy: use-space
_help_max_subgroups_hwrap:
- If an argument group contains more than this many sub-groups
- (parg or kwarg groups) then force it to a vertical layout.
max_subgroups_hwrap: 4
_help_max_pargs_hwrap:
- If a positional argument group contains more than this many
- arguments, then force it to a vertical layout.
max_pargs_hwrap: 5
_help_max_rows_cmdline:
- If a cmdline positional group consumes more than this many
- lines without nesting, then invalidate the layout (and nest)
max_rows_cmdline: 2
_help_separate_ctrl_name_with_space:
- If true, separate flow control names from their parentheses
- with a space
separate_ctrl_name_with_space: true
_help_separate_fn_name_with_space:
- If true, separate function names from parentheses with a
- space
separate_fn_name_with_space: false
_help_dangle_parens:
- If a statement is wrapped to more than one line, than dangle
- the closing parenthesis on its own line.
dangle_parens: false
_help_dangle_align:
- If the trailing parenthesis must be 'dangled' on its on
- "line, then align it to this reference: `prefix`: the start"
- "of the statement, `prefix-indent`: the start of the"
- "statement, plus one indentation level, `child`: align to"
- the column of the arguments
dangle_align: prefix
_help_min_prefix_chars:
- If the statement spelling length (including space and
- parenthesis) is smaller than this amount, then force reject
- nested layouts.
min_prefix_chars: 18
_help_max_prefix_chars:
- If the statement spelling length (including space and
- parenthesis) is larger than the tab width by more than this
- amount, then force reject un-nested layouts.
max_prefix_chars: 10
_help_max_lines_hwrap:
- If a candidate layout is wrapped horizontally but it exceeds
- this many lines, then reject the layout.
max_lines_hwrap: 2
_help_line_ending:
- What style line endings to use in the output.
line_ending: unix
_help_command_case:
- Format command names consistently as 'lower' or 'upper' case
command_case: canonical
_help_keyword_case:
- Format keywords consistently as 'lower' or 'upper' case
keyword_case: unchanged
_help_always_wrap:
- A list of command names which should always be wrapped
always_wrap: []
_help_enable_sort:
- If true, the argument lists which are known to be sortable
- will be sorted lexicographicall
enable_sort: true
_help_autosort:
- If true, the parsers may infer whether or not an argument
- list is sortable (without annotation).
autosort: true
_help_require_valid_layout:
- By default, if cmake-format cannot successfully fit
- everything into the desired linewidth it will apply the
- last, most aggressive attempt that it made. If this flag is
- True, however, cmake-format will print error, exit with non-
- zero status code, and write-out nothing
require_valid_layout: false
_help_layout_passes:
- A dictionary mapping layout nodes to a list of wrap
- decisions. See the documentation for more information.
layout_passes: {}
_help_markup: Options affecting comment reflow and formatting.
markup:
_help_bullet_char:
- What character to use for bulleted lists
bullet_char: "-"
_help_enum_char:
- What character to use as punctuation after numerals in an
- enumerated list
enum_char: .
_help_first_comment_is_literal:
- If comment markup is enabled, don't reflow the first comment
- block in each listfile. Use this to preserve formatting of
- your copyright/license statements.
first_comment_is_literal: false
_help_literal_comment_pattern:
- If comment markup is enabled, don't reflow any comment block
- which matches this (regex) pattern. Default is `None`
- (disabled).
literal_comment_pattern: null
_help_fence_pattern:
- Regular expression to match preformat fences in comments
- default= ``r'^\s*([`~]{3}[`~]*)(.*)$'``
fence_pattern: ^\s*([`~]{3}[`~]*)(.*)$
_help_ruler_pattern:
- Regular expression to match rulers in comments default=
- '``r''^\s*[^\w\s]{3}.*[^\w\s]{3}$''``'
ruler_pattern: ^\s*[^\w\s]{3}.*[^\w\s]{3}$
_help_explicit_trailing_pattern:
- If a comment line matches starts with this pattern then it
- is explicitly a trailing comment for the preceding
- argument. Default is '#<'
explicit_trailing_pattern: "#<"
_help_hashruler_min_length:
- If a comment line starts with at least this many consecutive
- hash characters, then don't lstrip() them off. This allows
- for lazy hash rulers where the first hash char is not
- separated by space
hashruler_min_length: 10
_help_canonicalize_hashrulers:
- If true, then insert a space between the first hash char and
- remaining hash chars in a hash ruler, and normalize its
- length to fill the column
canonicalize_hashrulers: true
_help_enable_markup:
- enable comment markup parsing and reflow
enable_markup: false
_help_lint: Options affecting the linter
lint:
_help_disabled_codes:
- a list of lint codes to disable
disabled_codes: []
_help_function_pattern:
- regular expression pattern describing valid function names
function_pattern: "[0-9a-z_]+"
_help_macro_pattern:
- regular expression pattern describing valid macro names
macro_pattern: "[0-9A-Z_]+"
_help_global_var_pattern:
- regular expression pattern describing valid names for
- variables with global (cache) scope
global_var_pattern: "[A-Z][0-9A-Z_]+"
_help_internal_var_pattern:
- regular expression pattern describing valid names for
- variables with global scope (but internal semantic)
internal_var_pattern: _[A-Z][0-9A-Z_]+
_help_local_var_pattern:
- regular expression pattern describing valid names for
- variables with local scope
local_var_pattern: "[a-z][a-z0-9_]+"
_help_private_var_pattern:
- regular expression pattern describing valid names for
- privatedirectory variables
private_var_pattern: _[0-9a-z_]+
_help_public_var_pattern:
- regular expression pattern describing valid names for public
- directory variables
public_var_pattern: "[A-Z][0-9A-Z_]+"
_help_argument_var_pattern:
- regular expression pattern describing valid names for
- function/macro arguments and loop variables.
argument_var_pattern: "[a-z][a-z0-9_]+"
_help_keyword_pattern:
- regular expression pattern describing valid names for
- keywords used in functions or macros
keyword_pattern: "[A-Z][0-9A-Z_]+"
_help_max_conditionals_custom_parser:
- In the heuristic for C0201, how many conditionals to match
- within a loop in before considering the loop a parser.
max_conditionals_custom_parser: 2
_help_min_statement_spacing:
- Require at least this many newlines between statements
min_statement_spacing: 1
_help_max_statement_spacing:
- Require no more than this many newlines between statements
max_statement_spacing: 2
max_returns: 6
max_branches: 12
max_arguments: 5
max_localvars: 15
max_statements: 50
_help_encode: Options affecting file encoding
encode:
_help_emit_byteorder_mark:
- If true, emit the unicode byte-order mark (BOM) at the start
- of the file
emit_byteorder_mark: false
_help_input_encoding:
- Specify the encoding of the input file. Defaults to utf-8
input_encoding: utf-8
_help_output_encoding:
- Specify the encoding of the output file. Defaults to utf-8.
- Note that cmake only claims to support utf-8 so be careful
- when using anything else
output_encoding: utf-8
_help_misc: Miscellaneous configurations options.
misc:
_help_per_command:
- A dictionary containing any per-command configuration
- overrides. Currently only `command_case` is supported.
per_command: {}

.gitattributes vendored
View File

@@ -1,6 +1,9 @@
# Set default behaviour, in case users don't have core.autocrlf set.
#* text=auto
# cspell: disable
# These annoying files
rippled.1 binary
LICENSE binary
# Visual Studio
*.sln text eol=crlf

.github/CODEOWNERS vendored Normal file
View File

@@ -0,0 +1,8 @@
# Allow anyone to review any change by default.
*
# Require the rpc-reviewers team to review changes to the rpc code.
include/xrpl/protocol/ @xrplf/rpc-reviewers
src/libxrpl/protocol/ @xrplf/rpc-reviewers
src/xrpld/rpc/ @xrplf/rpc-reviewers
src/xrpld/app/misc/ @xrplf/rpc-reviewers

View File

@@ -1,7 +1,7 @@
---
name: Bug Report
about: Create a report to help us improve xrpld
title: "[Title with short description] (Version: [xrpld version])"
about: Create a report to help us improve rippled
title: "[Title with short description] (Version: [rippled version])"
labels: ""
assignees: ""
---
@@ -27,7 +27,7 @@ assignees: ""
## Environment
<!--Please describe your environment setup (such as Ubuntu 18.04 with Boost 1.70).-->
<!-- If you are using a formal release, please use the version returned by './xrpld --version' as the version number-->
<!-- If you are using a formal release, please use the version returned by './rippled --version' as the version number-->
<!-- If you are working off of develop, please add the git hash via 'git rev-parse HEAD'-->
## Supporting Files

View File

@@ -4,6 +4,9 @@ description: "Install Conan dependencies, optionally forcing a rebuild of all de
# Note that actions do not support 'type' and all inputs are strings, see
# https://docs.github.com/en/actions/reference/workflows-and-actions/metadata-syntax#inputs.
inputs:
build_dir:
description: "The directory where to build."
required: true
build_type:
description: 'The build type to use ("Debug", "Release").'
required: true
@@ -18,10 +21,6 @@ inputs:
description: "The logging verbosity."
required: false
default: "verbose"
sanitizers:
description: "The sanitizers to enable."
required: false
default: ""
runs:
using: composite
@@ -29,15 +28,17 @@ runs:
- name: Install Conan dependencies
shell: bash
env:
BUILD_DIR: ${{ inputs.build_dir }}
BUILD_NPROC: ${{ inputs.build_nproc }}
BUILD_OPTION: ${{ inputs.force_build == 'true' && '*' || 'missing' }}
BUILD_TYPE: ${{ inputs.build_type }}
LOG_VERBOSITY: ${{ inputs.log_verbosity }}
SANITIZERS: ${{ inputs.sanitizers }}
run: |
echo 'Installing dependencies.'
mkdir -p "${BUILD_DIR}"
cd "${BUILD_DIR}"
conan install \
--profile ci \
--output-folder . \
--build="${BUILD_OPTION}" \
--options:host='&:tests=True' \
--options:host='&:xrpld=True' \
@@ -45,4 +46,4 @@ runs:
--conf:all tools.build:jobs=${BUILD_NPROC} \
--conf:all tools.build:verbosity="${LOG_VERBOSITY}" \
--conf:all tools.compilation:verbosity="${LOG_VERBOSITY}" \
.
..

View File

@@ -1,44 +0,0 @@
name: Generate build version number
description: "Generate build version number."
outputs:
version:
description: "The generated build version number."
value: ${{ steps.version.outputs.version }}
runs:
using: composite
steps:
# When a tag is pushed, the version is used as-is.
- name: Generate version for tag event
if: ${{ github.event_name == 'tag' }}
shell: bash
env:
VERSION: ${{ github.ref_name }}
run: echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
# When a tag is not pushed, then the version (e.g. 1.2.3-b0) is extracted
# from the BuildInfo.cpp file and the shortened commit hash appended to it.
# We use a plus sign instead of a hyphen because Conan recipe versions do
# not support two hyphens.
- name: Generate version for non-tag event
if: ${{ github.event_name != 'tag' }}
shell: bash
run: |
echo 'Extracting version from BuildInfo.cpp.'
VERSION="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')"
if [[ -z "${VERSION}" ]]; then
echo 'Unable to extract version from BuildInfo.cpp.'
exit 1
fi
echo 'Appending shortened commit hash to version.'
SHA='${{ github.sha }}'
VERSION="${VERSION}+${SHA:0:7}"
echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
- name: Output version
id: version
shell: bash
run: echo "version=${VERSION}" >> "${GITHUB_OUTPUT}"

View File

@@ -11,6 +11,12 @@ runs:
echo 'Checking environment variables.'
set
echo 'Checking CMake version.'
cmake --version
echo 'Checking Conan version.'
conan --version
- name: Check configuration (Linux and macOS)
if: ${{ runner.os == 'Linux' || runner.os == 'macOS' }}
shell: bash
@@ -21,23 +27,17 @@ runs:
echo 'Checking environment variables.'
env | sort
echo 'Checking CMake version.'
cmake --version
echo 'Checking compiler version.'
${{ runner.os == 'Linux' && '${CC}' || 'clang' }} --version
echo 'Checking Conan version.'
conan --version
echo 'Checking Ninja version.'
ninja --version
echo 'Checking nproc version.'
nproc --version
- name: Check configuration (all)
shell: bash
run: |
echo 'Checking Ccache version.'
ccache --version
echo 'Checking CMake version.'
cmake --version
echo 'Checking Conan version.'
conan --version

View File

@@ -2,11 +2,11 @@ name: Setup Conan
description: "Set up Conan configuration, profile, and remote."
inputs:
remote_name:
conan_remote_name:
description: "The name of the Conan remote to use."
required: false
default: xrplf
remote_url:
conan_remote_url:
description: "The URL of the Conan endpoint to use."
required: false
default: https://conan.ripplex.io
@@ -28,19 +28,19 @@ runs:
shell: bash
run: |
echo 'Installing profile.'
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan config install conan/profiles/default -tf $(conan config home)/profiles/
echo 'Conan profile:'
conan profile show --profile ci
conan profile show
- name: Set up Conan remote
shell: bash
env:
REMOTE_NAME: ${{ inputs.remote_name }}
REMOTE_URL: ${{ inputs.remote_url }}
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
CONAN_REMOTE_URL: ${{ inputs.conan_remote_url }}
run: |
echo "Adding Conan remote '${REMOTE_NAME}' at '${REMOTE_URL}'."
conan remote add --index 0 --force "${REMOTE_NAME}" "${REMOTE_URL}"
echo "Adding Conan remote '${CONAN_REMOTE_NAME}' at '${CONAN_REMOTE_URL}'."
conan remote add --index 0 --force "${CONAN_REMOTE_NAME}" "${CONAN_REMOTE_URL}"
echo 'Listing Conan remotes.'
conan remote list

View File

@@ -3,26 +3,21 @@
Levelization is the term used to describe efforts to prevent rippled from
having or creating cyclic dependencies.
rippled code is organized into directories under `src/xrpld`, `src/libxrpl` (and
rippled code is organized into directories under `src/rippled` (and
`src/test`) representing modules. The modules are intended to be
organized into "tiers" or "levels" such that a module from one level can
only include code from lower levels. Additionally, a module
in one level should never include code in an `impl` or `detail` folder of any level
in one level should never include code in an `impl` folder of any level
other than it's own.
The codebase is split into two main areas:
- **libxrpl** (`src/libxrpl`, `include/xrpl`): Reusable library modules with public interfaces
- **xrpld** (`src/xrpld`): Application-specific implementation code
Unfortunately, over time, enforcement of levelization has been
inconsistent, so the current state of the code doesn't necessarily
reflect these rules. Whenever possible, developers should refactor any
levelization violations they find (by moving files or individual
classes). At the very least, don't make things worse.
The table below summarizes the _desired_ division of modules, based on the current
state of the rippled code. The levels are numbered from
The table below summarizes the _desired_ division of modules, based on the
state of the rippled code when it was created. The levels are numbered from
the bottom up with the lower level, lower numbered, more independent
modules listed first, and the higher level, higher numbered modules with
more dependencies listed later.
@@ -30,33 +25,18 @@ more dependencies listed later.
**tl;dr:** The modules listed first are more independent than the modules
listed later.
## libxrpl Modules (Reusable Libraries)
| Level / Tier | Module(s) |
| ------------ | ----------------------------------- |
| 01 | xrpl/beast |
| 02 | xrpl/basics |
| 03 | xrpl/json xrpl/crypto |
| 04 | xrpl/protocol |
| 05 | xrpl/core xrpl/resource xrpl/server |
| 06 | xrpl/ledger xrpl/nodestore xrpl/net |
| 07 | xrpl/shamap |
## xrpld Modules (Application Implementation)
| Level / Tier | Module(s) |
| ------------ | -------------------------------- |
| 05 | xrpld/conditions xrpld/consensus |
| 06 | xrpld/core xrpld/peerfinder |
| 07 | xrpld/shamap xrpld/overlay |
| 08 | xrpld/app |
| 09 | xrpld/rpc |
| 10 | xrpld/perflog |
## Test Modules
| Level / Tier | Module(s) |
| ------------ | -------------------------------------------------------------------------------------------------------- |
| 01 | ripple/beast ripple/unity |
| 02 | ripple/basics |
| 03 | ripple/json ripple/crypto |
| 04 | ripple/protocol |
| 05 | ripple/core ripple/conditions ripple/consensus ripple/resource ripple/server |
| 06 | ripple/peerfinder ripple/ledger ripple/nodestore ripple/net |
| 07 | ripple/shamap ripple/overlay |
| 08 | ripple/app |
| 09 | ripple/rpc |
| 10 | ripple/perflog |
| 11 | test/jtx test/beast test/csf |
| 12 | test/unit_test |
| 13 | test/crypto test/conditions test/json test/resource test/shamap test/peerfinder test/basics test/overlay |
@@ -65,8 +45,8 @@ listed later.
| 16 | test/rpc test/app |
(Note that `test` levelization is _much_ less important and _much_ less
strictly enforced than `xrpl`/`xrpld` levelization, other than the requirement
that `test` code should _never_ be included in `xrpl` or `xrpld` code.)
strictly enforced than `ripple` levelization, other than the requirement
that `test` code should _never_ be included in `ripple` code.)
## Validation
@@ -81,10 +61,10 @@ It generates many files of [results](results):
- `rawincludes.txt`: The raw dump of the `#includes`
- `paths.txt`: A second dump grouping the source module
to the destination module, de-duped, and with frequency counts.
to the destination module, deduped, and with frequency counts.
- `includes/`: A directory where each file represents a module and
contains a list of modules and counts that the module _includes_.
- `included_by/`: Similar to `includes/`, but the other way around. Each
- `includedby/`: Similar to `includes/`, but the other way around. Each
file represents a module and contains a list of modules and counts
that _include_ the module.
- [`loops.txt`](results/loops.txt): A list of direct loops detected

View File

@@ -29,7 +29,7 @@ pushd results
oldifs=${IFS}
IFS=:
mkdir includes
mkdir included_by
mkdir includedby
echo Build levelization paths
exec 3< ${includes} # open rawincludes.txt for input
while read -r -u 3 file include
@@ -59,7 +59,7 @@ do
echo $level $includelevel | tee -a paths.txt
fi
done
echo Sort and deduplicate paths
echo Sort and dedup paths
sort -ds paths.txt | uniq -c | tee sortedpaths.txt
mv sortedpaths.txt paths.txt
exec 3>&- #close fd 3
@@ -71,7 +71,7 @@ exec 4<paths.txt # open paths.txt for input
while read -r -u 4 count level include
do
echo ${include} ${count} | tee -a includes/${level}
echo ${level} ${count} | tee -a included_by/${include}
echo ${level} ${count} | tee -a includedby/${include}
done
exec 4>&- #close fd 4

View File

@@ -4,18 +4,27 @@ Loop: test.jtx test.toplevel
Loop: test.jtx test.unit_test
test.unit_test == test.jtx
Loop: xrpld.app xrpld.core
xrpld.app > xrpld.core
Loop: xrpld.app xrpld.overlay
xrpld.overlay > xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.peerfinder == xrpld.app
xrpld.peerfinder ~= xrpld.app
Loop: xrpld.app xrpld.rpc
xrpld.rpc > xrpld.app
Loop: xrpld.app xrpld.shamap
xrpld.shamap ~= xrpld.app
xrpld.app > xrpld.shamap
Loop: xrpld.core xrpld.perflog
xrpld.perflog == xrpld.core
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay
Loop: xrpld.perflog xrpld.rpc
xrpld.rpc ~= xrpld.perflog

View File

@@ -1,6 +1,4 @@
libxrpl.basics > xrpl.basics
libxrpl.core > xrpl.basics
libxrpl.core > xrpl.core
libxrpl.crypto > xrpl.basics
libxrpl.json > xrpl.basics
libxrpl.json > xrpl.json
@@ -10,47 +8,34 @@ libxrpl.ledger > xrpl.ledger
libxrpl.ledger > xrpl.protocol
libxrpl.net > xrpl.basics
libxrpl.net > xrpl.net
libxrpl.nodestore > xrpl.basics
libxrpl.nodestore > xrpl.json
libxrpl.nodestore > xrpl.nodestore
libxrpl.nodestore > xrpl.protocol
libxrpl.protocol > xrpl.basics
libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.rdb > xrpl.basics
libxrpl.rdb > xrpl.rdb
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json
libxrpl.server > xrpl.protocol
libxrpl.server > xrpl.rdb
libxrpl.server > xrpl.server
libxrpl.shamap > xrpl.basics
libxrpl.shamap > xrpl.protocol
libxrpl.shamap > xrpl.shamap
test.app > test.jtx
test.app > test.rpc
test.app > test.toplevel
test.app > test.unit_test
test.app > xrpl.basics
test.app > xrpl.core
test.app > xrpld.app
test.app > xrpld.core
test.app > xrpld.nodestore
test.app > xrpld.overlay
test.app > xrpld.rpc
test.app > xrpl.json
test.app > xrpl.ledger
test.app > xrpl.nodestore
test.app > xrpl.protocol
test.app > xrpl.rdb
test.app > xrpl.resource
test.app > xrpl.server
test.basics > test.jtx
test.basics > test.unit_test
test.basics > xrpl.basics
test.basics > xrpl.core
test.basics > xrpld.perflog
test.basics > xrpld.rpc
test.basics > xrpl.json
test.basics > xrpl.protocol
@@ -69,10 +54,9 @@ test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
test.core > xrpl.basics
test.core > xrpl.core
test.core > xrpld.core
test.core > xrpld.perflog
test.core > xrpl.json
test.core > xrpl.rdb
test.core > xrpl.server
test.csf > xrpl.basics
test.csf > xrpld.consensus
@@ -101,8 +85,9 @@ test.nodestore > test.jtx
test.nodestore > test.toplevel
test.nodestore > test.unit_test
test.nodestore > xrpl.basics
test.nodestore > xrpl.nodestore
test.nodestore > xrpl.rdb
test.nodestore > xrpld.core
test.nodestore > xrpld.nodestore
test.nodestore > xrpld.unity
test.overlay > test.jtx
test.overlay > test.toplevel
test.overlay > test.unit_test
@@ -110,9 +95,8 @@ test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpl.nodestore
test.overlay > xrpld.shamap
test.overlay > xrpl.protocol
test.overlay > xrpl.shamap
test.peerfinder > test.beast
test.peerfinder > test.unit_test
test.peerfinder > xrpl.basics
@@ -129,7 +113,6 @@ test.resource > xrpl.resource
test.rpc > test.jtx
test.rpc > test.toplevel
test.rpc > xrpl.basics
test.rpc > xrpl.core
test.rpc > xrpld.app
test.rpc > xrpld.core
test.rpc > xrpld.overlay
@@ -148,93 +131,74 @@ test.server > xrpl.json
test.server > xrpl.server
test.shamap > test.unit_test
test.shamap > xrpl.basics
test.shamap > xrpl.nodestore
test.shamap > xrpld.nodestore
test.shamap > xrpld.shamap
test.shamap > xrpl.protocol
test.shamap > xrpl.shamap
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net
xrpl.core > xrpl.basics
xrpl.core > xrpl.json
xrpl.core > xrpl.ledger
xrpl.core > xrpl.protocol
xrpl.json > xrpl.basics
xrpl.ledger > xrpl.basics
xrpl.ledger > xrpl.protocol
xrpl.net > xrpl.basics
xrpl.nodestore > xrpl.basics
xrpl.nodestore > xrpl.protocol
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json
xrpl.rdb > xrpl.basics
xrpl.rdb > xrpl.core
xrpl.rdb > xrpl.protocol
xrpl.resource > xrpl.basics
xrpl.resource > xrpl.json
xrpl.resource > xrpl.protocol
xrpl.server > xrpl.basics
xrpl.server > xrpl.core
xrpl.server > xrpl.json
xrpl.server > xrpl.protocol
xrpl.server > xrpl.rdb
xrpl.shamap > xrpl.basics
xrpl.shamap > xrpl.nodestore
xrpl.shamap > xrpl.protocol
xrpld.app > test.unit_test
xrpld.app > xrpl.basics
xrpld.app > xrpl.core
xrpld.app > xrpld.conditions
xrpld.app > xrpld.consensus
xrpld.app > xrpld.core
xrpld.app > xrpld.nodestore
xrpld.app > xrpld.perflog
xrpld.app > xrpl.json
xrpld.app > xrpl.ledger
xrpld.app > xrpl.net
xrpld.app > xrpl.nodestore
xrpld.app > xrpl.protocol
xrpld.app > xrpl.rdb
xrpld.app > xrpl.resource
xrpld.app > xrpl.server
xrpld.app > xrpl.shamap
xrpld.conditions > xrpl.basics
xrpld.conditions > xrpl.protocol
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.protocol
xrpld.core > xrpl.basics
xrpld.core > xrpl.core
xrpld.core > xrpl.json
xrpld.core > xrpl.net
xrpld.core > xrpl.protocol
xrpld.core > xrpl.rdb
xrpld.nodestore > xrpl.basics
xrpld.nodestore > xrpld.core
xrpld.nodestore > xrpld.unity
xrpld.nodestore > xrpl.json
xrpld.nodestore > xrpl.protocol
xrpld.overlay > xrpl.basics
xrpld.overlay > xrpl.core
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpld.perflog
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.rdb
xrpld.overlay > xrpl.resource
xrpld.overlay > xrpl.server
xrpld.peerfinder > xrpl.basics
xrpld.peerfinder > xrpld.core
xrpld.peerfinder > xrpl.protocol
xrpld.peerfinder > xrpl.rdb
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.core
xrpld.perflog > xrpld.rpc
xrpld.perflog > xrpl.json
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpl.core
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.nodestore
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.ledger
xrpld.rpc > xrpl.net
xrpld.rpc > xrpl.nodestore
xrpld.rpc > xrpl.protocol
xrpld.rpc > xrpl.rdb
xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.shamap > xrpl.shamap
xrpld.shamap > xrpl.basics
xrpld.shamap > xrpld.nodestore
xrpld.shamap > xrpl.protocol

View File

@@ -1,47 +0,0 @@
## Renaming ripple(d) to xrpl(d)
In the initial phases of development of the XRPL, the open source codebase was
called "rippled" and it remains with that name even today. Today, over 1000
nodes run the application, and code contributions have been submitted by
developers located around the world. The XRPL community is larger than ever.
In light of the decentralized and diversified nature of XRPL, we will rename any
references to `ripple` and `rippled` to `xrpl` and `xrpld`, when appropriate.
See [here](https://xls.xrpl.org/xls/XLS-0095-rename-rippled-to-xrpld.html) for
more information.
### Scripts
To facilitate this transition, there will be multiple scripts that developers
can run on their own PRs and forks to minimize conflicts. Each script should be
run from the repository root.
1. `.github/scripts/rename/definitions.sh`: This script will rename all
definitions, such as include guards, from `RIPPLE_XXX` and `RIPPLED_XXX` to
`XRPL_XXX`.
2. `.github/scripts/rename/copyright.sh`: This script will remove superfluous
copyright notices.
3. `.github/scripts/rename/cmake.sh`: This script will rename all CMake files
from `RippleXXX.cmake` or `RippledXXX.cmake` to `XrplXXX.cmake`, and any
references to `ripple` and `rippled` (with or without capital letters) to
`xrpl` and `xrpld`, respectively. The name of the binary will remain as-is,
and will only be renamed to `xrpld` by a later script.
4. `.github/scripts/rename/binary.sh`: This script will rename the binary from
`rippled` to `xrpld`, and reverses the symlink so that `rippled` points to
the `xrpld` binary.
5. `.github/scripts/rename/namespace.sh`: This script will rename the C++
namespaces from `ripple` to `xrpl`.
6. `.github/scripts/rename/config.sh`: This script will rename the config from
`rippled.cfg` to `xrpld.cfg`, and updating the code accordingly. The old
filename will still be accepted.
You can run all these scripts from the repository root as follows:
```shell
./.github/scripts/rename/definitions.sh .
./.github/scripts/rename/copyright.sh .
./.github/scripts/rename/cmake.sh .
./.github/scripts/rename/binary.sh .
./.github/scripts/rename/namespace.sh .
./.github/scripts/rename/config.sh .
```

View File

@@ -1,54 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script changes the binary name from `rippled` to `xrpld`, and reverses
# the symlink that currently points from `xrpld` to `rippled` so that it points
# from `rippled` to `xrpld` instead.
# Usage: .github/scripts/rename/binary.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
# Remove the binary name override added by the cmake.sh script.
${SED_COMMAND} -z -i -E 's@\s+# For the time being.+"rippled"\)@@' cmake/XrplCore.cmake
# Reverse the symlink.
${SED_COMMAND} -i -E 's@create_symbolic_link\(rippled@create_symbolic_link(xrpld@' cmake/XrplInstall.cmake
${SED_COMMAND} -i -E 's@/xrpld\$\{suffix\}@/rippled${suffix}@' cmake/XrplInstall.cmake
# Rename references to the binary.
${SED_COMMAND} -i -E 's@rippled@xrpld@g' BUILD.md
${SED_COMMAND} -i -E 's@rippled@xrpld@g' CONTRIBUTING.md
${SED_COMMAND} -i -E 's@rippled@xrpld@g' .github/ISSUE_TEMPLATE/bug_report.md
# Restore and/or fix certain renames. The pre-commit hook will update the
# formatting upon saving/committing.
${SED_COMMAND} -i -E 's@ripple/xrpld@XRPLF/rippled@g' BUILD.md
${SED_COMMAND} -i -E 's@XRPLF/xrpld@XRPLF/rippled@g' BUILD.md
${SED_COMMAND} -i -E 's@xrpld \(`xrpld`\)@xrpld@g' BUILD.md
${SED_COMMAND} -i -E 's@XRPLF/xrpld@XRPLF/rippled@g' CONTRIBUTING.md
popd
echo "Processing complete."


@@ -1,92 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On MacOS, ensure that GNU sed and head are installed and available as `gsed`
# and `ghead`, respectively.
SED_COMMAND=sed
HEAD_COMMAND=head
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
if ! command -v ghead &> /dev/null; then
echo "Error: ghead is not installed. Please install it using 'brew install coreutils'."
exit 1
fi
HEAD_COMMAND=ghead
fi
# This script renames CMake files from `RippleXXX.cmake` or `RippledXXX.cmake`
# to `XrplXXX.cmake`, and any references to `ripple` and `rippled` (with or
# without capital letters) to `xrpl` and `xrpld`, respectively. The name of the
# binary will remain as-is, and will only be renamed to `xrpld` in a different
# script, but the proto file will be renamed.
# Usage: .github/scripts/rename/cmake.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
# Rename the files.
find cmake -type f -name 'Rippled*.cmake' -exec bash -c 'mv "${1}" "${1/Rippled/Xrpl}"' - {} \;
find cmake -type f -name 'Ripple*.cmake' -exec bash -c 'mv "${1}" "${1/Ripple/Xrpl}"' - {} \;
if [ -e cmake/xrpl_add_test.cmake ]; then
mv cmake/xrpl_add_test.cmake cmake/XrplAddTest.cmake
fi
if [ -e include/xrpl/proto/ripple.proto ]; then
mv include/xrpl/proto/ripple.proto include/xrpl/proto/xrpl.proto
fi
# Rename inside the files.
find cmake -type f -name '*.cmake' | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/Rippled/Xrpld/g' "${FILE}"
${SED_COMMAND} -i 's/Ripple/Xrpl/g' "${FILE}"
${SED_COMMAND} -i 's/rippled/xrpld/g' "${FILE}"
${SED_COMMAND} -i 's/ripple/xrpl/g' "${FILE}"
done
${SED_COMMAND} -i -E 's/Rippled?/Xrpl/g' CMakeLists.txt
${SED_COMMAND} -i 's/ripple/xrpl/g' CMakeLists.txt
${SED_COMMAND} -i 's/include(xrpl_add_test)/include(XrplAddTest)/' src/tests/libxrpl/CMakeLists.txt
${SED_COMMAND} -i 's/ripple.pb.h/xrpl.pb.h/' include/xrpl/protocol/messages.h
${SED_COMMAND} -i 's/ripple.pb.h/xrpl.pb.h/' BUILD.md
# Restore the name of the validator keys repository.
${SED_COMMAND} -i 's@xrpl/validator-keys-tool@ripple/validator-keys-tool@' cmake/XrplValidatorKeys.cmake
# Ensure the name of the binary and config remain 'rippled' for now.
${SED_COMMAND} -i -E 's/xrpld(-example)?\.cfg/rippled\1.cfg/g' cmake/XrplInstall.cmake
if grep -q '"xrpld"' cmake/XrplCore.cmake; then
# The script has been rerun, so just restore the name of the binary.
${SED_COMMAND} -i 's/"xrpld"/"rippled"/' cmake/XrplCore.cmake
elif ! grep -q '"rippled"' cmake/XrplCore.cmake; then
${HEAD_COMMAND} -n -1 cmake/XrplCore.cmake > cmake.tmp
echo ' # For the time being, we will keep the name of the binary as it was.' >> cmake.tmp
echo ' set_target_properties(xrpld PROPERTIES OUTPUT_NAME "rippled")' >> cmake.tmp
tail -1 cmake/XrplCore.cmake >> cmake.tmp
mv cmake.tmp cmake/XrplCore.cmake
fi
# Restore the symlink from 'xrpld' to 'rippled'.
${SED_COMMAND} -i -E 's@create_symbolic_link\(xrpld@create_symbolic_link(rippled@' cmake/XrplInstall.cmake
# Remove the symlink that previously pointed from 'ripple' to 'xrpl' but now is
# no longer needed.
${SED_COMMAND} -z -i -E 's@install\(CODE.+CMAKE_INSTALL_INCLUDEDIR}/xrpl\)\n"\)\n+@@' cmake/XrplInstall.cmake
popd
echo "Renaming complete."


@@ -1,72 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames the config from `rippled.cfg` to `xrpld.cfg`, and updates
# the code accordingly. The old filename will still be accepted.
# Usage: .github/scripts/rename/config.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
# Add the xrpld.cfg to the .gitignore.
if ! grep -q 'xrpld.cfg' .gitignore; then
${SED_COMMAND} -i '/rippled.cfg/a\
/xrpld.cfg' .gitignore
fi
# Rename the files.
if [ -e rippled.cfg ]; then
mv rippled.cfg xrpld.cfg
fi
if [ -e cfg/rippled-example.cfg ]; then
mv cfg/rippled-example.cfg cfg/xrpld-example.cfg
fi
# Rename inside the files.
DIRECTORIES=("cfg" "cmake" "include" "src")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.cmake" -o -name "*.txt" -o -name "*.cfg" -o -name "*.md" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i -E 's/rippled(-example)?[ .]cfg/xrpld\1.cfg/g' "${FILE}"
done
done
${SED_COMMAND} -i 's/rippled/xrpld/g' cfg/xrpld-example.cfg
${SED_COMMAND} -i 's/rippled/xrpld/g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's/ripplevalidators/xrplvalidators/g' src/test/core/Config_test.cpp # cspell: disable-line
${SED_COMMAND} -i 's/rippleConfig/xrpldConfig/g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's@ripple/@xrpld/@g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's/Rippled/File/g' src/test/core/Config_test.cpp
# Restore the old config file name in the code that maintains support for now.
${SED_COMMAND} -i 's/configLegacyName = "xrpld.cfg"/configLegacyName = "rippled.cfg"/g' src/xrpld/core/detail/Config.cpp
# Restore a URL.
${SED_COMMAND} -i 's/connect-your-xrpld-to-the-xrp-test-net.html/connect-your-rippled-to-the-xrp-test-net.html/g' cfg/xrpld-example.cfg
popd
echo "Renaming complete."


@@ -1,103 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script removes superfluous copyright notices in source and header files
# in this project. Specifically, it removes all notices referencing Ripple,
# XRPLF, and certain individual contributors upon mutual agreement, so the one
# in the LICENSE.md file applies throughout. Copyright notices referencing
# external contributions, e.g. from Bitcoin, remain as-is.
# Usage: .github/scripts/rename/copyright.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
# Prevent sed and echo from removing newlines and tabs in string literals by
# temporarily replacing them with placeholders. This only affects one file.
PLACEHOLDER_NEWLINE="__NEWLINE__"
PLACEHOLDER_TAB="__TAB__"
${SED_COMMAND} -i -E "s@\\\n@${PLACEHOLDER_NEWLINE}@g" src/test/rpc/ValidatorInfo_test.cpp
${SED_COMMAND} -i -E "s@\\\t@${PLACEHOLDER_TAB}@g" src/test/rpc/ValidatorInfo_test.cpp
# Process the include/ and src/ directories.
DIRECTORIES=("include" "src")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.macro" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
# Handle the cases where the copyright notice is enclosed in /* ... */
# and usually surrounded by //---- and //======.
${SED_COMMAND} -z -i -E 's@^//-------+\n+@@' "${FILE}"
${SED_COMMAND} -z -i -E 's@^.*Copyright.+(Ripple|Bougalis|Falco|Hinnant|Null|Ritchford|XRPLF).+PERFORMANCE OF THIS SOFTWARE\.\n\*/\n+@@' "${FILE}" # cspell: ignore Bougalis Falco Hinnant Ritchford
${SED_COMMAND} -z -i -E 's@^//=======+\n+@@' "${FILE}"
# Handle the cases where the copyright notice is commented out with //.
${SED_COMMAND} -z -i -E 's@^//\n// Copyright.+Falco \(vinnie dot falco at gmail dot com\)\n//\n+@@' "${FILE}" # cspell: ignore Vinnie Falco
done
done
# Restore copyright notices that were removed from specific files, without
# restoring the verbiage that is already present in LICENSE.md. Ensure that if
# the script is run multiple times, duplicate notices are not added.
if ! grep -q 'Raw Material Software' include/xrpl/beast/core/CurrentThreadName.h; then
echo -e "// Portions of this file are from JUCE (http://www.juce.com).\n// Copyright (c) 2013 - Raw Material Software Ltd.\n// Please visit http://www.juce.com\n\n$(cat include/xrpl/beast/core/CurrentThreadName.h)" > include/xrpl/beast/core/CurrentThreadName.h
fi
if ! grep -q 'Dev Null' src/test/app/NetworkID_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/app/NetworkID_test.cpp)" > src/test/app/NetworkID_test.cpp
fi
if ! grep -q 'Dev Null' src/test/app/tx/apply_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/app/tx/apply_test.cpp)" > src/test/app/tx/apply_test.cpp
fi
if ! grep -q 'Dev Null' src/test/rpc/ManifestRPC_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/rpc/ManifestRPC_test.cpp)" > src/test/rpc/ManifestRPC_test.cpp
fi
if ! grep -q 'Dev Null' src/test/rpc/ValidatorInfo_test.cpp; then
echo -e "// Copyright (c) 2020 Dev Null Productions\n\n$(cat src/test/rpc/ValidatorInfo_test.cpp)" > src/test/rpc/ValidatorInfo_test.cpp
fi
if ! grep -q 'Dev Null' src/xrpld/rpc/handlers/DoManifest.cpp; then
echo -e "// Copyright (c) 2019 Dev Null Productions\n\n$(cat src/xrpld/rpc/handlers/DoManifest.cpp)" > src/xrpld/rpc/handlers/DoManifest.cpp
fi
if ! grep -q 'Dev Null' src/xrpld/rpc/handlers/ValidatorInfo.cpp; then
echo -e "// Copyright (c) 2019 Dev Null Productions\n\n$(cat src/xrpld/rpc/handlers/ValidatorInfo.cpp)" > src/xrpld/rpc/handlers/ValidatorInfo.cpp
fi
if ! grep -q 'Bougalis' include/xrpl/basics/SlabAllocator.h; then
echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/SlabAllocator.h)" > include/xrpl/basics/SlabAllocator.h # cspell: ignore Nikolaos Bougalis nikb
fi
if ! grep -q 'Bougalis' include/xrpl/basics/spinlock.h; then
echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/spinlock.h)" > include/xrpl/basics/spinlock.h # cspell: ignore Nikolaos Bougalis nikb
fi
if ! grep -q 'Bougalis' include/xrpl/basics/tagged_integer.h; then
echo -e "// Copyright (c) 2014, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/tagged_integer.h)" > include/xrpl/basics/tagged_integer.h # cspell: ignore Nikolaos Bougalis nikb
fi
if ! grep -q 'Ritchford' include/xrpl/beast/utility/Zero.h; then
echo -e "// Copyright (c) 2014, Tom Ritchford <tom@swirly.com>\n\n$(cat include/xrpl/beast/utility/Zero.h)" > include/xrpl/beast/utility/Zero.h # cspell: ignore Ritchford
fi
# Restore newlines and tabs in string literals in the affected file.
${SED_COMMAND} -i -E "s@${PLACEHOLDER_NEWLINE}@\\\n@g" src/test/rpc/ValidatorInfo_test.cpp
${SED_COMMAND} -i -E "s@${PLACEHOLDER_TAB}@\\\t@g" src/test/rpc/ValidatorInfo_test.cpp
popd
echo "Removal complete."


@@ -1,42 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames definitions, such as include guards, in this project.
# Specifically, it renames "RIPPLED_XXX" and "RIPPLE_XXX" to "XRPL_XXX" by
# scanning all cmake, header, and source files in the specified directory and
# its subdirectories.
# Usage: .github/scripts/rename/definitions.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i -E 's@#(define|endif|if|ifdef|ifndef)(.*)(RIPPLED_|RIPPLE_)([A-Z0-9_]+)@#\1\2XRPL_\4@g' "${FILE}"
done
find "${DIRECTORY}" -type f \( -name "*.cmake" -o -name "*.txt" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i -E 's@(RIPPLED_|RIPPLE_)([A-Z0-9_]+)@XRPL_\2@g' "${FILE}"
done
echo "Renaming complete."


@@ -1,30 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# This script checks that a PR introduces no new include guards, as header
# files should use "#pragma once" instead. The script assumes any include
# guards use "XRPL_" as prefix.
# Usage: .github/scripts/rename/include.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
if grep -q "#ifndef XRPL_" "${FILE}"; then
echo "Please replace all include guards by #pragma once."
exit 1
fi
done
echo "Checking complete."


@@ -1,58 +0,0 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On MacOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames the `ripple` namespace to `xrpl` in this project.
# Specifically, it renames all occurrences of `namespace ripple` and `ripple::`
# to `namespace xrpl` and `xrpl::`, respectively, by scanning all header and
# source files in the specified directory and its subdirectories, as well as any
# occurrences in the documentation. It also renames them in the test suites.
# Usage: .github/scripts/rename/namespace.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
DIRECTORIES=("include" "src" "tests")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/namespace ripple/namespace xrpl/g' "${FILE}"
${SED_COMMAND} -i 's/ripple::/xrpl::/g' "${FILE}"
${SED_COMMAND} -i -E 's/(BEAST_DEFINE_TESTSUITE.+)ripple(.+)/\1xrpl\2/g' "${FILE}"
done
done
# Special case for NuDBFactory, which has ripple twice in the test suite name.
${SED_COMMAND} -i -E 's/(BEAST_DEFINE_TESTSUITE.+)ripple(.+)/\1xrpl\2/g' src/test/nodestore/NuDBFactory_test.cpp
DIRECTORY=$1
find "${DIRECTORY}" -type f -name "*.md" | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/ripple::/xrpl::/g' "${FILE}"
done
popd
echo "Renaming complete."


@@ -2,12 +2,11 @@
import argparse
import itertools
import json
from dataclasses import dataclass
from pathlib import Path
from dataclasses import dataclass
THIS_DIR = Path(__file__).parent.resolve()
@dataclass
class Config:
architecture: list[dict]
@@ -15,13 +14,12 @@ class Config:
build_type: list[str]
cmake_args: list[str]
"""
'''
Generate a strategy matrix for GitHub Actions CI.
On each PR commit we will build a selection of Debian, RHEL, Ubuntu, MacOS, and
Windows configurations, while upon merge into the develop or release branches,
we will build all configurations, and test most of them.
Windows configurations, while upon merge into the develop, release, or master
branches, we will build all configurations, and test most of them.
We will further set additional CMake arguments as follows:
- All builds will have the `tests`, `werr`, and `xrpld` options.
@@ -29,305 +27,169 @@ We will further set additional CMake arguments as follows:
- All release builds will have the `assert` option.
- Certain Debian Bookworm configurations will change the reference fee, enable
codecov, and enable voidstar in PRs.
"""
'''
def generate_strategy_matrix(all: bool, config: Config) -> list:
configurations = []
for architecture, os, build_type, cmake_args in itertools.product(
config.architecture, config.os, config.build_type, config.cmake_args
):
for architecture, os, build_type, cmake_args in itertools.product(config.architecture, config.os, config.build_type, config.cmake_args):
# The default CMake target is 'all' for Linux and MacOS and 'install'
# for Windows, but it can get overridden for certain configurations.
cmake_target = "install" if os["distro_name"] == "windows" else "all"
cmake_target = 'install' if os["distro_name"] == 'windows' else 'all'
# We build and test all configurations by default, except for Windows in
# Debug, because it is too slow, as well as when code coverage is
# enabled as that mode already runs the tests.
build_only = False
if os["distro_name"] == "windows" and build_type == "Debug":
if os['distro_name'] == 'windows' and build_type == 'Debug':
build_only = True
# Only generate a subset of configurations in PRs.
if not all:
# Debian:
# - Bookworm using GCC 13: Release on linux/amd64, set the reference
# fee to 500.
# - Bookworm using GCC 15: Debug on linux/amd64, enable code
# coverage (which will be done below).
# - Bookworm using Clang 16: Debug on linux/arm64, enable voidstar.
# - Bookworm using Clang 17: Release on linux/amd64, set the
# reference fee to 1000.
# - Bookworm using Clang 20: Debug on linux/amd64.
if os["distro_name"] == "debian":
# - Bookworm using GCC 13: Release and Unity on linux/amd64, set
# the reference fee to 500.
# - Bookworm using GCC 15: Debug and no Unity on linux/amd64, enable
# code coverage (which will be done below).
# - Bookworm using Clang 16: Debug and no Unity on linux/arm64,
# enable voidstar.
# - Bookworm using Clang 17: Release and no Unity on linux/amd64,
# set the reference fee to 1000.
# - Bookworm using Clang 20: Debug and Unity on linux/amd64.
if os['distro_name'] == 'debian':
skip = True
if os["distro_version"] == "bookworm":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-13"
and build_type == "Release"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-DUNIT_TEST_REFERENCE_FEE=500 {cmake_args}"
if os['distro_version'] == 'bookworm':
if f'{os['compiler_name']}-{os['compiler_version']}' == 'gcc-13' and build_type == 'Release' and '-Dunity=ON' in cmake_args and architecture['platform'] == 'linux/amd64':
cmake_args = f'-DUNIT_TEST_REFERENCE_FEE=500 {cmake_args}'
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
if f'{os['compiler_name']}-{os['compiler_version']}' == 'gcc-15' and build_type == 'Debug' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'linux/amd64':
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-16"
and build_type == "Debug"
and architecture["platform"] == "linux/arm64"
):
cmake_args = f"-Dvoidstar=ON {cmake_args}"
if f'{os['compiler_name']}-{os['compiler_version']}' == 'clang-16' and build_type == 'Debug' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'linux/arm64':
cmake_args = f'-Dvoidstar=ON {cmake_args}'
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-17"
and build_type == "Release"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-DUNIT_TEST_REFERENCE_FEE=1000 {cmake_args}"
if f'{os['compiler_name']}-{os['compiler_version']}' == 'clang-17' and build_type == 'Release' and '-Dunity=ON' in cmake_args and architecture['platform'] == 'linux/amd64':
cmake_args = f'-DUNIT_TEST_REFERENCE_FEE=1000 {cmake_args}'
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-20"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
if f'{os['compiler_name']}-{os['compiler_version']}' == 'clang-20' and build_type == 'Debug' and '-Dunity=ON' in cmake_args and architecture['platform'] == 'linux/amd64':
skip = False
if skip:
continue
# RHEL:
# - 9 using GCC 12: Debug on linux/amd64.
# - 10 using Clang: Release on linux/amd64.
if os["distro_name"] == "rhel":
# - 9 using GCC 12: Debug and Unity on linux/amd64.
# - 10 using Clang: Release and no Unity on linux/amd64.
if os['distro_name'] == 'rhel':
skip = True
if os["distro_version"] == "9":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
if os['distro_version'] == '9':
if f'{os['compiler_name']}-{os['compiler_version']}' == 'gcc-12' and build_type == 'Debug' and '-Dunity=ON' in cmake_args and architecture['platform'] == 'linux/amd64':
skip = False
elif os["distro_version"] == "10":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-any"
and build_type == "Release"
and architecture["platform"] == "linux/amd64"
):
elif os['distro_version'] == '10':
if f'{os['compiler_name']}-{os['compiler_version']}' == 'clang-any' and build_type == 'Release' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'linux/amd64':
skip = False
if skip:
continue
# Ubuntu:
# - Jammy using GCC 12: Debug on linux/arm64.
# - Noble using GCC 14: Release on linux/amd64.
# - Noble using Clang 18: Debug on linux/amd64.
# - Noble using Clang 19: Release on linux/arm64.
if os["distro_name"] == "ubuntu":
# - Jammy using GCC 12: Debug and no Unity on linux/arm64.
# - Noble using GCC 14: Release and Unity on linux/amd64.
# - Noble using Clang 18: Debug and no Unity on linux/amd64.
# - Noble using Clang 19: Release and Unity on linux/arm64.
if os['distro_name'] == 'ubuntu':
skip = True
if os["distro_version"] == "jammy":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and architecture["platform"] == "linux/arm64"
):
if os['distro_version'] == 'jammy':
if f'{os['compiler_name']}-{os['compiler_version']}' == 'gcc-12' and build_type == 'Debug' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'linux/arm64':
skip = False
elif os["distro_version"] == "noble":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-14"
and build_type == "Release"
and architecture["platform"] == "linux/amd64"
):
elif os['distro_version'] == 'noble':
if f'{os['compiler_name']}-{os['compiler_version']}' == 'gcc-14' and build_type == 'Release' and '-Dunity=ON' in cmake_args and architecture['platform'] == 'linux/amd64':
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-18"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
if f'{os['compiler_name']}-{os['compiler_version']}' == 'clang-18' and build_type == 'Debug' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'linux/amd64':
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-19"
and build_type == "Release"
and architecture["platform"] == "linux/arm64"
):
if f'{os['compiler_name']}-{os['compiler_version']}' == 'clang-19' and build_type == 'Release' and '-Dunity=ON' in cmake_args and architecture['platform'] == 'linux/arm64':
skip = False
if skip:
continue
# MacOS:
# - Debug on macos/arm64.
if os["distro_name"] == "macos" and not (
build_type == "Debug" and architecture["platform"] == "macos/arm64"
):
# - Debug and no Unity on macos/arm64.
if os['distro_name'] == 'macos' and not (build_type == 'Debug' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'macos/arm64'):
continue
# Windows:
# - Release on windows/amd64.
if os["distro_name"] == "windows" and not (
build_type == "Release" and architecture["platform"] == "windows/amd64"
):
# - Release and Unity on windows/amd64.
if os['distro_name'] == 'windows' and not (build_type == 'Release' and '-Dunity=ON' in cmake_args and architecture['platform'] == 'windows/amd64'):
continue
# Additional CMake arguments.
cmake_args = f"{cmake_args} -Dtests=ON -Dwerr=ON -Dxrpld=ON"
if not f"{os['compiler_name']}-{os['compiler_version']}" in [
"gcc-12",
"clang-16",
]:
cmake_args = f"{cmake_args} -Dwextra=ON"
if build_type == "Release":
cmake_args = f"{cmake_args} -Dassert=ON"
cmake_args = f'{cmake_args} -Dtests=ON -Dwerr=ON -Dxrpld=ON'
if not f'{os['compiler_name']}-{os['compiler_version']}' in ['gcc-12', 'clang-16']:
cmake_args = f'{cmake_args} -Dwextra=ON'
if build_type == 'Release':
cmake_args = f'{cmake_args} -Dassert=ON'
# We skip all RHEL on arm64 due to a build failure that needs further
# investigation.
if os["distro_name"] == "rhel" and architecture["platform"] == "linux/arm64":
if os['distro_name'] == 'rhel' and architecture['platform'] == 'linux/arm64':
continue
# We skip all clang 20+ on arm64 due to Boost build error.
if (
f"{os['compiler_name']}-{os['compiler_version']}"
in ["clang-20", "clang-21"]
and architecture["platform"] == "linux/arm64"
):
if f'{os['compiler_name']}-{os['compiler_version']}' in ['clang-20', 'clang-21'] and architecture['platform'] == 'linux/arm64':
continue
# Enable code coverage for Debian Bookworm using GCC 15 in Debug on
# linux/amd64
if (
f"{os['distro_name']}-{os['distro_version']}" == "debian-bookworm"
and f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"{cmake_args} -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0"
# Enable unity build for Ubuntu Jammy using GCC 12 in Debug on
# linux/amd64.
if (
f"{os['distro_name']}-{os['distro_version']}" == "ubuntu-jammy"
and f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"{cmake_args} -Dunity=ON"
# Enable code coverage for Debian Bookworm using GCC 15 in Debug and no
# Unity on linux/amd64
if f'{os['compiler_name']}-{os['compiler_version']}' == 'gcc-15' and build_type == 'Debug' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'linux/amd64':
cmake_args = f'-Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0 {cmake_args}'
# Generate a unique name for the configuration, e.g. macos-arm64-debug
# or debian-bookworm-gcc-12-amd64-release.
config_name = os["distro_name"]
if (n := os["distro_version"]) != "":
config_name += f"-{n}"
if (n := os["compiler_name"]) != "":
config_name += f"-{n}"
if (n := os["compiler_version"]) != "":
config_name += f"-{n}"
config_name += (
f"-{architecture['platform'][architecture['platform'].find('/')+1:]}"
)
config_name += f"-{build_type.lower()}"
if "-Dcoverage=ON" in cmake_args:
config_name += "-coverage"
if "-Dunity=ON" in cmake_args:
config_name += "-unity"
# or debian-bookworm-gcc-12-amd64-release-unity.
config_name = os['distro_name']
if (n := os['distro_version']) != '':
config_name += f'-{n}'
if (n := os['compiler_name']) != '':
config_name += f'-{n}'
if (n := os['compiler_version']) != '':
config_name += f'-{n}'
config_name += f'-{architecture['platform'][architecture['platform'].find('/')+1:]}'
config_name += f'-{build_type.lower()}'
if '-Dunity=ON' in cmake_args:
config_name += '-unity'
# Add the configuration to the list, with the most unique fields first,
# so that they are easier to identify in the GitHub Actions UI, as long
# names get truncated.
# Add Address and Thread (both coupled with UB) sanitizers for specific bookworm distros.
# GCC-Asan rippled-embedded tests are failing because of https://github.com/google/sanitizers/issues/856
if (
os["distro_version"] == "bookworm"
and f"{os['compiler_name']}-{os['compiler_version']}" == "clang-20"
):
# Add ASAN + UBSAN configuration.
configurations.append(
{
"config_name": config_name + "-asan-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "address,undefinedbehavior",
}
)
# TSAN is deactivated due to seg faults with latest compilers.
activate_tsan = False
if activate_tsan:
configurations.append(
{
"config_name": config_name + "-tsan-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "thread,undefinedbehavior",
}
)
else:
configurations.append(
{
"config_name": config_name,
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "",
}
)
configurations.append({
'config_name': config_name,
'cmake_args': cmake_args,
'cmake_target': cmake_target,
'build_only': build_only,
'build_type': build_type,
'os': os,
'architecture': architecture,
})
return configurations
def read_config(file: Path) -> Config:
config = json.loads(file.read_text())
if (
config["architecture"] is None
or config["os"] is None
or config["build_type"] is None
or config["cmake_args"] is None
):
raise Exception("Invalid configuration file.")
if config['architecture'] is None or config['os'] is None or config['build_type'] is None or config['cmake_args'] is None:
raise Exception('Invalid configuration file.')
return Config(**config)
if __name__ == "__main__":
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"-a",
"--all",
help="Set to generate all configurations (generally used when merging a PR) or leave unset to generate a subset of configurations (generally used when committing to a PR).",
action="store_true",
)
parser.add_argument(
"-c",
"--config",
help="Path to the JSON file containing the strategy matrix configurations.",
required=False,
type=Path,
)
parser.add_argument('-a', '--all', help='Set to generate all configurations (generally used when merging a PR) or leave unset to generate a subset of configurations (generally used when committing to a PR).', action="store_true")
parser.add_argument('-c', '--config', help='Path to the JSON file containing the strategy matrix configurations.', required=False, type=Path)
args = parser.parse_args()
matrix = []
if args.config is None or args.config == "":
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "linux.json")
)
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "macos.json")
)
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "windows.json")
)
if args.config is None or args.config == '':
matrix += generate_strategy_matrix(args.all, read_config(THIS_DIR / "linux.json"))
matrix += generate_strategy_matrix(args.all, read_config(THIS_DIR / "macos.json"))
matrix += generate_strategy_matrix(args.all, read_config(THIS_DIR / "windows.json"))
else:
matrix += generate_strategy_matrix(args.all, read_config(args.config))
# Generate the strategy matrix.
print(f"matrix={json.dumps({'include': matrix})}")
print(f'matrix={json.dumps({"include": matrix})}')
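
Based on the argparse definitions above, the generator can be invoked as
follows. The script's filename is an assumption here, since only its directory
(`.github/scripts/strategy-matrix/`) appears elsewhere in this diff:

```shell
# Filename is hypothetical; only the directory is named in this diff.
# Full matrix, as generated on merge:
python .github/scripts/strategy-matrix/generate.py --all
# PR subset for a single platform configuration file:
python .github/scripts/strategy-matrix/generate.py -c .github/scripts/strategy-matrix/linux.json
```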


@@ -15,198 +15,198 @@
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "21",
"image_sha": "ab4d1f0"
"image_sha": "0525eae"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "jammy",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "ab4d1f0"
"image_sha": "e1782cd"
}
],
"build_type": ["Debug", "Release"],
"cmake_args": [""]
"cmake_args": ["-Dunity=OFF", "-Dunity=ON"]
}


@@ -15,5 +15,8 @@
}
],
"build_type": ["Debug", "Release"],
"cmake_args": ["-DCMAKE_POLICY_VERSION_MINIMUM=3.5"]
"cmake_args": [
"-Dunity=OFF -DCMAKE_POLICY_VERSION_MINIMUM=3.5",
"-Dunity=ON -DCMAKE_POLICY_VERSION_MINIMUM=3.5"
]
}


@@ -15,5 +15,5 @@
}
],
"build_type": ["Debug", "Release"],
"cmake_args": [""]
"cmake_args": ["-Dunity=OFF", "-Dunity=ON"]
}


@@ -1,8 +1,7 @@
# This workflow runs all workflows to check, build and test the project on
# various Linux flavors, as well as on MacOS and Windows, on every push to a
# user branch. However, it will not run if the pull request is a draft unless it
# has the 'DraftRunCI' label. For commits to PRs that target a release branch,
# it also uploads the libxrpl recipe to the Conan remote.
# has the 'DraftRunCI' label.
name: PR
on:
@@ -51,15 +50,13 @@ jobs:
files: |
# These paths are unique to `on-pr.yml`.
.github/scripts/levelization/**
.github/scripts/rename/**
.github/workflows/reusable-check-levelization.yml
.github/workflows/reusable-check-rename.yml
.github/workflows/reusable-notify-clio.yml
.github/workflows/on-pr.yml
# Keep the paths below in sync with those in `on-trigger.yml`.
.github/actions/build-deps/**
.github/actions/build-test/**
.github/actions/generate-version/**
.github/actions/setup-conan/**
.github/scripts/strategy-matrix/**
.github/workflows/reusable-build.yml
@@ -67,7 +64,6 @@ jobs:
.github/workflows/reusable-build-test.yml
.github/workflows/reusable-strategy-matrix.yml
.github/workflows/reusable-test.yml
.github/workflows/reusable-upload-recipe.yml
.codecov.yml
cmake/**
conan/**
@@ -102,11 +98,6 @@ jobs:
if: ${{ needs.should-run.outputs.go == 'true' }}
uses: ./.github/workflows/reusable-check-levelization.yml
check-rename:
needs: should-run
if: ${{ needs.should-run.outputs.go == 'true' }}
uses: ./.github/workflows/reusable-check-rename.yml
build-test:
needs: should-run
if: ${{ needs.should-run.outputs.go == 'true' }}
@@ -116,49 +107,26 @@ jobs:
matrix:
os: [linux, macos, windows]
with:
# Enable ccache only for events targeting the XRPLF repository, since
# other accounts will not have access to our remote cache storage.
ccache_enabled: ${{ github.repository_owner == 'XRPLF' }}
os: ${{ matrix.os }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
upload-recipe:
notify-clio:
needs:
- should-run
- build-test
# Only run when committing to a PR that targets a release branch in the
# XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && needs.should-run.outputs.go == 'true' && startsWith(github.ref, 'refs/heads/release') }}
uses: ./.github/workflows/reusable-upload-recipe.yml
if: ${{ needs.should-run.outputs.go == 'true' && (startsWith(github.base_ref, 'release') || github.base_ref == 'master') }}
uses: ./.github/workflows/reusable-notify-clio.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
notify-clio:
needs: upload-recipe
runs-on: ubuntu-latest
steps:
# Notify the Clio repository about the newly proposed release version, so
# it can be checked for compatibility before the release is actually made.
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.CLIO_NOTIFY_TOKEN }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[ref]=${{ needs.upload-recipe.outputs.recipe_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"
clio_notify_token: ${{ secrets.CLIO_NOTIFY_TOKEN }}
conan_remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
conan_remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
passed:
if: failure() || cancelled()
needs:
- check-levelization
- check-rename
- build-test
- upload-recipe
- notify-clio
- check-levelization
runs-on: ubuntu-latest
steps:
- name: Fail


@@ -1,25 +0,0 @@
# This workflow uploads the libxrpl recipe to the Conan remote when a versioned
# tag is pushed.
name: Tag
on:
push:
tags:
- "v*"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload-recipe:
# Only run when a tag is pushed to the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}


@@ -1,7 +1,9 @@
# This workflow runs all workflows to build and test the code on various Linux
# flavors, as well as on MacOS and Windows, on a scheduled basis, on merge into
# the 'develop' or 'release*' branches, or when requested manually. Upon pushes
# to the develop branch it also uploads the libxrpl recipe to the Conan remote.
# This workflow runs all workflows to build the dependencies required for the
# project on various Linux flavors, as well as on MacOS and Windows, on a
# scheduled basis, on merge into the 'develop', 'release', or 'master' branches,
# or manually. The missing commits check is only run when the code is merged
# into the 'develop' or 'release' branches, and the documentation is built when
# the code is merged into the 'develop' branch.
name: Trigger
on:
@@ -9,6 +11,7 @@ on:
branches:
- "develop"
- "release*"
- "master"
paths:
# These paths are unique to `on-trigger.yml`.
- ".github/workflows/on-trigger.yml"
@@ -16,7 +19,6 @@ on:
# Keep the paths below in sync with those in `on-pr.yml`.
- ".github/actions/build-deps/**"
- ".github/actions/build-test/**"
- ".github/actions/generate-version/**"
- ".github/actions/setup-conan/**"
- ".github/scripts/strategy-matrix/**"
- ".github/workflows/reusable-build.yml"
@@ -24,7 +26,6 @@ on:
- ".github/workflows/reusable-build-test.yml"
- ".github/workflows/reusable-strategy-matrix.yml"
- ".github/workflows/reusable-test.yml"
- ".github/workflows/reusable-upload-recipe.yml"
- ".codecov.yml"
- "cmake/**"
- "conan/**"
@@ -67,22 +68,7 @@ jobs:
matrix:
os: [linux, macos, windows]
with:
# Enable ccache only for events targeting the XRPLF repository, since
# other accounts will not have access to our remote cache storage.
# However, we do not enable ccache for events targeting a release branch,
# to protect against the rare case that the output produced by ccache is
# not identical to a regular compilation.
ccache_enabled: ${{ github.repository_owner == 'XRPLF' && !startsWith(github.ref, 'refs/heads/release') }}
os: ${{ matrix.os }}
strategy_matrix: ${{ github.event_name == 'schedule' && 'all' || 'minimal' }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
upload-recipe:
needs: build-test
# Only run when pushing to the develop branch in the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}


@@ -3,15 +3,13 @@ name: Run pre-commit hooks
on:
pull_request:
push:
branches:
- "develop"
- "release*"
branches: [develop, release, master]
workflow_dispatch:
jobs:
# Call the workflow in the XRPLF/actions repo that runs the pre-commit hooks.
run-hooks:
uses: XRPLF/actions/.github/workflows/pre-commit.yml@320be44621ca2a080f05aeb15817c44b84518108
uses: XRPLF/actions/.github/workflows/pre-commit.yml@34790936fae4c6c751f62ec8c06696f9c1a5753a
with:
runs_on: ubuntu-latest
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-ab4d1f0" }'
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-a8c7be1" }'


@@ -22,7 +22,7 @@ defaults:
shell: bash
env:
BUILD_DIR: build
BUILD_DIR: .build
NPROC_SUBTRACT: 2
jobs:
@@ -36,7 +36,7 @@ jobs:
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Get number of processors
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
with:
subtract: ${{ env.NPROC_SUBTRACT }}


@@ -3,6 +3,11 @@ name: Build and test configuration
on:
workflow_call:
inputs:
build_dir:
description: "The directory where to build."
required: true
type: string
build_only:
description: 'Whether to only build or to build and test the code ("true", "false").'
required: true
@@ -10,14 +15,8 @@ on:
build_type:
description: 'The build type to use ("Debug", "Release").'
required: true
type: string
ccache_enabled:
description: "Whether to enable ccache."
required: false
type: boolean
default: false
required: true
cmake_args:
description: "Additional arguments to pass to CMake."
@@ -27,8 +26,8 @@ on:
cmake_target:
description: "The CMake target to build."
required: true
type: string
required: true
runs_on:
description: Runner to run the job on as a JSON string
@@ -51,12 +50,6 @@ on:
type: number
default: 2
sanitizers:
description: "The sanitizers to enable."
required: false
type: string
default: ""
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
@@ -66,11 +59,6 @@ defaults:
run:
shell: bash
env:
# Conan installs the generators in the build/generators directory, see the
# layout() method in conanfile.py. We then run CMake from the build directory.
BUILD_DIR: build
jobs:
build-and-test:
name: ${{ inputs.config_name }}
@@ -78,72 +66,47 @@ jobs:
container: ${{ inputs.image != '' && inputs.image || null }}
timeout-minutes: 60
env:
# Use a namespace to keep the objects separate for each configuration.
CCACHE_NAMESPACE: ${{ inputs.config_name }}
# Ccache supports both Redis and HTTP endpoints.
# * For Redis, use the following format: redis://ip:port, see
# https://github.com/ccache/ccache/wiki/Redis-storage. Note that TLS is
# not directly supported by ccache, and requires use of a proxy.
# * For HTTP use the following format: http://ip:port/cache when using
# nginx as backend or http://ip:port|layout=bazel when using Bazel
# Remote Cache, see https://github.com/ccache/ccache/wiki/HTTP-storage.
# Note that HTTPS is not directly supported by ccache.
CCACHE_REMOTE_ONLY: true
CCACHE_REMOTE_STORAGE: http://cache.dev.ripplex.io:8080|layout=bazel
# Ignore the creation and modification timestamps on files, since the
# header files are copied into separate directories by CMake, which will
# otherwise result in cache misses.
CCACHE_SLOPPINESS: include_file_ctime,include_file_mtime
# Determine if coverage and voidstar should be enabled.
COVERAGE_ENABLED: ${{ contains(inputs.cmake_args, '-Dcoverage=ON') }}
VOIDSTAR_ENABLED: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }}
SANITIZERS_ENABLED: ${{ inputs.sanitizers != '' }}
ENABLED_VOIDSTAR: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }}
ENABLED_COVERAGE: ${{ contains(inputs.cmake_args, '-Dcoverage=ON') }}
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
uses: XRPLF/actions/.github/actions/cleanup-workspace@01b244d2718865d427b499822fbd3f15e7197fcc
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
uses: XRPLF/actions/.github/actions/prepare-runner@99685816bb60a95a66852f212f382580e180df3a
with:
enable_ccache: ${{ inputs.ccache_enabled }}
- name: Set ccache log file
if: ${{ inputs.ccache_enabled && runner.debug == '1' }}
run: echo "CCACHE_LOGFILE=${{ runner.temp }}/ccache.log" >> "${GITHUB_ENV}"
disable_ccache: false
- name: Print build environment
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
with:
subtract: ${{ inputs.nproc_subtract }}
- name: Setup Conan
env:
SANITIZERS: ${{ inputs.sanitizers }}
uses: ./.github/actions/setup-conan
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_dir: ${{ inputs.build_dir }}
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ inputs.build_type }}
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
sanitizers: ${{ inputs.sanitizers }}
- name: Configure CMake
working-directory: ${{ env.BUILD_DIR }}
working-directory: ${{ inputs.build_dir }}
env:
BUILD_TYPE: ${{ inputs.build_type }}
SANITIZERS: ${{ inputs.sanitizers }}
CMAKE_ARGS: ${{ inputs.cmake_args }}
run: |
cmake \
@@ -154,7 +117,7 @@ jobs:
..
- name: Build the binary
working-directory: ${{ env.BUILD_DIR }}
working-directory: ${{ inputs.build_dir }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
BUILD_TYPE: ${{ inputs.build_type }}
@@ -166,30 +129,23 @@ jobs:
--parallel "${BUILD_NPROC}" \
--target "${CMAKE_TARGET}"
- name: Show ccache statistics
if: ${{ inputs.ccache_enabled }}
run: |
ccache --show-stats -vv
if [ '${{ runner.debug }}' = '1' ]; then
cat "${CCACHE_LOGFILE}"
curl ${CCACHE_REMOTE_STORAGE%|*}/status || true
fi
- name: Upload the binary (Linux)
- name: Upload rippled artifact (Linux)
if: ${{ github.repository_owner == 'XRPLF' && runner.os == 'Linux' }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
env:
BUILD_DIR: ${{ inputs.build_dir }}
with:
name: xrpld-${{ inputs.config_name }}
path: ${{ env.BUILD_DIR }}/xrpld
name: rippled-${{ inputs.config_name }}
path: ${{ env.BUILD_DIR }}/rippled
retention-days: 3
if-no-files-found: error
- name: Check linking (Linux)
if: ${{ runner.os == 'Linux' && env.SANITIZERS_ENABLED == 'false' }}
working-directory: ${{ env.BUILD_DIR }}
if: ${{ runner.os == 'Linux' }}
working-directory: ${{ inputs.build_dir }}
run: |
ldd ./xrpld
if [ "$(ldd ./xrpld | grep -E '(libstdc\+\+|libgcc)' | wc -l)" -eq 0 ]; then
ldd ./rippled
if [ "$(ldd ./rippled | grep -E '(libstdc\+\+|libgcc)' | wc -l)" -eq 0 ]; then
echo 'The binary is statically linked.'
else
echo 'The binary is dynamically linked.'
@@ -197,22 +153,14 @@ jobs:
fi
- name: Verify presence of instrumentation (Linux)
if: ${{ runner.os == 'Linux' && env.VOIDSTAR_ENABLED == 'true' }}
working-directory: ${{ env.BUILD_DIR }}
if: ${{ runner.os == 'Linux' && env.ENABLED_VOIDSTAR == 'true' }}
working-directory: ${{ inputs.build_dir }}
run: |
./xrpld --version | grep libvoidstar
- name: Set sanitizer options
if: ${{ !inputs.build_only && env.SANITIZERS_ENABLED == 'true' }}
run: |
echo "ASAN_OPTIONS=print_stacktrace=1:detect_container_overflow=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=second_deadlock_stack=1:halt_on_error=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
./rippled --version | grep libvoidstar
- name: Run the separate tests
if: ${{ !inputs.build_only }}
working-directory: ${{ env.BUILD_DIR }}
working-directory: ${{ inputs.build_dir }}
# Windows locks some of the build files while running tests, and parallel jobs can collide
env:
BUILD_TYPE: ${{ inputs.build_type }}
@@ -225,11 +173,11 @@ jobs:
- name: Run the embedded tests
if: ${{ !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', env.BUILD_DIR, inputs.build_type) || env.BUILD_DIR }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', inputs.build_dir, inputs.build_type) || inputs.build_dir }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
run: |
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}"
./rippled --unittest --unittest-jobs "${BUILD_NPROC}"
- name: Debug failure (Linux)
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
@@ -240,8 +188,8 @@ jobs:
netstat -an
- name: Prepare coverage report
if: ${{ !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
working-directory: ${{ env.BUILD_DIR }}
if: ${{ !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
working-directory: ${{ inputs.build_dir }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
BUILD_TYPE: ${{ inputs.build_type }}
@@ -253,13 +201,13 @@ jobs:
--target coverage
- name: Upload coverage report
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.COVERAGE_ENABLED == 'true' }}
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
disable_search: true
disable_telem: true
fail_ci_if_error: true
files: ${{ env.BUILD_DIR }}/coverage.xml
files: ${{ inputs.build_dir }}/coverage.xml
plugins: noop
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true


@@ -8,24 +8,21 @@ name: Build and test
on:
workflow_call:
inputs:
ccache_enabled:
description: "Whether to enable ccache."
build_dir:
description: "The directory where to build."
required: false
type: boolean
default: false
type: string
default: ".build"
os:
description: 'The operating system to use for the build ("linux", "macos", "windows").'
required: true
type: string
strategy_matrix:
# TODO: Support additional strategies, e.g. "ubuntu" for generating all Ubuntu configurations.
description: 'The strategy matrix to use for generating the configurations ("minimal", "all").'
required: false
type: string
default: "minimal"
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
@@ -49,14 +46,13 @@ jobs:
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
max-parallel: 10
with:
build_dir: ${{ inputs.build_dir }}
build_only: ${{ matrix.build_only }}
build_type: ${{ matrix.build_type }}
ccache_enabled: ${{ inputs.ccache_enabled }}
cmake_args: ${{ matrix.cmake_args }}
cmake_target: ${{ matrix.cmake_target }}
runs_on: ${{ toJSON(matrix.architecture.runner) }}
image: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || '' }}
config_name: ${{ matrix.config_name }}
sanitizers: ${{ matrix.sanitizers }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}


@@ -25,7 +25,7 @@ jobs:
env:
MESSAGE: |
The dependency relationships between the modules in xrpld have
The dependency relationships between the modules in rippled have
changed, which may be an improvement or a regression.
A rule of thumb is that if your changes caused something to be


@@ -1,54 +0,0 @@
# This workflow checks if the codebase is properly renamed, see more info in
# .github/scripts/rename/README.md.
name: Check rename
# This workflow can only be triggered by other workflows.
on: workflow_call
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-rename
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
rename:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Check definitions
run: .github/scripts/rename/definitions.sh .
- name: Check copyright notices
run: .github/scripts/rename/copyright.sh .
- name: Check CMake configs
run: .github/scripts/rename/cmake.sh .
- name: Check binary name
run: .github/scripts/rename/binary.sh .
- name: Check namespaces
run: .github/scripts/rename/namespace.sh .
- name: Check config name
run: .github/scripts/rename/config.sh .
- name: Check include guards
run: .github/scripts/rename/include.sh .
- name: Check for differences
env:
MESSAGE: |
One or more files contain changes that do not adhere to new naming
conventions.
Run the scripts in '.github/scripts/rename/' in your repo, commit
and push the changes. See .github/scripts/rename/README.md for
more info.
run: |
DIFF=$(git status --porcelain)
if [ -n "${DIFF}" ]; then
# Print the differences to give the contributor a hint about what to
# expect when running the renaming scripts on their own machine.
git diff
echo "${MESSAGE}"
exit 1
fi


@@ -0,0 +1,91 @@
# This workflow exports the built libxrpl package to the Conan remote on a
# channel named after the pull request, and notifies the Clio repository about
# the new version so it can check for compatibility.
name: Notify Clio
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
conan_remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
conan_remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
clio_notify_token:
description: "The GitHub token to notify Clio about new versions."
required: true
conan_remote_username:
description: "The username for logging into the Conan remote."
required: true
conan_remote_password:
description: "The password for logging into the Conan remote."
required: true
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-clio
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
if: ${{ github.event.pull_request.head.repo.full_name == github.repository }}
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate outputs
id: generate
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
run: |
echo 'Generating user and channel.'
echo "user=clio" >> "${GITHUB_OUTPUT}"
echo "channel=pr_${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
echo 'Extracting version.'
echo "version=$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')" >> "${GITHUB_OUTPUT}"
- name: Calculate conan reference
id: conan_ref
run: |
echo "conan_ref=${{ steps.generate.outputs.version }}@${{ steps.generate.outputs.user }}/${{ steps.generate.outputs.channel }}" >> "${GITHUB_OUTPUT}"
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
conan_remote_name: ${{ inputs.conan_remote_name }}
conan_remote_url: ${{ inputs.conan_remote_url }}
- name: Log into Conan remote
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.conan_remote_username }}" --password "${{ secrets.conan_remote_password }}"
- name: Upload package
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: |
conan export --user=${{ steps.generate.outputs.user }} --channel=${{ steps.generate.outputs.channel }} .
conan upload --confirm --check --remote="${CONAN_REMOTE_NAME}" xrpl/${{ steps.conan_ref.outputs.conan_ref }}
outputs:
conan_ref: ${{ steps.conan_ref.outputs.conan_ref }}
notify:
needs: upload
runs-on: ubuntu-latest
steps:
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.clio_notify_token }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[conan_ref]=${{ needs.upload.outputs.conan_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"


@@ -1,97 +0,0 @@
# This workflow exports the built libxrpl package to the Conan remote.
name: Upload Conan recipe
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
remote_username:
description: "The username for logging into the Conan remote."
required: true
remote_password:
description: "The password for logging into the Conan remote."
required: true
outputs:
recipe_ref:
description: "The Conan recipe reference ('name/version') that was uploaded."
value: ${{ jobs.upload.outputs.ref }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-upload-recipe
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate build version number
id: version
uses: ./.github/actions/generate-version
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
remote_name: ${{ inputs.remote_name }}
remote_url: ${{ inputs.remote_url }}
- name: Log into Conan remote
env:
REMOTE_NAME: ${{ inputs.remote_name }}
REMOTE_USERNAME: ${{ secrets.remote_username }}
REMOTE_PASSWORD: ${{ secrets.remote_password }}
run: conan remote login "${REMOTE_NAME}" "${REMOTE_USERNAME}" --password "${REMOTE_PASSWORD}"
- name: Upload Conan recipe (version)
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=${{ steps.version.outputs.version }}
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/${{ steps.version.outputs.version }}
- name: Upload Conan recipe (develop)
if: ${{ github.ref == 'refs/heads/develop' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=develop
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/develop
- name: Upload Conan recipe (rc)
if: ${{ startsWith(github.ref, 'refs/heads/release') }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=rc
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/rc
- name: Upload Conan recipe (release)
if: ${{ github.event_name == 'tag' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=release
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/release
outputs:
ref: xrpl/${{ steps.version.outputs.version }}


@@ -64,43 +64,41 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
uses: XRPLF/actions/.github/actions/cleanup-workspace@01b244d2718865d427b499822fbd3f15e7197fcc
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
uses: XRPLF/actions/.github/actions/prepare-runner@99685816bb60a95a66852f212f382580e180df3a
with:
enable_ccache: false
disable_ccache: false
- name: Print build environment
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
with:
subtract: ${{ env.NPROC_SUBTRACT }}
- name: Setup Conan
env:
SANITIZERS: ${{ matrix.sanitizers }}
uses: ./.github/actions/setup-conan
with:
remote_name: ${{ env.CONAN_REMOTE_NAME }}
remote_url: ${{ env.CONAN_REMOTE_URL }}
conan_remote_name: ${{ env.CONAN_REMOTE_NAME }}
conan_remote_url: ${{ env.CONAN_REMOTE_URL }}
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_dir: .build
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ matrix.build_type }}
force_build: ${{ github.event_name == 'schedule' || github.event.inputs.force_source_build == 'true' }}
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
sanitizers: ${{ matrix.sanitizers }}
- name: Log into Conan remote
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}

.gitignore vendored

@@ -1,48 +1,69 @@
# .gitignore
# cspell: disable
# Macintosh Desktop Services Store files.
bin/boostbook_catalog.xml
bin/config.log
bin/project-cache.jam
# Ignore vim swap files.
*.swp
# Ignore SCons support files.
.sconsign.dblite
# Ignore python compiled files.
*.pyc
# Ignore Macintosh Desktop Services Store files.
.DS_Store
# Build, intermediate, and temporary artifacts.
# Ignore backup/temps
*~
*.o
*.pdb
*.swp
/.clangd
Debug/
Release/
/.build/
/build/
/db/
/out.txt
/Testing/
/tmp/
CMakeSettings.json
CMakeUserPresets.json
# Coverage files.
# Ignore object files.
*.o
.nih_c
tags
TAGS
GTAGS
GRTAGS
GPATH
bin/rippled
Debug/*.*
Release/*.*
# Ignore coverage files.
*.gcno
*.gcda
*.gcov
# Profiling data.
gmon.out
# Levelization data.
# Levelization checking
.github/scripts/levelization/results/*
!.github/scripts/levelization/results/loops.txt
!.github/scripts/levelization/results/ordering.txt
# Customized configs.
/rippled.cfg
/xrpld.cfg
/validators.txt
# Ignore tmp directory.
tmp
# Locally patched Conan recipes
external/conan-center-index/
# Ignore database directory.
db/
db/*.db
db/*.db-*
# XCode IDE.
# Ignore debug logs
debug_log.txt
# Ignore customized configs
rippled.cfg
validators.txt
# Doxygen generated documentation output
HtmlDocumentation
docs/html_doc
# Xcode user-specific project settings
# Xcode
.DS_Store
/build/
*.pbxuser
!default.pbxuser
*.mode1v3
@@ -55,19 +76,38 @@ xcuserdata
profile
*.moved-aside
DerivedData
.idea/
*.hmap
# JetBrains IDE.
/.idea/
# Intel Parallel Studio 2013 XE
My Amplifier XE Results - RippleD
# Microsoft Visual Studio IDE.
/.vs/
/.vscode/
# Compiler intermediate output
/out.txt
# zed IDE.
/.zed/
# Build Log
rippled-build.log
# AI tools.
/.augment
/.claude
/CLAUDE.md
# Profiling data
gmon.out
Builds/VisualStudio2015/*.db
Builds/VisualStudio2015/*.user
Builds/VisualStudio2015/*.opendb
Builds/VisualStudio2015/*.sdf
# MSVC
*.pdb
.vs/
CMakeSettings.json
compile_commands.json
.clangd
packages
pkg_out
pkg
CMakeUserPresets.json
bld.rippled/
.vscode
# Suggested in-tree build directory
/.build*/


@@ -26,37 +26,11 @@ repos:
args: [--style=file]
"types_or": [c++, c, proto]
- repo: https://github.com/cheshirekow/cmake-format-precommit
rev: e2c2116d86a80e72e7146a06e68b7c228afc6319 # frozen: v0.6.13
hooks:
- id: cmake-format
additional_dependencies: [PyYAML]
- repo: https://github.com/rbubley/mirrors-prettier
rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
hooks:
- id: prettier
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 831207fd435b47aeffdf6af853097e64322b4d44 # frozen: v25.12.0
hooks:
- id: black
- repo: https://github.com/streetsidesoftware/cspell-cli
rev: 1cfa010f078c354f3ffb8413616280cc28f5ba21 # frozen: v9.4.0
hooks:
- id: cspell # Spell check changed files
exclude: .config/cspell.config.yaml
- id: cspell # Spell check the commit message
name: check commit message spelling
args:
- --no-must-find-files
- --no-progress
- --no-summary
- --files
- .git/COMMIT_EDITMSG
stages: [commit-msg]
exclude: |
(?x)^(
external/.*|


@@ -6,85 +6,90 @@ For info about how [API versioning](https://xrpl.org/request-formatting.html#api
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 3 (Beta)
API version 3 is currently a beta API. Using it requires enabling `[beta_rpc_api]` in the rippled configuration. See [API-VERSION-3.md](API-VERSION-3.md) for the full list of changes in API version 3.
## API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. See [API-VERSION-2.md](API-VERSION-2.md) for the full list of changes in API version 2.
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
#### Removed methods
In API version 2, the following deprecated methods are no longer available: (https://github.com/XRPLF/rippled/pull/4759)
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`; a request sketch follows this list.
- `ledger_header` - Instead, use the `ledger` method.
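As an illustration, here is a hypothetical replacement call for `tx_history`; the `curl` transport and the `localhost:5005` endpoint are assumptions, not part of this changelog:

```bash
# Fetch the transactions of the latest validated ledger via the ledger method.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{"method": "ledger", "params": [{"api_version": 2, "ledger_index": "validated", "transactions": true}]}'
```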
#### Modifications to JSON transaction element in V2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. (https://github.com/XRPLF/rippled/pull/4775)
This helps to unify the JSON serialization format of transactions. (https://github.com/XRPLF/clio/issues/722, https://github.com/XRPLF/rippled/issues/4727)
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
#### Modification to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. (https://github.com/XRPLF/rippled/pull/4733) A request sketch follows the list below.
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: (https://github.com/XRPLF/rippled/pull/4733)
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
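A minimal sketch of a Payment submission using `DeliverMax` under API version 2; the endpoint, account addresses, and seed are placeholders:

```bash
# Hypothetical sign-and-submit request; rippled autofills Fee and Sequence.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "submit",
    "params": [{
      "api_version": 2,
      "secret": "s_PLACEHOLDER_SEED",
      "tx_json": {
        "TransactionType": "Payment",
        "Account": "rSenderPlaceholder",
        "Destination": "rRecipientPlaceholder",
        "DeliverMax": "1000000"
      }
    }]
  }'
```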
#### Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) (https://github.com/XRPLF/rippled/pull/3770)
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. (https://github.com/XRPLF/rippled/pull/4585)
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
#### Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min`, `ledger_index_max`, and `ledger_index` returns `invalidParams` because if you use `ledger_index_min` or `ledger_index_max`, then it does not make sense to also specify `ledger_index`. In API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4571)
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. (https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579)
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/issues/4288)
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
#### Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
## API Version 1
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
## XRP Ledger server version 3.1.0
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
[Version 3.1.0](https://github.com/XRPLF/rippled/releases/tag/3.1.0) was released on Jan 27, 2026.
### Inconsistency: server_info - network_id
### Additions in 3.1.0
- `vault_info`: New RPC method to retrieve information about a specific vault (part of XLS-66 Lending Protocol). ([#6156](https://github.com/XRPLF/rippled/pull/6156))
## XRP Ledger server version 3.0.0
[Version 3.0.0](https://github.com/XRPLF/rippled/releases/tag/3.0.0) was released on Dec 9, 2025.
### Additions in 3.0.0
- `ledger_entry`: Supports all ledger entry types with dedicated parsers. ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `ledger_entry`: New error codes `entryNotFound` and `unexpectedLedgerType` for more specific error handling. ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `ledger_entry`: Improved error messages with more context (e.g., specifying which field is invalid or missing). ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `ledger_entry`: Assorted bug fixes in RPC processing. ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `simulate`: Supports additional metadata in the response. ([#5754](https://github.com/XRPLF/rippled/pull/5754))
## XRP Ledger server version 2.6.2
[Version 2.6.2](https://github.com/XRPLF/rippled/releases/tag/2.6.2) was released on Nov 19, 2025.
This release contains bug fixes only and no API changes.
## XRP Ledger server version 2.6.1
[Version 2.6.1](https://github.com/XRPLF/rippled/releases/tag/2.6.1) was released on Sep 30, 2025.
This release contains bug fixes only and no API changes.
## XRP Ledger server version 2.6.0
[Version 2.6.0](https://github.com/XRPLF/rippled/releases/tag/2.6.0) was released on Aug 27, 2025.
### Additions in 2.6.0
- `account_info`: Added `allowTrustLineLocking` flag in response. ([#5525](https://github.com/XRPLF/rippled/pull/5525))
- `ledger`: Removed the type filter from the RPC command. ([#4934](https://github.com/XRPLF/rippled/pull/4934))
- `subscribe` (`validations` stream): `network_id` is now included. ([#5579](https://github.com/XRPLF/rippled/pull/5579))
- `subscribe` (`transactions` stream): `nftoken_id`, `nftoken_ids`, and `offer_id` are now included in transaction metadata. ([#5230](https://github.com/XRPLF/rippled/pull/5230))
## XRP Ledger server version 2.5.1
[Version 2.5.1](https://github.com/XRPLF/rippled/releases/tag/2.5.1) was released on Sep 17, 2025.
This release contains bug fixes only and no API changes.
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.5.0
[Version 2.5.0](https://github.com/XRPLF/rippled/releases/tag/2.5.0) was released on Jun 24, 2025.
As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.5.0
- `tx`: Added `ctid` field to the response and improved error handling. ([#4738](https://github.com/XRPLF/rippled/pull/4738))
- `ledger_entry`: Improved error messages in `permissioned_domain`. ([#5344](https://github.com/XRPLF/rippled/pull/5344))
- `simulate`: Improved multi-sign usage. ([#5479](https://github.com/XRPLF/rippled/pull/5479))
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled. ([#5385](https://github.com/XRPLF/rippled/pull/5385))
- `subscribe` (admin): Removed webhook queue limit to prevent dropping notifications; reduced HTTP timeout from 10 minutes to 30 seconds. ([#5163](https://github.com/XRPLF/rippled/pull/5163))
- `ledger_data` (gRPC): Fixed crashing issue with some invalid markers. ([#5137](https://github.com/XRPLF/rippled/pull/5137))
- `account_lines`: Fixed error with `no_ripple` and `no_ripple_peer` sometimes showing up incorrectly. ([#5345](https://github.com/XRPLF/rippled/pull/5345))
- `account_tx`: Fixed issue with incorrect CTIDs. ([#5408](https://github.com/XRPLF/rippled/pull/5408))
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled.
## XRP Ledger server version 2.4.0
@@ -92,19 +97,11 @@ This release contains bug fixes only and no API changes.
### Additions and bugfixes in 2.4.0
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate). ([#5069](https://github.com/XRPLF/rippled/pull/5069))
- Signing methods (`sign`, `sign_for`, `submit`): Autofill fees better, properly handle transactions without a base fee, and autofill the `NetworkID` field. ([#5069](https://github.com/XRPLF/rippled/pull/5069))
- `ledger_entry`: `state` is added as an alias for `ripple_state`. ([#5199](https://github.com/XRPLF/rippled/pull/5199))
- `ledger`, `ledger_data`, `account_objects`: Support filtering ledger entry types by their canonical names (case-insensitive). ([#5271](https://github.com/XRPLF/rippled/pull/5271))
- `validators`: Added new field `validator_list_threshold` in response. ([#5112](https://github.com/XRPLF/rippled/pull/5112))
- `server_info`: Added git commit hash info on admin connection. ([#5225](https://github.com/XRPLF/rippled/pull/5225))
- `server_definitions`: Changed larger `UInt` serialized types to `Hash`. ([#5231](https://github.com/XRPLF/rippled/pull/5231))
## XRP Ledger server version 2.3.1
[Version 2.3.1](https://github.com/XRPLF/rippled/releases/tag/2.3.1) was released on Jan 29, 2025.
This release contains bug fixes only and no API changes.
- `ledger_entry`: `state` is added as an alias for `ripple_state`.
- `ledger_entry`: Enables case-insensitive filtering by canonical name in addition to case-sensitive filtering by RPC name.
- `validators`: Added new field `validator_list_threshold` in response.
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate)
- Signing methods autofill fees better and properly handle transactions that don't have a base fee, and will also autofill the `NetworkID` field.
## XRP Ledger server version 2.3.0
@@ -112,30 +109,19 @@ This release contains bug fixes only and no API changes.
### Breaking changes in 2.3.0
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs). Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
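For example (hypothetical endpoint; the ledger index is arbitrary):

```bash
# Returns ledgerNotFound if the node does not have this ledger,
# rather than attempting to acquire it from the p2p network.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{"method": "book_changes", "params": [{"ledger_index": 12345678}]}'
```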
### Additions and bugfixes in 2.3.0
- `book_changes`: Returns a `validated` field in its response. ([#5096](https://github.com/XRPLF/rippled/pull/5096))
- `book_changes`: Accepts shortcut strings (`current`, `closed`, `validated`) for the `ledger_index` parameter. ([#5096](https://github.com/XRPLF/rippled/pull/5096))
- `server_definitions`: Include `index` in response. ([#5190](https://github.com/XRPLF/rippled/pull/5190))
- `account_nfts`: Fix issue where unassociated marker would return incorrect results. ([#5045](https://github.com/XRPLF/rippled/pull/5045))
- `account_objects`: Fix issue where invalid marker would not return an error. ([#5046](https://github.com/XRPLF/rippled/pull/5046))
- `account_objects`: Disallow filtering by ledger entry types that an account cannot hold. ([#5056](https://github.com/XRPLF/rippled/pull/5056))
- `tx`: Allow lowercase CTID. ([#5049](https://github.com/XRPLF/rippled/pull/5049))
- `feature`: Better error handling for invalid values of `feature`. ([#5063](https://github.com/XRPLF/rippled/pull/5063))
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
## XRP Ledger server version 2.2.0
[Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive):
- `feature`: Add a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.1
[Version 2.0.1](https://github.com/XRPLF/rippled/releases/tag/2.0.1) was released on Jan 29, 2024. The following additions are non-breaking:
- `path_find`: Fixes unbounded memory growth. ([#4822](https://github.com/XRPLF/rippled/pull/4822))
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.0
@@ -143,18 +129,24 @@ This release contains bug fixes only and no API changes.
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting). The full list of changes is in [API-VERSION-2.md](API-VERSION-2.md).
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting).
## XRP Ledger server version 2.2.0
The following is a non-breaking addition to the API.
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive):
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive).
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. ([#4427](https://github.com/XRPLF/rippled/pull/4427))
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. (https://github.com/XRPLF/rippled/pull/4427)
- `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port.
- This allows crawlers to build a more detailed topology without needing to port-scan nodes.
- (For peers and other non-admin clients, the info about admin ports is excluded.)
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). ([#4553](https://github.com/XRPLF/rippled/pull/4553))
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback`. ([#4617](https://github.com/XRPLF/rippled/pull/4617))
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). (https://github.com/XRPLF/rippled/pull/4553)
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback` (https://github.com/XRPLF/rippled/pull/4617)
- Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well.
- Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks.
- Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields:
@@ -189,16 +181,16 @@ This release contains bug fixes only and no API changes.
### Breaking changes in 1.11
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. ([#4291](https://github.com/XRPLF/rippled/pull/4291))
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. (https://github.com/XRPLF/rippled/pull/4291)
- The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`.
- Removed the acceptance of seeds or public keys in place of account addresses. ([#4404](https://github.com/XRPLF/rippled/pull/4404))
- Removed the acceptance of seeds or public keys in place of account addresses. (https://github.com/XRPLF/rippled/pull/4404)
- This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network).
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. ([#4398](https://github.com/XRPLF/rippled/pull/4398))
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. (https://github.com/XRPLF/rippled/pull/4398)
- If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406))
- There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence.
- This is potentially a breaking change if clients have logic for determining whether an account can be deleted.
- NetworkID
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. ([#4370](https://github.com/XRPLF/rippled/pull/4370))
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. (https://github.com/XRPLF/rippled/pull/4370)
- This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher.
- The field must be omitted for Mainnet, so there is no change for Mainnet users.
- There are three new local error codes:
@@ -208,10 +200,10 @@ This release contains bug fixes only and no API changes.
### Additions and bug fixes in 1.11
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. ([#4447](https://github.com/XRPLF/rippled/pull/4447))
- Added an `account_flags` object to the `account_info` method response. ([#4459](https://github.com/XRPLF/rippled/pull/4459))
- Added `NFTokenPages` to the `account_objects` RPC. ([#4352](https://github.com/XRPLF/rippled/pull/4352))
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. ([#4361](https://github.com/XRPLF/rippled/pull/4361))
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. (https://github.com/XRPLF/rippled/pull/4447)
- Added an `account_flags` object to the `account_info` method response. (https://github.com/XRPLF/rippled/pull/4459)
- Added `NFTokenPages` to the `account_objects` RPC. (https://github.com/XRPLF/rippled/pull/4352)
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. (https://github.com/XRPLF/rippled/pull/4361)
## XRP Ledger server version 1.10.0


@@ -1,66 +0,0 @@
# API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
## Removed methods
In API version 2, the following deprecated methods are no longer available: ([#4759](https://github.com/XRPLF/rippled/pull/4759))
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
## Modifications to JSON transaction element in API version 2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. ([#4775](https://github.com/XRPLF/rippled/pull/4775))
This helps to unify the JSON serialization format of transactions. ([clio#722](https://github.com/XRPLF/clio/issues/722), [#4727](https://github.com/XRPLF/rippled/issues/4727))
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
## Modifications to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. ([#4733](https://github.com/XRPLF/rippled/pull/4733))
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: ([#4733](https://github.com/XRPLF/rippled/pull/4733))
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
## Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) ([#3770](https://github.com/XRPLF/rippled/pull/3770))
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. ([#4585](https://github.com/XRPLF/rippled/pull/4585))
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
## Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min`, `ledger_index_max`, and `ledger_index` returns `invalidParams` because if you use `ledger_index_min` or `ledger_index_max`, then it does not make sense to also specify `ledger_index`. In API version 1, no error was returned. ([#4571](https://github.com/XRPLF/rippled/pull/4571))
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. ([#4545](https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579))
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. ([#4288](https://github.com/XRPLF/rippled/issues/4288))
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. ([#4620](https://github.com/XRPLF/rippled/pull/4620))
## Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. ([#4620](https://github.com/XRPLF/rippled/pull/4620))


@@ -1,27 +0,0 @@
# API Version 3
API version 3 is currently a **beta API**. It requires enabling `[beta_rpc_api]` in the rippled configuration. To use this API, clients specify `"api_version" : 3` in each request.
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
## Breaking Changes
### Modifications to `amm_info`
The order of error checks has been changed to provide more specific error messages. ([#4924](https://github.com/XRPLF/rippled/pull/4924))
- **Before (API v2)**: When sending an invalid account or asset to `amm_info` while other parameters are not set as expected, the method returns a generic `rpcINVALID_PARAMS` error.
- **After (API v3)**: The same scenario returns a more specific error: `rpcISSUE_MALFORMED` for malformed assets or `rpcACT_MALFORMED` for malformed accounts.
### Modifications to `ledger_entry`
Added support for string shortcuts to look up fixed-location ledger entries using the `"index"` parameter. ([#5644](https://github.com/XRPLF/rippled/pull/5644))
In API version 3, the following string values can be used with the `"index"` parameter:
- `"index": "amendments"` - Returns the `Amendments` ledger entry
- `"index": "fee"` - Returns the `FeeSettings` ledger entry
- `"index": "nunl"` - Returns the `NegativeUNL` ledger entry
- `"index": "hashes"` - Returns the "short" `LedgerHashes` ledger entry (recent ledger hashes)
These shortcuts are only available in API version 3 and later. In API versions 1 and 2, these string values would result in an error.
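A sketch of one such request (hypothetical endpoint; requires `[beta_rpc_api]` to be enabled):

```bash
# Look up the Amendments ledger entry via the API v3 "index" shortcut.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{"method": "ledger_entry", "params": [{"api_version": 3, "index": "amendments", "ledger_index": "validated"}]}'
```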

BUILD.md

@@ -1,5 +1,5 @@
| :warning: **WARNING** :warning: |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| :warning: **WARNING** :warning:
|---|
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
@@ -10,7 +10,7 @@
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/XRPLF/rippled/releases).
releases](https://github.com/ripple/rippled/releases).
```bash
git checkout master
@@ -33,7 +33,7 @@ git checkout develop
See [System Requirements](https://xrpl.org/system-requirements.html).
Building xrpld generally requires git, Python, Conan, CMake, and a C++
Building rippled generally requires git, Python, Conan, CMake, and a C++
compiler. Some guidance on setting up such a [C++ development environment can be
found here](./docs/build/environment.md).
@@ -45,7 +45,7 @@ found here](./docs/build/environment.md).
It is possible to build with Conan 1.60+, but the instructions are
significantly different, which is why we are not recommending it.
`xrpld` is written in the C++20 dialect and includes the `<concepts>` header.
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
@@ -66,7 +66,7 @@ Linux](./docs/build/environment.md#linux).
### Mac
Many xrpld engineers use macOS for development.
Many rippled engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on
macOS](./docs/build/environment.md#macos).
@@ -88,13 +88,6 @@ These instructions assume a basic familiarity with Conan and CMake. If you are
unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official
[Getting Started][3] walkthrough.
#### Conan lockfile
To achieve reproducible dependencies, we use a [Conan lockfile](https://docs.conan.io/2/tutorial/versioning/lockfiles.html),
which has to be updated every time dependencies change.
Please see the [instructions on how to regenerate the lockfile](conan/lockfile/README.md).
#### Default profile
We recommend that you import the provided `conan/profiles/default` profile:
@@ -126,7 +119,7 @@ default profile.
### Patched recipes
The recipes in Conan Center occasionally need to be patched for compatibility
with the latest version of `xrpld`. We maintain a fork of the Conan Center
with the latest version of `rippled`. We maintain a fork of the Conan Center
[here](https://github.com/XRPLF/conan-center-index/) containing the patches.
To ensure our patched recipes are used, you must add our Conan remote at a
@@ -141,42 +134,17 @@ Alternatively, you can pull the patched recipes into the repository and use them
locally:
```bash
# Extract the version number from the lockfile.
function extract_version {
version=$(cat conan.lock | sed -nE "s@.+${1}/(.+)#.+@\1@p" | head -n1)
echo ${version}
}
# Define which recipes to export.
recipes=('ed25519' 'grpc' 'nudb' 'openssl' 'secp256k1' 'snappy' 'soci')
folders=('all' 'all' 'all' '3.x.x' 'all' 'all' 'all')
# Selectively check out the recipes from our CCI fork.
cd external
mkdir -p conan-center-index
cd conan-center-index
git init
git remote add origin git@github.com:XRPLF/conan-center-index.git
git sparse-checkout init
for ((index = 1; index <= ${#recipes[@]}; index++)); do
recipe=${recipes[index]}
folder=${folders[index]}
echo "Checking out recipe '${recipe}' from folder '${folder}'..."
git sparse-checkout add recipes/${recipe}/${folder}
done
git sparse-checkout set recipes/snappy
git sparse-checkout add recipes/soci
git fetch origin master
git checkout master
cd ../..
# Export the recipes into the local cache.
for ((index = 1; index <= ${#recipes[@]}; index++)); do
recipe=${recipes[index]}
folder=${folders[index]}
version=$(extract_version ${recipe})
echo "Exporting '${recipe}/${version}' from '${recipe}/${folder}'..."
conan export --version $(extract_version ${recipe}) \
external/conan-center-index/recipes/${recipe}/${folder}
done
conan export --version 1.1.10 recipes/snappy/all
conan export --version 4.0.3 recipes/soci/all
rm -rf .git
```
In the case we switch to a newer version of a dependency that still requires a
@@ -187,8 +155,7 @@ the new recipe will be automatically pulled from the official Conan Center.
> [!NOTE]
> You might need to add `--lockfile=""` to your `conan install` command
> to avoid automatic use of the existing `conan.lock` file when you run
> `conan export` manually on your machine
> to avoid automatic use of the existing `conan.lock` file when you run `conan export` manually on your machine
### Conan profile tweaks
@@ -297,7 +264,7 @@ sed -i.bak -e 's|^compiler\.libcxx=.*$|compiler.libcxx=libstdc++11|' $(conan con
to do that is to run the shortcut "x64 Native Tools Command Prompt" for the
version of Visual Studio that you have installed.
Windows developers must also build `xrpld` and its dependencies for the x64
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```bash
@@ -368,36 +335,6 @@ The workaround for this error is to add two lines to your profile:
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
```
### Set Up Ccache
To speed up repeated compilations, we recommend that you install
[ccache](https://ccache.dev), a tool that wraps your compiler so that it can
cache build objects locally.
#### Linux
You can install it using the package manager, e.g. `sudo apt install ccache`
(Ubuntu) or `sudo dnf install ccache` (RHEL).
#### macOS
You can install it using Homebrew, i.e. `brew install ccache`.
#### Windows
You can install it using Chocolatey, i.e. `choco install ccache`. If you already
have Ccache installed, then `choco upgrade ccache` will update it to the latest
version. However, if you see an error such as:
```
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(617,5): error MSB6006: "cl.exe" exited with code 3.
```
then please install a specific version of Ccache that we know works, via: `choco
install ccache --version 4.11.3 --allow-downgrade`.
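If your build does not pick up Ccache automatically, one common way to wire it in is through CMake's compiler-launcher variables (a sketch, not necessarily how this repository configures it):

```bash
# Invoke the compilers through ccache.
cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache \
      -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
      -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
# After a warm rebuild, verify that the cache is being hit.
ccache --show-stats
```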
### Build and Test
1. Create a build directory and move into it.
@@ -436,6 +373,19 @@ install ccache --version 4.11.3 --allow-downgrade`.
`--settings build_type=$BUILD_TYPE` or in the profile itself,
under the section `[settings]` with the key `build_type`.
If you are using a Microsoft Visual C++ compiler,
then you will need to ensure consistency between the `build_type` setting
and the `compiler.runtime` setting.
When `build_type` is `Release`, `compiler.runtime` should be `MT`.
When `build_type` is `Debug`, `compiler.runtime` should be `MTd`.
```
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
```
3. Configure CMake and pass the toolchain file generated by Conan, located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
@@ -457,9 +407,9 @@ install ccache --version 4.11.3 --allow-downgrade`.
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -Dxrpld=ON -Dtests=ON ..
```
**Note:** You can pass build options for `xrpld` in this step.
**Note:** You can pass build options for `rippled` in this step.
4. Build `xrpld`.
4. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator, you
@@ -478,28 +428,55 @@ install ccache --version 4.11.3 --allow-downgrade`.
cmake --build . --config Debug
```
5. Test xrpld.
5. Test rippled.
Single-config generators:
```
./xrpld --unittest --unittest-jobs N
./rippled --unittest --unittest-jobs N
```
Multi-config generators:
```
./Release/xrpld --unittest --unittest-jobs N
./Debug/xrpld --unittest --unittest-jobs N
./Release/rippled --unittest --unittest-jobs N
./Debug/rippled --unittest --unittest-jobs N
```
Replace the `--unittest-jobs` parameter N with the desired unit test
concurrency. The recommended setting is half the number of available CPU
cores.
The location of the `xrpld` binary in your build directory depends on your
The location of the `rippled` binary in your build directory depends on your
CMake generator. Pass `--help` to see the rest of the command line options.
#### Conan lockfile
To achieve reproducible dependencies, we use a [Conan lockfile](https://docs.conan.io/2/tutorial/versioning/lockfiles.html).
The `conan.lock` file in the repository contains a "snapshot" of the current dependencies.
It is implicitly used when running `conan` commands; you don't need to specify it.
You have to update this file every time you add a new dependency or change the revision or version of an existing one.
> [!NOTE]
> Conan uses the local cache by default when creating a lockfile.
>
> To ensure that lockfile creation works the same way on all developer machines, you should clear the local cache before creating a new lockfile.
To create a new lockfile, run the following commands in the repository root:
```bash
conan remove '*' --confirm
rm conan.lock
# This ensures that the xrplf remote is the first to be consulted
conan remote add --force --index 0 xrplf https://conan.ripplex.io
conan lock create . -o '&:jemalloc=True' -o '&:rocksdb=True'
```
> [!NOTE]
> If some dependencies are exclusive to a particular OS, you may need to re-run the last command for them, adding `--profile:all <PROFILE>`.
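For example, a hypothetical follow-up run for one such dependency set (the `windows` profile name here is an assumption):

```bash
# Sketch: merge OS-exclusive dependencies into the existing lockfile.
conan lock create . -o '&:jemalloc=True' -o '&:rocksdb=True' \
  --profile:all windows --lockfile=conan.lock --lockfile-out=conan.lock
```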
## Coverage report
The coverage report is intended for developers using compilers GCC
@@ -516,18 +493,18 @@ Prerequisites for the coverage report:
A coverage report is created when the following steps are completed, in order:
1. `xrpld` binary built with instrumentation data, enabled by the `coverage`
1. `rippled` binary built with instrumentation data, enabled by the `coverage`
option mentioned above
2. completed one or more runs of the unit tests, which populate the coverage capture data
3. completed run of the `gcovr` tool (which internally invokes either `gcov` or `llvm-cov`)
to assemble both instrumentation data and the coverage capture data into a coverage report
The last step of the above is automated into a single target `coverage`. The instrumented
`xrpld` binary can also be used for regular development or testing work, at
`rippled` binary can also be used for regular development or testing work, at
the cost of extra disk space utilization and a small performance hit
(to store coverage capture data). Since `xrpld` binary is simply a dependency of the
(to store coverage capture data). Since `rippled` binary is simply a dependency of the
coverage report target, it is possible to re-run the `coverage` target without
rebuilding the `xrpld` binary. Note, running of the unit tests before the `coverage`
rebuilding the `rippled` binary. Note, running of the unit tests before the `coverage`
target is left to the developer. Each such run will append to the coverage data
collected in the build directory.
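Putting those steps together, a minimal sketch (exact paths and options may differ in your setup; the `coverage` CMake option appears in the Options table below):

```bash
# 1. Configure with coverage instrumentation enabled.
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
      -DCMAKE_BUILD_TYPE=Debug -Dcoverage=ON -Dxrpld=ON -Dtests=ON ..
cmake --build .
# 2. Run the unit tests to populate the coverage capture data.
./rippled --unittest   # the binary may be named xrpld depending on the branch
# 3. Assemble the report (invokes gcovr internally).
cmake --build . --target coverage
```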
@@ -553,37 +530,23 @@ stored inside the build directory, as either of:
- file named `coverage.`_extension_, with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
## Sanitizers
To build the dependencies and xrpld with sanitizer instrumentation, set the
`SANITIZERS` environment variable (once, before running both conan and cmake) and use the `sanitizers` profile in conan:
```bash
export SANITIZERS=address,undefinedbehavior
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dxrpld=ON -Dtests=ON ..
```
See [Sanitizers docs](./docs/build/sanitizers.md) for more details.
## Options
| Option | Default Value | Description |
| ---------- | ------------- | -------------------------------------------------------------- |
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
| Option | Default Value | Description |
| ---------- | ------------- | -------------------------------------------------------------------------- |
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld (`rippled`) application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
[Unity builds][5] may be faster for the first build (at the cost of much more
memory) since they concatenate sources into fewer translation units. Non-unity
builds may be faster for incremental builds, and can be helpful for detecting
`#include` omissions.
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
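For example, a unity build can be selected at configure time (a sketch; combine with your usual toolchain and generator flags):
```bash
cmake -DCMAKE_BUILD_TYPE=Release -Dunity=ON ..
cmake --build .
```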
## Troubleshooting
@@ -622,7 +585,7 @@ you might have generated CMake files for a different `build_type` than the
`CMAKE_BUILD_TYPE` you passed to Conan.
```
/xrpld/.build/pb-xrpl.libpb/xrpl/proto/xrpl.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found
/rippled/.build/pb-xrpl.libpb/xrpl/proto/ripple.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found
10 | #include <google/protobuf/port_def.inc>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.


@@ -1,16 +1,14 @@
cmake_minimum_required(VERSION 3.16)
if (POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif ()
if (POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif ()
if(POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif()
if(POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
if (DEFINED CMAKE_MODULE_PATH)
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
endif ()
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(xrpl)
@@ -18,130 +16,141 @@ set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
include(CompilationEnv)
if (is_gcc)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif (is_clang)
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif (is_msvc)
elseif(MSVC)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif ()
# Enable ccache to speed up builds.
include(Ccache)
endif()
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if (Git_FOUND)
if(Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if (gch)
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if(gch)
set(GIT_COMMIT_HASH "${gch}")
message(STATUS gch: ${GIT_COMMIT_HASH})
add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
endif ()
endif()
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gb)
if (gb)
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gb)
if(gb)
set(GIT_BRANCH "${gb}")
message(STATUS gb: ${GIT_BRANCH})
add_definitions(-DGIT_BRANCH="${GIT_BRANCH}")
endif ()
endif () # git
endif()
endif() #git
if (thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS
-DXRPL_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif ()
if(thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif()
include(CheckCXXCompilerFlag)
include(FetchContent)
include(ExternalProject)
include(CMakeFuncs) # must come *after* ExternalProject b/c it overrides one function in EP
include (CheckCXXCompilerFlag)
include (FetchContent)
include (ExternalProject)
include (CMakeFuncs) # must come *after* ExternalProject b/c it overrides one function in EP
if (target)
message(FATAL_ERROR "The target option has been removed - use native cmake options to control build")
message (FATAL_ERROR "The target option has been removed - use native cmake options to control build")
endif ()
include(XrplSanity)
include(XrplVersion)
include(XrplSettings)
# this check has to remain in the top-level cmake because of the early return statement
include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)
if (NOT TARGET rpm)
message(FATAL_ERROR "packages_only requested, but targets were not created - is docker installed?")
endif ()
return()
if (NOT TARGET rpm)
message (FATAL_ERROR "packages_only requested, but targets were not created - is docker installed?")
endif()
return ()
endif ()
include(XrplCompiler)
include(XrplSanitizers)
include(XrplInterface)
include(RippledCompiler)
include(RippledInterface)
option(only_docs "Include only the docs target?" FALSE)
include(XrplDocs)
if (only_docs)
return()
endif ()
include(RippledDocs)
if(only_docs)
return()
endif()
###
include(deps/Boost)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
set(SECP256K1_BUILD_BENCHMARK FALSE)
set(SECP256K1_BUILD_TESTS FALSE)
set(SECP256K1_BUILD_EXHAUSTIVE_TESTS FALSE)
set(SECP256K1_BUILD_CTIME_TESTS FALSE)
set(SECP256K1_BUILD_EXAMPLES FALSE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
add_subdirectory(external/antithesis-sdk)
find_package(date REQUIRED)
find_package(ed25519 REQUIRED)
find_package(gRPC REQUIRED)
find_package(LibArchive REQUIRED)
find_package(lz4 REQUIRED)
find_package(nudb REQUIRED)
find_package(OpenSSL REQUIRED)
find_package(secp256k1 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(xxHash REQUIRED)
target_link_libraries(
xrpl_libs
INTERFACE ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
secp256k1::secp256k1
soci::soci
SQLite::SQLite3)
option(rocksdb "Enable RocksDB" ON)
if (rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES INTERFACE_COMPILE_DEFINITIONS XRPL_ROCKSDB_AVAILABLE=1)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif ()
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
find_package(xxHash REQUIRED)
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
# Work around changes to Conan recipe for now.
if (TARGET nudb::core)
set(nudb nudb::core)
elseif (TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else ()
message(FATAL_ERROR "unknown nudb target")
endif ()
target_link_libraries(xrpl_libs INTERFACE ${nudb})
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
if (coverage)
include(XrplCov)
endif ()
if(coverage)
include(RippledCov)
endif()
set(PROJECT_EXPORT_SET XrplExports)
include(XrplCore)
include(XrplInstall)
include(XrplValidatorKeys)
set(PROJECT_EXPORT_SET RippleExports)
include(RippledCore)
include(RippledInstall)
include(RippledValidatorKeys)
if (tests)
include(CTest)
add_subdirectory(src/tests/libxrpl)
endif ()
if(tests)
include(CTest)
add_subdirectory(src/tests/libxrpl)
endif()


@@ -24,7 +24,7 @@ your verifying key. Please set up [signature verification][signing].
In general, external contributions should be developed in your personal
[fork][forking]. Contributions from developers with write permissions
should be done in [the main repository][xrpld] in a branch with
should be done in [the main repository][rippled] in a branch with
a permitted prefix. Permitted prefixes are:
- XLS-[a-zA-Z0-9]+/.+
@@ -47,18 +47,13 @@ choose the next available standard number, and open a discussion with an
appropriate title to propose your draft standard.
When you submit a pull request, please link the corresponding XLS in the
description. An XLS still in `Draft` status is considered a
description. An XLS still in draft status is considered a
work-in-progress and open for discussion. Please allow time for
questions, suggestions, and changes to the XLS draft. It is the
responsibility of the XLS author to update the draft to match the final
implementation when its corresponding pull request is merged, unless the
author delegates that responsibility to others.
Any amendment or major RPC change requires either a new XLS or an update
to an existing XLS. Neither change will be released (in an amendment's
case, marked as `Supported::yes`) until the corresponding XLS's status
is `Final`.
## Before making a pull request
(Or marking a draft pull request as ready.)
@@ -73,7 +68,7 @@ Ensure that your code compiles according to the build instructions in
Please write tests for your code.
If your test can be run offline, in under 60 seconds, then it can be an
automatic test run by `xrpld --unittest`.
automatic test run by `rippled --unittest`.
Otherwise, it must be a manual test.
If you create new source files, they must be organized as follows:
@@ -256,13 +251,13 @@ pre-commit install
We are using [Antithesis](https://antithesis.com/) for continuous fuzzing,
and keep a copy of the [Antithesis C++ SDK](https://github.com/antithesishq/antithesis-sdk-cpp/)
in `external/antithesis-sdk`. One of the aims of fuzzing is to identify bugs
by finding external conditions which cause contract violations inside `xrpld`.
by finding external conditions which cause contract violations inside `rippled`.
The contracts are expressed as `XRPL_ASSERT` or `UNREACHABLE` (defined in
`include/xrpl/beast/utility/instrumentation.h`), which are effectively (outside
of Antithesis) wrappers for `assert(...)` with an added name. The purpose of the name
is to give contracts a stable identity that does not rely on line numbers.
When `xrpld` is built with the Antithesis instrumentation enabled
When `rippled` is built with the Antithesis instrumentation enabled
(using the `voidstar` CMake option) and run on the Antithesis platform, the
contracts become
[test properties](https://antithesis.com/docs/using_antithesis/properties.html);
@@ -304,7 +299,7 @@ For this reason:
- Example **bad** name
`"RFC1751::insert(char* s, int x, int start, int length) : length is greater than or equal zero"`
(missing namespace, unnecessary full function signature, description too verbose).
Good name: `"xrpl::RFC1751::insert : minimum length"`.
Good name: `"ripple::RFC1751::insert : minimum length"`.
- In a **few** well-justified cases a non-standard name can be used, in which case a
comment should be placed to explain the rationale (example in `contract.cpp`)
- Do **not** rename a contract without a good reason (e.g. the name no longer
@@ -318,7 +313,7 @@ For this reason:
To execute all unit tests:
`xrpld --unittest --unittest-jobs=<number of cores>`
`rippled --unittest --unittest-jobs=<number of cores>`
(Note: Using multiple cores on a Mac M1 can cause spurious test failures. The
cause is still under investigation. If you observe this problem, try specifying fewer jobs.)
@@ -326,7 +321,7 @@ cause is still under investigation. If you observe this problem, try specifying
To run a specific set of test suites:
```
xrpld --unittest TestSuiteName
rippled --unittest TestSuiteName
```
Note: In this example, all tests with prefix `TestSuiteName` will be run, so if
@@ -555,16 +550,16 @@ Rippled uses a linear workflow model that can be summarized as:
git fetch --multiple upstreams user1 user2 user3 [...]
git checkout -B release-next --no-track upstream/develop
# Only do an ff-only merge if pr-branch1 is either already
# Only do an ff-only merge if prbranch1 is either already
# squashed, or needs to be merged with separate commits,
# and has no merge commits.
# Use -S on the ff-only merge if pr-branch1 isn't signed.
git merge [-S] --ff-only user1/pr-branch1
# Use -S on the ff-only merge if prbranch1 isn't signed.
git merge [-S] --ff-only user1/prbranch1
git merge --squash user2/pr-branch2
git merge --squash user2/prbranch2
git commit -S # Use the commit message provided on the PR
git merge --squash user3/pr-branch3
git merge --squash user3/prbranch3
git commit -S # Use the commit message provided on the PR
[...]
@@ -872,12 +867,11 @@ git push --delete upstream-push master-next
11. [Create a new release on
Github](https://github.com/XRPLF/rippled/releases). Be sure that
"Set as the latest release" is checked.
12. Open a PR to update the [API-CHANGELOG](API-CHANGELOG.md) and `API-VERSION-[n].md` with the changes for this release (if any are missing).
13. Finally, [reverse merge the release into `develop`](#follow-up-reverse-merge).
12. Finally, [reverse merge the release into `develop`](#follow-up-reverse-merge).
#### Special cases: point releases, hotfixes, etc.
On occasion, a bug or issue is discovered in a version that already
had a final release. Most of the time, development will have started
on the next version, and will usually have changes in `develop`
and often in `release`.
@@ -1076,7 +1070,7 @@ git fetch upstreams
[contrib]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects
[squash]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits
[forking]: https://github.com/XRPLF/rippled/fork
[xrpld]: https://github.com/XRPLF/rippled
[rippled]: https://github.com/XRPLF/rippled
[signing]: https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification
[setup-upstreams]: ./bin/git/setup-upstreams.sh
[squash-branches]: ./bin/git/squash-branches.sh


@@ -1,7 +1,7 @@
ISC License
Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2025, the XRP Ledger developers.
Copyright (c) 2012-2020, the XRP Ledger developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above


@@ -1,239 +0,0 @@
# Distributed Tracing Fundamentals
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Next**: [Architecture Analysis](./01-architecture-analysis.md)
---
## What is Distributed Tracing?
Distributed tracing is a method for tracking requests and data as they flow through distributed systems. In a network like the XRP Ledger, a single transaction touches multiple independent nodes, each with no shared memory or logging. Distributed tracing connects these dots.
**Without tracing:** You see isolated logs on each node with no way to correlate them.
**With tracing:** You see the complete journey of a transaction or an event across all nodes it touched.
---
## Core Concepts
### 1. Trace
A **trace** represents the entire journey of a request through the system. It has a unique `trace_id` that stays constant across all nodes.
```
Trace ID: abc123
├── Node A: received transaction
├── Node B: relayed transaction
├── Node C: included in consensus
└── Node D: applied to ledger
```
### 2. Span
A **span** represents a single unit of work within a trace. Each span has:
| Attribute | Description | Example |
| ---------------- | --------------------- | -------------------------- |
| `trace_id` | Links to parent trace | `abc123` |
| `span_id` | Unique identifier | `span456` |
| `parent_span_id` | Parent span (if any) | `p_span123` |
| `name` | Operation name | `rpc.submit` |
| `start_time` | When work began | `2024-01-15T10:30:00Z` |
| `end_time` | When work completed | `2024-01-15T10:30:00.050Z` |
| `attributes` | Key-value metadata | `tx.hash=ABC...` |
| `status` | OK or ERROR (with message) | `OK` |
### 3. Trace Context
**Trace context** is the data that propagates between services to link spans together. It contains:
- `trace_id` - The trace this span belongs to
- `span_id` - The current span (becomes parent for child spans)
- `trace_flags` - Sampling decisions
---
## How Spans Form a Trace
Spans have parent-child relationships forming a tree structure:
```mermaid
flowchart TB
subgraph trace["Trace: abc123"]
A["tx.submit<br/>span_id: 001<br/>50ms"] --> B["tx.validate<br/>span_id: 002<br/>5ms"]
A --> C["tx.relay<br/>span_id: 003<br/>10ms"]
A --> D["tx.apply<br/>span_id: 004<br/>30ms"]
D --> E["ledger.update<br/>span_id: 005<br/>20ms"]
end
style A fill:#0d47a1,stroke:#082f6a,color:#ffffff
style B fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style C fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style D fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style E fill:#bf360c,stroke:#8c2809,color:#ffffff
```
The same trace visualized as a **timeline (Gantt chart)**:
```
Time → 0ms 10ms 20ms 30ms 40ms 50ms
├───────────────────────────────────────────┤
tx.submit│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
├─────┤
tx.valid │▓▓▓▓▓│
│ ├──────────┤
tx.relay │ │▓▓▓▓▓▓▓▓▓▓│
│ ├────────────────────────────┤
tx.apply │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
│ ├──────────────────┤
ledger │ │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│
```
---
## Distributed Traces Across Nodes
In distributed systems like rippled, traces span **multiple independent nodes**. The trace context must be propagated in network messages:
```mermaid
sequenceDiagram
participant Client
participant NodeA as Node A
participant NodeB as Node B
participant NodeC as Node C
Client->>NodeA: Submit TX<br/>(no trace context)
Note over NodeA: Creates new trace<br/>trace_id: abc123<br/>span: tx.receive
NodeA->>NodeB: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeB: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
NodeA->>NodeC: Relay TX<br/>(trace_id: abc123, parent: 001)
Note over NodeC: Creates child span<br/>span: tx.relay<br/>parent_span_id: 001
Note over NodeA,NodeC: All spans share trace_id: abc123<br/>enabling correlation across nodes
```
---
## Context Propagation
For traces to work across nodes, **trace context must be propagated** in messages.
### What's in the Context (32 bytes)
| Field | Size | Description |
| ------------- | ---------- | ------------------------------------------------------- |
| `trace_id` | 16 bytes | Identifies the entire trace (constant across all nodes) |
| `span_id` | 8 bytes | The sender's current span (becomes parent on receiver) |
| `trace_flags` | 4 bytes | Sampling decision flags |
| `trace_state` | ~0-4 bytes | Optional vendor-specific data |
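As a minimal sketch (names are illustrative, not from the rippled codebase), the propagated context maps onto a small struct:
```cpp
#include <array>
#include <cstdint>
#include <string>

// Hypothetical in-memory form of the ~32-byte propagated context.
struct TraceContext
{
    std::array<std::uint8_t, 16> traceId{};  // constant across all nodes
    std::array<std::uint8_t, 8> spanId{};    // sender's current span
    std::uint32_t traceFlags = 0;            // sampling decision flags
    std::string traceState;                  // optional vendor-specific data
};
```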
### How span_id Changes at Each Hop
Only **one** `span_id` travels in the context: the sender's current span. Each node:
1. Extracts the received `span_id` and uses it as the `parent_span_id`
2. Creates a **new** `span_id` for its own span
3. Sends its own `span_id` as the parent when forwarding
```
Node A Node B Node C
────── ────── ──────
Span AAA Span BBB Span CCC
│ │ │
▼ ▼ ▼
Context out: Context out: Context out:
├─ trace_id: abc123 ├─ trace_id: abc123 ├─ trace_id: abc123
├─ span_id: AAA ──────────► ├─ span_id: BBB ──────────► ├─ span_id: CCC ──────►
└─ flags: 01 └─ flags: 01 └─ flags: 01
│ │
parent = AAA parent = BBB
```
The `trace_id` stays constant, but `span_id` **changes at every hop** to maintain the parent-child chain.
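A sketch of this per-hop logic, reusing the hypothetical `TraceContext` struct above (`randomSpanId()` is a demo stub; a real node would use a cryptographically strong generator):
```cpp
#include <array>
#include <cstdint>
#include <random>

struct Span
{
    std::array<std::uint8_t, 16> traceId{};
    std::array<std::uint8_t, 8> spanId{};
    std::array<std::uint8_t, 8> parentSpanId{};
};

std::array<std::uint8_t, 8> randomSpanId()
{
    // Demo stub only: derive 8 random bytes from a PRNG.
    static std::mt19937_64 rng{std::random_device{}()};
    std::uint64_t const v = rng();
    std::array<std::uint8_t, 8> id{};
    for (int i = 0; i < 8; ++i)
        id[i] = static_cast<std::uint8_t>(v >> (8 * i));
    return id;
}

// On receipt: adopt the sender's span as the parent, mint a new span for
// this node's own work, and forward our span_id. trace_id never changes.
TraceContext onMessage(TraceContext const& incoming, Span& span)
{
    span.traceId = incoming.traceId;      // same trace on every hop
    span.parentSpanId = incoming.spanId;  // sender's span becomes the parent
    span.spanId = randomSpanId();         // new span for this node

    TraceContext outgoing = incoming;     // keep trace_id and flags
    outgoing.spanId = span.spanId;        // we are the next hop's parent
    return outgoing;
}
```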
### Propagation Formats
There are two patterns:
#### HTTP/RPC Headers (W3C Trace Context)
```
traceparent: 00-abc123def456-span789-01
│ │ │ │
│ │ │ └── Flags (sampled)
│ │ └── Parent span ID
│ └── Trace ID
└── Version
```
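For illustration, a small sketch of producing this header value (a hypothetical helper; hex encoding of the raw IDs is assumed to happen elsewhere):
```cpp
#include <cstdio>
#include <string>

// Build a W3C traceparent value: "00-<trace_id>-<span_id>-<flags>".
std::string makeTraceparent(
    std::string const& traceIdHex,  // 32 lowercase hex chars (16 bytes)
    std::string const& spanIdHex,   // 16 lowercase hex chars (8 bytes)
    unsigned flags)                 // e.g. 0x01 = sampled
{
    char buf[64];
    std::snprintf(
        buf,
        sizeof(buf),
        "00-%s-%s-%02x",
        traceIdHex.c_str(),
        spanIdHex.c_str(),
        flags);
    return buf;
}
```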
#### Protocol Buffers (rippled P2P messages)
```protobuf
message TMTransaction {
bytes rawTransaction = 1;
// ... existing fields ...
// Trace context extension
bytes trace_parent = 100; // W3C traceparent
bytes trace_state = 101; // W3C tracestate
}
```
---
## Sampling
Not every trace needs to be recorded. **Sampling** reduces overhead:
### Head Sampling (at trace start)
```
Request arrives → Random 10% chance → Record or skip entire trace
```
- ✅ Low overhead
- ❌ May miss interesting traces
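A self-contained sketch of the head-sampling decision; deriving it from the `trace_id` itself (as the OpenTelemetry ratio sampler does) lets every node reach the same verdict without coordination:
```cpp
#include <cstdint>

// Head sampling: decide once, at trace start, from the trace_id.
bool headSample(std::uint64_t traceIdLow64, double ratio /* e.g. 0.10 */)
{
    auto const threshold =
        static_cast<std::uint64_t>(ratio * static_cast<double>(UINT64_MAX));
    return traceIdLow64 <= threshold;
}
```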
### Tail Sampling (after trace completes)
```
Trace completes → Collector evaluates:
- Error? → KEEP
- Slow? → KEEP
- Normal? → Sample 10%
```
- ✅ Never loses important traces
- ❌ Higher memory usage at collector
---
## Key Benefits for rippled
| Challenge | How Tracing Helps |
| ---------------------------------- | ---------------------------------------- |
| "Where is my transaction?" | Follow trace across all nodes it touched |
| "Why was consensus slow?" | See timing breakdown of each phase |
| "Which node is the bottleneck?" | Compare span durations across nodes |
| "What happened during the outage?" | Correlate errors across the network |
---
## Glossary
| Term | Definition |
| ------------------- | --------------------------------------------------------------- |
| **Trace** | Complete journey of a request, identified by `trace_id` |
| **Span** | Single operation within a trace |
| **Context** | Data propagated between services (`trace_id`, `span_id`, flags) |
| **Instrumentation** | Code that creates spans and propagates context |
| **Collector** | Service that receives, processes, and exports traces |
| **Backend** | Storage/visualization system (Jaeger, Tempo, etc.) |
| **Head Sampling** | Sampling decision at trace start |
| **Tail Sampling** | Sampling decision after trace completes |
---
*Next: [Architecture Analysis](./01-architecture-analysis.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,328 +0,0 @@
# Architecture Analysis
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Design Decisions](./02-design-decisions.md) | [Implementation Strategy](./03-implementation-strategy.md)
---
## 1.1 Current rippled Architecture Overview
The rippled node software consists of several interconnected components that need instrumentation for distributed tracing:
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
subgraph services["Core Services"]
RPC["RPC Server<br/>(HTTP/WS/gRPC)"]
Overlay["Overlay<br/>(P2P Network)"]
Consensus["Consensus<br/>(RCLConsensus)"]
end
JobQueue["JobQueue<br/>(Thread Pool)"]
subgraph processing["Processing Layer"]
NetworkOPs["NetworkOPs<br/>(Tx Processing)"]
LedgerMaster["LedgerMaster<br/>(Ledger Mgmt)"]
NodeStore["NodeStore<br/>(Database)"]
end
subgraph observability["Existing Observability"]
PerfLog["PerfLog<br/>(JSON)"]
Insight["Insight<br/>(StatsD)"]
Logging["Logging<br/>(Journal)"]
end
services --> JobQueue
JobQueue --> processing
end
style rippled fill:#424242,stroke:#212121,color:#ffffff
style services fill:#1565c0,stroke:#0d47a1,color:#ffffff
style processing fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style observability fill:#e65100,stroke:#bf360c,color:#ffffff
```
---
## 1.2 Key Components for Instrumentation
| Component | Location | Purpose | Trace Value |
| ----------------- | ------------------------------------------ | ------------------------ | ---------------------------- |
| **Overlay** | `src/xrpld/overlay/` | P2P communication | Message propagation timing |
| **PeerImp** | `src/xrpld/overlay/detail/PeerImp.cpp` | Individual peer handling | Per-peer latency |
| **RCLConsensus** | `src/xrpld/app/consensus/RCLConsensus.cpp` | Consensus algorithm | Round timing, phase analysis |
| **NetworkOPs** | `src/xrpld/app/misc/NetworkOPs.cpp` | Transaction processing | Tx lifecycle tracking |
| **ServerHandler** | `src/xrpld/rpc/detail/ServerHandler.cpp` | RPC entry point | Request latency |
| **RPCHandler** | `src/xrpld/rpc/detail/RPCHandler.cpp` | Command execution | Per-command timing |
| **JobQueue** | `src/xrpld/core/JobQueue.h` | Async task execution | Queue wait times |
---
## 1.3 Transaction Flow Diagram
Transaction flow spans multiple nodes in the network. Each node creates linked spans to form a distributed trace:
```mermaid
sequenceDiagram
participant Client
participant PeerA as Peer A (Receive)
participant PeerB as Peer B (Relay)
participant PeerC as Peer C (Validate)
Client->>PeerA: 1. Submit TX
rect rgb(230, 245, 255)
Note over PeerA: tx.receive SPAN START
PeerA->>PeerA: HashRouter Deduplication
PeerA->>PeerA: tx.validate (child span)
end
PeerA->>PeerB: 2. Relay TX (with trace ctx)
rect rgb(230, 245, 255)
Note over PeerB: tx.receive (linked span)
end
PeerB->>PeerC: 3. Relay TX
rect rgb(230, 245, 255)
Note over PeerC: tx.receive (linked span)
PeerC->>PeerC: tx.process
end
Note over Client,PeerC: DISTRIBUTED TRACE (same trace_id: abc123)
```
### Trace Structure
```
trace_id: abc123
├── span: tx.receive (Peer A)
│ ├── span: tx.validate
│ └── span: tx.relay
├── span: tx.receive (Peer B) [parent: Peer A]
│ └── span: tx.relay
└── span: tx.receive (Peer C) [parent: Peer B]
└── span: tx.process
```
---
## 1.4 Consensus Round Flow
Consensus rounds are multi-phase operations that benefit significantly from tracing:
```mermaid
flowchart TB
subgraph round["consensus.round (root span)"]
attrs["Attributes:<br/>xrpl.consensus.ledger.seq = 12345678<br/>xrpl.consensus.mode = proposing<br/>xrpl.consensus.proposers = 35"]
subgraph open["consensus.phase.open"]
open_desc["Duration: ~3s<br/>Waiting for transactions"]
end
subgraph establish["consensus.phase.establish"]
est_attrs["proposals_received = 28<br/>disputes_resolved = 3"]
est_children["├── consensus.proposal.receive (×28)<br/>├── consensus.proposal.send (×1)<br/>└── consensus.dispute.resolve (×3)"]
end
subgraph accept["consensus.phase.accept"]
acc_attrs["transactions_applied = 150<br/>ledger.hash = DEF456..."]
acc_children["├── ledger.build<br/>└── ledger.validate"]
end
attrs --> open
open --> establish
establish --> accept
end
style round fill:#f57f17,stroke:#e65100,color:#ffffff
style open fill:#1565c0,stroke:#0d47a1,color:#ffffff
style establish fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style accept fill:#c2185b,stroke:#880e4f,color:#ffffff
```
---
## 1.5 RPC Request Flow
RPC requests support W3C Trace Context headers for distributed tracing across services:
```mermaid
flowchart TB
subgraph request["rpc.request (root span)"]
http["HTTP Request<br/>POST /<br/>traceparent: 00-abc123...-def456...-01"]
attrs["Attributes:<br/>http.method = POST<br/>net.peer.ip = 192.168.1.100<br/>xrpl.rpc.command = submit"]
subgraph enqueue["jobqueue.enqueue"]
job_attr["xrpl.job.type = jtCLIENT_RPC"]
end
subgraph command["rpc.command.submit"]
cmd_attrs["xrpl.rpc.version = 2<br/>xrpl.rpc.role = user"]
cmd_children["├── tx.deserialize<br/>├── tx.validate_local<br/>└── tx.submit_to_network"]
end
response["Response: 200 OK<br/>Duration: 45ms"]
http --> attrs
attrs --> enqueue
enqueue --> command
command --> response
end
style request fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style enqueue fill:#1565c0,stroke:#0d47a1,color:#ffffff
style command fill:#e65100,stroke:#bf360c,color:#ffffff
```
---
## 1.6 Key Trace Points
The following table identifies priority instrumentation points across the codebase:
| Category | Span Name | File | Method | Priority |
| --------------- | ---------------------- | -------------------- | ---------------------- | -------- |
| **Transaction** | `tx.receive` | `PeerImp.cpp` | `handleTransaction()` | High |
| **Transaction** | `tx.validate` | `NetworkOPs.cpp` | `processTransaction()` | High |
| **Transaction** | `tx.process` | `NetworkOPs.cpp` | `doTransactionSync()` | High |
| **Transaction** | `tx.relay` | `OverlayImpl.cpp` | `relay()` | Medium |
| **Consensus** | `consensus.round` | `RCLConsensus.cpp` | `startRound()` | High |
| **Consensus** | `consensus.phase.*` | `Consensus.h` | `timerEntry()` | High |
| **Consensus** | `consensus.proposal.*` | `RCLConsensus.cpp` | `peerProposal()` | Medium |
| **RPC** | `rpc.request` | `ServerHandler.cpp` | `onRequest()` | High |
| **RPC** | `rpc.command.*` | `RPCHandler.cpp` | `doCommand()` | High |
| **Peer** | `peer.connect` | `OverlayImpl.cpp` | `onHandoff()` | Low |
| **Peer** | `peer.message.*` | `PeerImp.cpp` | `onMessage()` | Low |
| **Ledger** | `ledger.acquire` | `InboundLedgers.cpp` | `acquire()` | Medium |
| **Ledger** | `ledger.build` | `RCLConsensus.cpp` | `buildLCL()` | High |
---
## 1.7 Instrumentation Priority
```mermaid
quadrantChart
title Instrumentation Priority Matrix
x-axis Low Complexity --> High Complexity
y-axis Low Value --> High Value
quadrant-1 Implement First
quadrant-2 Plan Carefully
quadrant-3 Quick Wins
quadrant-4 Consider Later
RPC Tracing: [0.3, 0.85]
Transaction Tracing: [0.65, 0.92]
Consensus Tracing: [0.75, 0.87]
Peer Message Tracing: [0.4, 0.3]
Ledger Acquisition: [0.5, 0.6]
JobQueue Tracing: [0.35, 0.5]
```
---
## 1.8 Observable Outcomes
After implementing OpenTelemetry, operators and developers will gain visibility into the following:
### 1.8.1 What You Will See: Traces
| Trace Type | Description | Example Query in Grafana/Tempo |
| -------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| **Transaction Lifecycle** | Full journey from RPC submission through validation, relay, consensus, and ledger inclusion | `{service.name="rippled" && xrpl.tx.hash="ABC123..."}` |
| **Cross-Node Propagation** | Transaction path across multiple rippled nodes with timing | `{xrpl.tx.relay_count > 0}` |
| **Consensus Rounds** | Complete round with all phases (open, establish, accept) | `{span.name=~"consensus.round.*"}` |
| **RPC Request Processing** | Individual command execution with timing breakdown | `{xrpl.rpc.command="account_info"}` |
| **Ledger Acquisition** | Peer-to-peer ledger data requests and responses | `{span.name="ledger.acquire"}` |
### 1.8.2 What You Will See: Metrics (Derived from Traces)
| Metric | Description | Dashboard Panel |
| ----------------------------- | -------------------------------------- | --------------------------- |
| **RPC Latency (p50/p95/p99)** | Response time distribution per command | Heatmap by command |
| **Transaction Throughput** | Transactions processed per second | Time series graph |
| **Consensus Round Duration** | Time to complete consensus phases | Histogram |
| **Cross-Node Latency** | Time for transaction to reach N nodes | Line chart with percentiles |
| **Error Rate** | Failed transactions/RPC calls by type | Stacked bar chart |
### 1.8.3 Concrete Dashboard Examples
**Transaction Trace View (Jaeger/Tempo):**
```
┌────────────────────────────────────────────────────────────────────────────────┐
│ Trace: abc123... (Transaction Submission) Duration: 847ms │
├────────────────────────────────────────────────────────────────────────────────┤
│ ├── rpc.request [ServerHandler] ████░░░░░░ 45ms │
│ │ └── rpc.command.submit [RPCHandler] ████░░░░░░ 42ms │
│ │ └── tx.receive [NetworkOPs] ███░░░░░░░ 35ms │
│ │ ├── tx.validate [TxQ] █░░░░░░░░░ 8ms │
│ │ └── tx.relay [Overlay] ██░░░░░░░░ 15ms │
│ │ ├── tx.receive [Node-B] █████░░░░░ 52ms │
│ │ │ └── tx.relay [Node-B] ██░░░░░░░░ 18ms │
│ │ └── tx.receive [Node-C] ██████░░░░ 65ms │
│ └── consensus.round [RCLConsensus] ████████░░ 720ms │
│ ├── consensus.phase.open ██░░░░░░░░ 180ms │
│ ├── consensus.phase.establish █████░░░░░ 480ms │
│ └── consensus.phase.accept █░░░░░░░░░ 60ms │
└────────────────────────────────────────────────────────────────────────────────┘
```
**RPC Performance Dashboard Panel:**
```
┌─────────────────────────────────────────────────────────────┐
│ RPC Command Latency (Last 1 Hour) │
├─────────────────────────────────────────────────────────────┤
│ Command │ p50 │ p95 │ p99 │ Errors │ Rate │
│──────────────────┼────────┼────────┼────────┼────────┼──────│
│ account_info │ 12ms │ 45ms │ 89ms │ 0.1% │ 150/s│
│ submit │ 35ms │ 120ms │ 250ms │ 2.3% │ 45/s│
│ ledger │ 8ms │ 25ms │ 55ms │ 0.0% │ 80/s│
│ tx │ 15ms │ 50ms │ 100ms │ 0.5% │ 60/s│
│ server_info │ 5ms │ 12ms │ 20ms │ 0.0% │ 200/s│
└─────────────────────────────────────────────────────────────┘
```
**Consensus Health Dashboard Panel:**
```mermaid
---
config:
xyChart:
width: 1200
height: 400
plotReservedSpacePercent: 50
chartOrientation: vertical
themeVariables:
xyChart:
plotColorPalette: "#3498db"
---
xychart-beta
title "Consensus Round Duration (Last 24 Hours)"
x-axis "Time of Day (Hours)" [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
y-axis "Duration (seconds)" 1 --> 5
line [2.1, 2.3, 2.5, 2.4, 2.8, 1.6, 3.2, 3.0, 3.5, 1.3, 3.8, 3.6, 4.0, 3.2, 4.3, 4.1, 4.5, 4.3, 4.2, 2.4, 4.8, 4.6, 4.9, 4.7, 5.0, 4.9, 4.8, 2.6, 4.7, 4.5, 4.2, 4.0, 2.5, 3.7, 3.2, 3.4, 2.9, 3.1, 2.6, 2.8, 2.3, 1.5, 2.7, 2.4, 2.5, 2.3, 2.2, 2.1, 2.0]
```
### 1.8.4 Operator Actionable Insights
| Scenario | What You'll See | Action |
| --------------------- | ---------------------------------------------------------------------------- | -------------------------------- |
| **Slow RPC** | Span showing which phase is slow (parsing, execution, serialization) | Optimize specific code path |
| **Transaction Stuck** | Trace stops at validation; error attribute shows reason | Fix transaction parameters |
| **Consensus Delay** | Phase.establish taking too long; proposer attribute shows missing validators | Investigate network connectivity |
| **Memory Spike** | Large batch of spans correlating with memory increase | Tune batch_size or sampling |
| **Network Partition** | Traces missing cross-node links for specific peer | Check peer connectivity |
### 1.8.5 Developer Debugging Workflow
1. **Find Transaction**: Query by `xrpl.tx.hash` to get full trace
2. **Identify Bottleneck**: Look at span durations to find slowest component
3. **Check Attributes**: Review `xrpl.tx.validity`, `xrpl.rpc.status` for errors
4. **Correlate Logs**: Use `trace_id` to find related PerfLog entries
5. **Compare Nodes**: Filter by `service.instance.id` to compare behavior across nodes
---
*Next: [Design Decisions](./02-design-decisions.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,404 +0,0 @@
# Design Decisions
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Architecture Analysis](./01-architecture-analysis.md) | [Code Samples](./04-code-samples.md)
---
## 2.1 OpenTelemetry Components
### 2.1.1 SDK Selection
**Primary Choice**: OpenTelemetry C++ SDK (`opentelemetry-cpp`)
| Component | Purpose | Required |
| --------------------------------------- | ---------------------- | ----------- |
| `opentelemetry-cpp::api` | Tracing API headers | Yes |
| `opentelemetry-cpp::sdk` | SDK implementation | Yes |
| `opentelemetry-cpp::ext` | Extensions (exporters) | Yes |
| `opentelemetry-cpp::otlp_grpc_exporter` | OTLP/gRPC export | Recommended |
| `opentelemetry-cpp::otlp_http_exporter` | OTLP/HTTP export | Alternative |
### 2.1.2 Instrumentation Strategy
**Manual Instrumentation** (recommended):
| Approach | Pros | Cons |
| ---------- | ----------------------------------------------------------------- | ------------------------------------------------------- |
| **Manual** | Precise control, optimized placement, rippled-specific attributes | More development effort |
| **Auto** | Less code, automatic coverage | Less control, potential overhead, limited customization |
---
## 2.2 Exporter Configuration
```mermaid
flowchart TB
subgraph nodes["rippled Nodes"]
node1["rippled<br/>Node 1"]
node2["rippled<br/>Node 2"]
node3["rippled<br/>Node 3"]
end
collector["OpenTelemetry<br/>Collector<br/>(sidecar or standalone)"]
subgraph backends["Observability Backends"]
jaeger["Jaeger<br/>(Dev)"]
tempo["Tempo<br/>(Prod)"]
elastic["Elastic<br/>APM"]
end
node1 -->|"OTLP/gRPC<br/>:4317"| collector
node2 -->|"OTLP/gRPC<br/>:4317"| collector
node3 -->|"OTLP/gRPC<br/>:4317"| collector
collector --> jaeger
collector --> tempo
collector --> elastic
style nodes fill:#0d47a1,stroke:#082f6a,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
```
### 2.2.1 OTLP/gRPC (Recommended)
```cpp
// Configuration for OTLP over gRPC
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpGrpcExporterOptions opts;
opts.endpoint = "localhost:4317";
opts.use_ssl_credentials = true;
opts.ssl_credentials_cacert_path = "/path/to/ca.crt";
```
### 2.2.2 OTLP/HTTP (Alternative)
```cpp
// Configuration for OTLP over HTTP
namespace otlp = opentelemetry::exporter::otlp;
otlp::OtlpHttpExporterOptions opts;
opts.url = "http://localhost:4318/v1/traces";
opts.content_type = otlp::HttpRequestContentType::kJson; // or kBinary
```
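To show where either set of options plugs in, here is a hedged wiring sketch using the opentelemetry-cpp factory helpers (factory names and return types have shifted between SDK versions, so treat this as a sketch rather than final initialization code):
```cpp
#include "opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_options.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"

namespace otlp = opentelemetry::exporter::otlp;
namespace sdktrace = opentelemetry::sdk::trace;

void initTracing()
{
    otlp::OtlpGrpcExporterOptions opts;
    opts.endpoint = "localhost:4317";

    // exporter -> batch processor -> tracer provider -> global registration
    auto exporter = otlp::OtlpGrpcExporterFactory::Create(opts);
    auto processor = sdktrace::BatchSpanProcessorFactory::Create(
        std::move(exporter), sdktrace::BatchSpanProcessorOptions{});

    std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
        sdktrace::TracerProviderFactory::Create(std::move(processor));
    opentelemetry::trace::Provider::SetTracerProvider(provider);
}
```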
---
## 2.3 Span Naming Conventions
### 2.3.1 Naming Schema
```
<component>.<operation>[.<sub-operation>]
```
**Examples**:
- `tx.receive` - Transaction received from peer
- `consensus.phase.establish` - Consensus establish phase
- `rpc.command.server_info` - server_info RPC command
### 2.3.2 Complete Span Catalog
```yaml
# Transaction Spans
tx:
receive: "Transaction received from network"
validate: "Transaction signature/format validation"
process: "Full transaction processing"
relay: "Transaction relay to peers"
apply: "Apply transaction to ledger"
# Consensus Spans
consensus:
round: "Complete consensus round"
phase:
open: "Open phase - collecting transactions"
establish: "Establish phase - reaching agreement"
accept: "Accept phase - applying consensus"
proposal:
receive: "Receive peer proposal"
send: "Send our proposal"
validation:
receive: "Receive peer validation"
send: "Send our validation"
# RPC Spans
rpc:
request: "HTTP/WebSocket request handling"
command:
"*": "Specific RPC command (dynamic)"
# Peer Spans
peer:
connect: "Peer connection establishment"
disconnect: "Peer disconnection"
message:
send: "Send protocol message"
receive: "Receive protocol message"
# Ledger Spans
ledger:
acquire: "Ledger acquisition from network"
build: "Build new ledger"
validate: "Ledger validation"
close: "Close ledger"
# Job Spans
job:
enqueue: "Job added to queue"
execute: "Job execution"
```
---
## 2.4 Attribute Schema
### 2.4.1 Resource Attributes (Set Once at Startup)
```cpp
// Standard OpenTelemetry semantic conventions
resource::SemanticConventions::SERVICE_NAME = "rippled"
resource::SemanticConventions::SERVICE_VERSION = BuildInfo::getVersionString()
resource::SemanticConventions::SERVICE_INSTANCE_ID = <node_public_key_base58>
// Custom rippled attributes
"xrpl.network.id" = <network_id> // e.g., 0 for mainnet
"xrpl.network.type" = "mainnet" | "testnet" | "devnet" | "standalone"
"xrpl.node.type" = "validator" | "stock" | "reporting"
"xrpl.node.cluster" = <cluster_name> // If clustered
```
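A sketch of attaching these at startup via the SDK's resource API (placeholder values stand in for `BuildInfo::getVersionString()`, the node key, and so on; wiring the resource into the tracer provider is omitted):
```cpp
#include "opentelemetry/sdk/resource/resource.h"

auto makeResource()
{
    // Keys follow the conventions above; values are placeholders.
    return opentelemetry::sdk::resource::Resource::Create({
        {"service.name", "rippled"},
        {"service.version", "x.y.z"},
        {"service.instance.id", "n9K...base58"},
        {"xrpl.network.id", 0},
        {"xrpl.network.type", "mainnet"},
        {"xrpl.node.type", "validator"},
    });
}
```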
### 2.4.2 Span Attributes by Category
#### Transaction Attributes
```cpp
"xrpl.tx.hash" = string // Transaction hash (hex)
"xrpl.tx.type" = string // "Payment", "OfferCreate", etc.
"xrpl.tx.account" = string // Source account (redacted in prod)
"xrpl.tx.sequence" = int64 // Account sequence number
"xrpl.tx.fee" = int64 // Fee in drops
"xrpl.tx.result" = string // "tesSUCCESS", "tecPATH_DRY", etc.
"xrpl.tx.ledger_index" = int64 // Ledger containing transaction
```
#### Consensus Attributes
```cpp
"xrpl.consensus.round" = int64 // Round number
"xrpl.consensus.phase" = string // "open", "establish", "accept"
"xrpl.consensus.mode" = string // "proposing", "observing", etc.
"xrpl.consensus.proposers" = int64 // Number of proposers
"xrpl.consensus.ledger.prev" = string // Previous ledger hash
"xrpl.consensus.ledger.seq" = int64 // Ledger sequence
"xrpl.consensus.tx_count" = int64 // Transactions in consensus set
"xrpl.consensus.duration_ms" = float64 // Round duration
```
#### RPC Attributes
```cpp
"xrpl.rpc.command" = string // Command name
"xrpl.rpc.version" = int64 // API version
"xrpl.rpc.role" = string // "admin" or "user"
"xrpl.rpc.params" = string // Sanitized parameters (optional)
```
#### Peer & Message Attributes
```cpp
"xrpl.peer.id" = string // Peer public key (base58)
"xrpl.peer.address" = string // IP:port
"xrpl.peer.latency_ms" = float64 // Measured latency
"xrpl.peer.cluster" = string // Cluster name if clustered
"xrpl.message.type" = string // Protocol message type name
"xrpl.message.size_bytes" = int64 // Message size
"xrpl.message.compressed" = bool // Whether compressed
```
#### Ledger & Job Attributes
```cpp
"xrpl.ledger.hash" = string // Ledger hash
"xrpl.ledger.index" = int64 // Ledger sequence/index
"xrpl.ledger.close_time" = int64 // Close time (epoch)
"xrpl.ledger.tx_count" = int64 // Transaction count
"xrpl.job.type" = string // Job type name
"xrpl.job.queue_ms" = float64 // Time spent in queue
"xrpl.job.worker" = int64 // Worker thread ID
```
---
## 2.5 Context Propagation Design
### 2.5.1 Propagation Boundaries
```mermaid
flowchart TB
subgraph http["HTTP/WebSocket (RPC)"]
w3c["W3C Trace Context Headers:<br/>traceparent: 00-{trace_id}-{span_id}-{flags}<br/>tracestate: rippled=<state>"]
end
subgraph protobuf["Protocol Buffers (P2P)"]
proto["message TraceContext {<br/> bytes trace_id = 1; // 16 bytes<br/> bytes span_id = 2; // 8 bytes<br/> uint32 trace_flags = 3;<br/> string trace_state = 4;<br/>}"]
end
subgraph jobqueue["JobQueue (Internal Async)"]
job["Context captured at job creation,<br/>restored at execution<br/><br/>class Job {<br/> opentelemetry::context::Context traceContext_;<br/>};"]
end
style http fill:#0d47a1,stroke:#082f6a,color:#ffffff
style protobuf fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style jobqueue fill:#bf360c,stroke:#8c2809,color:#ffffff
```
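A sketch of the JobQueue pattern shown in the diagram (an illustrative wrapper, not the actual rippled `Job` class; the `RuntimeContext` calls are the standard opentelemetry-cpp thread-local context API):
```cpp
#include <functional>

#include "opentelemetry/context/runtime_context.h"

// Capture the caller's trace context when the job is created, and make it
// current on whichever worker thread eventually runs the job.
struct TracedJob
{
    std::function<void()> work;
    opentelemetry::context::Context captured =
        opentelemetry::context::RuntimeContext::GetCurrent();

    void run()
    {
        auto token = opentelemetry::context::RuntimeContext::Attach(captured);
        work();  // spans started here parent onto the captured context
    }  // token destruction detaches the context again
};
```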
---
## 2.6 Integration with Existing Observability
### 2.6.1 Existing Frameworks Comparison
rippled already has two observability mechanisms. OpenTelemetry complements (not replaces) them:
| Aspect | PerfLog | Beast Insight (StatsD) | OpenTelemetry |
| --------------------- | ----------------------------- | ---------------------------- | ------------------------- |
| **Type** | Logging | Metrics | Distributed Tracing |
| **Data** | JSON log entries | Counters, gauges, histograms | Spans with context |
| **Scope** | Single node | Single node | **Cross-node** |
| **Output** | `perf.log` file | StatsD server | OTLP Collector |
| **Question answered** | "What happened on this node?" | "How many? How fast?" | "What was the journey?" |
| **Correlation** | By timestamp | By metric name | By `trace_id` |
| **Overhead** | Low (file I/O) | Low (UDP packets) | Low-Medium (configurable) |
### 2.6.2 What Each Framework Does Best
#### PerfLog
- **Purpose**: Detailed local event logging for RPC and job execution
- **Strengths**:
- Rich JSON output with timing data
- Already integrated in RPC handlers
- File-based, no external dependencies
- **Limitations**:
- Single-node only (no cross-node correlation)
- No parent-child relationships between events
- Manual log parsing required
```json
// Example PerfLog entry
{
"time": "2024-01-15T10:30:00.123Z",
"method": "submit",
"duration_us": 1523,
"result": "tesSUCCESS"
}
```
#### Beast Insight (StatsD)
- **Purpose**: Real-time metrics for monitoring dashboards
- **Strengths**:
- Aggregated metrics (counters, gauges, histograms)
- Low overhead (UDP, fire-and-forget)
- Good for alerting thresholds
- **Limitations**:
- No request-level detail
- No causal relationships
- Single-node perspective
```cpp
// Example StatsD usage in rippled
insight.increment("rpc.submit.count");
insight.gauge("ledger.age", age);
insight.timing("consensus.round", duration);
```
#### OpenTelemetry (NEW)
- **Purpose**: Distributed request tracing across nodes
- **Strengths**:
- **Cross-node correlation** via `trace_id`
- Parent-child span relationships
- Rich attributes per span
- Industry standard (CNCF)
- **Limitations**:
- Requires collector infrastructure
- Higher complexity than logging
```cpp
// Example OpenTelemetry span
auto span = telemetry.startSpan("tx.relay");
span->SetAttribute("tx.hash", hash);
span->SetAttribute("peer.id", peerId);
// Span automatically linked to parent via context
```
### 2.6.3 When to Use Each
| Scenario | PerfLog | StatsD | OpenTelemetry |
| --------------------------------------- | --------- | ------ | ------------- |
| "How many TXs per second?" | ❌ | ✅ | ❌ |
| "What's the p99 RPC latency?" | ❌ | ✅ | ✅ |
| "Why was this specific TX slow?" | ⚠️ partial | ❌ | ✅ |
| "Which node delayed consensus?" | ❌ | ❌ | ✅ |
| "What happened on node X at time T?" | ✅ | ❌ | ✅ |
| "Show me the TX journey across 5 nodes" | ❌ | ❌ | ✅ |
### 2.6.4 Coexistence Strategy
```mermaid
flowchart TB
subgraph rippled["rippled Process"]
perflog["PerfLog<br/>(JSON to file)"]
insight["Beast Insight<br/>(StatsD)"]
otel["OpenTelemetry<br/>(Tracing)"]
end
perflog --> perffile["perf.log"]
insight --> statsd["StatsD Server"]
otel --> collector["OTLP Collector"]
perffile --> grafana["Grafana<br/>(Unified UI)"]
statsd --> grafana
collector --> grafana
style rippled fill:#212121,stroke:#0a0a0a,color:#ffffff
style grafana fill:#bf360c,stroke:#8c2809,color:#ffffff
```
### 2.6.5 Correlation with PerfLog
Trace IDs can be correlated with existing PerfLog entries for comprehensive debugging:
```cpp
// In RPCHandler.cpp - correlate trace with PerfLog
Status doCommand(RPC::JsonContext& context, Json::Value& result)
{
// Start OpenTelemetry span
auto span = context.app.getTelemetry().startSpan(
"rpc.command." + context.method);
// Get trace ID for correlation
auto traceId = span->GetContext().trace_id().IsValid()
? toHex(span->GetContext().trace_id())
: "";
// Use existing PerfLog with trace correlation
auto const curId = context.app.getPerfLog().currentId();
context.app.getPerfLog().rpcStart(context.method, curId);
// Future: Add trace ID to PerfLog entry
// context.app.getPerfLog().setTraceId(curId, traceId);
try {
auto ret = handler(context, result);
context.app.getPerfLog().rpcFinish(context.method, curId);
span->SetStatus(opentelemetry::trace::StatusCode::kOk);
return ret;
} catch (std::exception const& e) {
context.app.getPerfLog().rpcError(context.method, curId);
span->RecordException(e);
span->SetStatus(opentelemetry::trace::StatusCode::kError, e.what());
throw;
}
}
```
---
*Previous: [Architecture Analysis](./01-architecture-analysis.md)* | *Next: [Implementation Strategy](./03-implementation-strategy.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,448 +0,0 @@
# Implementation Strategy
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 3.1 Directory Structure
The telemetry implementation follows rippled's existing code organization pattern:
```
include/xrpl/
├── telemetry/
│ ├── Telemetry.h # Main telemetry interface
│ ├── TelemetryConfig.h # Configuration structures
│ ├── TraceContext.h # Context propagation utilities
│ ├── SpanGuard.h # RAII span management
│ └── SpanAttributes.h # Attribute helper functions
src/libxrpl/
├── telemetry/
│ ├── Telemetry.cpp # Implementation
│ ├── TelemetryConfig.cpp # Config parsing
│ ├── TraceContext.cpp # Context serialization
│ └── NullTelemetry.cpp # No-op implementation
src/xrpld/
├── telemetry/
│ ├── TracingInstrumentation.h # Instrumentation macros
│ └── TracingInstrumentation.cpp
```
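Since `SpanGuard.h` is listed as RAII span management, a minimal sketch of the idea (hypothetical; the real header will differ):
```cpp
#include <memory>
#include <utility>

// Stand-in for the underlying OTel span handle.
struct Span
{
    void End() {}  // a real span records its end timestamp here
};

// RAII guard: the span ends when the scope exits, including via exceptions,
// so instrumentation can never leak an open span.
class SpanGuard
{
public:
    explicit SpanGuard(std::shared_ptr<Span> span) : span_(std::move(span))
    {
    }
    SpanGuard(SpanGuard const&) = delete;
    SpanGuard& operator=(SpanGuard const&) = delete;
    ~SpanGuard()
    {
        if (span_)
            span_->End();
    }

private:
    std::shared_ptr<Span> span_;
};
```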
---
## 3.2 Implementation Approach
<div align="center">
```mermaid
%%{init: {'flowchart': {'nodeSpacing': 20, 'rankSpacing': 30}}}%%
flowchart TB
subgraph phase1["Phase 1: Core"]
direction LR
sdk["SDK Integration"] ~~~ interface["Telemetry Interface"] ~~~ config["Configuration"]
end
subgraph phase2["Phase 2: RPC"]
direction LR
http["HTTP Context"] ~~~ rpc["RPC Handlers"]
end
subgraph phase3["Phase 3: P2P"]
direction LR
proto["Protobuf Context"] ~~~ tx["Transaction Relay"]
end
subgraph phase4["Phase 4: Consensus"]
direction LR
consensus["Consensus Rounds"] ~~~ proposals["Proposals"]
end
phase1 --> phase2 --> phase3 --> phase4
style phase1 fill:#1565c0,stroke:#0d47a1,color:#ffffff
style phase2 fill:#2e7d32,stroke:#1b5e20,color:#ffffff
style phase3 fill:#e65100,stroke:#bf360c,color:#ffffff
style phase4 fill:#c2185b,stroke:#880e4f,color:#ffffff
```
</div>
### Key Principles
1. **Minimal Intrusion**: Instrumentation should not alter existing control flow
2. **Zero-Cost When Disabled**: Use compile-time flags and no-op implementations (see the sketch after this list)
3. **Backward Compatibility**: Protocol Buffer extensions use high field numbers
4. **Graceful Degradation**: Tracing failures must not affect node operation
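For principle 2, a sketch of the null-object approach that `NullTelemetry.cpp` above suggests (interface and type names are illustrative):
```cpp
#include <memory>
#include <string_view>

struct ISpan
{
    virtual ~ISpan() = default;
    virtual void SetAttribute(std::string_view key, std::string_view value) = 0;
};

struct ITelemetry
{
    virtual ~ITelemetry() = default;
    virtual std::shared_ptr<ISpan> startSpan(std::string_view name) = 0;
};

// No-op implementations: when tracing is disabled, call sites still compile
// and run, but do no work and allocate nothing beyond one shared instance.
struct NullSpan final : ISpan
{
    void SetAttribute(std::string_view, std::string_view) override {}
};

struct NullTelemetry final : ITelemetry
{
    std::shared_ptr<ISpan> startSpan(std::string_view) override
    {
        static auto const span = std::make_shared<NullSpan>();
        return span;
    }
};
```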
---
## 3.3 Performance Overhead Summary
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## 3.4 Detailed CPU Overhead Analysis
### 3.4.1 Per-Operation Costs
| Operation | Time (ns) | Frequency | Impact |
| --------------------- | --------- | ---------------------- | ---------- |
| Span creation | 200-500 | Every traced operation | Low |
| Span end | 100-200 | Every traced operation | Low |
| SetAttribute (string) | 80-120 | 3-5 per span | Low |
| SetAttribute (int) | 40-60 | 2-3 per span | Negligible |
| AddEvent | 50-80 | 0-2 per span | Negligible |
| Context injection | 150-250 | Per outgoing message | Low |
| Context extraction | 100-180 | Per incoming message | Low |
| GetCurrent context | 10-20 | Thread-local access | Negligible |
### 3.4.2 Transaction Processing Overhead
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"tx.receive (800ns)" : 800
"tx.validate (500ns)" : 500
"tx.relay (500ns)" : 500
"Context inject (600ns)" : 600
```
**Transaction Tracing Overhead (~2.4μs total)**
</div>
**Overhead percentage**: 2.4 μs / 200 μs (avg tx processing) = **~1.2%**
### 3.4.3 Consensus Round Overhead
| Operation | Count | Cost (ns) | Total |
| ---------------------- | ----- | --------- | ---------- |
| consensus.round span | 1 | ~1000 | ~1 μs |
| consensus.phase spans | 3 | ~700 | ~2.1 μs |
| proposal.receive spans | ~20 | ~600 | ~12 μs |
| proposal.send spans | ~3 | ~600 | ~1.8 μs |
| Context operations | ~30 | ~200 | ~6 μs |
| **TOTAL** | | | **~23 μs** |
**Overhead percentage**: 23 μs / 3s (typical round) = **~0.0008%** (negligible)
### 3.4.4 RPC Request Overhead
| Operation | Cost (ns) |
| ---------------- | ------------ |
| rpc.request span | ~700 |
| rpc.command span | ~600 |
| Context extract | ~250 |
| Context inject | ~200 |
| **TOTAL** | **~1.75 μs** |
- Fast RPC (1ms): 1.75 μs / 1ms = **~0.175%**
- Slow RPC (100ms): 1.75 μs / 100ms = **~0.002%**
---
## 3.5 Memory Overhead Analysis
### 3.5.1 Static Memory
| Component | Size | Allocated |
| ------------------------ | ----------- | ---------- |
| TracerProvider singleton | ~64 KB | At startup |
| BatchSpanProcessor | ~128 KB | At startup |
| OTLP exporter | ~256 KB | At startup |
| Propagator registry | ~8 KB | At startup |
| **Total static** | **~456 KB** | |
### 3.5.2 Dynamic Memory
| Component | Size per unit | Max units | Peak |
| -------------------- | ------------- | ---------- | ----------- |
| Active span | ~200 bytes | 1000 | ~200 KB |
| Queued span (export) | ~500 bytes | 2048 | ~1 MB |
| Attribute storage | ~50 bytes | 5 per span | Included |
| Context storage | ~64 bytes | Per thread | ~6.4 KB |
| **Total dynamic** | | | **~1.2 MB** |
### 3.5.3 Memory Growth Characteristics
```mermaid
---
config:
xyChart:
width: 700
height: 400
---
xychart-beta
title "Memory Usage vs Span Rate"
x-axis "Spans/second" [0, 200, 400, 600, 800, 1000]
y-axis "Memory (MB)" 0 --> 6
line [1, 1.8, 2.6, 3.4, 4.2, 5]
```
**Notes**:
- Memory increases linearly with span rate
- Batch export prevents unbounded growth
- Queue size is configurable (default 2048 spans)
- At queue limit, oldest spans are dropped (not blocked)
---
## 3.6 Network Overhead Analysis
### 3.6.1 Export Bandwidth
| Sampling Rate | Spans/sec | Bandwidth | Notes |
| ------------- | --------- | --------- | ---------------- |
| 100% | ~500 | ~250 KB/s | Development only |
| 10% | ~50 | ~25 KB/s | Staging |
| 1% | ~5 | ~2.5 KB/s | Production |
| Error-only | ~1 | ~0.5 KB/s | Minimal overhead |
### 3.6.2 Trace Context Propagation
| Message Type | Context Size | Messages/sec | Overhead |
| ---------------------- | ------------ | ------------ | ----------- |
| TMTransaction | 32 bytes | ~100 | ~3.2 KB/s |
| TMProposeSet | 32 bytes | ~10 | ~320 B/s |
| TMValidation | 32 bytes | ~50 | ~1.6 KB/s |
| **Total P2P overhead** | | | **~5 KB/s** |
---
## 3.7 Optimization Strategies
### 3.7.1 Sampling Strategies
```mermaid
flowchart TD
trace["New Trace"]
trace --> errors{"Is Error?"}
errors -->|Yes| sample["SAMPLE"]
errors -->|No| consensus{"Is Consensus?"}
consensus -->|Yes| sample
consensus -->|No| slow{"Is Slow?"}
slow -->|Yes| sample
slow -->|No| prob{"Random < 10%?"}
prob -->|Yes| sample
prob -->|No| drop["DROP"]
style sample fill:#4caf50,stroke:#388e3c,color:#fff
style drop fill:#f44336,stroke:#c62828,color:#fff
```
### 3.7.2 Batch Tuning Recommendations
| Environment | Batch Size | Batch Delay | Max Queue |
| ------------------ | ---------- | ----------- | --------- |
| Low-latency | 128 | 1000ms | 512 |
| High-throughput | 1024 | 10000ms | 8192 |
| Memory-constrained | 256 | 2000ms | 512 |
### 3.7.3 Conditional Instrumentation
```cpp
// Compile-time feature flag
#ifndef XRPL_ENABLE_TELEMETRY
// Zero-cost when disabled
#define XRPL_TRACE_SPAN(t, n) ((void)0)
#else
// Hypothetical enabled form: open an RAII span for the enclosing scope
#define XRPL_TRACE_SPAN(t, n) SpanGuard _xrplTraceGuard{(t).startSpan(n)}
#endif
// Runtime component filtering
if (telemetry.shouldTracePeer())
{
XRPL_TRACE_SPAN(telemetry, "peer.message.receive");
// ... instrumentation
}
// No overhead when component tracing disabled
```
---
## 3.8 Links to Detailed Documentation
- **[Code Samples](./04-code-samples.md)**: Complete implementation code for all components
- **[Configuration Reference](./05-configuration-reference.md)**: Configuration options and collector setup
- **[Implementation Phases](./06-implementation-phases.md)**: Detailed timeline and milestones
---
## 3.9 Code Intrusiveness Assessment
This section provides a detailed assessment of how intrusive the OpenTelemetry integration is to the existing rippled codebase.
### 3.9.1 Files Modified Summary
| Component | Files Modified | Lines Added | Lines Changed | Architectural Impact |
| --------------------- | -------------- | ----------- | ------------- | -------------------- |
| **Core Telemetry** | 5 new files | ~800 | 0 | None (new module) |
| **Application Init** | 2 files | ~30 | ~5 | Minimal |
| **RPC Layer** | 3 files | ~80 | ~20 | Minimal |
| **Transaction Relay** | 4 files | ~120 | ~40 | Low |
| **Consensus** | 3 files | ~100 | ~30 | Low-Medium |
| **Protocol Buffers** | 1 file | ~25 | 0 | Low |
| **CMake/Build** | 3 files | ~50 | ~10 | Minimal |
| **Total** | **~21 files** | **~1,205** | **~105** | **Low** |
### 3.9.2 Detailed File Impact
```mermaid
pie title Code Changes by Component
"New Telemetry Module" : 800
"Transaction Relay" : 160
"Consensus" : 130
"RPC Layer" : 100
"Application Init" : 35
"Protocol Buffers" : 25
"Build System" : 60
```
#### New Files (No Impact on Existing Code)
| File | Lines | Purpose |
| ---------------------------------------------- | ----- | -------------------- |
| `include/xrpl/telemetry/Telemetry.h` | ~160 | Main interface |
| `include/xrpl/telemetry/SpanGuard.h` | ~120 | RAII wrapper |
| `include/xrpl/telemetry/TraceContext.h` | ~80 | Context propagation |
| `src/xrpld/telemetry/TracingInstrumentation.h` | ~60 | Macros |
| `src/libxrpl/telemetry/Telemetry.cpp` | ~200 | Implementation |
| `src/libxrpl/telemetry/TelemetryConfig.cpp` | ~60 | Config parsing |
| `src/libxrpl/telemetry/NullTelemetry.cpp` | ~40 | No-op implementation |
#### Modified Files (Existing Rippled Code)
| File | Lines Added | Lines Changed | Risk Level |
| ------------------------------------------------- | ----------- | ------------- | ---------- |
| `src/xrpld/app/main/Application.cpp` | ~15 | ~3 | Low |
| `include/xrpl/app/main/Application.h` | ~5 | ~2 | Low |
| `src/xrpld/rpc/detail/ServerHandler.cpp` | ~40 | ~10 | Low |
| `src/xrpld/rpc/handlers/*.cpp` | ~30 | ~8 | Low |
| `src/xrpld/overlay/detail/PeerImp.cpp` | ~60 | ~15 | Medium |
| `src/xrpld/overlay/detail/OverlayImpl.cpp` | ~30 | ~10 | Medium |
| `src/xrpld/app/consensus/RCLConsensus.cpp` | ~50 | ~15 | Medium |
| `src/xrpld/app/consensus/RCLConsensusAdaptor.cpp` | ~40 | ~12 | Medium |
| `src/xrpld/core/JobQueue.cpp` | ~20 | ~5 | Low |
| `src/xrpld/overlay/detail/ripple.proto` | ~25 | 0 | Low |
| `CMakeLists.txt` | ~40 | ~8 | Low |
| `cmake/FindOpenTelemetry.cmake` | ~50 | 0 | None (new) |
### 3.9.3 Risk Assessment by Component
<div align="center">
**Do First** ↖ ↗ **Plan Carefully**
```mermaid
quadrantChart
title Code Intrusiveness Risk Matrix
x-axis Low Risk --> High Risk
y-axis Low Value --> High Value
RPC Tracing: [0.2, 0.8]
Transaction Relay: [0.5, 0.9]
Consensus Tracing: [0.7, 0.95]
Peer Message Tracing: [0.8, 0.4]
JobQueue Context: [0.4, 0.5]
Ledger Acquisition: [0.5, 0.6]
```
**Optional** ↙ ↘ **Avoid**
</div>
#### Risk Level Definitions
| Risk Level | Definition | Mitigation |
| ---------- | ---------------------------------------------------------------- | ---------------------------------- |
| **Low** | Additive changes only; no modification to existing logic | Standard code review |
| **Medium** | Minor modifications to existing functions; clear boundaries | Comprehensive unit tests |
| **High** | Changes to core logic or data structures; potential side effects | Integration tests + staged rollout |
### 3.9.4 Architectural Impact Assessment
| Aspect | Impact | Justification |
| -------------------- | ------- | --------------------------------------------------------------------- |
| **Data Flow** | None | Tracing is purely observational; no business logic changes |
| **Threading Model** | Minimal | Context propagation uses thread-local storage (standard OTel pattern) |
| **Memory Model** | Low | Bounded queues prevent unbounded growth; RAII ensures cleanup |
| **Network Protocol** | Low | Optional fields in protobuf (high field numbers); backward compatible |
| **Configuration** | None | New config section; existing configs unaffected |
| **Build System** | Low | Optional CMake flag; builds work without OpenTelemetry |
| **Dependencies** | Low | OpenTelemetry SDK is optional; null implementation when disabled |
### 3.9.5 Backward Compatibility
| Compatibility | Status | Notes |
| --------------- | ------ | ----------------------------------------------------- |
| **Config File** | ✅ Full | New `[telemetry]` section is optional |
| **Protocol** | ✅ Full | Optional protobuf fields with high field numbers |
| **Build** | ✅ Full | `XRPL_ENABLE_TELEMETRY=OFF` produces identical binary |
| **Runtime** | ✅ Full | `enabled=0` produces zero overhead |
| **API** | ✅ Full | No changes to public RPC or P2P APIs |
### 3.9.6 Rollback Strategy
If issues are discovered after deployment:
1. **Immediate**: Set `enabled=0` in config and restart (zero code change)
2. **Quick**: Rebuild with `XRPL_ENABLE_TELEMETRY=OFF`
3. **Complete**: Revert telemetry commits (clean separation makes this easy)
### 3.9.7 Code Change Examples
**Minimal RPC Instrumentation (Low Intrusiveness):**
```cpp
// Before
void ServerHandler::onRequest(...) {
auto result = processRequest(req);
send(result);
}
// After (only ~3 lines added)
void ServerHandler::onRequest(...) {
XRPL_TRACE_RPC(app_.getTelemetry(), "rpc.request"); // +1 line
XRPL_TRACE_SET_ATTR("xrpl.rpc.command", command); // +1 line
auto result = processRequest(req);
XRPL_TRACE_SET_ATTR("xrpl.rpc.status", status); // +1 line
send(result);
}
```
**Consensus Instrumentation (Medium Intrusiveness):**
```cpp
// Before
void RCLConsensusAdaptor::startRound(...) {
// ... existing logic
}
// After (context storage required)
void RCLConsensusAdaptor::startRound(...) {
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq", seq);
// Store context for child spans in phase transitions
    currentRoundContext_ = _xrpl_guard_ ? _xrpl_guard_->context()
                                        : opentelemetry::context::Context{};  // New member variable
// ... existing logic unchanged
}
```
---
*Previous: [Design Decisions](./02-design-decisions.md)* | *Next: [Code Samples](./04-code-samples.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,982 +0,0 @@
# Code Samples
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Strategy](./03-implementation-strategy.md) | [Configuration Reference](./05-configuration-reference.md)
---
## 4.1 Core Interfaces
### 4.1.1 Main Telemetry Interface
```cpp
// include/xrpl/telemetry/Telemetry.h
#pragma once
#include <xrpl/telemetry/TelemetryConfig.h>
#include <opentelemetry/trace/tracer.h>
#include <opentelemetry/trace/span.h>
#include <opentelemetry/context/context.h>
#include <chrono>
#include <cstdint>
#include <memory>
#include <string>
#include <string_view>
namespace xrpl {
namespace telemetry {
/**
* Main telemetry interface for OpenTelemetry integration.
*
* This class provides the primary API for distributed tracing in rippled.
* It manages the OpenTelemetry SDK lifecycle and provides convenience
* methods for creating spans and propagating context.
*/
class Telemetry
{
public:
/**
* Configuration for the telemetry system.
*/
struct Setup
{
bool enabled = false;
std::string serviceName = "rippled";
std::string serviceVersion;
std::string serviceInstanceId; // Node public key
// Exporter configuration
std::string exporterType = "otlp_grpc"; // "otlp_grpc", "otlp_http", "none"
std::string exporterEndpoint = "localhost:4317";
bool useTls = false;
std::string tlsCertPath;
// Sampling configuration
double samplingRatio = 1.0; // 1.0 = 100% sampling
// Batch processor settings
std::uint32_t batchSize = 512;
std::chrono::milliseconds batchDelay{5000};
std::uint32_t maxQueueSize = 2048;
// Network attributes
std::uint32_t networkId = 0;
std::string networkType = "mainnet";
// Component filtering
bool traceTransactions = true;
bool traceConsensus = true;
bool traceRpc = true;
bool tracePeer = false; // High volume, disabled by default
bool traceLedger = true;
};
virtual ~Telemetry() = default;
// ═══════════════════════════════════════════════════════════════════════
// LIFECYCLE
// ═══════════════════════════════════════════════════════════════════════
/** Start the telemetry system (call after configuration) */
virtual void start() = 0;
/** Stop the telemetry system (flushes pending spans) */
virtual void stop() = 0;
/** Check if telemetry is enabled */
virtual bool isEnabled() const = 0;
// ═══════════════════════════════════════════════════════════════════════
// TRACER ACCESS
// ═══════════════════════════════════════════════════════════════════════
/** Get the tracer for creating spans */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer>
getTracer(std::string_view name = "rippled") = 0;
// ═══════════════════════════════════════════════════════════════════════
// SPAN CREATION (Convenience Methods)
// ═══════════════════════════════════════════════════════════════════════
/** Start a new span with default options */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::trace::SpanKind kind =
opentelemetry::trace::SpanKind::kInternal) = 0;
/** Start a span as child of given context */
virtual opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span>
startSpan(
std::string_view name,
opentelemetry::context::Context const& parentContext,
opentelemetry::trace::SpanKind kind =
opentelemetry::trace::SpanKind::kInternal) = 0;
// ═══════════════════════════════════════════════════════════════════════
// CONTEXT PROPAGATION
// ═══════════════════════════════════════════════════════════════════════
/** Serialize context for network transmission */
virtual std::string serializeContext(
opentelemetry::context::Context const& ctx) = 0;
/** Deserialize context from network data */
virtual opentelemetry::context::Context deserializeContext(
std::string const& serialized) = 0;
// ═══════════════════════════════════════════════════════════════════════
// COMPONENT FILTERING
// ═══════════════════════════════════════════════════════════════════════
/** Check if transaction tracing is enabled */
virtual bool shouldTraceTransactions() const = 0;
/** Check if consensus tracing is enabled */
virtual bool shouldTraceConsensus() const = 0;
/** Check if RPC tracing is enabled */
virtual bool shouldTraceRpc() const = 0;
/** Check if peer message tracing is enabled */
virtual bool shouldTracePeer() const = 0;
};
// Factory functions
std::unique_ptr<Telemetry>
make_Telemetry(
Telemetry::Setup const& setup,
beast::Journal journal);
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version);
} // namespace telemetry
} // namespace xrpl
```
---
## 4.2 RAII Span Guard
```cpp
// include/xrpl/telemetry/SpanGuard.h
#pragma once
#include <opentelemetry/trace/span.h>
#include <opentelemetry/trace/scope.h>
#include <opentelemetry/trace/status.h>
#include <opentelemetry/context/runtime_context.h>
#include <exception>
#include <string>
#include <string_view>
#include <utility>
namespace xrpl {
namespace telemetry {
/**
* RAII guard for OpenTelemetry spans.
*
* Automatically ends the span on destruction and makes it the current
* span in the thread-local context.
*/
class SpanGuard
{
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span_;
opentelemetry::trace::Scope scope_;
public:
/**
* Construct guard with span.
* The span becomes the current span in thread-local context.
*/
explicit SpanGuard(
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Span> span)
: span_(std::move(span))
, scope_(span_)
{
}
// Non-copyable, non-movable
SpanGuard(SpanGuard const&) = delete;
SpanGuard& operator=(SpanGuard const&) = delete;
SpanGuard(SpanGuard&&) = delete;
SpanGuard& operator=(SpanGuard&&) = delete;
~SpanGuard()
{
if (span_)
span_->End();
}
/** Access the underlying span */
opentelemetry::trace::Span& span() { return *span_; }
opentelemetry::trace::Span const& span() const { return *span_; }
/** Set span status to OK */
void setOk()
{
span_->SetStatus(opentelemetry::trace::StatusCode::kOk);
}
/** Set span status with code and description */
void setStatus(
opentelemetry::trace::StatusCode code,
std::string_view description = "")
{
span_->SetStatus(code, std::string(description));
}
/** Set an attribute on the span */
template<typename T>
void setAttribute(std::string_view key, T&& value)
{
span_->SetAttribute(
opentelemetry::nostd::string_view(key.data(), key.size()),
std::forward<T>(value));
}
/** Add an event to the span */
void addEvent(std::string_view name)
{
span_->AddEvent(std::string(name));
}
/** Record an exception on the span */
void recordException(std::exception const& e)
{
span_->RecordException(e);
span_->SetStatus(
opentelemetry::trace::StatusCode::kError,
e.what());
}
/** Get the current trace context */
opentelemetry::context::Context context() const
{
return opentelemetry::context::RuntimeContext::GetCurrent();
}
};
/**
* No-op span guard for when tracing is disabled.
* Provides the same interface but does nothing.
*/
class NullSpanGuard
{
public:
NullSpanGuard() = default;
void setOk() {}
void setStatus(opentelemetry::trace::StatusCode, std::string_view = "") {}
template<typename T>
void setAttribute(std::string_view, T&&) {}
void addEvent(std::string_view) {}
void recordException(std::exception const&) {}
};
} // namespace telemetry
} // namespace xrpl
```
---
## 4.3 Instrumentation Macros
```cpp
// src/xrpld/telemetry/TracingInstrumentation.h
#pragma once
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/telemetry/SpanGuard.h>
#include <optional>
namespace xrpl {
namespace telemetry {
// ═══════════════════════════════════════════════════════════════════════════
// INSTRUMENTATION MACROS
// ═══════════════════════════════════════════════════════════════════════════
#ifdef XRPL_ENABLE_TELEMETRY
// Start a span that is automatically ended when guard goes out of scope
#define XRPL_TRACE_SPAN(telemetry, name) \
auto _xrpl_span_ = (telemetry).startSpan(name); \
::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)
// Start a span with specific kind
#define XRPL_TRACE_SPAN_KIND(telemetry, name, kind) \
auto _xrpl_span_ = (telemetry).startSpan(name, kind); \
::xrpl::telemetry::SpanGuard _xrpl_guard_(_xrpl_span_)
// Conditional span based on component
#define XRPL_TRACE_TX(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceTransactions()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_CONSENSUS(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceConsensus()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
#define XRPL_TRACE_RPC(telemetry, name) \
std::optional<::xrpl::telemetry::SpanGuard> _xrpl_guard_; \
if ((telemetry).shouldTraceRpc()) { \
_xrpl_guard_.emplace((telemetry).startSpan(name)); \
}
// Set attribute on current span, if one exists (pairs with the conditional
// macros above, which declare _xrpl_guard_ as a std::optional)
#define XRPL_TRACE_SET_ATTR(key, value) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->setAttribute(key, value); \
}
// Record exception on current span
#define XRPL_TRACE_EXCEPTION(e) \
if (_xrpl_guard_.has_value()) { \
_xrpl_guard_->recordException(e); \
}
#else // XRPL_ENABLE_TELEMETRY not defined
#define XRPL_TRACE_SPAN(telemetry, name) ((void)0)
#define XRPL_TRACE_SPAN_KIND(telemetry, name, kind) ((void)0)
#define XRPL_TRACE_TX(telemetry, name) ((void)0)
#define XRPL_TRACE_CONSENSUS(telemetry, name) ((void)0)
#define XRPL_TRACE_RPC(telemetry, name) ((void)0)
#define XRPL_TRACE_SET_ATTR(key, value) ((void)0)
#define XRPL_TRACE_EXCEPTION(e) ((void)0)
#endif // XRPL_ENABLE_TELEMETRY
} // namespace telemetry
} // namespace xrpl
```
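For reference, a hypothetical handler showing how the conditional macros compose (the function and attribute values are illustrative). With `XRPL_ENABLE_TELEMETRY` off, every macro compiles to `((void)0)` and the body reduces to the plain try/catch:

```cpp
#include <xrpl/telemetry/TracingInstrumentation.h>

#include <exception>

void
processSubmission([[maybe_unused]] xrpl::telemetry::Telemetry& telemetry)
{
    XRPL_TRACE_RPC(telemetry, "rpc.command.submit");
    XRPL_TRACE_SET_ATTR("xrpl.rpc.command", "submit");
    try
    {
        // ... execute the command ...
        XRPL_TRACE_SET_ATTR("xrpl.rpc.status", "success");
    }
    catch (std::exception const& e)
    {
        XRPL_TRACE_EXCEPTION(e);
        throw;
    }
}
```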
---
## 4.4 Protocol Buffer Extensions
### 4.4.1 TraceContext Message Definition
Add to `src/xrpld/overlay/detail/ripple.proto`:
```protobuf
// Trace context for distributed tracing across nodes
// Uses W3C Trace Context format internally
message TraceContext {
// 16-byte trace identifier (required for valid context)
bytes trace_id = 1;
// 8-byte span identifier of parent span
bytes span_id = 2;
// Trace flags (bit 0 = sampled)
uint32 trace_flags = 3;
// W3C tracestate header value for vendor-specific data
string trace_state = 4;
}
// Extend existing messages with optional trace context
// High field numbers (1000+) to avoid conflicts
message TMTransaction {
// ... existing fields ...
// Optional trace context for distributed tracing
optional TraceContext trace_context = 1001;
}
message TMProposeSet {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMValidation {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMGetLedger {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
message TMLedgerData {
// ... existing fields ...
optional TraceContext trace_context = 1001;
}
```
### 4.4.2 Context Serialization/Deserialization
```cpp
// include/xrpl/telemetry/TraceContext.h
#pragma once
#include <opentelemetry/context/context.h>
#include <opentelemetry/trace/context.h>  // GetSpan, kSpanKey
#include <opentelemetry/trace/default_span.h>
#include <opentelemetry/trace/span_context.h>
#include <protocol/messages.h> // Generated protobuf
#include <functional>
#include <optional>
#include <string>
namespace xrpl {
namespace telemetry {
/**
* Utilities for trace context serialization and propagation.
*/
class TraceContextPropagator
{
public:
/**
* Extract trace context from Protocol Buffer message.
* Returns empty context if no trace info present.
*/
static opentelemetry::context::Context
extract(protocol::TraceContext const& proto);
/**
* Inject current trace context into Protocol Buffer message.
*/
static void
inject(
opentelemetry::context::Context const& ctx,
protocol::TraceContext& proto);
/**
* Extract trace context from HTTP headers (for RPC).
* Supports W3C Trace Context (traceparent, tracestate).
*/
static opentelemetry::context::Context
extractFromHeaders(
std::function<std::optional<std::string>(std::string_view)> headerGetter);
/**
* Inject trace context into HTTP headers (for RPC responses).
*/
static void
injectToHeaders(
opentelemetry::context::Context const& ctx,
std::function<void(std::string_view, std::string_view)> headerSetter);
};
// ═══════════════════════════════════════════════════════════════════════════
// IMPLEMENTATION
// ═══════════════════════════════════════════════════════════════════════════
inline opentelemetry::context::Context
TraceContextPropagator::extract(protocol::TraceContext const& proto)
{
using namespace opentelemetry::trace;
if (proto.trace_id().size() != 16 || proto.span_id().size() != 8)
return opentelemetry::context::Context{}; // Invalid, return empty
    // Construct TraceId and SpanId from bytes (the ctors take fixed-size spans)
    TraceId traceId(opentelemetry::nostd::span<uint8_t const, TraceId::kSize>(
        reinterpret_cast<uint8_t const*>(proto.trace_id().data()),
        TraceId::kSize));
    SpanId spanId(opentelemetry::nostd::span<uint8_t const, SpanId::kSize>(
        reinterpret_cast<uint8_t const*>(proto.span_id().data()),
        SpanId::kSize));
TraceFlags flags(static_cast<uint8_t>(proto.trace_flags()));
// Create SpanContext from extracted data
SpanContext spanContext(traceId, spanId, flags, /* remote = */ true);
// Create context with extracted span as parent
return opentelemetry::context::Context{}.SetValue(
opentelemetry::trace::kSpanKey,
opentelemetry::nostd::shared_ptr<Span>(
new DefaultSpan(spanContext)));
}
inline void
TraceContextPropagator::inject(
opentelemetry::context::Context const& ctx,
protocol::TraceContext& proto)
{
using namespace opentelemetry::trace;
// Get current span from context
auto span = GetSpan(ctx);
if (!span)
return;
auto const& spanCtx = span->GetContext();
if (!spanCtx.IsValid())
return;
// Serialize trace_id (16 bytes)
auto const& traceId = spanCtx.trace_id();
proto.set_trace_id(traceId.Id().data(), TraceId::kSize);
// Serialize span_id (8 bytes)
auto const& spanId = spanCtx.span_id();
proto.set_span_id(spanId.Id().data(), SpanId::kSize);
// Serialize flags
proto.set_trace_flags(spanCtx.trace_flags().flags());
// Note: tracestate not implemented yet
}
} // namespace telemetry
} // namespace xrpl
```
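The HTTP-header variants (`extractFromHeaders`/`injectToHeaders`) are declared above but not implemented. One way to implement them is to delegate to the SDK's built-in W3C propagator, `opentelemetry::trace::propagation::HttpTraceContext`, through a small carrier adapter. A sketch, assuming that propagation API is available (the `FunctionCarrier` helper is ours, not part of opentelemetry-cpp):

```cpp
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/trace/propagation/http_trace_context.h>

#include <functional>
#include <optional>
#include <string>
#include <string_view>
#include <utility>

namespace xrpl {
namespace telemetry {

// Adapts the std::function accessors used above to the SDK's TextMapCarrier.
class FunctionCarrier : public opentelemetry::context::propagation::TextMapCarrier
{
    std::function<std::optional<std::string>(std::string_view)> getter_;
    std::function<void(std::string_view, std::string_view)> setter_;
    mutable std::string value_;  // keeps the last looked-up value alive

public:
    FunctionCarrier(
        std::function<std::optional<std::string>(std::string_view)> getter,
        std::function<void(std::string_view, std::string_view)> setter)
        : getter_(std::move(getter)), setter_(std::move(setter))
    {
    }

    opentelemetry::nostd::string_view
    Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        if (getter_)
        {
            if (auto v = getter_(std::string_view(key.data(), key.size())))
            {
                value_ = *v;
                return value_;
            }
        }
        return "";
    }

    void
    Set(opentelemetry::nostd::string_view key,
        opentelemetry::nostd::string_view value) noexcept override
    {
        if (setter_)
            setter_(
                std::string_view(key.data(), key.size()),
                std::string_view(value.data(), value.size()));
    }
};

inline opentelemetry::context::Context
TraceContextPropagator::extractFromHeaders(
    std::function<std::optional<std::string>(std::string_view)> headerGetter)
{
    opentelemetry::trace::propagation::HttpTraceContext propagator;
    FunctionCarrier carrier(std::move(headerGetter), nullptr);
    auto current = opentelemetry::context::RuntimeContext::GetCurrent();
    return propagator.Extract(carrier, current);
}

inline void
TraceContextPropagator::injectToHeaders(
    opentelemetry::context::Context const& ctx,
    std::function<void(std::string_view, std::string_view)> headerSetter)
{
    opentelemetry::trace::propagation::HttpTraceContext propagator;
    FunctionCarrier carrier(nullptr, std::move(headerSetter));
    propagator.Inject(carrier, ctx);
}

} // namespace telemetry
} // namespace xrpl
```

This keeps the `traceparent`/`tracestate` parsing inside the SDK rather than hand-rolling it.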
---
## 4.5 Module-Specific Instrumentation
### 4.5.1 Transaction Relay Instrumentation
```cpp
// src/xrpld/overlay/detail/PeerImp.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
PeerImp::handleTransaction(
std::shared_ptr<protocol::TMTransaction> const& m)
{
// Extract trace context from incoming message
opentelemetry::context::Context parentCtx;
if (m->has_trace_context())
{
parentCtx = telemetry::TraceContextPropagator::extract(
m->trace_context());
}
// Start span as child of remote span (cross-node link)
auto span = app_.getTelemetry().startSpan(
"tx.receive",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
try
{
// Parse and validate transaction
SerialIter sit(makeSlice(m->rawtransaction()));
auto stx = std::make_shared<STTx const>(sit);
// Add transaction attributes
guard.setAttribute("xrpl.tx.hash", to_string(stx->getTransactionID()));
guard.setAttribute("xrpl.tx.type", stx->getTxnType());
guard.setAttribute("xrpl.peer.id", remote_address_.to_string());
// Check if we've seen this transaction (HashRouter)
auto const [flags, suppressed] =
app_.getHashRouter().addSuppressionPeer(
stx->getTransactionID(),
id_);
if (suppressed)
{
guard.setAttribute("xrpl.tx.suppressed", true);
guard.addEvent("tx.duplicate");
return; // Already processing this transaction
}
// Create child span for validation
{
auto validateSpan = app_.getTelemetry().startSpan("tx.validate");
telemetry::SpanGuard validateGuard(validateSpan);
auto [validity, reason] = checkTransaction(stx);
validateGuard.setAttribute("xrpl.tx.validity",
validity == Validity::Valid ? "valid" : "invalid");
if (validity != Validity::Valid)
{
validateGuard.setStatus(
opentelemetry::trace::StatusCode::kError,
reason);
return;
}
}
// Relay to other peers (capture context for propagation)
auto ctx = guard.context();
// Create child span for relay
auto relaySpan = app_.getTelemetry().startSpan(
"tx.relay",
ctx,
opentelemetry::trace::SpanKind::kClient);
{
telemetry::SpanGuard relayGuard(relaySpan);
// Inject context into outgoing message
protocol::TraceContext protoCtx;
telemetry::TraceContextPropagator::inject(
relayGuard.context(), protoCtx);
// Relay to other peers
app_.overlay().relay(
stx->getTransactionID(),
*m,
protoCtx, // Pass trace context
exclusions);
relayGuard.setAttribute("xrpl.tx.relay_count",
static_cast<int64_t>(relayCount));
}
guard.setOk();
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.warn()) << "Transaction handling failed: " << e.what();
}
}
```
### 4.5.2 Consensus Instrumentation
```cpp
// src/xrpld/app/consensus/RCLConsensus.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
RCLConsensusAdaptor::startRound(
NetClock::time_point const& now,
RCLCxLedger::ID const& prevLedgerHash,
RCLCxLedger const& prevLedger,
hash_set<NodeID> const& peers,
bool proposing)
{
XRPL_TRACE_CONSENSUS(app_.getTelemetry(), "consensus.round");
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.prev", to_string(prevLedgerHash));
XRPL_TRACE_SET_ATTR("xrpl.consensus.ledger.seq",
static_cast<int64_t>(prevLedger.seq() + 1));
XRPL_TRACE_SET_ATTR("xrpl.consensus.proposers",
static_cast<int64_t>(peers.size()));
XRPL_TRACE_SET_ATTR("xrpl.consensus.mode",
proposing ? "proposing" : "observing");
// Store trace context for use in phase transitions
currentRoundContext_ = _xrpl_guard_.has_value()
? _xrpl_guard_->context()
: opentelemetry::context::Context{};
// ... existing implementation ...
}
ConsensusPhase
RCLConsensusAdaptor::phaseTransition(ConsensusPhase newPhase)
{
// Create span for phase transition
auto span = app_.getTelemetry().startSpan(
"consensus.phase." + to_string(newPhase),
currentRoundContext_);
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.consensus.phase", to_string(newPhase));
guard.addEvent("phase.enter");
auto const startTime = std::chrono::steady_clock::now();
try
{
auto result = doPhaseTransition(newPhase);
auto const duration = std::chrono::steady_clock::now() - startTime;
guard.setAttribute("xrpl.consensus.phase_duration_ms",
std::chrono::duration<double, std::milli>(duration).count());
guard.setOk();
return result;
}
catch (std::exception const& e)
{
guard.recordException(e);
throw;
}
}
void
RCLConsensusAdaptor::peerProposal(
NetClock::time_point const& now,
RCLCxPeerPos const& proposal)
{
// Extract trace context from proposal message
opentelemetry::context::Context parentCtx;
if (proposal.hasTraceContext())
{
parentCtx = telemetry::TraceContextPropagator::extract(
proposal.traceContext());
}
auto span = app_.getTelemetry().startSpan(
"consensus.proposal.receive",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.consensus.proposer",
toBase58(TokenType::NodePublic, proposal.nodeId()));
guard.setAttribute("xrpl.consensus.round",
static_cast<int64_t>(proposal.proposal().proposeSeq()));
// ... existing implementation ...
guard.setOk();
}
```
### 4.5.3 RPC Handler Instrumentation
```cpp
// src/xrpld/rpc/detail/ServerHandler.cpp (modified)
#include <xrpl/telemetry/TracingInstrumentation.h>
void
ServerHandler::onRequest(
http_request_type&& req,
std::function<void(http_response_type&&)>&& send)
{
// Extract trace context from HTTP headers (W3C Trace Context)
auto parentCtx = telemetry::TraceContextPropagator::extractFromHeaders(
[&req](std::string_view name) -> std::optional<std::string> {
            auto it = req.find(
                boost::beast::string_view(name.data(), name.size()));
if (it != req.end())
return std::string(it->value());
return std::nullopt;
});
// Start request span
auto span = app_.getTelemetry().startSpan(
"rpc.request",
parentCtx,
opentelemetry::trace::SpanKind::kServer);
telemetry::SpanGuard guard(span);
// Add HTTP attributes
guard.setAttribute("http.method", std::string(req.method_string()));
guard.setAttribute("http.target", std::string(req.target()));
guard.setAttribute("http.user_agent",
std::string(req[boost::beast::http::field::user_agent]));
auto const startTime = std::chrono::steady_clock::now();
try
{
// Parse and process request
auto const& body = req.body();
Json::Value jv;
Json::Reader reader;
if (!reader.parse(body, jv))
{
guard.setStatus(
opentelemetry::trace::StatusCode::kError,
"Invalid JSON");
sendError(send, "Invalid JSON");
return;
}
// Extract command name
std::string command = jv.isMember("command")
? jv["command"].asString()
: jv.isMember("method")
? jv["method"].asString()
: "unknown";
guard.setAttribute("xrpl.rpc.command", command);
// Create child span for command execution
auto cmdSpan = app_.getTelemetry().startSpan(
"rpc.command." + command);
{
telemetry::SpanGuard cmdGuard(cmdSpan);
// Execute RPC command
auto result = processRequest(jv);
// Record result attributes
if (result.isMember("status"))
{
cmdGuard.setAttribute("xrpl.rpc.status",
result["status"].asString());
}
if (result["status"].asString() == "error")
{
cmdGuard.setStatus(
opentelemetry::trace::StatusCode::kError,
result.isMember("error_message")
? result["error_message"].asString()
: "RPC error");
}
else
{
cmdGuard.setOk();
}
}
auto const duration = std::chrono::steady_clock::now() - startTime;
guard.setAttribute("http.duration_ms",
std::chrono::duration<double, std::milli>(duration).count());
// Inject trace context into response headers
http_response_type resp;
telemetry::TraceContextPropagator::injectToHeaders(
guard.context(),
[&resp](std::string_view name, std::string_view value) {
resp.set(std::string(name), std::string(value));
});
guard.setOk();
send(std::move(resp));
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.error()) << "RPC request failed: " << e.what();
sendError(send, e.what());
}
}
```
### 4.5.4 JobQueue Context Propagation
```cpp
// src/xrpld/core/JobQueue.h (modified)
#include <opentelemetry/context/context.h>
class Job
{
// ... existing members ...
// Captured trace context at job creation
opentelemetry::context::Context traceContext_;
public:
// Constructor captures current trace context
Job(JobType type, std::function<void()> func, ...)
: type_(type)
, func_(std::move(func))
, traceContext_(opentelemetry::context::RuntimeContext::GetCurrent())
// ... other initializations ...
{
}
// Get trace context for restoration during execution
opentelemetry::context::Context const&
traceContext() const { return traceContext_; }
};
// src/xrpld/core/JobQueue.cpp (modified)
void
Worker::run()
{
while (auto job = getJob())
{
// Restore trace context from job creation
auto token = opentelemetry::context::RuntimeContext::Attach(
job->traceContext());
// Start execution span
auto span = app_.getTelemetry().startSpan("job.execute");
telemetry::SpanGuard guard(span);
guard.setAttribute("xrpl.job.type", to_string(job->type()));
guard.setAttribute("xrpl.job.queue_ms", job->queueTimeMs());
guard.setAttribute("xrpl.job.worker", workerId_);
try
{
job->execute();
guard.setOk();
}
catch (std::exception const& e)
{
guard.recordException(e);
JLOG(journal_.error()) << "Job execution failed: " << e.what();
}
}
}
```
---
## 4.6 Span Flow Visualization
<div align="center">
```mermaid
flowchart TB
subgraph Client["External Client"]
submit["Submit TX"]
end
subgraph NodeA["rippled Node A"]
rpcA["rpc.request"]
cmdA["rpc.command.submit"]
txRecvA["tx.receive"]
txValA["tx.validate"]
txRelayA["tx.relay"]
end
subgraph NodeB["rippled Node B"]
txRecvB["tx.receive"]
txValB["tx.validate"]
txRelayB["tx.relay"]
end
subgraph NodeC["rippled Node C"]
txRecvC["tx.receive"]
consensusC["consensus.round"]
phaseC["consensus.phase.establish"]
end
submit --> rpcA
rpcA --> cmdA
cmdA --> txRecvA
txRecvA --> txValA
txValA --> txRelayA
txRelayA -.->|"TraceContext"| txRecvB
txRecvB --> txValB
txValB --> txRelayB
txRelayB -.->|"TraceContext"| txRecvC
txRecvC --> consensusC
consensusC --> phaseC
style Client fill:#334155,stroke:#1e293b,color:#fff
style NodeA fill:#1e3a8a,stroke:#172554,color:#fff
style NodeB fill:#064e3b,stroke:#022c22,color:#fff
style NodeC fill:#78350f,stroke:#451a03,color:#fff
style submit fill:#e2e8f0,stroke:#cbd5e1,color:#1e293b
style rpcA fill:#1d4ed8,stroke:#1e40af,color:#fff
style cmdA fill:#1d4ed8,stroke:#1e40af,color:#fff
style txRecvA fill:#047857,stroke:#064e3b,color:#fff
style txValA fill:#047857,stroke:#064e3b,color:#fff
style txRelayA fill:#047857,stroke:#064e3b,color:#fff
style txRecvB fill:#047857,stroke:#064e3b,color:#fff
style txValB fill:#047857,stroke:#064e3b,color:#fff
style txRelayB fill:#047857,stroke:#064e3b,color:#fff
style txRecvC fill:#047857,stroke:#064e3b,color:#fff
style consensusC fill:#fef3c7,stroke:#fde68a,color:#1e293b
style phaseC fill:#fef3c7,stroke:#fde68a,color:#1e293b
```
</div>
---
*Previous: [Implementation Strategy](./03-implementation-strategy.md)* | *Next: [Configuration Reference](./05-configuration-reference.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,936 +0,0 @@
# Configuration Reference
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Code Samples](./04-code-samples.md) | [Implementation Phases](./06-implementation-phases.md)
---
## 5.1 rippled Configuration
### 5.1.1 Configuration File Section
Add to `cfg/xrpld-example.cfg`:
```ini
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY (OpenTelemetry Distributed Tracing)
# ═══════════════════════════════════════════════════════════════════════════════
#
# Enables distributed tracing for transaction flow, consensus, and RPC calls.
# Traces are exported to an OpenTelemetry Collector using OTLP protocol.
#
# [telemetry]
#
# # Enable/disable telemetry (default: 0 = disabled)
# enabled=1
#
# # Exporter type: "otlp_grpc" (default), "otlp_http", or "none"
# exporter=otlp_grpc
#
# # OTLP endpoint (default: localhost:4317 for gRPC, localhost:4318 for HTTP)
# endpoint=localhost:4317
#
# # Use TLS for exporter connection (default: 0)
# use_tls=0
#
# # Path to CA certificate for TLS (optional)
# # tls_ca_cert=/path/to/ca.crt
#
# # Sampling ratio: 0.0-1.0 (default: 1.0 = 100% sampling)
# # Use lower values in production to reduce overhead
# sampling_ratio=0.1
#
# # Batch processor settings
# batch_size=512 # Spans per batch (default: 512)
# batch_delay_ms=5000 # Max delay before sending batch (default: 5000)
# max_queue_size=2048 # Max queued spans (default: 2048)
#
# # Component-specific tracing (default: all enabled except peer)
# trace_transactions=1 # Transaction relay and processing
# trace_consensus=1 # Consensus rounds and proposals
# trace_rpc=1 # RPC request handling
# trace_peer=0 # Peer messages (high volume, disabled by default)
# trace_ledger=1 # Ledger acquisition and building
#
# # Service identification (automatically detected if not specified)
# # service_name=rippled
# # service_instance_id=<node_public_key>
[telemetry]
enabled=0
```
### 5.1.2 Configuration Options Summary
| Option | Type | Default | Description |
| --------------------- | ------ | ---------------- | ----------------------------------------- |
| `enabled` | bool | `false` | Enable/disable telemetry |
| `exporter` | string | `"otlp_grpc"` | Exporter type: otlp_grpc, otlp_http, none |
| `endpoint` | string | `localhost:4317` | OTLP collector endpoint |
| `use_tls` | bool | `false` | Enable TLS for exporter connection |
| `tls_ca_cert` | string | `""` | Path to CA certificate file |
| `sampling_ratio` | float | `1.0` | Sampling ratio (0.0-1.0) |
| `batch_size` | uint | `512` | Spans per export batch |
| `batch_delay_ms` | uint | `5000` | Max delay before sending batch (ms) |
| `max_queue_size` | uint | `2048` | Maximum queued spans |
| `trace_transactions` | bool | `true` | Enable transaction tracing |
| `trace_consensus` | bool | `true` | Enable consensus tracing |
| `trace_rpc` | bool | `true` | Enable RPC tracing |
| `trace_peer` | bool | `false` | Enable peer message tracing (high volume) |
| `trace_ledger` | bool | `true` | Enable ledger tracing |
| `service_name` | string | `"rippled"` | Service name for traces |
| `service_instance_id` | string | `<node_pubkey>` | Instance identifier |
---
## 5.2 Configuration Parser
```cpp
// src/libxrpl/telemetry/TelemetryConfig.cpp
#include <xrpl/telemetry/Telemetry.h>
#include <xrpl/basics/Log.h>
namespace xrpl {
namespace telemetry {
Telemetry::Setup
setup_Telemetry(
Section const& section,
std::string const& nodePublicKey,
std::string const& version)
{
Telemetry::Setup setup;
// Basic settings
setup.enabled = section.value_or("enabled", false);
setup.serviceName = section.value_or("service_name", "rippled");
setup.serviceVersion = version;
setup.serviceInstanceId = section.value_or(
"service_instance_id", nodePublicKey);
// Exporter settings
setup.exporterType = section.value_or("exporter", "otlp_grpc");
if (setup.exporterType == "otlp_grpc")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4317");
else if (setup.exporterType == "otlp_http")
setup.exporterEndpoint = section.value_or("endpoint", "localhost:4318");
setup.useTls = section.value_or("use_tls", false);
setup.tlsCertPath = section.value_or("tls_ca_cert", "");
// Sampling
setup.samplingRatio = section.value_or("sampling_ratio", 1.0);
if (setup.samplingRatio < 0.0 || setup.samplingRatio > 1.0)
{
Throw<std::runtime_error>(
"telemetry.sampling_ratio must be between 0.0 and 1.0");
}
// Batch processor
setup.batchSize = section.value_or("batch_size", 512u);
setup.batchDelay = std::chrono::milliseconds{
section.value_or("batch_delay_ms", 5000u)};
setup.maxQueueSize = section.value_or("max_queue_size", 2048u);
// Component filtering
setup.traceTransactions = section.value_or("trace_transactions", true);
setup.traceConsensus = section.value_or("trace_consensus", true);
setup.traceRpc = section.value_or("trace_rpc", true);
setup.tracePeer = section.value_or("trace_peer", false);
setup.traceLedger = section.value_or("trace_ledger", true);
return setup;
}
} // namespace telemetry
} // namespace xrpl
```
---
## 5.3 Application Integration
### 5.3.1 ApplicationImp Changes
```cpp
// src/xrpld/app/main/Application.cpp (modified)
#include <xrpl/telemetry/Telemetry.h>
class ApplicationImp : public Application
{
// ... existing members ...
// Telemetry (must be constructed early, destroyed late)
std::unique_ptr<telemetry::Telemetry> telemetry_;
public:
ApplicationImp(...)
{
// Initialize telemetry early (before other components)
auto telemetrySection = config_->section("telemetry");
auto telemetrySetup = telemetry::setup_Telemetry(
telemetrySection,
toBase58(TokenType::NodePublic, nodeIdentity_.publicKey()),
BuildInfo::getVersionString());
// Set network attributes
telemetrySetup.networkId = config_->NETWORK_ID;
telemetrySetup.networkType = [&]() {
if (config_->NETWORK_ID == 0) return "mainnet";
if (config_->NETWORK_ID == 1) return "testnet";
if (config_->NETWORK_ID == 2) return "devnet";
return "custom";
}();
telemetry_ = telemetry::make_Telemetry(
telemetrySetup,
logs_->journal("Telemetry"));
// ... rest of initialization ...
}
void start() override
{
// Start telemetry first
if (telemetry_)
telemetry_->start();
// ... existing start code ...
}
void stop() override
{
// ... existing stop code ...
// Stop telemetry last (to capture shutdown spans)
if (telemetry_)
telemetry_->stop();
}
telemetry::Telemetry& getTelemetry() override
{
assert(telemetry_);
return *telemetry_;
}
};
```
### 5.3.2 Application Interface Addition
```cpp
// include/xrpl/app/main/Application.h (modified)
namespace telemetry { class Telemetry; }
class Application
{
public:
// ... existing virtual methods ...
/** Get the telemetry system for distributed tracing */
virtual telemetry::Telemetry& getTelemetry() = 0;
};
```
---
## 5.4 CMake Integration
### 5.4.1 Find OpenTelemetry Module
```cmake
# cmake/FindOpenTelemetry.cmake
# Find OpenTelemetry C++ SDK
#
# This module defines:
# OpenTelemetry_FOUND - System has OpenTelemetry
# OpenTelemetry::api - API library target
# OpenTelemetry::sdk - SDK library target
# OpenTelemetry::otlp_grpc_exporter - OTLP gRPC exporter target
# OpenTelemetry::otlp_http_exporter - OTLP HTTP exporter target
find_package(opentelemetry-cpp CONFIG QUIET)
if(opentelemetry-cpp_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets if not already created by config
if(NOT TARGET OpenTelemetry::api)
add_library(OpenTelemetry::api ALIAS opentelemetry-cpp::api)
endif()
if(NOT TARGET OpenTelemetry::sdk)
add_library(OpenTelemetry::sdk ALIAS opentelemetry-cpp::sdk)
endif()
if(NOT TARGET OpenTelemetry::otlp_grpc_exporter)
add_library(OpenTelemetry::otlp_grpc_exporter ALIAS
opentelemetry-cpp::otlp_grpc_exporter)
endif()
else()
# Try pkg-config fallback
find_package(PkgConfig QUIET)
if(PKG_CONFIG_FOUND)
pkg_check_modules(OTEL opentelemetry-cpp QUIET)
if(OTEL_FOUND)
set(OpenTelemetry_FOUND TRUE)
# Create imported targets from pkg-config
add_library(OpenTelemetry::api INTERFACE IMPORTED)
target_include_directories(OpenTelemetry::api INTERFACE
${OTEL_INCLUDE_DIRS})
endif()
endif()
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(OpenTelemetry
REQUIRED_VARS OpenTelemetry_FOUND)
```
### 5.4.2 CMakeLists.txt Changes
```cmake
# CMakeLists.txt (additions)
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY OPTIONS
# ═══════════════════════════════════════════════════════════════════════════════
option(XRPL_ENABLE_TELEMETRY
"Enable OpenTelemetry distributed tracing support" OFF)
if(XRPL_ENABLE_TELEMETRY)
find_package(OpenTelemetry REQUIRED)
# Define compile-time flag
add_compile_definitions(XRPL_ENABLE_TELEMETRY)
message(STATUS "OpenTelemetry tracing: ENABLED")
else()
message(STATUS "OpenTelemetry tracing: DISABLED")
endif()
# ═══════════════════════════════════════════════════════════════════════════════
# TELEMETRY LIBRARY
# ═══════════════════════════════════════════════════════════════════════════════
if(XRPL_ENABLE_TELEMETRY)
add_library(xrpl_telemetry
src/libxrpl/telemetry/Telemetry.cpp
src/libxrpl/telemetry/TelemetryConfig.cpp
src/libxrpl/telemetry/TraceContext.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/include
)
target_link_libraries(xrpl_telemetry
PUBLIC
OpenTelemetry::api
OpenTelemetry::sdk
OpenTelemetry::otlp_grpc_exporter
PRIVATE
xrpl_basics
)
# Add to main library dependencies
target_link_libraries(xrpld PRIVATE xrpl_telemetry)
else()
# Create null implementation library
add_library(xrpl_telemetry
src/libxrpl/telemetry/NullTelemetry.cpp
)
target_include_directories(xrpl_telemetry
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
endif()
```
---
## 5.5 OpenTelemetry Collector Configuration
### 5.5.1 Development Configuration
```yaml
# otel-collector-dev.yaml
# Minimal configuration for local development
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
# Console output for debugging
logging:
verbosity: detailed
sampling_initial: 5
sampling_thereafter: 200
  # Jaeger for trace visualization (recent collector releases removed the
  # legacy `jaeger` exporter, so export via Jaeger's native OTLP receiver)
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
      exporters: [logging, otlp/jaeger]
```
### 5.5.2 Production Configuration
```yaml
# otel-collector-prod.yaml
# Production configuration with filtering, sampling, and multiple backends
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
tls:
cert_file: /etc/otel/server.crt
key_file: /etc/otel/server.key
ca_file: /etc/otel/ca.crt
processors:
# Memory limiter to prevent OOM
memory_limiter:
check_interval: 1s
limit_mib: 1000
spike_limit_mib: 200
# Batch processing for efficiency
batch:
timeout: 5s
send_batch_size: 512
send_batch_max_size: 1024
# Tail-based sampling (keep errors and slow traces)
tail_sampling:
decision_wait: 10s
num_traces: 100000
expected_new_traces_per_sec: 1000
policies:
# Always keep error traces
- name: errors
type: status_code
status_code:
status_codes: [ERROR]
# Keep slow consensus rounds (>5s)
- name: slow-consensus
type: latency
latency:
threshold_ms: 5000
# Keep slow RPC requests (>1s)
- name: slow-rpc
type: and
and:
and_sub_policy:
- name: rpc-spans
type: string_attribute
string_attribute:
key: xrpl.rpc.command
values: [".*"]
enabled_regex_matching: true
- name: latency
type: latency
latency:
threshold_ms: 1000
# Probabilistic sampling for the rest
- name: probabilistic
type: probabilistic
probabilistic:
sampling_percentage: 10
# Attribute processing
attributes:
actions:
# Hash sensitive data
- key: xrpl.tx.account
action: hash
# Add deployment info
- key: deployment.environment
value: production
action: upsert
exporters:
# Grafana Tempo for long-term storage
otlp/tempo:
endpoint: tempo.monitoring:4317
tls:
insecure: false
ca_file: /etc/otel/tempo-ca.crt
# Elastic APM for correlation with logs
otlp/elastic:
endpoint: apm.elastic:8200
headers:
Authorization: "Bearer ${ELASTIC_APM_TOKEN}"
extensions:
health_check:
endpoint: 0.0.0.0:13133
zpages:
endpoint: 0.0.0.0:55679
service:
extensions: [health_check, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, tail_sampling, attributes, batch]
exporters: [otlp/tempo, otlp/elastic]
```
---
## 5.6 Docker Compose Development Environment
```yaml
# docker-compose-telemetry.yaml
version: '3.8'
services:
# OpenTelemetry Collector
otel-collector:
image: otel/opentelemetry-collector-contrib:0.92.0
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-dev.yaml:/etc/otel-collector-config.yaml:ro
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
- "13133:13133" # Health check
depends_on:
- jaeger
# Jaeger for trace visualization
jaeger:
image: jaegertracing/all-in-one:1.53
container_name: jaeger
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # UI
- "14250:14250" # gRPC
# Grafana for dashboards
grafana:
image: grafana/grafana:10.2.3
container_name: grafana
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
ports:
- "3000:3000"
depends_on:
- jaeger
# Prometheus for metrics (optional, for correlation)
prometheus:
image: prom/prometheus:v2.48.1
container_name: prometheus
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml:ro
ports:
- "9090:9090"
networks:
default:
name: rippled-telemetry
```
---
## 5.7 Configuration Architecture
```mermaid
flowchart TB
subgraph config["Configuration Sources"]
cfgFile["xrpld.cfg<br/>[telemetry] section"]
cmake["CMake<br/>XRPL_ENABLE_TELEMETRY"]
end
subgraph init["Initialization"]
parse["setup_Telemetry()"]
factory["make_Telemetry()"]
end
subgraph runtime["Runtime Components"]
tracer["TracerProvider"]
exporter["OTLP Exporter"]
processor["BatchProcessor"]
end
subgraph collector["Collector Pipeline"]
recv["Receivers"]
proc["Processors"]
exp["Exporters"]
end
cfgFile --> parse
cmake -->|"compile flag"| parse
parse --> factory
factory --> tracer
tracer --> processor
processor --> exporter
exporter -->|"OTLP"| recv
recv --> proc
proc --> exp
style config fill:#e3f2fd,stroke:#1976d2
style runtime fill:#e8f5e9,stroke:#388e3c
style collector fill:#fff3e0,stroke:#ff9800
```
---
## 5.8 Grafana Integration
Step-by-step instructions for integrating rippled traces with Grafana.
### 5.8.1 Data Source Configuration
#### Tempo (Recommended)
```yaml
# grafana/provisioning/datasources/tempo.yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
access: proxy
url: http://tempo:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: loki
tags: ['service.name', 'xrpl.tx.hash']
mappedTags: [{ key: 'trace_id', value: 'traceID' }]
mapTagNamesEnabled: true
filterByTraceID: true
serviceMap:
datasourceUid: prometheus
nodeGraph:
enabled: true
search:
hide: false
lokiSearch:
datasourceUid: loki
```
#### Jaeger
```yaml
# grafana/provisioning/datasources/jaeger.yaml
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
access: proxy
url: http://jaeger:16686
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ['service.name']
```
#### Elastic APM
```yaml
# grafana/provisioning/datasources/elastic-apm.yaml
apiVersion: 1
datasources:
- name: Elasticsearch-APM
type: elasticsearch
access: proxy
url: http://elasticsearch:9200
database: "apm-*"
jsonData:
esVersion: "8.0.0"
timeField: "@timestamp"
logMessageField: message
logLevelField: log.level
```
### 5.8.2 Dashboard Provisioning
```yaml
# grafana/provisioning/dashboards/dashboards.yaml
apiVersion: 1
providers:
- name: 'rippled-dashboards'
orgId: 1
folder: 'rippled'
folderUid: 'rippled'
type: file
disableDeletion: false
updateIntervalSeconds: 30
options:
path: /var/lib/grafana/dashboards/rippled
```
### 5.8.3 Example Dashboard: RPC Performance
```json
{
"title": "rippled RPC Performance",
"uid": "rippled-rpc-performance",
"panels": [
{
"title": "RPC Latency by Command",
"type": "heatmap",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | histogram_over_time(duration) by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "RPC Error Rate",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() by (span.xrpl.rpc.command)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Top 10 Slowest RPC Commands",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && span.xrpl.rpc.command != \"\"} | avg(duration) by (span.xrpl.rpc.command) | topk(10)"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 8 }
},
{
"title": "Recent Traces",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 16 }
}
]
}
```
### 5.8.4 Example Dashboard: Transaction Tracing
```json
{
"title": "rippled Transaction Tracing",
"uid": "rippled-tx-tracing",
"panels": [
{
"title": "Transaction Throughput",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | rate()"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 0 }
},
{
"title": "Cross-Node Relay Count",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.relay\"} | avg(span.xrpl.tx.relay_count)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 4 }
},
{
"title": "Transaction Validation Errors",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.validate\" && status.code=error}"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 4 }
}
]
}
```
### 5.8.5 TraceQL Query Examples
Common queries for rippled traces:
```
# Find all traces for a specific transaction hash
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
# Find slow RPC commands (>100ms)
{resource.service.name="rippled" && name=~"rpc.command.*"} | duration > 100ms
# Find consensus rounds taking >5 seconds
{resource.service.name="rippled" && name="consensus.round"} | duration > 5s
# Find failed transactions with error details
{resource.service.name="rippled" && name="tx.validate" && status.code=error}
# Find transactions relayed to many peers
{resource.service.name="rippled" && name="tx.relay"} | span.xrpl.tx.relay_count > 10
# Compare latency across nodes
{resource.service.name="rippled" && name="rpc.command.account_info"} | avg(duration) by (resource.service.instance.id)
```
### 5.8.6 Correlation with PerfLog
To correlate OpenTelemetry traces with existing PerfLog data:
**Step 1: Configure Loki to ingest PerfLog**
```yaml
# promtail-config.yaml
scrape_configs:
- job_name: rippled-perflog
static_configs:
- targets:
- localhost
labels:
job: rippled
__path__: /var/log/rippled/perf*.log
pipeline_stages:
- json:
expressions:
trace_id: trace_id
ledger_seq: ledger_seq
tx_hash: tx_hash
- labels:
trace_id:
ledger_seq:
tx_hash:
```
**Step 2: Add trace_id to PerfLog entries**
Modify PerfLog to include trace_id when available:
```cpp
// In PerfLog output, add trace_id from current span context
void logPerf(Json::Value& entry) {
auto span = opentelemetry::trace::GetSpan(
opentelemetry::context::RuntimeContext::GetCurrent());
if (span && span->GetContext().IsValid()) {
        char traceIdHex[32];  // ToLowerBase16 expects exactly 2 * TraceId::kSize chars
        span->GetContext().trace_id().ToLowerBase16(traceIdHex);
        entry["trace_id"] = std::string(traceIdHex, 32);
}
// ... existing logging
}
```
**Step 3: Configure Grafana trace-to-logs link**
In Tempo data source configuration, set up the derived field:
```yaml
jsonData:
tracesToLogs:
datasourceUid: loki
tags: ['trace_id', 'xrpl.tx.hash']
filterByTraceID: true
filterBySpanID: false
```
### 5.8.7 Correlation with Insight/StatsD Metrics
To correlate traces with existing Beast Insight metrics:
**Step 1: Export Insight metrics to Prometheus**
```yaml
# prometheus.yaml
scrape_configs:
- job_name: 'rippled-statsd'
static_configs:
- targets: ['statsd-exporter:9102']
```
**Step 2: Add exemplars to metrics**
The OpenTelemetry SDK can attach exemplars (trace IDs) to metrics exported through the Prometheus exporter (support varies by SDK version). Exemplars link metric spikes to specific traces.
**Step 3: Configure Grafana metric-to-trace link**
```yaml
# In Prometheus data source
jsonData:
exemplarTraceIdDestinations:
- name: trace_id
datasourceUid: tempo
```
**Step 4: Dashboard panel with exemplars**
```json
{
"title": "RPC Latency with Trace Links",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "histogram_quantile(0.99, rate(rippled_rpc_duration_seconds_bucket[5m]))",
"exemplar": true
}
]
}
```
This allows clicking on metric data points to jump directly to the related trace.
---
*Previous: [Code Samples](./04-code-samples.md)* | *Next: [Implementation Phases](./06-implementation-phases.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,537 +0,0 @@
# Implementation Phases
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Configuration Reference](./05-configuration-reference.md) | [Observability Backends](./07-observability-backends.md)
---
## 6.1 Phase Overview
```mermaid
gantt
title OpenTelemetry Implementation Timeline
dateFormat YYYY-MM-DD
axisFormat Week %W
section Phase 1
Core Infrastructure :p1, 2024-01-01, 2w
SDK Integration :p1a, 2024-01-01, 4d
Telemetry Interface :p1b, after p1a, 3d
Configuration & CMake :p1c, after p1b, 3d
Unit Tests :p1d, after p1c, 2d
section Phase 2
RPC Tracing :p2, after p1, 2w
HTTP Context Extraction :p2a, after p1, 2d
RPC Handler Instrumentation :p2b, after p2a, 4d
WebSocket Support :p2c, after p2b, 2d
Integration Tests :p2d, after p2c, 2d
section Phase 3
Transaction Tracing :p3, after p2, 2w
Protocol Buffer Extension :p3a, after p2, 2d
PeerImp Instrumentation :p3b, after p3a, 3d
Relay Context Propagation :p3c, after p3b, 3d
Multi-node Tests :p3d, after p3c, 2d
section Phase 4
Consensus Tracing :p4, after p3, 2w
Consensus Round Spans :p4a, after p3, 3d
Proposal Handling :p4b, after p4a, 3d
Validation Tests :p4c, after p4b, 4d
section Phase 5
Documentation & Deploy :p5, after p4, 1w
```
---
## 6.2 Phase 1: Core Infrastructure (Weeks 1-2)
**Objective**: Establish foundational telemetry infrastructure
### Tasks
| Task | Description | Effort | Risk |
| ---- | ----------------------------------------------------- | ------ | ------ |
| 1.1 | Add OpenTelemetry C++ SDK to Conan/CMake | 2d | Low |
| 1.2 | Implement `Telemetry` interface and factory | 2d | Low |
| 1.3 | Implement `SpanGuard` RAII wrapper | 1d | Low |
| 1.4 | Implement configuration parser | 1d | Low |
| 1.5 | Integrate into `ApplicationImp` | 1d | Medium |
| 1.6 | Add conditional compilation (`XRPL_ENABLE_TELEMETRY`) | 1d | Low |
| 1.7 | Create `NullTelemetry` no-op implementation | 0.5d | Low |
| 1.8 | Unit tests for core infrastructure | 1.5d | Low |
**Total Effort**: 10 days (2 developers)
### Exit Criteria
- [ ] OpenTelemetry SDK compiles and links
- [ ] Telemetry can be enabled/disabled via config
- [ ] Basic span creation works
- [ ] No performance regression when disabled
- [ ] Unit tests passing
---
## 6.3 Phase 2: RPC Tracing (Weeks 3-4)
**Objective**: Complete tracing for all RPC operations
### Tasks
| Task | Description | Effort | Risk |
| ---- | -------------------------------------------------- | ------ | ------ |
| 2.1 | Implement W3C Trace Context HTTP header extraction | 1d | Low |
| 2.2 | Instrument `ServerHandler::onRequest()` | 1d | Low |
| 2.3 | Instrument `RPCHandler::doCommand()` | 2d | Medium |
| 2.4 | Add RPC-specific attributes | 1d | Low |
| 2.5 | Instrument WebSocket handler | 1d | Medium |
| 2.6 | Integration tests for RPC tracing | 2d | Low |
| 2.7 | Performance benchmarks | 1d | Low |
| 2.8 | Documentation | 1d | Low |
**Total Effort**: 10 days
### Exit Criteria
- [ ] All RPC commands traced
- [ ] Trace context propagates from HTTP headers
- [ ] WebSocket and HTTP both instrumented
- [ ] <1ms overhead per RPC call
- [ ] Integration tests passing
---
## 6.4 Phase 3: Transaction Tracing (Weeks 5-6)
**Objective**: Trace transaction lifecycle across network
### Tasks
| Task | Description | Effort | Risk |
| ---- | --------------------------------------------- | ------ | ------ |
| 3.1 | Define `TraceContext` Protocol Buffer message | 1d | Low |
| 3.2 | Implement protobuf context serialization | 1d | Low |
| 3.3 | Instrument `PeerImp::handleTransaction()` | 2d | Medium |
| 3.4 | Instrument `NetworkOPs::submitTransaction()` | 1d | Medium |
| 3.5 | Instrument HashRouter integration | 1d | Medium |
| 3.6 | Implement relay context propagation | 2d | High |
| 3.7 | Integration tests (multi-node) | 2d | Medium |
| 3.8 | Performance benchmarks | 1d | Low |
**Total Effort**: 11 days
### Exit Criteria
- [ ] Transaction traces span across nodes
- [ ] Trace context in Protocol Buffer messages
- [ ] HashRouter deduplication visible in traces
- [ ] Multi-node integration tests passing
- [ ] <5% overhead on transaction throughput
---
## 6.5 Phase 4: Consensus Tracing (Weeks 7-8)
**Objective**: Full observability into consensus rounds
### Tasks
| Task | Description | Effort | Risk |
| ---- | ---------------------------------------------- | ------ | ------ |
| 4.1 | Instrument `RCLConsensusAdaptor::startRound()` | 1d | Medium |
| 4.2 | Instrument phase transitions | 2d | Medium |
| 4.3 | Instrument proposal handling | 2d | High |
| 4.4 | Instrument validation handling | 1d | Medium |
| 4.5 | Add consensus-specific attributes | 1d | Low |
| 4.6 | Correlate with transaction traces | 1d | Medium |
| 4.7 | Multi-validator integration tests | 2d | High |
| 4.8 | Performance validation | 1d | Medium |
**Total Effort**: 11 days
### Exit Criteria
- [ ] Complete consensus round traces
- [ ] Phase transitions visible
- [ ] Proposals and validations traced
- [ ] No impact on consensus timing
- [ ] Multi-validator test network validated
---
## 6.6 Phase 5: Documentation & Deployment (Week 9)
**Objective**: Production readiness
### Tasks
| Task | Description | Effort | Risk |
| ---- | ----------------------------- | ------ | ---- |
| 5.1 | Operator runbook | 1d | Low |
| 5.2 | Grafana dashboards | 1d | Low |
| 5.3 | Alert definitions | 0.5d | Low |
| 5.4 | Collector deployment examples | 0.5d | Low |
| 5.5 | Developer documentation | 1d | Low |
| 5.6 | Training materials | 0.5d | Low |
| 5.7 | Final integration testing | 0.5d | Low |
**Total Effort**: 5 days
---
## 6.7 Risk Assessment
```mermaid
quadrantChart
title Risk Assessment Matrix
x-axis Low Impact --> High Impact
y-axis Low Likelihood --> High Likelihood
quadrant-1 Monitor Closely
quadrant-2 Mitigate Immediately
quadrant-3 Accept Risk
quadrant-4 Plan Mitigation
SDK Compatibility: [0.25, 0.2]
Protocol Changes: [0.75, 0.65]
Performance Overhead: [0.65, 0.45]
Context Propagation: [0.5, 0.5]
Memory Leaks: [0.8, 0.2]
```
### Risk Details
| Risk | Likelihood | Impact | Mitigation |
| ------------------------------------ | ---------- | ------ | --------------------------------------- |
| Protocol changes break compatibility | Medium | High | Use high field numbers, optional fields |
| Performance overhead unacceptable | Medium | Medium | Sampling, conditional compilation |
| Context propagation complexity | Medium | Medium | Phased rollout, extensive testing |
| SDK compatibility issues | Low | Medium | Pin SDK version, fallback to no-op |
| Memory leaks in long-running nodes | Low | High | Memory profiling, bounded queues |
---
## 6.8 Success Metrics
| Metric | Target | Measurement |
| ------------------------ | ------------------------------ | --------------------- |
| Trace coverage | >95% of transactions | Sampling verification |
| CPU overhead | <3% | Benchmark tests |
| Memory overhead | <5 MB | Memory profiling |
| Latency impact (p99) | <2% | Performance tests |
| Trace completeness | >99% spans with required attrs | Validation script |
| Cross-node trace linkage | >90% of multi-hop transactions | Integration tests |
---
## 6.9 Effort Summary
<div align="center">
```mermaid
%%{init: {'pie': {'textPosition': 0.75}}}%%
pie showData
"Phase 1: Core Infrastructure" : 10
"Phase 2: RPC Tracing" : 10
"Phase 3: Transaction Tracing" : 11
"Phase 4: Consensus Tracing" : 11
"Phase 5: Documentation" : 5
```
**Total Effort Distribution (47 developer-days)**
</div>
### Resource Requirements
| Phase | Developers | Duration | Total Effort |
| --------- | ---------- | ----------- | ------------ |
| 1 | 2 | 2 weeks | 10 days |
| 2 | 1-2 | 2 weeks | 10 days |
| 3 | 2 | 2 weeks | 11 days |
| 4 | 2 | 2 weeks | 11 days |
| 5 | 1 | 1 week | 5 days |
| **Total** | **2** | **9 weeks** | **47 days** |
---
## 6.10 Quick Wins and Crawl-Walk-Run Strategy
This section outlines a prioritized approach to maximize ROI with minimal initial investment.
### 6.10.1 Crawl-Walk-Run Overview
<div align="center">
```mermaid
flowchart TB
subgraph crawl["🐢 CRAWL (Week 1-2)"]
direction LR
c1[Core SDK Setup] ~~~ c2[RPC Tracing Only] ~~~ c3[Single Node]
end
subgraph walk["🚶 WALK (Week 3-5)"]
direction LR
w1[Transaction Tracing] ~~~ w2[Cross-Node Context] ~~~ w3[Basic Dashboards]
end
subgraph run["🏃 RUN (Week 6-9)"]
direction LR
r1[Consensus Tracing] ~~~ r2[Full Correlation] ~~~ r3[Production Deploy]
end
crawl --> walk --> run
style crawl fill:#1b5e20,stroke:#0d3d14,color:#fff
style walk fill:#bf360c,stroke:#8c2809,color:#fff
style run fill:#0d47a1,stroke:#082f6a,color:#fff
style c1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style c3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style w1 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w2 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style w3 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style r1 fill:#0d47a1,stroke:#082f6a,color:#fff
style r2 fill:#0d47a1,stroke:#082f6a,color:#fff
style r3 fill:#0d47a1,stroke:#082f6a,color:#fff
```
</div>
### 6.10.2 Quick Wins (Immediate Value)
| Quick Win | Effort | Value | When to Deploy |
| ------------------------------ | -------- | ------ | -------------- |
| **RPC Command Tracing** | 2 days | High | Week 2 |
| **RPC Latency Histograms** | 0.5 days | High | Week 2 |
| **Error Rate Dashboard** | 0.5 days | Medium | Week 2 |
| **Transaction Submit Tracing** | 1 day | High | Week 3 |
| **Consensus Round Duration** | 1 day | Medium | Week 6 |
### 6.10.3 CRAWL Phase (Weeks 1-2)
**Goal**: Get basic tracing working with minimal code changes.
**What You Get**:
- RPC request/response traces for all commands
- Latency breakdown per RPC command
- Error visibility with stack traces
- Basic Grafana dashboard
**Code Changes**: ~15 lines in `ServerHandler.cpp`, ~40 lines in new telemetry module
**Why Start Here**:
- RPC is the lowest-risk, highest-visibility component
- Immediate value for debugging client issues
- No cross-node complexity
- Single file modification to existing code
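To make the ~15-line estimate above concrete, here is one hypothetical shape of the `ServerHandler` change. The signature, the `telemetry::SpanGuard` methods, and the attribute key are assumptions, not existing code:

```cpp
#include <xrpl/telemetry/SpanGuard.h>  // proposed header, section 3 layout

Json::Value
ServerHandler::processRequest(Json::Value const& request)
{
    auto const command = request["command"].asString();

    // Span opens here and ends automatically on every return path.
    telemetry::SpanGuard span("rpc.command." + command);
    span.setAttribute("xrpl.rpc.command", command);

    auto result = doCommand(request);  // existing dispatch, unchanged

    if (result.isMember("error"))
        span.setError(result["error"].asString());
    return result;
}
```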
### 6.10.4 WALK Phase (Weeks 3-5)
**Goal**: Add transaction lifecycle tracing across nodes.
**What You Get**:
- End-to-end transaction traces from submit to relay
- Cross-node correlation (see transaction path)
- HashRouter deduplication visibility
- Relay latency metrics
**Code Changes**: ~120 lines across 4 files, plus protobuf extension
**Why Do This Second**:
- Builds on RPC tracing (transactions submitted via RPC)
- Moderate complexity (requires context propagation)
- High value for debugging transaction issues
### 6.10.5 RUN Phase (Weeks 6-9)
**Goal**: Full observability including consensus.
**What You Get**:
- Complete consensus round visibility
- Phase transition timing
- Validator proposal tracking
- Full end-to-end traces (client → RPC → TX → consensus → ledger)
**Code Changes**: ~100 lines across 3 consensus files
**Why Do This Last**:
- Highest complexity (consensus is critical path)
- Requires thorough testing
- Lower relative value (consensus issues are rarer)
### 6.10.6 ROI Prioritization Matrix
```mermaid
quadrantChart
title Implementation ROI Matrix
x-axis Low Effort --> High Effort
y-axis Low Value --> High Value
quadrant-1 Quick Wins - Do First
quadrant-2 Major Projects - Plan Carefully
quadrant-3 Nice to Have - Optional
quadrant-4 Time Sinks - Avoid
RPC Tracing: [0.15, 0.9]
TX Submit Trace: [0.25, 0.85]
TX Relay Trace: [0.5, 0.8]
Consensus Trace: [0.7, 0.75]
Peer Message Trace: [0.85, 0.3]
Ledger Acquire: [0.55, 0.5]
```
---
## 6.11 Definition of Done
Clear, measurable criteria for each phase.
### 6.11.1 Phase 1: Core Infrastructure
| Criterion | Measurement | Target |
| --------------- | ---------------------------------------------------------- | ---------------------------- |
| SDK Integration | CMake configure and build succeed with `-DXRPL_ENABLE_TELEMETRY=ON` | ✅ Compiles |
| Runtime Toggle | `enabled=0` produces zero overhead | <0.1% CPU difference |
| Span Creation | Unit test creates and exports span | Span appears in Jaeger |
| Configuration | All config options parsed correctly | Config validation tests pass |
| Documentation | Developer guide exists | PR approved |
**Definition of Done**: All criteria met, PR merged, no regressions in CI.
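The "zero overhead" criterion has two halves: the runtime `enabled` flag and the compile-time switch. A sketch of the compile-time half, with macro names assumed (in the spirit of `TracingInstrumentation.h` from section 4):

```cpp
#if XRPL_ENABLE_TELEMETRY
// One span per scope; a real version would uniquify the variable name
// (e.g. via __LINE__ token pasting) to allow multiple spans per scope.
#define XRPL_TRACE_SPAN(name) telemetry::SpanGuard xrplSpanGuard_{(name)}
#else
#define XRPL_TRACE_SPAN(name) ((void)0)  // compiles away when disabled
#endif
```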
### 6.11.2 Phase 2: RPC Tracing
| Criterion | Measurement | Target |
| ------------------ | ---------------------------------- | -------------------------- |
| Coverage | All RPC commands instrumented | 100% of commands |
| Context Extraction | traceparent header propagates | Integration test passes |
| Attributes | Command, status, duration recorded | Validation script confirms |
| Performance | RPC latency overhead | <1ms p99 |
| Dashboard | Grafana dashboard deployed | Screenshot in docs |
**Definition of Done**: RPC traces visible in Jaeger/Tempo for all commands, dashboard shows latency distribution.
### 6.11.3 Phase 3: Transaction Tracing
| Criterion | Measurement | Target |
| ---------------- | ------------------------------- | ---------------------------------- |
| Local Trace | Submit → validate → TxQ traced | Single-node test passes |
| Cross-Node | Context propagates via protobuf | Multi-node test passes |
| Relay Visibility | relay_count attribute correct | Spot check 100 txs |
| HashRouter | Deduplication visible in trace | Duplicate txs show suppressed=true |
| Performance | TX throughput overhead | <5% degradation |
**Definition of Done**: Transaction traces span 3+ nodes in test network, performance within bounds.
### 6.11.4 Phase 4: Consensus Tracing
| Criterion | Measurement | Target |
| -------------------- | ----------------------------- | ------------------------- |
| Round Tracing | startRound creates root span | Unit test passes |
| Phase Visibility | All phases have child spans | Integration test confirms |
| Proposer Attribution | Proposer ID in attributes | Spot check 50 rounds |
| Timing Accuracy | Phase durations match PerfLog | <5% variance |
| No Consensus Impact | Round timing unchanged | Performance test passes |
**Definition of Done**: Consensus rounds fully traceable, no impact on consensus timing.
### 6.11.5 Phase 5: Production Deployment
| Criterion | Measurement | Target |
| ------------ | ---------------------------- | -------------------------- |
| Collector HA | Multiple collectors deployed | No single point of failure |
| Sampling | Tail sampling configured | 10% base + errors + slow |
| Retention | Data retained per policy | 7 days hot, 30 days warm |
| Alerting | Alerts configured | Error spike, high latency |
| Runbook | Operator documentation | Approved by ops team |
| Training | Team trained | Session completed |
**Definition of Done**: Telemetry running in production, operators trained, alerts active.
### 6.11.6 Success Metrics Summary
| Phase | Primary Metric | Secondary Metric | Deadline |
| ------- | ---------------------- | --------------------------- | ------------- |
| Phase 1 | SDK compiles and runs | Zero overhead when disabled | End of Week 2 |
| Phase 2 | 100% RPC coverage | <1ms latency overhead | End of Week 4 |
| Phase 3 | Cross-node traces work | <5% throughput impact | End of Week 6 |
| Phase 4 | Consensus fully traced | No consensus timing impact | End of Week 8 |
| Phase 5 | Production deployment | Operators trained | End of Week 9 |
---
## 6.12 Recommended Implementation Order
Based on ROI analysis, implement in this exact order:
```mermaid
flowchart TB
subgraph week1["Week 1"]
t1[1. OpenTelemetry SDK<br/>Conan/CMake integration]
t2[2. Telemetry interface<br/>SpanGuard, config]
end
subgraph week2["Week 2"]
t3[3. RPC ServerHandler<br/>instrumentation]
t4[4. Basic Jaeger setup<br/>for testing]
end
subgraph week3["Week 3"]
t5[5. Transaction submit<br/>tracing]
t6[6. Grafana dashboard<br/>v1]
end
subgraph week4["Week 4"]
t7[7. Protobuf context<br/>extension]
t8[8. PeerImp tx.relay<br/>instrumentation]
end
subgraph week5["Week 5"]
t9[9. Multi-node<br/>integration tests]
t10[10. Performance<br/>benchmarks]
end
subgraph week6_8["Weeks 6-8"]
t11[11. Consensus<br/>instrumentation]
t12[12. Full integration<br/>testing]
end
subgraph week9["Week 9"]
t13[13. Production<br/>deployment]
t14[14. Documentation<br/>& training]
end
t1 --> t2 --> t3 --> t4
t4 --> t5 --> t6
t6 --> t7 --> t8
t8 --> t9 --> t10
t10 --> t11 --> t12
t12 --> t13 --> t14
style week1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style week3 fill:#bf360c,stroke:#8c2809,color:#fff
style week4 fill:#bf360c,stroke:#8c2809,color:#fff
style week5 fill:#bf360c,stroke:#8c2809,color:#fff
style week6_8 fill:#0d47a1,stroke:#082f6a,color:#fff
style week9 fill:#4a148c,stroke:#2e0d57,color:#fff
style t1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t4 fill:#1b5e20,stroke:#0d3d14,color:#fff
style t5 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t6 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t7 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t8 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t9 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t10 fill:#ffe0b2,stroke:#ffcc80,color:#1e293b
style t11 fill:#0d47a1,stroke:#082f6a,color:#fff
style t12 fill:#0d47a1,stroke:#082f6a,color:#fff
style t13 fill:#4a148c,stroke:#2e0d57,color:#fff
style t14 fill:#4a148c,stroke:#2e0d57,color:#fff
```
---
*Previous: [Configuration Reference](./05-configuration-reference.md)* | *Next: [Observability Backends](./07-observability-backends.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,590 +0,0 @@
# Observability Backend Recommendations
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Implementation Phases](./06-implementation-phases.md) | [Appendix](./08-appendix.md)
---
## 7.1 Development/Testing Backends
| Backend | Pros | Cons | Use Case |
| ---------- | ------------------- | ----------------- | ----------------- |
| **Jaeger** | Easy setup, good UI | Limited retention | Local dev, CI |
| **Zipkin** | Simple, lightweight | Basic features | Quick prototyping |
### Quick Start with Jaeger
```bash
# Start Jaeger with OTLP support
docker run -d --name jaeger \
-e COLLECTOR_OTLP_ENABLED=true \
-p 16686:16686 \
-p 4317:4317 \
-p 4318:4318 \
jaegertracing/all-in-one:latest
```
---
## 7.2 Production Backends
| Backend | Pros | Cons | Use Case |
| ----------------- | ----------------------------------------- | ------------------ | --------------------------- |
| **Grafana Tempo** | Cost-effective, Grafana integration | Newer project | Most production deployments |
| **Elastic APM** | Full observability stack, log correlation | Resource intensive | Existing Elastic users |
| **Honeycomb** | Excellent query, high cardinality | SaaS cost | Deep debugging needs |
| **Datadog APM** | Full platform, easy setup | SaaS cost | Enterprise with budget |
### Backend Selection Flowchart
```mermaid
flowchart TD
start[Select Backend] --> budget{Budget<br/>Constraints?}
budget -->|Yes| oss[Open Source]
budget -->|No| saas{Prefer<br/>SaaS?}
oss --> existing{Existing<br/>Stack?}
existing -->|Grafana| tempo[Grafana Tempo]
existing -->|Elastic| elastic[Elastic APM]
existing -->|None| tempo
saas -->|Yes| enterprise{Enterprise<br/>Support?}
saas -->|No| oss
enterprise -->|Yes| datadog[Datadog APM]
enterprise -->|No| honeycomb[Honeycomb]
tempo --> final[Configure Collector]
elastic --> final
honeycomb --> final
datadog --> final
style start fill:#0f172a,stroke:#020617,color:#fff
style budget fill:#334155,stroke:#1e293b,color:#fff
style oss fill:#1e293b,stroke:#0f172a,color:#fff
style existing fill:#334155,stroke:#1e293b,color:#fff
style saas fill:#334155,stroke:#1e293b,color:#fff
style enterprise fill:#334155,stroke:#1e293b,color:#fff
style final fill:#0f172a,stroke:#020617,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style elastic fill:#bf360c,stroke:#8c2809,color:#fff
style honeycomb fill:#0d47a1,stroke:#082f6a,color:#fff
style datadog fill:#4a148c,stroke:#2e0d57,color:#fff
```
---
## 7.3 Recommended Production Architecture
```mermaid
flowchart TB
subgraph validators["Validator Nodes"]
v1[rippled<br/>Validator 1]
v2[rippled<br/>Validator 2]
end
subgraph stock["Stock Nodes"]
s1[rippled<br/>Stock 1]
s2[rippled<br/>Stock 2]
end
subgraph collector["OTel Collector Cluster"]
c1[Collector<br/>DC1]
c2[Collector<br/>DC2]
end
subgraph backends["Storage Backends"]
tempo[(Grafana<br/>Tempo)]
elastic[(Elastic<br/>APM)]
archive[(S3/GCS<br/>Archive)]
end
subgraph ui["Visualization"]
grafana[Grafana<br/>Dashboards]
end
v1 -->|OTLP| c1
v2 -->|OTLP| c1
s1 -->|OTLP| c2
s2 -->|OTLP| c2
c1 --> tempo
c1 --> elastic
c2 --> tempo
c2 --> archive
tempo --> grafana
elastic --> grafana
style validators fill:#b71c1c,stroke:#7f1d1d,color:#ffffff
style stock fill:#0d47a1,stroke:#082f6a,color:#ffffff
style collector fill:#bf360c,stroke:#8c2809,color:#ffffff
style backends fill:#1b5e20,stroke:#0d3d14,color:#ffffff
style ui fill:#4a148c,stroke:#2e0d57,color:#ffffff
```
---
## 7.4 Architecture Considerations
### 7.4.1 Collector Placement
| Strategy | Description | Pros | Cons |
| ------------- | -------------------- | ------------------------ | ----------------------- |
| **Sidecar** | Collector per node | Isolation, simple config | Resource overhead |
| **DaemonSet** | Collector per host | Shared resources | Complexity |
| **Gateway** | Central collector(s) | Centralized processing | Single point of failure |
**Recommendation**: Use **Gateway** pattern with regional collectors for rippled networks:
- One collector cluster per datacenter/region
- Tail-based sampling at collector level
- Multiple export destinations for redundancy
### 7.4.2 Sampling Strategy
```mermaid
flowchart LR
subgraph head["Head Sampling (Node)"]
hs[10% probabilistic]
end
subgraph tail["Tail Sampling (Collector)"]
ts1[Keep all errors]
ts2[Keep slow >5s]
ts3[Keep 10% rest]
end
head --> tail
ts1 --> final[Final Traces]
ts2 --> final
ts3 --> final
style head fill:#0d47a1,stroke:#082f6a,color:#fff
style tail fill:#1b5e20,stroke:#0d3d14,color:#fff
style hs fill:#0d47a1,stroke:#082f6a,color:#fff
style ts1 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts2 fill:#1b5e20,stroke:#0d3d14,color:#fff
style ts3 fill:#1b5e20,stroke:#0d3d14,color:#fff
style final fill:#bf360c,stroke:#8c2809,color:#fff
```
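On the node side, the 10% head sampler could be assembled from the SDK's stock samplers. A sketch, assuming the OpenTelemetry C++ SDK's `ParentBasedSampler` and `TraceIdRatioBasedSampler`:

```cpp
#include <opentelemetry/sdk/trace/samplers/parent.h>
#include <opentelemetry/sdk/trace/samplers/trace_id_ratio.h>

#include <memory>

// Head sampler for the node (tail sampling happens later, at the collector).
auto
makeHeadSampler(double ratio /* e.g. 0.10 */)
{
    namespace sdktrace = opentelemetry::sdk::trace;
    // Respect the caller's sampling decision when a parent span exists;
    // otherwise sample new root spans with the configured probability.
    return std::make_shared<sdktrace::ParentBasedSampler>(
        std::make_shared<sdktrace::TraceIdRatioBasedSampler>(ratio));
}
```

The parent-based wrapper matters for cross-node traces: once an upstream node samples a transaction, every downstream node keeps it, so multi-hop traces stay complete.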
### 7.4.3 Data Retention
| Environment | Hot Storage | Warm Storage | Cold Archive |
| ----------- | ----------- | ------------ | ------------ |
| Development | 24 hours | N/A | N/A |
| Staging | 7 days | N/A | N/A |
| Production | 7 days | 30 days | many years |
---
## 7.5 Integration Checklist
- [ ] Choose primary backend (Tempo recommended for cost/features)
- [ ] Deploy collector cluster with high availability
- [ ] Configure tail-based sampling for error/latency traces
- [ ] Set up Grafana dashboards for trace visualization
- [ ] Configure alerts for trace anomalies
- [ ] Establish data retention policies
- [ ] Test trace correlation with logs and metrics
---
## 7.6 Grafana Dashboard Examples
Pre-built dashboards for rippled observability.
### 7.6.1 Consensus Health Dashboard
```json
{
"title": "rippled Consensus Health",
"uid": "rippled-consensus-health",
"tags": ["rippled", "consensus", "tracing"],
"panels": [
{
"title": "Consensus Round Duration",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(duration) by (resource.service.instance.id)"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms",
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 4000 },
{ "color": "red", "value": 5000 }
]
}
}
},
"gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
},
{
"title": "Phase Duration Breakdown",
"type": "barchart",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=~\"consensus.phase.*\"} | avg(duration) by (name)"
}
],
"gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
},
{
"title": "Proposers per Round",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | avg(span.xrpl.consensus.proposers)"
}
],
"gridPos": { "h": 4, "w": 6, "x": 0, "y": 8 }
},
{
"title": "Recent Slow Rounds (>5s)",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"consensus.round\"} | duration > 5s"
}
],
"gridPos": { "h": 8, "w": 24, "x": 0, "y": 12 }
}
]
}
```
### 7.6.2 Node Overview Dashboard
```json
{
"title": "rippled Node Overview",
"uid": "rippled-node-overview",
"panels": [
{
"title": "Active Nodes",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"} | count_over_time() by (resource.service.instance.id) | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 0, "y": 0 }
},
{
"title": "Total Transactions (1h)",
"type": "stat",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | count()"
}
],
"gridPos": { "h": 4, "w": 4, "x": 4, "y": 0 }
},
{
"title": "Error Rate",
"type": "gauge",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && status.code=error} | rate() / {resource.service.name=\"rippled\"} | rate() * 100"
}
],
"fieldConfig": {
"defaults": {
"unit": "percent",
"max": 10,
"thresholds": {
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"gridPos": { "h": 4, "w": 4, "x": 8, "y": 0 }
},
{
"title": "Service Map",
"type": "nodeGraph",
"datasource": "Tempo",
"gridPos": { "h": 12, "w": 12, "x": 12, "y": 0 }
}
]
}
```
### 7.6.3 Alert Rules
```yaml
# grafana/provisioning/alerting/rippled-alerts.yaml
apiVersion: 1
groups:
- name: rippled-tracing-alerts
folder: rippled
interval: 1m
rules:
- uid: consensus-slow
title: Consensus Round Slow
condition: A
data:
- refId: A
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="consensus.round"} | avg(duration) > 5s'
for: 5m
annotations:
summary: Consensus rounds taking >5 seconds
description: "Consensus duration: {{ $value }}ms"
labels:
severity: warning
- uid: rpc-error-spike
title: RPC Error Rate Spike
condition: B
data:
- refId: B
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name=~"rpc.command.*" && status.code=error} | rate() > 0.05'
for: 2m
annotations:
summary: RPC error rate >5%
labels:
severity: critical
- uid: tx-throughput-drop
title: Transaction Throughput Drop
condition: C
data:
- refId: C
datasourceUid: tempo
model:
queryType: traceql
query: '{resource.service.name="rippled" && name="tx.receive"} | rate() < 10'
for: 10m
annotations:
summary: Transaction throughput below threshold
labels:
severity: warning
```
---
## 7.7 PerfLog and Insight Correlation
How to correlate OpenTelemetry traces with existing rippled observability.
### 7.7.1 Correlation Architecture
```mermaid
flowchart TB
subgraph rippled["rippled Node"]
otel[OpenTelemetry<br/>Spans]
perflog[PerfLog<br/>JSON Logs]
insight[Beast Insight<br/>StatsD Metrics]
end
subgraph collectors["Data Collection"]
otelc[OTel Collector]
promtail[Promtail/Fluentd]
statsd[StatsD Exporter]
end
subgraph storage["Storage"]
tempo[(Tempo)]
loki[(Loki)]
prom[(Prometheus)]
end
subgraph grafana["Grafana"]
traces[Trace View]
logs[Log View]
metrics[Metrics View]
corr[Correlation<br/>Panel]
end
otel -->|OTLP| otelc --> tempo
perflog -->|JSON| promtail --> loki
insight -->|StatsD| statsd --> prom
tempo --> traces
loki --> logs
prom --> metrics
traces --> corr
logs --> corr
metrics --> corr
style rippled fill:#0d47a1,stroke:#082f6a,color:#fff
style collectors fill:#bf360c,stroke:#8c2809,color:#fff
style storage fill:#1b5e20,stroke:#0d3d14,color:#fff
style grafana fill:#4a148c,stroke:#2e0d57,color:#fff
style otel fill:#0d47a1,stroke:#082f6a,color:#fff
style perflog fill:#0d47a1,stroke:#082f6a,color:#fff
style insight fill:#0d47a1,stroke:#082f6a,color:#fff
style otelc fill:#bf360c,stroke:#8c2809,color:#fff
style promtail fill:#bf360c,stroke:#8c2809,color:#fff
style statsd fill:#bf360c,stroke:#8c2809,color:#fff
style tempo fill:#1b5e20,stroke:#0d3d14,color:#fff
style loki fill:#1b5e20,stroke:#0d3d14,color:#fff
style prom fill:#1b5e20,stroke:#0d3d14,color:#fff
style traces fill:#4a148c,stroke:#2e0d57,color:#fff
style logs fill:#4a148c,stroke:#2e0d57,color:#fff
style metrics fill:#4a148c,stroke:#2e0d57,color:#fff
style corr fill:#4a148c,stroke:#2e0d57,color:#fff
```
### 7.7.2 Correlation Fields
| Source | Field | Link To | Purpose |
| ----------- | --------------------------- | ------------- | -------------------------- |
| **Trace** | `trace_id` | Logs | Find log entries for trace |
| **Trace** | `xrpl.tx.hash` | Logs, Metrics | Find TX-related data |
| **Trace** | `xrpl.consensus.ledger.seq` | Logs | Find ledger-related logs |
| **PerfLog** | `trace_id` (new) | Traces | Jump to trace from log |
| **PerfLog** | `ledger_seq` | Traces | Find consensus trace |
| **Insight** | `exemplar.trace_id` | Traces | Jump from metric spike |
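The one genuinely new field above is PerfLog's `trace_id`. A sketch of how a log record could pick it up, assuming the OpenTelemetry C++ API for reading the current span; the PerfLog write path itself is illustrative:

```cpp
#include <opentelemetry/trace/tracer.h>

#include <json/value.h>

#include <string>

// Append the active trace ID to a PerfLog-style JSON record so log lines
// can be joined to traces in Grafana (sketch; the real PerfLog path differs).
void
addTraceId(Json::Value& record)
{
    auto span = opentelemetry::trace::Tracer::GetCurrentSpan();
    auto ctx = span->GetContext();
    if (!ctx.IsValid())
        return;  // no active trace; emit the record unchanged
    char hex[32];
    ctx.trace_id().ToLowerBase16(hex);
    record["trace_id"] = std::string(hex, sizeof(hex));
}
```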
### 7.7.3 Example: Debugging a Slow Transaction
**Step 1: Find the trace**
```
# In Grafana Explore with Tempo
{resource.service.name="rippled" && span.xrpl.tx.hash="ABC123..."}
```
**Step 2: Get the trace_id from the trace view**
```
Trace ID: 4bf92f3577b34da6a3ce929d0e0e4736
```
**Step 3: Find related PerfLog entries**
```
# In Grafana Explore with Loki
{job="rippled"} |= "4bf92f3577b34da6a3ce929d0e0e4736"
```
**Step 4: Check Insight metrics for the time window**
```
# In Grafana with Prometheus
# '@' pins the query to an instant; substitute the trace's Unix timestamp
rate(rippled_tx_applied_total[1m] @ <unix_timestamp_from_trace>)
```
### 7.7.4 Unified Dashboard Example
```json
{
"title": "rippled Unified Observability",
"uid": "rippled-unified",
"panels": [
{
"title": "Transaction Latency (Traces)",
"type": "timeseries",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\" && name=\"tx.receive\"} | histogram_over_time(duration)"
}
],
"gridPos": { "h": 6, "w": 8, "x": 0, "y": 0 }
},
{
"title": "Transaction Rate (Metrics)",
"type": "timeseries",
"datasource": "Prometheus",
"targets": [
{
"expr": "rate(rippled_tx_received_total[5m])",
"legendFormat": "{{ instance }}"
}
],
"fieldConfig": {
"defaults": {
"links": [
{
"title": "View traces",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"{resource.service.name=\\\"rippled\\\" && name=\\\"tx.receive\\\"}\"}"
}
]
}
},
"gridPos": { "h": 6, "w": 8, "x": 8, "y": 0 }
},
{
"title": "Recent Logs",
"type": "logs",
"datasource": "Loki",
"targets": [
{
"expr": "{job=\"rippled\"} | json"
}
],
"gridPos": { "h": 6, "w": 8, "x": 16, "y": 0 }
},
{
"title": "Trace Search",
"type": "table",
"datasource": "Tempo",
"targets": [
{
"queryType": "traceql",
"query": "{resource.service.name=\"rippled\"}"
}
],
"fieldConfig": {
"overrides": [
{
"matcher": { "id": "byName", "options": "traceID" },
"properties": [
{
"id": "links",
"value": [
{
"title": "View trace",
"url": "/explore?left={\"datasource\":\"Tempo\",\"query\":\"${__value.raw}\"}"
},
{
"title": "View logs",
"url": "/explore?left={\"datasource\":\"Loki\",\"query\":\"{job=\\\"rippled\\\"} |= \\\"${__value.raw}\\\"\"}"
}
]
}
]
}
]
},
"gridPos": { "h": 12, "w": 24, "x": 0, "y": 6 }
}
]
}
```
---
*Previous: [Implementation Phases](./06-implementation-phases.md)* | *Next: [Appendix](./08-appendix.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,133 +0,0 @@
# Appendix
> **Parent Document**: [OpenTelemetryPlan.md](./OpenTelemetryPlan.md)
> **Related**: [Observability Backends](./07-observability-backends.md)
---
## 8.1 Glossary
| Term | Definition |
| --------------------- | ---------------------------------------------------------- |
| **Span** | A unit of work with start/end time, name, and attributes |
| **Trace** | A collection of spans representing a complete request flow |
| **Trace ID** | 128-bit unique identifier for a trace |
| **Span ID** | 64-bit unique identifier for a span within a trace |
| **Context** | Carrier for trace/span IDs across boundaries |
| **Propagator** | Component that injects/extracts context |
| **Sampler** | Decides which traces to record |
| **Exporter** | Sends spans to backend |
| **Collector** | Receives, processes, and forwards telemetry |
| **OTLP** | OpenTelemetry Protocol (wire format) |
| **W3C Trace Context** | Standard HTTP headers for trace propagation |
| **Baggage** | Key-value pairs propagated across service boundaries |
| **Resource** | Entity producing telemetry (service, host, etc.) |
| **Instrumentation** | Code that creates telemetry data |
### rippled-Specific Terms
| Term | Definition |
| ----------------- | -------------------------------------------------- |
| **Overlay** | P2P network layer managing peer connections |
| **Consensus** | XRP Ledger consensus algorithm (RCL) |
| **Proposal** | Validator's suggested transaction set for a ledger |
| **Validation** | Validator's signature on a closed ledger |
| **HashRouter** | Component for transaction deduplication |
| **JobQueue** | Thread pool for asynchronous task execution |
| **PerfLog** | Existing performance logging system in rippled |
| **Beast Insight** | Existing metrics framework in rippled |
---
## 8.2 Span Hierarchy Visualization
```mermaid
flowchart TB
subgraph trace["Trace: Transaction Lifecycle"]
rpc["rpc.submit<br/>(entry point)"]
validate["tx.validate"]
relay["tx.relay<br/>(parent span)"]
subgraph peers["Peer Spans"]
p1["peer.send<br/>Peer A"]
p2["peer.send<br/>Peer B"]
p3["peer.send<br/>Peer C"]
end
consensus["consensus.round"]
apply["tx.apply"]
end
rpc --> validate
validate --> relay
relay --> p1
relay --> p2
relay --> p3
p1 -.->|"context propagation"| consensus
consensus --> apply
style trace fill:#0f172a,stroke:#020617,color:#fff
style peers fill:#1e3a8a,stroke:#172554,color:#fff
style rpc fill:#1d4ed8,stroke:#1e40af,color:#fff
style validate fill:#047857,stroke:#064e3b,color:#fff
style relay fill:#047857,stroke:#064e3b,color:#fff
style p1 fill:#0e7490,stroke:#155e75,color:#fff
style p2 fill:#0e7490,stroke:#155e75,color:#fff
style p3 fill:#0e7490,stroke:#155e75,color:#fff
style consensus fill:#fef3c7,stroke:#fde68a,color:#1e293b
style apply fill:#047857,stroke:#064e3b,color:#fff
```
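In code, the fan-out above is one parent span plus one child per peer. A sketch against the OpenTelemetry C++ tracer API, with peer handling elided:

```cpp
#include <opentelemetry/trace/tracer.h>

namespace trace = opentelemetry::trace;

// One parent span ("tx.relay") with a "peer.send" child per peer, matching
// the hierarchy in the diagram above.
void
relayToPeers(
    opentelemetry::nostd::shared_ptr<trace::Tracer> tracer,
    int nPeers)
{
    auto relay = tracer->StartSpan("tx.relay");

    trace::StartSpanOptions opts;
    opts.parent = relay->GetContext();  // attach children to tx.relay
    for (int i = 0; i < nPeers; ++i)
    {
        auto send = tracer->StartSpan("peer.send", opts);
        // ... serialize the message and inject trace context here (the
        //     dotted "context propagation" edge in the diagram) ...
        send->End();
    }
    relay->End();
}
```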
---
## 8.3 References
### OpenTelemetry Resources
1. [OpenTelemetry C++ SDK](https://github.com/open-telemetry/opentelemetry-cpp)
2. [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel/)
3. [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/)
4. [OTLP Protocol Specification](https://opentelemetry.io/docs/specs/otlp/)
### Standards
5. [W3C Trace Context](https://www.w3.org/TR/trace-context/)
6. [W3C Baggage](https://www.w3.org/TR/baggage/)
7. [Protocol Buffers](https://protobuf.dev/)
### rippled Resources
8. [rippled Source Code](https://github.com/XRPLF/rippled)
9. [XRP Ledger Documentation](https://xrpl.org/docs/)
10. [rippled Overlay README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/overlay/README.md)
11. [rippled RPC README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/rpc/README.md)
12. [rippled Consensus README](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/app/consensus/README.md)
---
## 8.4 Version History
| Version | Date | Author | Changes |
| ------- | ---------- | ------ | --------------------------------- |
| 1.0 | 2026-02-12 | - | Initial implementation plan |
| 1.1 | 2026-02-13 | - | Refactored into modular documents |
---
## 8.5 Document Index
| Document | Description |
| ---------------------------------------------------------------- | ------------------------------------------ |
| [OpenTelemetryPlan.md](./OpenTelemetryPlan.md) | Master overview and executive summary |
| [01-architecture-analysis.md](./01-architecture-analysis.md) | rippled architecture and trace points |
| [02-design-decisions.md](./02-design-decisions.md) | SDK selection, exporters, span conventions |
| [03-implementation-strategy.md](./03-implementation-strategy.md) | Directory structure, performance analysis |
| [04-code-samples.md](./04-code-samples.md) | C++ code examples for all components |
| [05-configuration-reference.md](./05-configuration-reference.md) | rippled config, CMake, Collector configs |
| [06-implementation-phases.md](./06-implementation-phases.md) | Timeline, tasks, risks, success metrics |
| [07-observability-backends.md](./07-observability-backends.md) | Backend selection and architecture |
| [08-appendix.md](./08-appendix.md) | Glossary, references, version history |
---
*Previous: [Observability Backends](./07-observability-backends.md)* | *Back to: [Overview](./OpenTelemetryPlan.md)*


@@ -1,187 +0,0 @@
# [OpenTelemetry](00-tracing-fundamentals.md) Distributed Tracing Implementation Plan for rippled (xrpld)
## Executive Summary
This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. The plan addresses the unique challenges of a decentralized peer-to-peer system where trace context must propagate across network boundaries between independent nodes.
### Key Benefits
- **End-to-end transaction visibility**: Track transactions from submission through consensus to ledger inclusion
- **Consensus round analysis**: Understand timing and behavior of consensus phases across validators
- **RPC performance insights**: Identify slow handlers and optimize response times
- **Network topology understanding**: Visualize message propagation patterns between peers
- **Incident debugging**: Correlate events across distributed nodes during issues
### Estimated Performance Overhead
| Metric | Overhead | Notes |
| ------------- | ---------- | ----------------------------------- |
| CPU | 1-3% | Span creation and attribute setting |
| Memory | 2-5 MB | Batch buffer for pending spans |
| Network | 10-50 KB/s | Compressed OTLP export to collector |
| Latency (p99) | <2% | With proper sampling configuration |
---
## Document Structure
This implementation plan is organized into modular documents for easier navigation:
<div align="center">
```mermaid
flowchart TB
overview["📋 OpenTelemetryPlan.md<br/>(This Document)"]
subgraph analysis["Analysis & Design"]
arch["01-architecture-analysis.md"]
design["02-design-decisions.md"]
end
subgraph impl["Implementation"]
strategy["03-implementation-strategy.md"]
code["04-code-samples.md"]
config["05-configuration-reference.md"]
end
subgraph deploy["Deployment & Planning"]
phases["06-implementation-phases.md"]
backends["07-observability-backends.md"]
appendix["08-appendix.md"]
end
overview --> analysis
overview --> impl
overview --> deploy
arch --> design
design --> strategy
strategy --> code
code --> config
config --> phases
phases --> backends
backends --> appendix
style overview fill:#1b5e20,stroke:#0d3d14,color:#fff,stroke-width:2px
style analysis fill:#0d47a1,stroke:#082f6a,color:#fff
style impl fill:#bf360c,stroke:#8c2809,color:#fff
style deploy fill:#4a148c,stroke:#2e0d57,color:#fff
style arch fill:#0d47a1,stroke:#082f6a,color:#fff
style design fill:#0d47a1,stroke:#082f6a,color:#fff
style strategy fill:#bf360c,stroke:#8c2809,color:#fff
style code fill:#bf360c,stroke:#8c2809,color:#fff
style config fill:#bf360c,stroke:#8c2809,color:#fff
style phases fill:#4a148c,stroke:#2e0d57,color:#fff
style backends fill:#4a148c,stroke:#2e0d57,color:#fff
style appendix fill:#4a148c,stroke:#2e0d57,color:#fff
```
</div>
---
## Table of Contents
| Section | Document | Description |
| ------- | ---------------------------------------------------------- | ---------------------------------------------------------------------- |
| **1** | [Architecture Analysis](./01-architecture-analysis.md) | rippled component analysis, trace points, instrumentation priorities |
| **2** | [Design Decisions](./02-design-decisions.md) | SDK selection, exporters, span naming, attributes, context propagation |
| **3** | [Implementation Strategy](./03-implementation-strategy.md) | Directory structure, key principles, performance optimization |
| **4** | [Code Samples](./04-code-samples.md) | Complete C++ implementation examples for all components |
| **5** | [Configuration Reference](./05-configuration-reference.md) | rippled config, CMake integration, Collector configurations |
| **6** | [Implementation Phases](./06-implementation-phases.md) | 5-phase timeline, tasks, risks, success metrics |
| **7** | [Observability Backends](./07-observability-backends.md) | Backend selection guide and production architecture |
| **8** | [Appendix](./08-appendix.md) | Glossary, references, version history |
---
## 1. Architecture Analysis
The rippled node consists of several key components that require instrumentation for comprehensive distributed tracing. The main areas include the RPC server (HTTP/WebSocket), Overlay P2P network, Consensus mechanism (RCLConsensus), JobQueue for async task execution, and existing observability infrastructure (PerfLog, Insight/StatsD, Journal logging).
Key trace points span across transaction submission via RPC, peer-to-peer message propagation, consensus round execution, and ledger building. The implementation prioritizes high-value, low-risk components first: RPC handlers provide immediate value with minimal risk, while consensus tracing requires careful implementation to avoid timing impacts.
➡️ **[Read full Architecture Analysis](./01-architecture-analysis.md)**
---
## 2. Design Decisions
The OpenTelemetry C++ SDK is selected for its CNCF backing, active development, and native performance characteristics. Traces are exported via OTLP/gRPC (primary) or OTLP/HTTP (fallback) to an OpenTelemetry Collector, which provides flexible routing and sampling.
Span naming follows a hierarchical `<component>.<operation>` convention (e.g., `rpc.submit`, `tx.relay`, `consensus.round`). Context propagation uses W3C Trace Context headers for HTTP and embedded Protocol Buffer fields for P2P messages. The implementation coexists with existing PerfLog and Insight observability systems through correlation IDs.
➡️ **[Read full Design Decisions](./02-design-decisions.md)**
---
## 3. Implementation Strategy
The telemetry code is organized under `include/xrpl/telemetry/` for headers and `src/libxrpl/telemetry/` for implementation. Key principles include RAII-based span management via `SpanGuard`, conditional compilation with `XRPL_ENABLE_TELEMETRY`, and minimal runtime overhead through batch processing and efficient sampling.
Performance optimization strategies include probabilistic head sampling (10% default), tail-based sampling at the collector for errors and slow traces, batch export to reduce network overhead, and conditional instrumentation that compiles to no-ops when disabled.
➡️ **[Read full Implementation Strategy](./03-implementation-strategy.md)**
---
## 4. Code Samples
Complete C++ implementation examples are provided for all telemetry components:
- `Telemetry.h` - Core interface for tracer access and span creation
- `SpanGuard.h` - RAII wrapper for automatic span lifecycle management
- `TracingInstrumentation.h` - Macros for conditional instrumentation
- Protocol Buffer extensions for trace context propagation
- Module-specific instrumentation (RPC, Consensus, P2P, JobQueue)
➡️ **[View all Code Samples](./04-code-samples.md)**
---
## 5. Configuration Reference
Configuration is handled through the `[telemetry]` section in `xrpld.cfg` with options for enabling/disabling, exporter selection, endpoint configuration, sampling ratios, and component-level filtering. CMake integration includes a `XRPL_ENABLE_TELEMETRY` option for compile-time control.
OpenTelemetry Collector configurations are provided for development (with Jaeger) and production (with tail-based sampling, Tempo, and Elastic APM). Docker Compose examples enable quick local development environment setup.
➡️ **[View full Configuration Reference](./05-configuration-reference.md)**
---
## 6. Implementation Phases
The implementation spans 9 weeks across 5 phases:
| Phase | Duration | Focus | Key Deliverables |
| ----- | --------- | ------------------- | --------------------------------------------------- |
| 1 | Weeks 1-2 | Core Infrastructure | SDK integration, Telemetry interface, Configuration |
| 2 | Weeks 3-4 | RPC Tracing | HTTP context extraction, Handler instrumentation |
| 3 | Weeks 5-6 | Transaction Tracing | Protocol Buffer context, Relay propagation |
| 4 | Weeks 7-8 | Consensus Tracing | Round spans, Proposal/validation tracing |
| 5 | Week 9 | Documentation | Runbook, Dashboards, Training |
**Total Effort**: 47 developer-days with 2 developers
➡️ **[View full Implementation Phases](./06-implementation-phases.md)**
---
## 7. Observability Backends
For development and testing, Jaeger provides easy setup with a good UI. For production deployments, Grafana Tempo is recommended for its cost-effectiveness and Grafana integration, while Elastic APM is ideal for organizations with existing Elastic infrastructure.
The recommended production architecture uses a gateway collector pattern with regional collectors performing tail-based sampling, routing traces to multiple backends (Tempo for primary storage, Elastic for log correlation, S3/GCS for long-term archive).
➡️ **[View Observability Backend Recommendations](./07-observability-backends.md)**
---
## 8. Appendix
The appendix contains a glossary of OpenTelemetry and rippled-specific terms, references to external documentation and specifications, version history for this implementation plan, and a complete document index.
➡️ **[View Appendix](./08-appendix.md)**
---
*This document provides a comprehensive implementation plan for integrating OpenTelemetry distributed tracing into the rippled XRP Ledger node software. For detailed information on any section, follow the links to the corresponding sub-documents.*


@@ -42,7 +42,7 @@ For more information on responsible disclosure, please read this [Wikipedia arti
## Report Handling Process
Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic pre-commitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic precommitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
Once we receive a report, we:
@@ -78,61 +78,72 @@ To report a qualifying bug, please send a detailed report to:
| Email Address | bugs@ripple.com |
| :-----------: | :-------------------------------------------------- |
| Short Key ID | `0xA9F514E0` |
| Long Key ID | `0xD900855AA9F514E0` |
| Fingerprint | `B72C 0654 2F2A E250 2763 A268 D900 855A A9F5 14E0` |
| Short Key ID | `0xC57929BE` |
| Long Key ID | `0xCD49A0AFC57929BE` |
| Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGkSZAQBEACprU199OhgdsOsygNjiQV4msuN3vDOUooehL+NwfsGfW79Tbqq
Q2u7uQ3NZjW+M2T4nsDwuhkr7pe7xSReR5W8ssaczvtUyxkvbMClilcgZ2OSCAuC
N9tzJsqOqkwBvXoNXkn//T2jnPz0ZU2wSF+NrEibq5FeuyGdoX3yXXBxq9pW9HzK
HkQll63QSl6BzVSGRQq+B6lGgaZGLwf3mzmIND9Z5VGLNK2jKynyz9z091whNG/M
kV+E7/r/bujHk7WIVId07G5/COTXmSr7kFnNEkd2Umw42dkgfiNKvlmJ9M7c1wLK
KbL9Eb4ADuW6rRc5k4s1e6GT8R4/VPliWbCl9SE32hXH8uTkqVIFZP2eyM5WRRHs
aKzitkQG9UK9gcb0kdgUkxOvvgPHAe5IuZlcHFzU4y0dBbU1VEFWVpiLU0q+IuNw
5BRemeHc59YNsngkmAZ+/9zouoShRusZmC8Wzotv75C2qVBcjijPvmjWAUz0Zunm
Lsr+O71vqHE73pERjD07wuD/ISjiYRYYE/bVrXtXLZijC7qAH4RE3nID+2ojcZyO
/2jMQvt7un56RsGH4UBHi3aBHi9bUoDGCXKiQY981cEuNaOxpou7Mh3x/ONzzSvk
sTV6nl1LOZHykN1JyKwaNbTSAiuyoN+7lOBqbV04DNYAHL88PrT21P83aQARAQAB
tB1SaXBwbGUgTGFicyA8YnVnc0ByaXBwbGUuY29tPokCTgQTAQgAOBYhBLcsBlQv
KuJQJ2OiaNkAhVqp9RTgBQJpEmQEAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheA
AAoJENkAhVqp9RTgBzgP/i7y+aDWl1maig1XMdyb+o0UGusumFSW4Hmj278wlKVv
usgLPihYgHE0PKrv6WRyKOMC1tQEcYYN93M+OeQ1vFhS2YyURq6RCMmh4zq/awXG
uZbG36OURB5NH8lGBOHiN/7O+nY0CgenBT2JWm+GW3nEOAVOVm4+r5GlpPlv+Dp1
NPBThcKXFMnH73++NpSQoDzTfRYHPxhDAX3jkLi/moXfSanOLlR6l94XNNN0jBHW
Quao0rzf4WSXq9g6AS224xhAA5JyIcFl8TX7hzj5HaFn3VWo3COoDu4U7H+BM0fl
85yqiMQypp7EhN2gxpMMWaHY5TFM85U/bFXFYfEgihZ4/gt4uoIzsNI9jlX7mYvG
KFdDij+oTlRsuOxdIy60B3dKcwOH9nZZCz0SPsN/zlRWgKzK4gDKdGhFkU9OlvPu
94ZqscanoiWKDoZkF96+sjgfjkuHsDK7Lwc1Xi+T4drHG/3aVpkYabXox+lrKB/S
yxZjeqOIQzWPhnLgCaLyvsKo5hxKzL0w3eURu8F3IS7RgOOlljv4M+Me9sEVcdNV
aN3/tQwbaomSX1X5D5YXqhBwC3rU3wXwamsscRTGEpkV+JCX6KUqGP7nWmxCpAly
FL05XuOd5SVHJjXLeuje0JqLUpN514uL+bThWwDbDTdAdwW3oK/2WbXz7IfJRLBj
uQINBGkSZAQBEADdI3SL2F72qkrgFqXWE6HSRBu9bsAvTE5QrRPWk7ux6at537r4
S4sIw2dOwLvbyIrDgKNq3LQ5wCK88NO/NeCOFm4AiCJSl3pJHXYnTDoUxTrrxx+o
vSRI4I3fHEql/MqzgiAb0YUezjgFdh3vYheMPp/309PFbOLhiFqEcx80Mx5h06UH
gDzu1qNj3Ec+31NLic5zwkrAkvFvD54d6bqYR3SEgMau6aYEewpGHbWBi2pLqSi2
lQcAeOFixqGpTwDmAnYR8YtjBYepy0MojEAdTHcQQlOYSDk4q4elG+io2N8vECfU
rD6ORecN48GXdZINYWTAdslrUeanmBdgQrYkSpce8TSghgT9P01SNaXxmyaehVUO
lqI4pcg5G2oojAE8ncNS3TwDtt7daTaTC3bAdr4PXDVAzNAiewjMNZPB7xidkDGQ
Y4W1LxTMXyJVWxehYOH7tsbBRKninlfRnLgYzmtIbNRAAvNcsxU6ihv3AV0WFknN
YbSzotEv1Xq/5wk309x8zCDe+sP0cQicvbXafXmUzPAZzeqFg+VLFn7F9MP1WGlW
B1u7VIvBF1Mp9Nd3EAGBAoLRdRu+0dVWIjPTQuPIuD9cCatJA0wVaKUrjYbBMl88
a12LixNVGeSFS9N7ADHx0/o7GNT6l88YbaLP6zggUHpUD/bR+cDN7vllIQARAQAB
iQI2BBgBCAAgFiEEtywGVC8q4lAnY6Jo2QCFWqn1FOAFAmkSZAQCGwwACgkQ2QCF
Wqn1FOAfAA/8CYq4p0p4bobY20CKEMsZrkBTFJyPDqzFwMeTjgpzqbD7Y3Qq5QCK
OBbvY02GWdiIsNOzKdBxiuam2xYP9WHZj4y7/uWEvT0qlPVmDFu+HXjoJ43oxwFd
CUp2gMuQ4cSL3X94VRJ3BkVL+tgBm8CNY0vnTLLOO3kum/R69VsGJS1JSGUWjNM+
4qwS3mz+73xJu1HmERyN2RZF/DGIZI2PyONQQ6aH85G1Dd2ohu2/DBAkQAMBrPbj
FrbDaBLyFhODxU3kTWqnfLlaElSm2EGdIU2yx7n4BggEa//NZRMm5kyeo4vzhtlQ
YIVUMLAOLZvnEqDnsLKp+22FzNR/O+htBQC4lPywl53oYSALdhz1IQlcAC1ru5KR
XPzhIXV6IIzkcx9xNkEclZxmsuy5ERXyKEmLbIHAlzFmnrldlt2ZgXDtzaorLmxj
klKibxd5tF50qOpOivz+oPtFo7n+HmFa1nlVAMxlDCUdM0pEVeYDKI5zfVwalyhZ
NnjpakdZSXMwgc7NP/hH9buF35hKDp7EckT2y3JNYwHsDdy1icXN2q40XZw5tSIn
zkPWdu3OUY8PISohN6Pw4h0RH4ZmoX97E8sEfmdKaT58U4Hf2aAv5r9IWCSrAVqY
u5jvac29CzQR9Kal0A+8phHAXHNFD83SwzIC0syaT9ficAguwGH8X6Q=
=nGuD
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
-----END PGP PUBLIC KEY BLOCK-----
```


@@ -29,18 +29,18 @@
#
# Purpose
#
# This file documents and provides examples of all xrpld server process
# configuration options. When the xrpld server instance is launched, it
# This file documents and provides examples of all rippled server process
# configuration options. When the rippled server instance is launched, it
# looks for a file with the following name:
#
# xrpld.cfg
# rippled.cfg
#
# For more information on where the xrpld server instance searches for the
# For more information on where the rippled server instance searches for the
# file, visit:
#
# https://xrpl.org/commandline-usage.html#generic-options
#
# This file should be named xrpld.cfg. This file is UTF-8 with DOS, UNIX,
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
@@ -89,8 +89,8 @@
#
#
#
# xrpld offers various server protocols to clients making inbound
# connections. The listening ports xrpld uses are "universal" ports
# rippled offers various server protocols to clients making inbound
# connections. The listening ports rippled uses are "universal" ports
# which may be configured to handshake in one or more of the available
# supported protocols. These universal ports simplify administration:
# A single open port can be used for multiple protocols.
@@ -103,7 +103,7 @@
#
# A list of port names and key/value pairs. A port name must start with a
# letter and contain only letters and numbers. The name is not case-sensitive.
# For each name in this list, xrpld will look for a configuration file
# For each name in this list, rippled will look for a configuration file
# section with the same name and use it to create a listening port. The
# name is informational only; the choice of name does not affect the function
# of the listening port.
@@ -134,7 +134,7 @@
# ip = 127.0.0.1
# protocol = http
#
# When xrpld is used as a command line client (for example, issuing a
# When rippled is used as a command line client (for example, issuing a
# server stop command), the first port advertising the http or https
# protocol will be used to make the connection.
#
@@ -175,7 +175,7 @@
# same time. It is possible to have both Websockets and Secure Websockets
# together in one port.
#
# NOTE If no ports support the peer protocol, xrpld cannot
# NOTE If no ports support the peer protocol, rippled cannot
# receive incoming peer connections or become a superpeer.
#
# limit = <number>
@@ -194,7 +194,7 @@
# required. IP address restrictions, if any, will be checked in addition
# to the credentials specified here.
#
# When acting in the client role, xrpld will supply these credentials
# When acting in the client role, rippled will supply these credentials
# using HTTP's Basic Authentication headers when making outbound HTTP/S
# requests.
#
@@ -218,7 +218,7 @@
# administrative commands.
#
# NOTE A common configuration value for the admin field is "localhost".
# If you are listening on all IPv4/IPv6 addresses by specifying
# If you are listening on all IPv4/IPv6 addresses by specifing
# ip = :: then you can use admin = ::ffff:127.0.0.1,::1 to allow
# administrative access from both IPv4 and IPv6 localhost
# connections.
@@ -237,7 +237,7 @@
# WS, or WSS protocol interfaces. If administrative commands are
# disabled for a port, these credentials have no effect.
#
# When acting in the client role, xrpld will supply these credentials
# When acting in the client role, rippled will supply these credentials
# in the submitted JSON for any administrative command requests when
# invoking JSON-RPC commands on remote servers.
#
@@ -258,7 +258,7 @@
# resource controls will default to those for non-administrative users.
#
# The secure_gateway IP addresses are intended to represent
# proxies. Since xrpld trusts these hosts, they must be
# proxies. Since rippled trusts these hosts, they must be
# responsible for properly authenticating the remote user.
#
# If some IP addresses are included for both "admin" and
@@ -272,7 +272,7 @@
# Use the specified files when configuring SSL on the port.
#
# NOTE If no files are specified and secure protocols are selected,
# xrpld will generate an internal self-signed certificate.
# rippled will generate an internal self-signed certificate.
#
# The files have these meanings:
#
@@ -297,12 +297,12 @@
# Control the ciphers which the server will support over SSL on the port,
# specified using the OpenSSL "cipher list format".
#
# NOTE If unspecified, xrpld will automatically configure a modern
# NOTE If unspecified, rippled will automatically configure a modern
# cipher suite. This default suite should be widely supported.
#
# You should not modify this string unless you have a specific
# reason and cryptographic expertise. Incorrect modification may
# keep xrpld from connecting to other instances of xrpld or
# keep rippled from connecting to other instances of rippled or
# prevent RPC and WebSocket clients from connecting.
#
# send_queue_limit = [1..65535]
@@ -382,7 +382,7 @@
#-----------------
#
# These settings control security and access attributes of the Peer to Peer
# server section of the xrpld process. Peer Protocol implements the
# server section of the rippled process. Peer Protocol implements the
# Ripple Payment protocol. It is over peer connections that transactions
# and validations are passed from machine to machine, to determine the
# contents of validated ledgers.
@@ -396,7 +396,7 @@
# true - enables compression
# false - disables compression [default].
#
# The xrpld server can save bandwidth by compressing its peer-to-peer communications,
# The rippled server can save bandwidth by compressing its peer-to-peer communications,
# at a cost of greater CPU usage. If you enable link compression,
# the server automatically compresses communications with peer servers
# that also have link compression enabled.
@@ -432,7 +432,7 @@
#
# [ips_fixed]
#
# List of IP addresses or hostnames to which xrpld should always attempt to
# List of IP addresses or hostnames to which rippled should always attempt to
# maintain peer connections. This is useful for manually forming private
# networks, for example to configure a validation server that connects to the
# Ripple network through a public-facing server, or for building a set
@@ -573,7 +573,7 @@
#
# minimum_txn_in_ledger_standalone = <number>
#
# Like minimum_txn_in_ledger when xrpld is running in standalone
# Like minimum_txn_in_ledger when rippled is running in standalone
# mode. Default: 1000.
#
# target_txn_in_ledger = <number>
@@ -710,7 +710,7 @@
#
# [validator_token]
#
# This is an alternative to [validation_seed] that allows xrpld to perform
# This is an alternative to [validation_seed] that allows rippled to perform
# validation without having to store the validator keys on the network
# connected server. The field should contain a single token in the form of a
# base64-encoded blob.
@@ -745,7 +745,7 @@
#
# Specify the file by its name or path.
# Unless an absolute path is specified, it will be considered relative to
# the folder in which the xrpld.cfg file is located.
# the folder in which the rippled.cfg file is located.
#
# Examples:
# /home/ripple/validators.txt
@@ -840,7 +840,7 @@
#
# 0: Disable the ledger replay feature [default]
# 1: Enable the ledger replay feature. With this feature enabled, when
# acquiring a ledger from the network, a xrpld node only downloads
# acquiring a ledger from the network, a rippled node only downloads
# the ledger header and the transactions instead of the whole ledger.
# And the ledger is built by applying the transactions to the parent
# ledger.
@@ -851,7 +851,7 @@
#
#----------------
#
# The xrpld server instance uses HTTPS GET requests in a variety of
# The rippled server instance uses HTTPS GET requests in a variety of
# circumstances, including but not limited to contacting trusted domains to
# fetch information such as mapping an email address to a Ripple Payment
# Network address.
@@ -891,7 +891,7 @@
#
#------------
#
# xrpld creates 4 SQLite databases to hold bookkeeping information
# rippled creates 4 SQLite databases to hold bookkeeping information
# about transactions, local credentials, and various other things.
# It also creates the NodeDB, which holds all the objects that
# make up the current and historical ledgers.
@@ -902,7 +902,7 @@
# the performance of the server.
#
# Partial pathnames will be considered relative to the location of
# the xrpld.cfg file.
# the rippled.cfg file.
#
# [node_db] Settings for the Node Database (required)
#
@@ -920,11 +920,11 @@
# type = NuDB
#
# NuDB is a high-performance database written by Ripple Labs and optimized
# for xrpld and solid-state drives.
# for rippled and solid-state drives.
#
# NuDB maintains its high speed regardless of the amount of history
# stored. Online delete may be selected, but is not required. NuDB is
# available on all platforms that xrpld runs on.
# available on all platforms that rippled runs on.
#
# type = RocksDB
#
@@ -940,7 +940,23 @@
#
# path Location to store the database
#
# Optional keys for NuDB and RocksDB:
# Optional keys
#
# cache_size Size of cache for database records. Default is 16384.
# Setting this value to 0 will use the default value.
#
# cache_age Length of time in minutes to keep database records
# cached. Default is 5 minutes. Setting this value to
# 0 will use the default value.
#
# Note: if neither cache_size nor cache_age is
# specified, the cache for database records will not
# be created. If only one of cache_size or cache_age
# is specified, the cache will be created using the
# default value for the unspecified parameter.
#
# Note: the cache will not be created if online_delete
# is specified.
#
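# For illustration, a sketch of a stanza that gets a record cache (both
# keys present and online_delete absent); the values shown are the
# documented defaults:
#
#   [node_db]
#   type=NuDB
#   path=/var/lib/rippled/db/nudb
#   cache_size=16384
#   cache_age=5
#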
# fast_load Boolean. If set, load the last persisted ledger
# from disk upon process start before syncing to
@@ -948,6 +964,8 @@
# if sufficient IOPS capacity is available.
# Default 0.
#
# Optional keys for NuDB or RocksDB:
#
# earliest_seq The default is 32570 to match the XRP ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
@@ -957,47 +975,6 @@
# number of ledger records online. Must be greater
# than or equal to ledger_history.
#
# Optional keys for NuDB only:
#
# nudb_block_size EXPERIMENTAL: Block size in bytes for NuDB storage.
# Must be a power of 2 between 4096 and 32768. Default is 4096.
#
# This parameter controls the fundamental storage unit
# size for NuDB's internal data structures. The choice
# of block size can significantly impact performance
# depending on your storage hardware and filesystem:
#
# - 4096 bytes: Optimal for most standard SSDs and
# traditional filesystems (ext4, NTFS, HFS+).
# Provides good balance of performance and storage
# efficiency. Recommended for most deployments.
# Minimizes memory footprint and provides consistent
# low-latency access patterns across diverse hardware.
#
# - 8192-16384 bytes: May improve performance on
# high-end NVMe SSDs and copy-on-write filesystems
# like ZFS or Btrfs that benefit from larger block
# alignment. Can reduce metadata overhead for large
# databases. Offers better sequential throughput and
# reduced I/O operations at the cost of higher memory
# usage per operation.
#
# - 32768 bytes (32K): Maximum supported block size
# for high-performance scenarios with very fast
# storage. May increase memory usage and reduce
# efficiency for smaller databases. Best suited for
# enterprise environments with abundant RAM.
#
# Performance testing is recommended before deploying
# any non-default block size in production environments.
#
# Note: This setting cannot be changed after database
# creation without rebuilding the entire database.
# Choose carefully based on your hardware and expected
# database size.
#
# Example: nudb_block_size=4096
#
# These keys modify the behavior of online_delete, and thus are only
# relevant if online_delete is defined and non-zero:
#
@@ -1031,7 +1008,7 @@
#
# recovery_wait_seconds
# The online delete process checks periodically
# that xrpld is still in sync with the network,
# that rippled is still in sync with the network,
# and that the validated ledger is less than
# 'age_threshold_seconds' old. If not, then continue
# sleeping for this number of seconds and
@@ -1051,8 +1028,8 @@
# The server creates and maintains 4 to 5 bookkeeping SQLite databases in
# the 'database_path' location. If you omit this configuration setting,
# the server creates a directory called "db" located in the same place as
# your xrpld.cfg file.
# Partial pathnames are relative to the location of the xrpld executable.
# your rippled.cfg file.
# Partial pathnames are relative to the location of the rippled executable.
#
# [sqlite] Tuning settings for the SQLite databases (optional)
#
@@ -1102,7 +1079,7 @@
# The default is "wal", which uses a write-ahead
# log to implement database transactions.
# Alternately, "memory" saves disk I/O, but if
# xrpld crashes during a transaction, the
# rippled crashes during a transaction, the
# database is likely to be corrupted.
# See https://www.sqlite.org/pragma.html#pragma_journal_mode
# for more details about the available options.
@@ -1112,7 +1089,7 @@
# synchronous Valid values: off, normal, full, extra
# The default is "normal", which works well with
# the "wal" journal mode. Alternatively, "off"
# allows xrpld to continue as soon as data is
# allows rippled to continue as soon as data is
# passed to the OS, which can significantly
# increase speed, but risks data corruption if
# the host computer crashes before writing that
@@ -1126,7 +1103,7 @@
# The default is "file", which will use files
# for temporary database tables and indices.
# Alternatively, "memory" may save I/O, but
# xrpld does not currently use many, if any,
# rippled does not currently use many, if any,
# of these temporary objects.
# See https://www.sqlite.org/pragma.html#pragma_temp_store
# for more details about the available options.
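#
# For illustration, a [sqlite] stanza that simply spells out the
# documented defaults described above:
#
#   [sqlite]
#   journal_mode=wal
#   synchronous=normal
#   temp_store=file
#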
@@ -1155,7 +1132,7 @@
#
# These settings are designed to help server administrators diagnose
# problems, and obtain detailed information about the activities being
# performed by the xrpld process.
# performed by the rippled process.
#
#
#
@@ -1172,7 +1149,7 @@
#
# Configuration parameters for the Beast.Insight stats collection module.
#
# Insight is a module that collects information from the areas of xrpld
# Insight is a module that collects information from the areas of rippled
# that have instrumentation. The configuration parameters control where the
# collection metrics are sent. The parameters are expressed as key = value
# pairs with no white space. The main parameter is the choice of server:
@@ -1181,7 +1158,7 @@
#
# Choice of server to send metrics to. Currently the only choice is
# "statsd" which sends UDP packets to a StatsD daemon, which must be
# running while xrpld is running. More information on StatsD is
# running while rippled is running. More information on StatsD is
# available here:
# https://github.com/b/statsd_spec
#
@@ -1191,7 +1168,7 @@
# in the format, n.n.n.n:port.
#
# "prefix" A string prepended to each collected metric. This is used
# to distinguish between different running instances of xrpld.
# to distinguish between different running instances of rippled.
#
# If this section is missing, or the server type is unspecified or unknown,
# statistics are not collected or reported.
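#
# For illustration, a sketch wiring metrics to a local StatsD daemon;
# the address and prefix values are hypothetical:
#
#   [insight]
#   server=statsd
#   address=127.0.0.1:8125
#   prefix=mainnet-node1
#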
@@ -1218,7 +1195,7 @@
#
# Example:
# [perf]
# perf_log=/var/log/xrpld/perf.log
# perf_log=/var/log/rippled/perf.log
# log_interval=2
#
#-------------------------------------------------------------------------------
@@ -1228,7 +1205,7 @@
#----------
#
# The vote settings configure settings for the entire Ripple network.
# While a single instance of xrpld cannot unilaterally enforce network-wide
# While a single instance of rippled cannot unilaterally enforce network-wide
# settings, these choices become part of the instance's vote during the
# consensus process for each voting ledger.
#
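# For illustration only, a combined sketch with hypothetical values in
# drops (assuming the stanza is named [voting] as in the stock config);
# the file's own examples follow each parameter below:
#
#   [voting]
#   reference_fee = 10
#   account_reserve = 10000000
#   owner_reserve = 2000000
#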
@@ -1242,7 +1219,7 @@
# The reference transaction is the simplest form of transaction.
# It represents an XRP payment between two parties.
#
# If this parameter is unspecified, xrpld will use an internal
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1254,7 +1231,7 @@
# account's XRP balance that is at or below the reserve may only be
# spent on transaction fees, and not transferred out of the account.
#
# If this parameter is unspecified, xrpld will use an internal
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1266,7 +1243,7 @@
# each ledger item owned by the account. Ledger items an account may
# own include trust lines, open orders, and tickets.
#
# If this parameter is unspecified, xrpld will use an internal
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1308,7 +1285,7 @@
# tool instead.
#
# This flag has no effect on the "sign" and "sign_for" command line options
# that xrpld makes available.
# that rippled makes available.
#
# The default value of this field is "false"
#
@@ -1387,7 +1364,7 @@
#--------------------
#
# Administrators can use these values as a starting point for configuring
# their instance of xrpld, but each value should be checked to make sure
# their instance of rippled, but each value should be checked to make sure
# it meets the business requirements for the organization.
#
# Server
@@ -1397,7 +1374,7 @@
# "peer"
#
# Peer protocol open to everyone. This is required to accept
# incoming xrpld connections. This does not affect automatic
# incoming rippled connections. This does not affect automatic
# or manual outgoing Peer protocol connections.
#
# "rpc"
@@ -1414,7 +1391,7 @@
#
# ETL commands for Clio. We recommend setting secure_gateway
# in this section to a comma-separated list of the addresses
# of your Clio servers, in order to bypass xrpld's rate limiting.
# of your Clio servers, in order to bypass rippled's rate limiting.
#
# This port is commented out but can be enabled by removing
# the '#' from each corresponding line including the entry under [server]
@@ -1431,8 +1408,8 @@
# NOTE
#
# To accept connections on well known ports such as 80 (HTTP) or
# 443 (HTTPS), most operating systems will require xrpld to
# run with administrator privileges, or else xrpld will not start.
# 443 (HTTPS), most operating systems will require rippled to
# run with administrator privileges, or else rippled will not start.
[server]
port_rpc_admin_local
@@ -1478,7 +1455,7 @@ secure_gateway = 127.0.0.1
#-------------------------------------------------------------------------------
# This is the primary persistent datastore for xrpld. This includes transaction
# This is the primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found at https://xrpl.org/capacity-planning.html#node-db-type
# type=NuDB is recommended for non-validators with fast SSDs. Validators or
@@ -1493,19 +1470,18 @@ secure_gateway = 127.0.0.1
# deletion.
[node_db]
type=NuDB
path=/var/lib/xrpld/db/nudb
nudb_block_size=4096
path=/var/lib/rippled/db/nudb
online_delete=512
advisory_delete=0
[database_path]
/var/lib/xrpld/db
/var/lib/rippled/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/xrpld/debug.log
/var/log/rippled/debug.log
# To use the XRP test network
# (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
@@ -1515,7 +1491,7 @@ advisory_delete=0
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the xrpld.cfg file is located.
# folder in which the rippled.cfg file is located.
[validators_file]
validators.txt

View File

@@ -1,7 +1,7 @@
#
# Default validators.txt
#
# This file is located in the same folder as your xrpld.cfg file
# This file is located in the same folder as your rippled.cfg file
# and defines which validators your server trusts not to collude.
#
# This file is UTF-8 with DOS, UNIX, or Mac style line endings.

View File

@@ -1,29 +1,30 @@
macro (exclude_from_default target_)
set_target_properties(${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties(${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endmacro ()
macro (exclude_if_included target_)
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
exclude_from_default(${target_})
endif ()
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
exclude_from_default (${target_})
endif ()
endmacro ()
find_package(Git)
function (git_branch branch_val)
if (NOT GIT_FOUND)
return()
endif ()
set(_branch "")
execute_process(COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set(_branch ${_temp_branch})
endif ()
set(${branch_val} "${_branch}" PARENT_SCOPE)
if (NOT GIT_FOUND)
return ()
endif ()
set (_branch "")
execute_process (COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_branch ${_temp_branch})
endif ()
set (${branch_val} "${_branch}" PARENT_SCOPE)
endfunction ()
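# For reference, a usage sketch of the helper above; the variable name
# is illustrative:
git_branch(current_branch)
if (NOT current_branch STREQUAL "")
  message(STATUS "Configuring on branch: ${current_branch}")
endif ()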

View File

@@ -1,49 +0,0 @@
find_program(CCACHE_PATH "ccache")
if (NOT CCACHE_PATH)
return()
endif ()
# For Linux and macOS we can use the ccache binary directly.
if (NOT MSVC)
set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PATH}")
set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PATH}")
message(STATUS "Found ccache: ${CCACHE_PATH}")
return()
endif ()
# For Windows more effort is required. The code below is a modified version of
# https://github.com/ccache/ccache/wiki/MS-Visual-Studio#usage-with-cmake.
if ("${CCACHE_PATH}" MATCHES "chocolatey")
message(DEBUG "Ccache path: ${CCACHE_PATH}")
# Chocolatey uses a shim executable that we cannot use directly, so we have to find the executable it
# points to. If we cannot find the target executable then we cannot use ccache.
find_program(BASH_PATH "bash")
if (NOT BASH_PATH)
message(WARNING "Could not find bash.")
return()
endif ()
execute_process(COMMAND bash -c
"export LC_ALL='en_US.UTF-8'; ${CCACHE_PATH} --shimgen-noop | grep -oP 'path to executable: \\K.+' | head -c -1"
OUTPUT_VARIABLE CCACHE_PATH)
if (NOT CCACHE_PATH)
message(WARNING "Could not find ccache target.")
return()
endif ()
file(TO_CMAKE_PATH "${CCACHE_PATH}" CCACHE_PATH)
endif ()
message(STATUS "Found ccache: ${CCACHE_PATH}")
# Tell cmake to use ccache for compiling with Visual Studio.
file(COPY_FILE ${CCACHE_PATH} ${CMAKE_BINARY_DIR}/cl.exe ONLY_IF_DIFFERENT)
set(CMAKE_VS_GLOBALS "CLToolExe=cl.exe" "CLToolPath=${CMAKE_BINARY_DIR}" "TrackFileAccess=false"
"UseMultiToolTask=true")
# By default Visual Studio generators will use /Zi to capture debug information, which is not compatible with ccache, so
# tell it to use /Z7 instead.
if (MSVC)
foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
string(REPLACE "/Zi" "/Z7" ${var_} "${${var_}}")
endforeach ()
endif ()
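# For reference, the non-MSVC fast path that this module implemented can
# be reproduced in a few lines; a minimal sketch, not a drop-in
# replacement for the Windows shim handling above:
find_program(CCACHE_PATH "ccache")
if (CCACHE_PATH)
  set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PATH}")
  set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PATH}")
endif ()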

View File

@@ -172,47 +172,51 @@ include(CMakeParseArguments)
option(CODE_COVERAGE_VERBOSE "Verbose information" FALSE)
# Check prereqs
find_program(GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)
find_program( GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)
if (DEFINED CODE_COVERAGE_GCOV_TOOL)
set(GCOV_TOOL "${CODE_COVERAGE_GCOV_TOOL}")
elseif (DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}")
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if (APPLE)
execute_process(COMMAND xcrun -f llvm-cov OUTPUT_VARIABLE LLVMCOV_PATH OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program(LLVMCOV_PATH llvm-cov)
endif ()
if (LLVMCOV_PATH)
set(GCOV_TOOL "${LLVMCOV_PATH} gcov")
endif ()
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
find_program(GCOV_PATH gcov)
set(GCOV_TOOL "${GCOV_PATH}")
endif ()
if(DEFINED CODE_COVERAGE_GCOV_TOOL)
set(GCOV_TOOL "${CODE_COVERAGE_GCOV_TOOL}")
elseif(DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}")
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if(APPLE)
execute_process( COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVMCOV_PATH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
else()
find_program( LLVMCOV_PATH llvm-cov )
endif()
if(LLVMCOV_PATH)
set(GCOV_TOOL "${LLVMCOV_PATH} gcov")
endif()
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
find_program( GCOV_PATH gcov )
set(GCOV_TOOL "${GCOV_PATH}")
endif()
# Check supported compiler (Clang, GNU and Flang)
get_property(LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
foreach (LANG ${LANGUAGES})
if ("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if ("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif ()
elseif (NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU" AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES
"(LLVM)?[Ff]lang")
message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...")
endif ()
endforeach ()
foreach(LANG ${LANGUAGES})
if("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif()
elseif(NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU"
AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(LLVM)?[Ff]lang")
message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...")
endif()
endforeach()
set(COVERAGE_COMPILER_FLAGS "-g --coverage" CACHE INTERNAL "")
set(COVERAGE_COMPILER_FLAGS "-g --coverage"
CACHE INTERNAL "")
set(COVERAGE_CXX_COMPILER_FLAGS "")
set(COVERAGE_C_COMPILER_FLAGS "")
set(COVERAGE_CXX_LINKER_FLAGS "")
set(COVERAGE_C_LINKER_FLAGS "")
if (CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
include(CheckLinkerFlag)
@@ -223,51 +227,51 @@ if (CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
set(COVERAGE_C_LINKER_FLAGS ${COVERAGE_COMPILER_FLAGS})
check_cxx_compiler_flag(-fprofile-abs-path HAVE_cxx_fprofile_abs_path)
if (HAVE_cxx_fprofile_abs_path)
if(HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-abs-path")
endif ()
endif()
check_c_compiler_flag(-fprofile-abs-path HAVE_c_fprofile_abs_path)
if (HAVE_c_fprofile_abs_path)
if(HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_C_COMPILER_FLAGS} -fprofile-abs-path")
endif ()
endif()
check_linker_flag(CXX -fprofile-abs-path HAVE_cxx_linker_fprofile_abs_path)
if (HAVE_cxx_linker_fprofile_abs_path)
if(HAVE_cxx_linker_fprofile_abs_path)
set(COVERAGE_CXX_LINKER_FLAGS "${COVERAGE_CXX_LINKER_FLAGS} -fprofile-abs-path")
endif ()
endif()
check_linker_flag(C -fprofile-abs-path HAVE_c_linker_fprofile_abs_path)
if (HAVE_c_linker_fprofile_abs_path)
if(HAVE_c_linker_fprofile_abs_path)
set(COVERAGE_C_LINKER_FLAGS "${COVERAGE_C_LINKER_FLAGS} -fprofile-abs-path")
endif ()
endif()
check_cxx_compiler_flag(-fprofile-update=atomic HAVE_cxx_fprofile_update)
if (HAVE_cxx_fprofile_update)
if(HAVE_cxx_fprofile_update)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_CXX_COMPILER_FLAGS} -fprofile-update=atomic")
endif ()
endif()
check_c_compiler_flag(-fprofile-update=atomic HAVE_c_fprofile_update)
if (HAVE_c_fprofile_update)
if(HAVE_c_fprofile_update)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_C_COMPILER_FLAGS} -fprofile-update=atomic")
endif ()
endif()
check_linker_flag(CXX -fprofile-update=atomic HAVE_cxx_linker_fprofile_update)
if (HAVE_cxx_linker_fprofile_update)
if(HAVE_cxx_linker_fprofile_update)
set(COVERAGE_CXX_LINKER_FLAGS "${COVERAGE_CXX_LINKER_FLAGS} -fprofile-update=atomic")
endif ()
endif()
check_linker_flag(C -fprofile-update=atomic HAVE_c_linker_fprofile_update)
if (HAVE_c_linker_fprofile_update)
if(HAVE_c_linker_fprofile_update)
set(COVERAGE_C_LINKER_FLAGS "${COVERAGE_C_LINKER_FLAGS} -fprofile-update=atomic")
endif ()
endif()
endif ()
endif()
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if (NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
if(NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
message(WARNING "Code coverage results with an optimised (non-Debug) build may be misleading")
endif () # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
endif() # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
# Defines a target for running and collection code coverage information
# Builds dependencies, runs the given executable and outputs reports.
@@ -291,181 +295,193 @@ endif () # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
# )
# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the
# GCOVR command.
function (setup_target_for_coverage_gcovr)
function(setup_target_for_coverage_gcovr)
set(options NONE)
set(oneValueArgs BASE_DIRECTORY NAME FORMAT)
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if (NOT GCOV_TOOL)
if(NOT GCOV_TOOL)
message(FATAL_ERROR "Could not find gcov or llvm-cov tool! Aborting...")
endif ()
endif()
if (NOT GCOVR_PATH)
if(NOT GCOVR_PATH)
message(FATAL_ERROR "Could not find gcovr tool! Aborting...")
endif ()
endif()
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
if (DEFINED Coverage_BASE_DIRECTORY)
if(DEFINED Coverage_BASE_DIRECTORY)
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
else ()
else()
set(BASEDIR ${PROJECT_SOURCE_DIR})
endif ()
endif()
if (NOT DEFINED Coverage_FORMAT)
if(NOT DEFINED Coverage_FORMAT)
set(Coverage_FORMAT xml)
endif ()
endif()
if (NOT DEFINED Coverage_EXECUTABLE AND DEFINED Coverage_EXECUTABLE_ARGS)
if(NOT DEFINED Coverage_EXECUTABLE AND DEFINED Coverage_EXECUTABLE_ARGS)
message(FATAL_ERROR "EXECUTABLE_ARGS must not be set if EXECUTABLE is not set")
endif ()
endif()
if ("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
if("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...")
else ()
if ((Coverage_FORMAT STREQUAL "html-details") OR (Coverage_FORMAT STREQUAL "html-nested"))
else()
if((Coverage_FORMAT STREQUAL "html-details")
OR (Coverage_FORMAT STREQUAL "html-nested"))
set(GCOVR_OUTPUT_FILE ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html)
set(GCOVR_CREATE_FOLDER ${PROJECT_BINARY_DIR}/${Coverage_NAME})
elseif (Coverage_FORMAT STREQUAL "html-single")
elseif(Coverage_FORMAT STREQUAL "html-single")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.html)
elseif ((Coverage_FORMAT STREQUAL "json-summary") OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls"))
elseif((Coverage_FORMAT STREQUAL "json-summary")
OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls"))
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.json)
elseif (Coverage_FORMAT STREQUAL "txt")
elseif(Coverage_FORMAT STREQUAL "txt")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.txt)
elseif (Coverage_FORMAT STREQUAL "csv")
elseif(Coverage_FORMAT STREQUAL "csv")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.csv)
elseif (Coverage_FORMAT STREQUAL "lcov")
elseif(Coverage_FORMAT STREQUAL "lcov")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.lcov)
else ()
else()
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.xml)
endif ()
endif ()
endif()
endif()
if ((Coverage_FORMAT STREQUAL "cobertura") OR (Coverage_FORMAT STREQUAL "xml"))
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty)
if((Coverage_FORMAT STREQUAL "cobertura")
OR (Coverage_FORMAT STREQUAL "xml"))
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty )
set(Coverage_FORMAT cobertura) # overwrite xml
elseif (Coverage_FORMAT STREQUAL "sonarqube")
list(APPEND GCOVR_ADDITIONAL_ARGS --sonarqube "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "jacoco")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco-pretty)
elseif (Coverage_FORMAT STREQUAL "clover")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover-pretty)
elseif (Coverage_FORMAT STREQUAL "lcov")
list(APPEND GCOVR_ADDITIONAL_ARGS --lcov "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "json-summary")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "sonarqube")
list(APPEND GCOVR_ADDITIONAL_ARGS --sonarqube "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "jacoco")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco-pretty )
elseif(Coverage_FORMAT STREQUAL "clover")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --clover-pretty )
elseif(Coverage_FORMAT STREQUAL "lcov")
list(APPEND GCOVR_ADDITIONAL_ARGS --lcov "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "json-summary")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary-pretty)
elseif (Coverage_FORMAT STREQUAL "json-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --json "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "json-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --json "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --json-pretty)
elseif (Coverage_FORMAT STREQUAL "coveralls")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "coveralls")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls-pretty)
elseif (Coverage_FORMAT STREQUAL "csv")
list(APPEND GCOVR_ADDITIONAL_ARGS --csv "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "txt")
list(APPEND GCOVR_ADDITIONAL_ARGS --txt "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "html-single")
list(APPEND GCOVR_ADDITIONAL_ARGS --html "${GCOVR_OUTPUT_FILE}")
elseif(Coverage_FORMAT STREQUAL "csv")
list(APPEND GCOVR_ADDITIONAL_ARGS --csv "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "txt")
list(APPEND GCOVR_ADDITIONAL_ARGS --txt "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "html-single")
list(APPEND GCOVR_ADDITIONAL_ARGS --html "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --html-self-contained)
elseif (Coverage_FORMAT STREQUAL "html-nested")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-nested "${GCOVR_OUTPUT_FILE}")
elseif (Coverage_FORMAT STREQUAL "html-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-details "${GCOVR_OUTPUT_FILE}")
else ()
elseif(Coverage_FORMAT STREQUAL "html-nested")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-nested "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "html-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-details "${GCOVR_OUTPUT_FILE}" )
else()
message(FATAL_ERROR "Unsupported output style ${Coverage_FORMAT}! Aborting...")
endif ()
endif()
# Collect excludes (CMake 3.4+: Also compute absolute paths)
set(GCOVR_EXCLUDES "")
foreach (EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
if (CMAKE_VERSION VERSION_GREATER 3.4)
foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
if(CMAKE_VERSION VERSION_GREATER 3.4)
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
endif ()
endif()
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
endforeach ()
endforeach()
list(REMOVE_DUPLICATES GCOVR_EXCLUDES)
# Combine excludes to several -e arguments
set(GCOVR_EXCLUDE_ARGS "")
foreach (EXCLUDE ${GCOVR_EXCLUDES})
foreach(EXCLUDE ${GCOVR_EXCLUDES})
list(APPEND GCOVR_EXCLUDE_ARGS "-e")
list(APPEND GCOVR_EXCLUDE_ARGS "${EXCLUDE}")
endforeach ()
endforeach()
# Set up commands which will be run to generate coverage data
# If EXECUTABLE is not set, the user is expected to run the tests manually
# before running the coverage target NAME
if (DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS})
endif ()
if(DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
)
endif()
# Create folder
if (DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD ${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
endif ()
if(DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD
${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
endif()
# Running gcovr
set(GCOVR_CMD
${GCOVR_PATH}
--gcov-executable
${GCOV_TOOL}
--gcov-executable ${GCOV_TOOL}
--gcov-ignore-parse-errors=negative_hits.warn_once_per_file
-r
${BASEDIR}
-r ${BASEDIR}
${GCOVR_ADDITIONAL_ARGS}
${GCOVR_EXCLUDE_ARGS}
--object-directory=${PROJECT_BINARY_DIR})
--object-directory=${PROJECT_BINARY_DIR}
)
if (CODE_COVERAGE_VERBOSE)
if(CODE_COVERAGE_VERBOSE)
message(STATUS "Executed command report")
if (NOT "${GCOVR_EXEC_TESTS_CMD}" STREQUAL "")
if(NOT "${GCOVR_EXEC_TESTS_CMD}" STREQUAL "")
message(STATUS "Command to run tests: ")
string(REPLACE ";" " " GCOVR_EXEC_TESTS_CMD_SPACED "${GCOVR_EXEC_TESTS_CMD}")
message(STATUS "${GCOVR_EXEC_TESTS_CMD_SPACED}")
endif ()
endif()
if (NOT "${GCOVR_FOLDER_CMD}" STREQUAL "")
if(NOT "${GCOVR_FOLDER_CMD}" STREQUAL "")
message(STATUS "Command to create a folder: ")
string(REPLACE ";" " " GCOVR_FOLDER_CMD_SPACED "${GCOVR_FOLDER_CMD}")
message(STATUS "${GCOVR_FOLDER_CMD_SPACED}")
endif ()
endif()
message(STATUS "Command to generate gcovr coverage data: ")
string(REPLACE ";" " " GCOVR_CMD_SPACED "${GCOVR_CMD}")
message(STATUS "${GCOVR_CMD_SPACED}")
endif ()
endif()
add_custom_target(${Coverage_NAME}
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report.")
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report."
)
# Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD COMMAND echo
COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}")
endfunction () # setup_target_for_coverage_gcovr
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
COMMAND echo
COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}"
)
endfunction() # setup_target_for_coverage_gcovr
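# For reference, a usage sketch of the function above; the target name
# and format mirror the invocation in RippledCov.cmake further down this
# diff, while EXECUTABLE/EXECUTABLE_ARGS are illustrative:
setup_target_for_coverage_gcovr(
    NAME coverage
    FORMAT html-details
    EXECUTABLE ctest
    EXECUTABLE_ARGS -j4
    EXCLUDE "src/test"
    DEPENDENCIES rippled
)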
function (add_code_coverage_to_target name scope)
function(add_code_coverage_to_target name scope)
separate_arguments(COVERAGE_CXX_COMPILER_FLAGS NATIVE_COMMAND "${COVERAGE_CXX_COMPILER_FLAGS}")
separate_arguments(COVERAGE_C_COMPILER_FLAGS NATIVE_COMMAND "${COVERAGE_C_COMPILER_FLAGS}")
separate_arguments(COVERAGE_CXX_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_CXX_LINKER_FLAGS}")
separate_arguments(COVERAGE_C_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_C_LINKER_FLAGS}")
# Add compiler options to the target
target_compile_options(${name} ${scope} $<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>)
target_compile_options(${name} ${scope}
$<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>)
target_link_libraries(${name} ${scope} $<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS}>
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS}>)
endfunction () # add_code_coverage_to_target
target_link_libraries (${name} ${scope}
$<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS} gcov>
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS} gcov>
)
endfunction() # add_code_coverage_to_target

View File

@@ -1,58 +0,0 @@
# Shared detection of compiler, operating system, and architecture.
#
# This module centralizes environment detection so that other CMake modules can use the same variables instead of
# repeating checks on CMAKE_* and built-in platform variables.
# Only run once per configure step.
include_guard(GLOBAL)
# --------------------------------------------------------------------
# Compiler detection (C++)
# --------------------------------------------------------------------
set(is_clang FALSE)
set(is_gcc FALSE)
set(is_msvc FALSE)
set(is_xcode FALSE)
if (CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
set(is_clang TRUE)
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(is_gcc TRUE)
elseif (MSVC)
set(is_msvc TRUE)
else ()
message(FATAL_ERROR "Unsupported C++ compiler: ${CMAKE_CXX_COMPILER_ID}")
endif ()
# Xcode generator detection
if (CMAKE_GENERATOR STREQUAL "Xcode")
set(is_xcode TRUE)
endif ()
# --------------------------------------------------------------------
# Operating system detection
# --------------------------------------------------------------------
set(is_linux FALSE)
set(is_windows FALSE)
set(is_macos FALSE)
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(is_linux TRUE)
elseif (CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(is_windows TRUE)
elseif (CMAKE_SYSTEM_NAME STREQUAL "Darwin")
set(is_macos TRUE)
endif ()
# --------------------------------------------------------------------
# Architecture
# --------------------------------------------------------------------
set(is_amd64 FALSE)
set(is_arm64 FALSE)
if (CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
set(is_amd64 TRUE)
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64")
set(is_arm64 TRUE)
else ()
message(FATAL_ERROR "Unknown architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif ()
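# For reference, a consumer sketch using the variables this module
# defined (assuming it has already been include()d):
if (is_macos AND is_arm64)
  message(STATUS "Configuring for Apple Silicon")
elseif (is_linux AND is_amd64)
  message(STATUS "Configuring for Linux/x86_64")
endif ()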

54
cmake/RippleConfig.cmake Normal file
View File

@@ -0,0 +1,54 @@
include (CMakeFindDependencyMacro)
# need to represent system dependencies of the lib here
#[=========================================================[
Boost
#]=========================================================]
if (static OR APPLE OR MSVC)
set (Boost_USE_STATIC_LIBS ON)
endif ()
set (Boost_USE_MULTITHREADED ON)
if (static OR MSVC)
set (Boost_USE_STATIC_RUNTIME ON)
else ()
set (Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency (Boost
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
program_options
regex
system
thread)
#[=========================================================[
OpenSSL
#]=========================================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set (OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (APPLE)
find_program (homebrew brew)
if (homebrew)
execute_process (COMMAND ${homebrew} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static OR APPLE OR MSVC)
set (OPENSSL_USE_STATIC_LIBS ON)
endif ()
set (OPENSSL_MSVC_STATIC_RT ON)
find_dependency (OpenSSL 1.1.1 REQUIRED)
find_dependency (ZLIB)
find_dependency (date)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif ()
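# For reference, a downstream consumer sketch; the exported target name
# is an assumption (the install rules later in this diff export targets
# under the Ripple:: namespace):
find_package(Ripple REQUIRED)
target_link_libraries(my_app PRIVATE Ripple::xrpl.libxrpl)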

190
cmake/RippledCompiler.cmake Normal file
View File

@@ -0,0 +1,190 @@
#[===================================================================[
setup project-wide compiler settings
#]===================================================================]
#[=========================================================[
TODO some/most of these common settings belong in a
toolchain file, especially the ABI-impacting ones
#]=========================================================]
add_library (common INTERFACE)
add_library (Ripple::common ALIAS common)
# add a single global dependency on this interface lib
link_libraries (Ripple::common)
set_target_properties (common
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ON)
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions (common
INTERFACE
$<$<CONFIG:Debug>:DEBUG _DEBUG>
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>)
# ^^^^ NOTE: CMAKE release builds already have NDEBUG
# defined, so no need to add it explicitly except for
# this special case of (profile ON) and (assert OFF)
# -- presumably this is because we don't want profile
# builds asserting unless asserts were specifically
# requested
if (MSVC)
# remove existing exception flag since we set it to -EHa
string (REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
foreach (var_
CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
# also remove dynamic runtime
string (REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}")
# /ZI (Edit & Continue debugging information) is incompatible with Gy-
string (REPLACE "/ZI" "/Zi" ${var_} "${${var_}}")
# omit debug info completely under CI (not needed)
if (is_ci)
string (REPLACE "/Zi" " " ${var_} "${${var_}}")
endif ()
endforeach ()
target_compile_options (common
INTERFACE
-bigobj # Increase object file max size
-fp:precise # Floating point behavior
-Gd # __cdecl calling convention
-Gm- # Minimal rebuild: disabled
-Gy- # Function level linking: disabled
-MP # Multiprocessor compilation
-openmp- # pragma omp: disabled
-errorReport:none # No error reporting to Internet
-nologo # Suppress logo banner
-wd4018 # Disable signed/unsigned comparison warnings
-wd4244 # Disable float to int possible loss of data warnings
-wd4267 # Disable size_t to T possible loss of data warnings
-wd4800 # Disable C4800(int to bool performance)
-wd4503 # Decorated name length exceeded, name was truncated
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>
)
target_compile_definitions (common
INTERFACE
_WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
WIN32_LEAN_AND_MEAN
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>)
target_link_libraries (common
INTERFACE
-errorreport:none
-machine:X64)
else ()
target_compile_options (common
INTERFACE
-Wall
-Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra -Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
-fstack-protector
-Wno-sign-compare
-Wno-unused-but-set-variable
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>)
target_link_libraries (common
INTERFACE
-rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff:
# * static option set and
# * NOT APPLE (AppleClang does not support static libc/c++) and
# * NOT san (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${san}>>>:
-static-libstdc++
-static-libgcc
>)
endif ()
# Antithesis instrumentation will only be built and deployed on machines running Linux.
if (voidstar)
if (NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(FATAL_ERROR "Antithesis instrumentation requires Debug build type, aborting...")
elseif (NOT is_linux)
message(FATAL_ERROR "Antithesis instrumentation requires Linux, aborting...")
elseif (NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0))
message(FATAL_ERROR "Antithesis instrumentation requires Clang version 16 or later, aborting...")
endif ()
endif ()
if (use_mold)
# use mold linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "mold")
target_link_libraries (common INTERFACE -fuse-ld=mold)
endif ()
unset (LD_VERSION)
elseif (use_gold AND is_gcc)
# use gold linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
#[=========================================================[
NOTE: THE gold linker inserts -rpath as DT_RUNPATH by
default instead of DT_RPATH, so you might have slightly
unexpected runtime ld behavior if you were expecting
DT_RPATH. Specify --disable-new-dtags to gold if you do
not want the default DT_RUNPATH behavior. This rpath
treatment as well as static/dynamic selection means that
gold does not currently have ideal default behavior when
we are using jemalloc. Thus for simplicity we don't use
it when jemalloc is requested. An alternative to
disabling would be to figure out all the settings
required to make gold play nicely with jemalloc.
#]=========================================================]
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
target_link_libraries (common
INTERFACE
-fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5
DT_RUNPATH does not work great for transitive
dependencies (of which boost has a few) - so just
switch to DT_RPATH if doing dynamic linking with gold
#]=========================================================]
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>)
endif ()
unset (LD_VERSION)
elseif (use_lld)
# use lld linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD")
target_link_libraries (common INTERFACE -fuse-ld=lld)
endif ()
unset (LD_VERSION)
endif()
if (assert)
foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
STRING (REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}")
endforeach ()
endif ()
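# For reference, the linker probing above is driven by cache options; a
# configure-time sketch (option names taken from this file):
#
#   cmake -DCMAKE_BUILD_TYPE=Release -Duse_mold=ON -Dassert=ON ..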

208
cmake/RippledCore.cmake Normal file
View File

@@ -0,0 +1,208 @@
#[===================================================================[
Exported targets.
#]===================================================================]
include(target_protobuf_sources)
# Protocol buffers cannot participate in a unity build,
# because all the generated sources
# define a bunch of `static const` variables with the same names,
# so we just build them as a separate library.
add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/ripple.proto
)
file(GLOB_RECURSE protos "include/xrpl/proto/org/*.proto")
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
)
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE grpc
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
PLUGIN protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc
)
target_compile_options(xrpl.libpb
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
)
target_link_libraries(xrpl.libpb
PUBLIC
protobuf::libprotobuf
gRPC::grpc++
)
# TODO: Clean up the number of library targets later.
add_library(xrpl.imports.main INTERFACE)
target_link_libraries(xrpl.imports.main
INTERFACE
LibArchive::LibArchive
OpenSSL::Crypto
Ripple::boost
Ripple::opts
Ripple::syslibs
absl::random_random
date::date
ed25519::ed25519
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
include(add_module)
include(target_link_modules)
# Level 01
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC xrpl.imports.main)
# Level 02
add_module(xrpl basics)
target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)
# Level 03
add_module(xrpl json)
target_link_libraries(xrpl.libxrpl.json PUBLIC xrpl.libxrpl.basics)
add_module(xrpl crypto)
target_link_libraries(xrpl.libxrpl.crypto PUBLIC xrpl.libxrpl.basics)
# Level 04
add_module(xrpl protocol)
target_link_libraries(xrpl.libxrpl.protocol PUBLIC
xrpl.libxrpl.crypto
xrpl.libxrpl.json
)
# Level 05
add_module(xrpl resource)
target_link_libraries(xrpl.libxrpl.resource PUBLIC xrpl.libxrpl.protocol)
# Level 06
add_module(xrpl net)
target_link_libraries(xrpl.libxrpl.net PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
)
add_module(xrpl server)
target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol)
add_module(xrpl ledger)
target_link_libraries(xrpl.libxrpl.ledger PUBLIC
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
)
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
add_library(xrpl::libxrpl ALIAS xrpl.libxrpl)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp"
)
target_sources(xrpl.libxrpl PRIVATE ${sources})
target_link_modules(xrpl PUBLIC
basics
beast
crypto
json
protocol
resource
server
net
ledger
)
# All headers in libxrpl are in modules.
# Uncomment this stanza if you have not yet moved new headers into a module.
# target_include_directories(xrpl.libxrpl
# PRIVATE
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
# PUBLIC
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include>)
if(xrpld)
add_executable(rippled)
if(tests)
target_compile_definitions(rippled PUBLIC ENABLE_TESTS)
target_compile_definitions(rippled PRIVATE
UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE}
)
endif()
target_include_directories(rippled
PRIVATE
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp"
)
target_sources(rippled PRIVATE ${sources})
if(tests)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp"
)
target_sources(rippled PRIVATE ${sources})
endif()
target_link_libraries(rippled
Ripple::boost
Ripple::opts
Ripple::libs
xrpl.libxrpl
)
exclude_if_included(rippled)
# define a macro for tests that might need to
# be excluded or run differently in a CI environment
if(is_ci)
target_compile_definitions(rippled PRIVATE RIPPLED_RUNNING_IN_CI)
endif ()
if(voidstar)
target_compile_options(rippled
PRIVATE
-fsanitize-coverage=trace-pc-guard
)
# rippled requires access to antithesis-sdk-cpp implementation file
# antithesis_instrumentation.h, which is not exported as INTERFACE
target_include_directories(rippled
PRIVATE
${CMAKE_SOURCE_DIR}/external/antithesis-sdk
)
endif()
# any files that don't play well with unity should be added here
if(tests)
set_source_files_properties(
# these two seem to produce conflicts in beast teardown template methods
src/test/rpc/ValidatorRPC_test.cpp
src/test/ledger/Invariants_test.cpp
PROPERTIES SKIP_UNITY_BUILD_INCLUSION TRUE)
endif()
endif()
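# For reference, a sketch of how a new module would slot into the
# leveling scheme above; the module name is hypothetical:
add_module(xrpl telemetry)  # creates xrpl.libxrpl.telemetry
target_link_libraries(xrpl.libxrpl.telemetry PUBLIC xrpl.libxrpl.protocol)
# then add "telemetry" to the existing target_link_modules(xrpl PUBLIC ...) list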

41
cmake/RippledCov.cmake Normal file
View File

@@ -0,0 +1,41 @@
#[===================================================================[
coverage report target
#]===================================================================]
if(NOT coverage)
message(FATAL_ERROR "Code coverage not enabled! Aborting ...")
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
message(WARNING "Code coverage on Windows is not supported, ignoring 'coverage' flag")
return()
endif()
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
include(CodeCoverage)
# The instructions for these commands come from the `CodeCoverage` module,
# which was copied from https://github.com/bilke/cmake-modules, commit fb7d2a3,
# then locally changed (see CHANGES: section in `CodeCoverage.cmake`)
set(GCOVR_ADDITIONAL_ARGS ${coverage_extra_args})
if(NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
separate_arguments(GCOVR_ADDITIONAL_ARGS)
endif()
list(APPEND GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches -s
-j ${PROCESSOR_COUNT})
setup_target_for_coverage_gcovr(
NAME coverage
FORMAT ${coverage_format}
EXCLUDE "src/test" "src/tests" "include/xrpl/beast/test" "include/xrpl/beast/unit_test" "${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES rippled xrpl.tests
)
add_code_coverage_to_target(opts INTERFACE)
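# For reference, a configure-and-build sketch driving this file (option
# names as defined in RippledSettings.cmake below):
#
#   cmake -Dcoverage=ON -Dcoverage_format=html-details ..
#   cmake --build . --target coverage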

85
cmake/RippledDocs.cmake Normal file
View File

@@ -0,0 +1,85 @@
#[===================================================================[
docs target (optional)
#]===================================================================]
option(with_docs "Include the docs target?" FALSE)
if(NOT (with_docs OR only_docs))
return()
endif()
find_package(Doxygen)
if(NOT TARGET Doxygen::doxygen)
message(STATUS "doxygen executable not found -- skipping docs target")
return()
endif()
set(doxygen_output_directory "${CMAKE_BINARY_DIR}/docs")
set(doxygen_include_path "${CMAKE_CURRENT_SOURCE_DIR}/src")
set(doxygen_index_file "${doxygen_output_directory}/html/index.html")
set(doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile")
file(GLOB_RECURSE doxygen_input
docs/*.md
include/*.h
include/*.cpp
include/*.md
src/*.h
src/*.cpp
src/*.md
Builds/*.md
*.md)
list(APPEND doxygen_input
external/README.md
)
set(dependencies "${doxygen_input}" "${doxyfile}")
function(verbose_find_path variable name)
# find_path sets a CACHE variable, so don't try using a "local" variable.
find_path(${variable} "${name}" ${ARGN})
if(NOT ${variable})
message(NOTICE "could not find ${name}")
else()
message(STATUS "found ${name}: ${${variable}}/${name}")
endif()
endfunction()
verbose_find_path(doxygen_plantuml_jar_path plantuml.jar PATH_SUFFIXES share/plantuml)
verbose_find_path(doxygen_dot_path dot)
# https://en.cppreference.com/w/Cppreference:Archives
# https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step
set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(WRITE
"${download_script}"
"file(DOWNLOAD \
https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \
EXPECTED_HASH MD5=bda585f72fbca4b817b29a3d5746567b \
)\n \
execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \
)\n"
)
set(tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml")
add_custom_command(
OUTPUT "${tagfile}"
COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs"
)
set(doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/")
add_custom_command(
OUTPUT "${doxygen_index_file}"
COMMAND "${CMAKE_COMMAND}" -E env
"DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}"
"DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}"
"DOXYGEN_DOT_PATH=${doxygen_dot_path}"
"${DOXYGEN_EXECUTABLE}" "${doxyfile}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
DEPENDS "${dependencies}" "${tagfile}")
add_custom_target(docs
DEPENDS "${doxygen_index_file}"
SOURCES "${dependencies}")
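# For reference, a sketch of building the target defined above:
#
#   cmake -Dwith_docs=ON ..
#   cmake --build . --target docs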

View File

@@ -0,0 +1,83 @@
#[===================================================================[
install stuff
#]===================================================================]
include(create_symbolic_link)
install (
TARGETS
common
opts
ripple_syslibs
ripple_boost
xrpl.imports.main
xrpl.libpb
xrpl.libxrpl.basics
xrpl.libxrpl.beast
xrpl.libxrpl.crypto
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
xrpl.libxrpl.ledger
xrpl.libxrpl.server
xrpl.libxrpl.net
xrpl.libxrpl
antithesis-sdk-cpp
EXPORT RippleExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
install(
DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpl \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}/ripple)
")
install (EXPORT RippleExports
FILE RippleTargets.cmake
NAMESPACE Ripple::
DESTINATION lib/cmake/ripple)
include (CMakePackageConfigHelpers)
write_basic_package_version_file (
RippleConfigVersion.cmake
VERSION ${rippled_version}
COMPATIBILITY SameMajorVersion)
if (is_root_project AND TARGET rippled)
install (TARGETS rippled RUNTIME DESTINATION bin)
set_target_properties(rippled PROPERTIES INSTALL_RPATH_USE_LINK_PATH ON)
# sample configs should not overwrite existing files
# install if-not-exists workaround as suggested by
# https://cmake.org/Bug/view.php?id=12646
install(CODE "
macro (copy_if_not_exists SRC DEST NEWNAME)
if (NOT EXISTS \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
file (INSTALL FILE_PERMISSIONS OWNER_READ OWNER_WRITE DESTINATION \"\${CMAKE_INSTALL_PREFIX}/\${DEST}\" FILES \"\${SRC}\" RENAME \"\${NEWNAME}\")
else ()
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/rippled-example.cfg\" etc rippled.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(rippled${suffix} \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/xrpld${suffix})
")
endif ()
install (
FILES
${CMAKE_CURRENT_SOURCE_DIR}/cmake/RippleConfig.cmake
${CMAKE_CURRENT_BINARY_DIR}/RippleConfigVersion.cmake
DESTINATION lib/cmake/ripple)
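# For reference, an install sketch against the rules above; the prefix
# is illustrative:
#
#   cmake --install . --prefix /opt/ripple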

View File

@@ -0,0 +1,97 @@
#[===================================================================[
rippled compile options/settings via an interface library
#]===================================================================]
add_library (opts INTERFACE)
add_library (Ripple::opts ALIAS opts)
target_compile_definitions (opts
INTERFACE
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
target_compile_options (opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
target_link_libraries (opts
INTERFACE
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
if(jemalloc)
find_package(jemalloc REQUIRED)
target_compile_definitions(opts INTERFACE PROFILE_JEMALLOC)
target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif ()
if (san)
target_compile_options (opts
INTERFACE
# sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1>
${SAN_FLAG}
-fno-omit-frame-pointer)
target_compile_definitions (opts
INTERFACE
$<$<STREQUAL:${san},address>:SANITIZER=ASAN>
$<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN>
$<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>)
target_link_libraries (opts INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()
#[===================================================================[
rippled transitive library deps via an interface library
#]===================================================================]
add_library (ripple_syslibs INTERFACE)
add_library (Ripple::syslibs ALIAS ripple_syslibs)
target_link_libraries (ripple_syslibs
INTERFACE
$<$<BOOL:${MSVC}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${MSVC}>>:dl>
$<$<NOT:$<OR:$<BOOL:${MSVC}>,$<BOOL:${APPLE}>>>:rt>)
if (NOT MSVC)
set (THREADS_PREFER_PTHREAD_FLAG ON)
find_package (Threads)
target_link_libraries (ripple_syslibs INTERFACE Threads::Threads)
endif ()
add_library (ripple_libs INTERFACE)
add_library (Ripple::libs ALIAS ripple_libs)
target_link_libraries (ripple_libs INTERFACE Ripple::syslibs)

50
cmake/RippledSanity.cmake Normal file
View File

@@ -0,0 +1,50 @@
#[===================================================================[
sanity checks
#]===================================================================]
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
if (NOT is_multiconfig)
if (NOT CMAKE_BUILD_TYPE)
message (STATUS "Build type not specified - defaulting to Release")
set (CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE)
elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release))
# for simplicity, these are the only two config types we care about. Limiting
# the build types especially simplifies dealing with external project builds
message (FATAL_ERROR " *** Only Debug or Release build types are currently supported ***")
endif ()
endif ()
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
set (is_clang TRUE)
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
message (FATAL_ERROR "This project requires clang 16 or later")
endif ()
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set (is_gcc TRUE)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message (FATAL_ERROR "This project requires GCC 12 or later")
endif ()
endif ()
# check for in-source build and fail
if ("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message (FATAL_ERROR "Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory.")
endif ()
if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message (FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif ()
if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
message (FATAL_ERROR "Rippled requires a 64 bit target architecture.\n"
"The most likely cause of this warning is trying to build rippled with a 32-bit OS.")
endif ()
if (APPLE AND NOT HOMEBREW)
find_program (HOMEBREW brew)
endif ()

cmake/RippledSettings.cmake Normal file

@@ -0,0 +1,144 @@
#[===================================================================[
declare options and variables
#]===================================================================]
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
set (is_linux TRUE)
else()
set(is_linux FALSE)
endif()
if("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set(is_ci TRUE)
else()
set(is_ci FALSE)
endif()
get_directory_property(has_parent PARENT_DIRECTORY)
if(has_parent)
set(is_root_project OFF)
else()
set(is_root_project ON)
endif()
option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
if(tests)
# This setting allows a separate workflow to test fees other than the default of 10
if(NOT UNIT_TEST_REFERENCE_FEE)
set(UNIT_TEST_REFERENCE_FEE "10" CACHE STRING "")
endif()
endif()
option(unity "Creates a build using UNITY support in cmake." OFF)
if(unity)
if(NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif()
if(is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif()
if(is_gcc OR is_clang)
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
option(coverage "Generates coverage info." OFF)
option(profile "Add profiling flags" OFF)
set(coverage_format "html-details" CACHE STRING
"Output format of the coverage report.")
set(coverage_extra_args "" CACHE STRING
"Additional arguments to pass to gcovr.")
option(wextra "compile with extra gcc/clang warnings enabled" ON)
else()
set(profile OFF CACHE BOOL "gcc/clang only" FORCE)
set(coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif()
if(is_linux)
option(BUILD_SHARED_LIBS "build shared ripple libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF)
option(use_gold "enables detection of gold (binutils) linker" ON)
option(use_mold "enables detection of mold (binutils) linker" ON)
else()
# we are not ready to allow shared-libs on windows because it would require
# export declarations. On macos it's more feasible, but static openssl
# produces odd linker errors, thus we disable shared lib builds for now.
set(BUILD_SHARED_LIBS OFF CACHE BOOL "build shared ripple libraries - OFF for win/macos" FORCE)
set(static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set(perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set(use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
set(use_mold OFF CACHE BOOL "mold linker, linux only" FORCE)
endif()
if(is_clang)
option(use_lld "enables detection of lld linker" ON)
else()
set(use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif()
option(jemalloc "Enables jemalloc for heap profiling" OFF)
option(werr "treat warnings as errors" OFF)
option(local_protobuf
"Force a local build of protobuf instead of looking for an installed version." OFF)
option(local_grpc
"Force a local build of gRPC instead of looking for an installed version." OFF)
# this one is a string and therefore can't be an option
set(san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property(CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if(san)
string(TOLOWER ${san} san)
set(SAN_FLAG "-fsanitize=${san}")
set(SAN_LIB "")
if(is_gcc)
if(san STREQUAL "address")
set(SAN_LIB "asan")
elseif(san STREQUAL "thread")
set(SAN_LIB "tsan")
elseif(san STREQUAL "memory")
set(SAN_LIB "msan")
elseif(san STREQUAL "undefined")
set(SAN_LIB "ubsan")
endif()
endif()
set(_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set(CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag(${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set(CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if(NOT COMPILER_SUPPORTS_SAN)
message(FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif()
endif()
# the remaining options are obscure and rarely used
option(beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF)
option(single_io_service_thread
"Restricts the number of threads calling io_service::run to one. \
This can be useful when debugging."
OFF)
option(boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF)
if(WIN32)
option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else()
set(beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif()
if(coverage)
message(STATUS "coverage build requested - forcing Debug build")
set(CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif()
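Most of the options above are ordinary cache variables, so a whole configuration can be captured in a preload script instead of a long -D command line. A minimal sketch, assuming a typical out-of-source configure (the file name and chosen values are hypothetical; cmake -C loads the script before the first configure pass):

# settings.cmake -- hypothetical preload, used as: cmake -C settings.cmake <source-dir>
set(unity ON CACHE BOOL "Do a unity build" FORCE)
set(san "address" CACHE STRING "On gcc & clang, add sanitizer instrumentation" FORCE)
set(UNIT_TEST_REFERENCE_FEE "200" CACHE STRING "non-default reference fee" FORCE)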


@@ -0,0 +1,22 @@
option (validator_keys "Enables building of validator-keys-tool as a separate target (imported via FetchContent)" OFF)
if (validator_keys)
git_branch (current_branch)
# default to tracking VK master branch unless we are on release
if (NOT (current_branch STREQUAL "release"))
set (current_branch "master")
endif ()
message (STATUS "tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare (
validator_keys_src
GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}"
)
FetchContent_GetProperties (validator_keys_src)
if (NOT validator_keys_src_POPULATED)
message (STATUS "Pausing to download ValidatorKeys...")
FetchContent_Populate (validator_keys_src)
endif ()
add_subdirectory (${validator_keys_src_SOURCE_DIR} ${CMAKE_BINARY_DIR}/validator-keys)
endif ()


@@ -0,0 +1,15 @@
#[===================================================================[
read version from source
#]===================================================================]
file(STRINGS src/libxrpl/protocol/BuildInfo.cpp BUILD_INFO)
foreach(line_ ${BUILD_INFO})
if(line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set(rippled_version ${CMAKE_MATCH_1})
endif()
endforeach()
if(rippled_version)
message(STATUS "rippled version: ${rippled_version}")
else()
message(FATAL_ERROR "unable to determine rippled version")
endif()


@@ -1,13 +0,0 @@
include(isolate_headers)
function (xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name})
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp")
add_executable(${target} ${ARGN} ${sources})
isolate_headers(${target} "${CMAKE_SOURCE_DIR}" "${CMAKE_SOURCE_DIR}/tests/${name}" PRIVATE)
add_test(NAME ${target} COMMAND ${target})
endfunction ()


@@ -1,200 +0,0 @@
#[===================================================================[
setup project-wide compiler settings
#]===================================================================]
include(CompilationEnv)
#[=========================================================[
TODO some/most of these common settings belong in a
toolchain file, especially the ABI-impacting ones
#]=========================================================]
add_library(common INTERFACE)
add_library(Xrpl::common ALIAS common)
include(XrplSanitizers)
# add a single global dependency on this interface lib
link_libraries(Xrpl::common)
# Respect CMAKE_POSITION_INDEPENDENT_CODE setting (may be set by Conan toolchain)
if (NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif ()
set_target_properties(common PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE})
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions(
common
INTERFACE $<$<CONFIG:Debug>:DEBUG
_DEBUG>
#[===[
NOTE: CMAKE release builds already have NDEBUG defined, so no need to add it
explicitly except for the special case of (profile ON) and (assert OFF).
Presumably this is because we don't want profile builds asserting unless
asserts were specifically requested.
]===]
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED)
if (MSVC)
# remove existing exception flag since we set it to -EHa
string(REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
# also remove dynamic runtime
string(REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}")
# /ZI (Edit & Continue debugging information) is incompatible with Gy-
string(REPLACE "/ZI" "/Zi" ${var_} "${${var_}}")
# omit debug info completely under CI (not needed)
if (is_ci)
string(REPLACE "/Zi" " " ${var_} "${${var_}}")
string(REPLACE "/Z7" " " ${var_} "${${var_}}")
endif ()
endforeach ()
target_compile_options(
common
INTERFACE # Increase object file max size
-bigobj
# Floating point behavior
-fp:precise
# __cdecl calling convention
-Gd
# Minimal rebuild: disabled
-Gm-
# Function level linking: disabled
-Gy-
# Multiprocessor compilation
-MP
# pragma omp: disabled
-openmp-
# No error reporting to Internet
-errorReport:none
# Suppress login banner
-nologo
# Disable signed/unsigned comparison warnings
-wd4018
# Disable float to int possible loss of data warnings
-wd4244
# Disable size_t to T possible loss of data warnings
-wd4267
# Disable C4800(int to bool performance)
-wd4800
# Decorated name length exceeded, name was truncated
-wd4503
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>)
target_compile_definitions(
common
INTERFACE _WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
WIN32_LEAN_AND_MEAN
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>)
target_link_libraries(common INTERFACE -errorreport:none -machine:X64)
else ()
target_compile_options(
common
INTERFACE -Wall
-Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra
-Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
-fstack-protector
-Wno-sign-compare
-Wno-unused-but-set-variable
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>)
target_link_libraries(
common
INTERFACE -rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff:
#   * the static option is set, and
#   * NOT APPLE (AppleClang does not support static libc/c++), and
#   * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
-static-libstdc++
-static-libgcc
>)
endif ()
# Antithesis instrumentation will only be built and deployed using machines running Linux.
if (voidstar)
if (NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(FATAL_ERROR "Antithesis instrumentation requires Debug build type, aborting...")
elseif (NOT is_linux)
message(FATAL_ERROR "Antithesis instrumentation requires Linux, aborting...")
elseif (NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0))
message(FATAL_ERROR "Antithesis instrumentation requires Clang version 16 or later, aborting...")
endif ()
endif ()
if (use_mold)
# use mold linker if available
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "mold")
target_link_libraries(common INTERFACE -fuse-ld=mold)
endif ()
unset(LD_VERSION)
elseif (use_gold AND is_gcc)
# use gold linker if available
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
#[=========================================================[
NOTE: the gold linker inserts -rpath as DT_RUNPATH by
default instead of DT_RPATH, so you might have slightly
unexpected runtime ld behavior if you were expecting
DT_RPATH. Specify --disable-new-dtags to gold if you do
not want the default DT_RUNPATH behavior. This rpath
treatment as well as static/dynamic selection means that
gold does not currently have ideal default behavior when
we are using jemalloc. Thus for simplicity we don't use
it when jemalloc is requested. An alternative to
disabling would be to figure out all the settings
required to make gold play nicely with jemalloc.
#]=========================================================]
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
target_link_libraries(
common
INTERFACE -fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5
DT_RUNPATH does not work great for transitive
dependencies (of which boost has a few) - so just
switch to DT_RPATH if doing dynamic linking with gold
#]=========================================================]
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>)
endif ()
unset(LD_VERSION)
elseif (use_lld)
# use lld linker if available
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD")
target_link_libraries(common INTERFACE -fuse-ld=lld)
endif ()
unset(LD_VERSION)
endif ()
if (assert)
foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
string(REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}")
endforeach ()
endif ()
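The three linker probes above repeat one pattern: ask the compiler driver to print the chosen linker's version banner and match on it. A condensed sketch of that pattern (the helper name is hypothetical; the probe itself is the same one used above):

# hypothetical helper condensing the -fuse-ld probes above
function(try_linker flavor pattern)
  execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=${flavor} -Wl,--version
                  ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
  if("${LD_VERSION}" MATCHES "${pattern}")
    target_link_libraries(common INTERFACE -fuse-ld=${flavor})
  endif()
endfunction()
try_linker(mold "mold")   # mirrors the use_mold branch above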


@@ -1,52 +0,0 @@
include(CMakeFindDependencyMacro)
# need to represent system dependencies of the lib here
#[=========================================================[
Boost
#]=========================================================]
if (static OR APPLE OR MSVC)
set(Boost_USE_STATIC_LIBS ON)
endif ()
set(Boost_USE_MULTITHREADED ON)
if (static OR MSVC)
set(Boost_USE_STATIC_RUNTIME ON)
else ()
set(Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency(Boost
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
program_options
regex
system
thread)
#[=========================================================[
OpenSSL
#]=========================================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set(OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (APPLE)
find_program(homebrew brew)
if (homebrew)
execute_process(COMMAND ${homebrew} --prefix openssl OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
endif ()
file(TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static OR APPLE OR MSVC)
set(OPENSSL_USE_STATIC_LIBS ON)
endif ()
set(OPENSSL_MSVC_STATIC_RT ON)
find_dependency(OpenSSL REQUIRED)
find_dependency(ZLIB)
find_dependency(date)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif ()
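A minimal consumer sketch for this config file, assuming it is installed as RippleConfig.cmake as shown by the install rules earlier (the app target is hypothetical, and Ripple::libs is taken from the aliases defined above; whether the installed package exports it under that name is an assumption):

find_package(Ripple CONFIG REQUIRED)   # runs the dependency lookups above
add_executable(app main.cpp)           # hypothetical consumer target
target_link_libraries(app PRIVATE Ripple::libs)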


@@ -1,168 +0,0 @@
#[===================================================================[
Exported targets.
#]===================================================================]
include(target_protobuf_sources)
# Protocol buffers cannot participate in a unity build,
# because all the generated sources
# define a bunch of `static const` variables with the same names,
# so we just build them as a separate library.
add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/xrpl.proto)
file(GLOB_RECURSE protos "include/xrpl/proto/org/*.proto")
target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto PROTOS "${protos}")
target_protobuf_sources(
xrpl.libpb xrpl/proto
LANGUAGE grpc
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
PLUGIN protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc)
target_compile_options(
xrpl.libpb PUBLIC $<$<BOOL:${is_msvc}>:-wd4996> $<$<BOOL:${is_xcode}>: --system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec >
PRIVATE $<$<BOOL:${is_msvc}>:-wd4065> $<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>)
target_link_libraries(xrpl.libpb PUBLIC protobuf::libprotobuf gRPC::grpc++)
# TODO: Clean up the number of library targets later.
add_library(xrpl.imports.main INTERFACE)
target_link_libraries(
xrpl.imports.main
INTERFACE absl::random_random
date::date
ed25519::ed25519
LibArchive::LibArchive
OpenSSL::Crypto
Xrpl::boost
Xrpl::libs
Xrpl::opts
Xrpl::syslibs
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>)
include(add_module)
include(target_link_modules)
# Level 01
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC xrpl.imports.main)
# Level 02
add_module(xrpl basics)
target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)
# Level 03
add_module(xrpl json)
target_link_libraries(xrpl.libxrpl.json PUBLIC xrpl.libxrpl.basics)
add_module(xrpl crypto)
target_link_libraries(xrpl.libxrpl.crypto PUBLIC xrpl.libxrpl.basics)
# Level 04
add_module(xrpl protocol)
target_link_libraries(xrpl.libxrpl.protocol PUBLIC xrpl.libxrpl.crypto xrpl.libxrpl.json)
# Level 05
add_module(xrpl core)
target_link_libraries(xrpl.libxrpl.core PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
# Level 06
add_module(xrpl resource)
target_link_libraries(xrpl.libxrpl.resource PUBLIC xrpl.libxrpl.protocol)
# Level 07
add_module(xrpl net)
target_link_libraries(xrpl.libxrpl.net PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
xrpl.libxrpl.resource)
add_module(xrpl nodestore)
target_link_libraries(xrpl.libxrpl.nodestore PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
add_module(xrpl shamap)
target_link_libraries(xrpl.libxrpl.shamap PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.crypto xrpl.libxrpl.protocol
xrpl.libxrpl.nodestore)
add_module(xrpl rdb)
target_link_libraries(xrpl.libxrpl.rdb PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.core)
add_module(xrpl server)
target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol xrpl.libxrpl.core xrpl.libxrpl.rdb)
add_module(xrpl ledger)
target_link_libraries(xrpl.libxrpl.ledger PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
xrpl.libxrpl.rdb)
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
add_library(xrpl::libxrpl ALIAS xrpl.libxrpl)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp")
target_sources(xrpl.libxrpl PRIVATE ${sources})
target_link_modules(
xrpl
PUBLIC
basics
beast
core
crypto
json
ledger
net
nodestore
protocol
rdb
resource
server
shamap)
# All headers in libxrpl are in modules.
# Uncomment this stanza if you have not yet moved new headers into a module.
# target_include_directories(xrpl.libxrpl
# PRIVATE
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
# PUBLIC
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include>)
if (xrpld)
add_executable(xrpld)
if (tests)
target_compile_definitions(xrpld PUBLIC ENABLE_TESTS)
target_compile_definitions(xrpld PRIVATE UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE})
endif ()
target_include_directories(xrpld PRIVATE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp")
target_sources(xrpld PRIVATE ${sources})
if (tests)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp")
target_sources(xrpld PRIVATE ${sources})
endif ()
target_link_libraries(xrpld Xrpl::boost Xrpl::opts Xrpl::libs xrpl.libxrpl)
exclude_if_included(xrpld)
# define a macro for tests that might need to
# be excluded or run differently in CI environment
if (is_ci)
target_compile_definitions(xrpld PRIVATE XRPL_RUNNING_IN_CI)
endif ()
if (voidstar)
target_compile_options(xrpld PRIVATE -fsanitize-coverage=trace-pc-guard)
# xrpld requires access to antithesis-sdk-cpp implementation file
# antithesis_instrumentation.h, which is not exported as INTERFACE
target_include_directories(xrpld PRIVATE ${CMAKE_SOURCE_DIR}/external/antithesis-sdk)
endif ()
endif ()
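For external consumers the equivalent entry point is the installed Xrpl package; a minimal sketch (the package name comes from the XrplConfig install rule later in this change and the alias from above; the consumer target is hypothetical):

find_package(Xrpl CONFIG REQUIRED)
add_executable(myapp main.cpp)         # hypothetical consumer target
target_link_libraries(myapp PRIVATE xrpl::libxrpl)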


@@ -1,52 +0,0 @@
#[===================================================================[
coverage report target
#]===================================================================]
if (NOT coverage)
message(FATAL_ERROR "Code coverage not enabled! Aborting ...")
endif ()
if (CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
message(WARNING "Code coverage on Windows is not supported, ignoring 'coverage' flag")
return()
endif ()
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
include(CodeCoverage)
# The instructions for these commands come from the `CodeCoverage` module, which was copied from
# https://github.com/bilke/cmake-modules, commit fb7d2a3, then locally changed (see CHANGES: section in
# `CodeCoverage.cmake`)
set(GCOVR_ADDITIONAL_ARGS ${coverage_extra_args})
if (NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
separate_arguments(GCOVR_ADDITIONAL_ARGS)
endif ()
list(APPEND
GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches
-s
-j
${PROCESSOR_COUNT})
setup_target_for_coverage_gcovr(
NAME
coverage
FORMAT
${coverage_format}
EXCLUDE
"src/test"
"src/tests"
"include/xrpl/beast/test"
"include/xrpl/beast/unit_test"
"${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES
xrpld
xrpl.tests)
add_code_coverage_to_target(opts INTERFACE)
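Since coverage forces a Debug build and the report is produced by building the coverage target, a typical run reduces to a small preload plus one build invocation. A sketch (the file name is hypothetical; the xml format value is an assumption about what the bundled gcovr accepts):

# coverage.cmake -- hypothetical preload: cmake -C coverage.cmake <source-dir>
set(coverage ON CACHE BOOL "Generates coverage info." FORCE)
set(coverage_format "xml" CACHE STRING "Output format of the coverage report." FORCE)
# then: build the `coverage` target to run the tests and generate the report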


@@ -1,72 +0,0 @@
#[===================================================================[
docs target (optional)
#]===================================================================]
if (NOT only_docs)
return()
endif ()
find_package(Doxygen)
if (NOT TARGET Doxygen::doxygen)
message(STATUS "doxygen executable not found -- skipping docs target")
return()
endif ()
set(doxygen_output_directory "${CMAKE_BINARY_DIR}/docs")
set(doxygen_include_path "${CMAKE_CURRENT_SOURCE_DIR}/src")
set(doxygen_index_file "${doxygen_output_directory}/html/index.html")
set(doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile")
file(GLOB_RECURSE
doxygen_input
docs/*.md
include/*.h
include/*.cpp
include/*.md
src/*.h
src/*.cpp
src/*.md
Builds/*.md
*.md)
list(APPEND doxygen_input external/README.md)
set(dependencies "${doxygen_input}" "${doxyfile}")
function (verbose_find_path variable name)
# find_path sets a CACHE variable, so don't try using a "local" variable.
find_path(${variable} "${name}" ${ARGN})
if (NOT ${variable})
message(NOTICE "could not find ${name}")
else ()
message(STATUS "found ${name}: ${${variable}}/${name}")
endif ()
endfunction ()
verbose_find_path(doxygen_plantuml_jar_path plantuml.jar PATH_SUFFIXES share/plantuml)
verbose_find_path(doxygen_dot_path dot)
# https://en.cppreference.com/w/Cppreference:Archives
# https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step
set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(WRITE "${download_script}"
"file(DOWNLOAD \
https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \
EXPECTED_HASH MD5=bda585f72fbca4b817b29a3d5746567b \
)\n \
execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \
)\n")
set(tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml")
add_custom_command(OUTPUT "${tagfile}" COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs")
set(doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/")
add_custom_command(
OUTPUT "${doxygen_index_file}"
COMMAND "${CMAKE_COMMAND}" -E env "DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}" "DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}" "DOXYGEN_DOT_PATH=${doxygen_dot_path}"
"${DOXYGEN_EXECUTABLE}" "${doxyfile}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
DEPENDS "${dependencies}" "${tagfile}")
add_custom_target(docs DEPENDS "${doxygen_index_file}" SOURCES "${dependencies}")


@@ -1,74 +0,0 @@
#[===================================================================[
install stuff
#]===================================================================]
include(create_symbolic_link)
# If no suffix is defined for executables (e.g. Windows uses .exe but Linux
# and macOS use none), then explicitly set it to the empty string.
if (NOT DEFINED suffix)
set(suffix "")
endif ()
install(TARGETS common
opts
xrpl_boost
xrpl_libs
xrpl_syslibs
xrpl.imports.main
xrpl.libpb
xrpl.libxrpl
xrpl.libxrpl.basics
xrpl.libxrpl.beast
xrpl.libxrpl.core
xrpl.libxrpl.crypto
xrpl.libxrpl.json
xrpl.libxrpl.rdb
xrpl.libxrpl.ledger
xrpl.libxrpl.net
xrpl.libxrpl.nodestore
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
xrpl.libxrpl.server
xrpl.libxrpl.shamap
antithesis-sdk-cpp
EXPORT XrplExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES
DESTINATION include)
install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl" DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}")
install(EXPORT XrplExports FILE XrplTargets.cmake NAMESPACE Xrpl:: DESTINATION lib/cmake/xrpl)
include(CMakePackageConfigHelpers)
write_basic_package_version_file(XrplConfigVersion.cmake VERSION ${xrpld_version} COMPATIBILITY SameMajorVersion)
if (is_root_project AND TARGET xrpld)
install(TARGETS xrpld RUNTIME DESTINATION bin)
set_target_properties(xrpld PROPERTIES INSTALL_RPATH_USE_LINK_PATH ON)
# sample configs should not overwrite existing files
# install if-not-exists workaround as suggested by
# https://cmake.org/Bug/view.php?id=12646
install(CODE "
macro (copy_if_not_exists SRC DEST NEWNAME)
if (NOT EXISTS \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
file (INSTALL FILE_PERMISSIONS OWNER_READ OWNER_WRITE DESTINATION \"\${CMAKE_INSTALL_PREFIX}/\${DEST}\" FILES \"\${SRC}\" RENAME \"\${NEWNAME}\")
else ()
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg\" etc xrpld.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpld${suffix} \
\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/rippled${suffix})
")
endif ()
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/cmake/XrplConfig.cmake ${CMAKE_CURRENT_BINARY_DIR}/XrplConfigVersion.cmake
DESTINATION lib/cmake/xrpl)


@@ -1,83 +0,0 @@
#[===================================================================[
xrpld compile options/settings via an interface library
#]===================================================================]
include(CompilationEnv)
# Set defaults for optional variables to avoid uninitialized variable warnings
if (NOT DEFINED voidstar)
set(voidstar OFF)
endif ()
add_library(opts INTERFACE)
add_library(Xrpl::opts ALIAS opts)
target_compile_definitions(
opts
INTERFACE BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:XRPL_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
target_compile_options(
opts
INTERFACE $<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized> $<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
target_link_libraries(opts INTERFACE $<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
if (jemalloc)
find_package(jemalloc REQUIRED)
target_compile_definitions(opts INTERFACE PROFILE_JEMALLOC)
target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif ()
#[===================================================================[
xrpld transitive library deps via an interface library
#]===================================================================]
add_library(xrpl_syslibs INTERFACE)
add_library(Xrpl::syslibs ALIAS xrpl_syslibs)
target_link_libraries(
xrpl_syslibs
INTERFACE $<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>)
if (NOT is_msvc)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads)
target_link_libraries(xrpl_syslibs INTERFACE Threads::Threads)
endif ()
add_library(xrpl_libs INTERFACE)
add_library(Xrpl::libs ALIAS xrpl_libs)
target_link_libraries(xrpl_libs INTERFACE Xrpl::syslibs)


@@ -1,197 +0,0 @@
#[===================================================================[
Configure sanitizers based on environment variables.
This module reads the following environment variables:
- SANITIZERS: The sanitizers to enable. Possible values:
- "address"
- "address,undefinedbehavior"
- "thread"
- "thread,undefinedbehavior"
- "undefinedbehavior"
The compiler type and platform are detected in CompilationEnv.cmake.
The sanitizer compile options are applied to the 'common' interface library
which is linked to all targets in the project.
Internal flag variables set by this module:
- SANITIZER_TYPES: List of sanitizer types to enable (e.g., "address",
"thread", "undefined"), plus two extra checks for the undefined behavior
sanitizer (e.g., "float-divide-by-zero", "unsigned-integer-overflow").
This list is joined with commas and passed to -fsanitize=<list>.
- SANITIZERS_COMPILE_FLAGS: Compiler flags for sanitizer instrumentation.
Includes:
* -fno-omit-frame-pointer: Preserves frame pointers for stack traces
* -O1: Minimum optimization for reasonable performance
* -fsanitize=<types>: Enables sanitizer instrumentation
* -fsanitize-ignorelist=<path>: (Clang only) Compile-time ignorelist
* -mcmodel=large/medium: (GCC only) Code model for large binaries
* -Wno-stringop-overflow: (GCC only) Suppresses false positive warnings
* -Wno-tsan: (For GCC TSAN combination only) Suppresses atomic_thread_fence warnings
- SANITIZERS_LINK_FLAGS: Linker flags for sanitizer runtime libraries.
Includes:
* -fsanitize=<types>: Links sanitizer runtime libraries
* -mcmodel=large/medium: (GCC only) Matches compile-time code model
- SANITIZERS_RELOCATION_FLAGS: (GCC only) Code model flags for linking.
Used to handle large instrumented binaries on x86_64:
* -mcmodel=large: For AddressSanitizer (prevents relocation errors)
* -mcmodel=medium: For ThreadSanitizer (large model is incompatible)
#]===================================================================]
include(CompilationEnv)
# Read environment variable
set(SANITIZERS "")
if (DEFINED ENV{SANITIZERS})
set(SANITIZERS "$ENV{SANITIZERS}")
endif ()
# Set SANITIZERS_ENABLED flag for use in other modules
if (SANITIZERS MATCHES "address|thread|undefinedbehavior")
set(SANITIZERS_ENABLED TRUE)
else ()
set(SANITIZERS_ENABLED FALSE)
return()
endif ()
# Sanitizers are not supported on Windows/MSVC
if (is_msvc)
message(FATAL_ERROR "Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable.")
endif ()
message(STATUS "Configuring sanitizers: ${SANITIZERS}")
# Parse SANITIZERS value to determine which sanitizers to enable
set(enable_asan FALSE)
set(enable_tsan FALSE)
set(enable_ubsan FALSE)
# Normalize SANITIZERS into a list
set(san_list "${SANITIZERS}")
string(REPLACE "," ";" san_list "${san_list}")
separate_arguments(san_list)
foreach (san IN LISTS san_list)
if (san STREQUAL "address")
set(enable_asan TRUE)
elseif (san STREQUAL "thread")
set(enable_tsan TRUE)
elseif (san STREQUAL "undefinedbehavior")
set(enable_ubsan TRUE)
else ()
message(FATAL_ERROR "Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations.")
endif ()
endforeach ()
# Validate sanitizer compatibility
if (enable_asan AND enable_tsan)
message(FATAL_ERROR "AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'.")
endif ()
# The frame pointer is required for meaningful stack traces. Sanitizers recommend a minimum of -O1 for reasonable performance
set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
# Build the sanitizer flags list
set(SANITIZER_TYPES)
if (enable_asan)
list(APPEND SANITIZER_TYPES "address")
elseif (enable_tsan)
list(APPEND SANITIZER_TYPES "thread")
endif ()
if (enable_ubsan)
# UB sanitizer flags
list(APPEND SANITIZER_TYPES "undefined" "float-divide-by-zero")
if (is_clang)
# Clang supports additional UB checks. More info here
# https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
list(APPEND SANITIZER_TYPES "unsigned-integer-overflow")
endif ()
endif ()
# Configure the code model for GCC on amd64:
#  - use the large code model for ASAN to avoid relocation errors
#  - use the medium code model for TSAN (the large model is not compatible with TSAN)
set(SANITIZERS_RELOCATION_FLAGS)
# Compiler-specific configuration
if (is_gcc)
# Disable the mold, gold, and lld linkers for GCC with sanitizers and fall back to the default linker (bfd/ld),
# which is more lenient with mixed code models. This is needed because the size of the instrumented binary
# exceeds the limits imposed by the mold, lld, and gold linkers.
set(use_mold OFF CACHE BOOL "Use mold linker" FORCE)
set(use_gold OFF CACHE BOOL "Use gold linker" FORCE)
set(use_lld OFF CACHE BOOL "Use lld linker" FORCE)
message(STATUS " Disabled mold, gold, and lld linkers for GCC with sanitizers")
# Suppress false positive warnings in GCC with stringop-overflow
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-stringop-overflow")
if (is_amd64 AND enable_asan)
message(STATUS " Using large code model (-mcmodel=large)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=large")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=large")
elseif (enable_tsan)
# GCC doesn't support atomic_thread_fence with tsan. Suppress warnings.
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-tsan")
message(STATUS " Using medium code model (-mcmodel=medium)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=medium")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=medium")
endif ()
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "${SANITIZERS_RELOCATION_FLAGS}" "-fsanitize=${SANITIZER_TYPES_STR}")
elseif (is_clang)
# Add an ignorelist for Clang (GCC doesn't support this).
# Use CMAKE_SOURCE_DIR to get the path to the ignorelist.
set(IGNORELIST_PATH "${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt")
if (NOT EXISTS "${IGNORELIST_PATH}")
message(FATAL_ERROR "Sanitizer ignorelist not found: ${IGNORELIST_PATH}")
endif ()
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize-ignorelist=${IGNORELIST_PATH}")
message(STATUS " Using sanitizer ignorelist: ${IGNORELIST_PATH}")
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
endif ()
message(STATUS " Compile flags: ${SANITIZERS_COMPILE_FLAGS}")
message(STATUS " Link flags: ${SANITIZERS_LINK_FLAGS}")
# Apply the sanitizer flags to the 'common' interface library.
# This is the same library used by XrplCompiler.cmake.
target_compile_options(common INTERFACE $<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>)
# Apply linker flags
target_link_options(common INTERFACE ${SANITIZERS_LINK_FLAGS})
# Define SANITIZERS macro for BuildInfo.cpp
set(sanitizers_list)
if (enable_asan)
list(APPEND sanitizers_list "ASAN")
endif ()
if (enable_tsan)
list(APPEND sanitizers_list "TSAN")
endif ()
if (enable_ubsan)
list(APPEND sanitizers_list "UBSAN")
endif ()
if (sanitizers_list)
list(JOIN sanitizers_list "." sanitizers_str)
target_compile_definitions(common INTERFACE SANITIZERS=${sanitizers_str})
endif ()
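Because this module reads SANITIZERS from the environment at configure time, the variable must exist before the project is first configured. One way to guarantee that is a cache preload script, since preload scripts execute ahead of the project; a minimal sketch (the file name is hypothetical):

# asan.cmake -- hypothetical preload: cmake -C asan.cmake <source-dir>
# runs before the project, so the variable is visible to XrplSanitizers.cmake
set(ENV{SANITIZERS} "address,undefinedbehavior")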


@@ -1,44 +0,0 @@
#[===================================================================[
sanity checks
#]===================================================================]
include(CompilationEnv)
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
if (NOT is_multiconfig)
if (NOT CMAKE_BUILD_TYPE)
message(STATUS "Build type not specified - defaulting to Release")
set(CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE)
elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release))
# for simplicity, these are the only two config types we care about. Limiting the build types especially
# simplifies dealing with external project builds
message(FATAL_ERROR " *** Only Debug or Release build types are currently supported ***")
endif ()
endif ()
if (is_clang) # both Clang and AppleClang
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
message(FATAL_ERROR "This project requires clang 16 or later")
endif ()
elseif (is_gcc)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message(FATAL_ERROR "This project requires GCC 12 or later")
endif ()
endif ()
# check for in-source build and fail
if ("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message(FATAL_ERROR "Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory.")
endif ()
if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message(FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif ()
if (APPLE AND NOT HOMEBREW)
find_program(HOMEBREW brew)
endif ()


@@ -1,114 +0,0 @@
#[===================================================================[
declare options and variables
#]===================================================================]
include(CompilationEnv)
set(is_ci FALSE)
if (DEFINED ENV{CI})
if ("$ENV{CI}" STREQUAL "true")
set(is_ci TRUE)
endif ()
endif ()
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
set(is_root_project OFF)
else ()
set(is_root_project ON)
endif ()
option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
if (tests)
# This setting allows a separate workflow to test fees other than the default of 10
if (NOT UNIT_TEST_REFERENCE_FEE)
set(UNIT_TEST_REFERENCE_FEE "10" CACHE STRING "")
endif ()
endif ()
option(unity "Creates a build using UNITY support in cmake." OFF)
if (unity)
if (NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif ()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif ()
if (is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif ()
if (is_gcc OR is_clang)
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
option(coverage "Generates coverage info." OFF)
option(profile "Add profiling flags" OFF)
set(coverage_format "html-details" CACHE STRING "Output format of the coverage report.")
set(coverage_extra_args "" CACHE STRING "Additional arguments to pass to gcovr.")
option(wextra "compile with extra gcc/clang warnings enabled" ON)
else ()
set(profile OFF CACHE BOOL "gcc/clang only" FORCE)
set(coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif ()
if (is_linux AND NOT SANITIZER)
option(BUILD_SHARED_LIBS "build shared xrpl libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF)
option(use_gold "enables detection of gold (binutils) linker" ON)
option(use_mold "enables detection of mold (binutils) linker" ON)
# Set a default value for the log flag based on the build type. This provides a sensible default (on for debug, off
# for release) while still allowing the user to override it for any build.
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(TRUNCATED_LOGS_DEFAULT ON)
else ()
set(TRUNCATED_LOGS_DEFAULT OFF)
endif ()
option(TRUNCATED_THREAD_NAME_LOGS "Show warnings about truncated thread names on Linux." ${TRUNCATED_LOGS_DEFAULT})
if (TRUNCATED_THREAD_NAME_LOGS)
add_compile_definitions(TRUNCATED_THREAD_NAME_LOGS)
endif ()
else ()
# we are not ready to allow shared-libs on windows because it would require export declarations. On macos it's more
# feasible, but static openssl produces odd linker errors, thus we disable shared lib builds for now.
set(BUILD_SHARED_LIBS OFF CACHE BOOL "build shared xrpl libraries - OFF for win/macos" FORCE)
set(static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set(perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set(use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
set(use_mold OFF CACHE BOOL "mold linker, linux only" FORCE)
endif ()
if (is_clang)
option(use_lld "enables detection of lld linker" ON)
else ()
set(use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif ()
option(jemalloc "Enables jemalloc for heap profiling" OFF)
option(werr "treat warnings as errors" OFF)
option(local_protobuf "Force a local build of protobuf instead of looking for an installed version." OFF)
option(local_grpc "Force a local build of gRPC instead of looking for an installed version." OFF)
# the remaining options are obscure and rarely used
option(beast_no_unit_test_inline "Prevents unit test definitions from being inserted into global table" OFF)
option(single_io_service_thread "Restricts the number of threads calling io_context::run to one. \
This can be useful when debugging." OFF)
option(boost_show_deprecated "Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls." OFF)
if (WIN32)
option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else ()
set(beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif ()
if (coverage)
message(STATUS "coverage build requested - forcing Debug build")
set(CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif ()


@@ -1,17 +0,0 @@
option(validator_keys "Enables building of validator-keys tool as a separate target (imported via FetchContent)" OFF)
if (validator_keys)
git_branch(current_branch)
# default to tracking VK master branch unless we are on release
if (NOT (current_branch STREQUAL "release"))
set(current_branch "master")
endif ()
message(STATUS "Tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare(validator_keys GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}")
FetchContent_MakeAvailable(validator_keys)
set_target_properties(validator-keys PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}")
install(TARGETS validator-keys RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
endif ()


@@ -1,15 +0,0 @@
#[===================================================================[
read version from source
#]===================================================================]
file(STRINGS src/libxrpl/protocol/BuildInfo.cpp BUILD_INFO)
foreach (line_ ${BUILD_INFO})
if (line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set(xrpld_version ${CMAKE_MATCH_1})
endif ()
endforeach ()
if (xrpld_version)
message(STATUS "xrpld version: ${xrpld_version}")
else ()
message(FATAL_ERROR "unable to determine xrpld version")
endif ()


@@ -12,14 +12,26 @@ include(isolate_headers)
# add_module(parent a)
# add_module(parent b)
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
function (add_module parent name)
set(target ${PROJECT_NAME}.lib${parent}.${name})
add_library(${target} OBJECT)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp")
target_sources(${target} PRIVATE ${sources})
target_include_directories(${target} PUBLIC "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>")
isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}" PUBLIC)
isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/src" "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE)
endfunction ()
function(add_module parent name)
set(target ${PROJECT_NAME}.lib${parent}.${name})
add_library(${target} OBJECT)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp"
)
target_sources(${target} PRIVATE ${sources})
target_include_directories(${target} PUBLIC
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>"
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}"
PUBLIC
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/src"
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE
)
endfunction()
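This function is what builds the Level 01..07 module layering shown earlier; repeating one of those calls as a usage sketch:

add_module(xrpl basics)   # creates the OBJECT library xrpl.libxrpl.basics
target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)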


@@ -1,19 +1,20 @@
# file(CREATE_SYMLINK) only works on Windows with administrator privileges. https://stackoverflow.com/a/61244115/618906
function (create_symbolic_link target link)
if (WIN32)
if (NOT IS_SYMLINK "${link}")
if (NOT IS_ABSOLUTE "${target}")
# Relative links do not work on Windows.
set(target "${link}/../${target}")
endif ()
file(TO_NATIVE_PATH "${target}" target)
file(TO_NATIVE_PATH "${link}" link)
execute_process(COMMAND cmd.exe /c mklink /J "${link}" "${target}")
endif ()
else ()
file(CREATE_LINK "${target}" "${link}" SYMBOLIC)
endif ()
if (NOT IS_SYMLINK "${link}")
message(ERROR "failed to create symlink: <${link}>")
endif ()
endfunction ()
# file(CREATE_SYMLINK) only works on Windows with administrator privileges.
# https://stackoverflow.com/a/61244115/618906
function(create_symbolic_link target link)
if(WIN32)
if(NOT IS_SYMLINK "${link}")
if(NOT IS_ABSOLUTE "${target}")
# Relative links do not work on Windows.
set(target "${link}/../${target}")
endif()
file(TO_NATIVE_PATH "${target}" target)
file(TO_NATIVE_PATH "${link}" link)
execute_process(COMMAND cmd.exe /c mklink /J "${link}" "${target}")
endif()
else()
file(CREATE_LINK "${target}" "${link}" SYMBOLIC)
endif()
if(NOT IS_SYMLINK "${link}")
message(ERROR "failed to create symlink: <${link}>")
endif()
endfunction()
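A usage sketch mirroring the install rules earlier in this change, which expose the binary under both names (paths abbreviated and the executable suffix omitted):

create_symbolic_link(xrpld "${CMAKE_INSTALL_PREFIX}/bin/rippled")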


@@ -1,44 +1,46 @@
include(CompilationEnv)
include(XrplSanitizers)
find_package(Boost 1.82 REQUIRED
COMPONENTS
chrono
container
coroutine
date_time
filesystem
json
program_options
regex
system
thread
)
find_package(Boost REQUIRED
COMPONENTS chrono
container
coroutine
date_time
filesystem
json
program_options
regex
system
thread)
add_library(ripple_boost INTERFACE)
add_library(Ripple::boost ALIAS ripple_boost)
add_library(xrpl_boost INTERFACE)
add_library(Xrpl::boost ALIAS xrpl_boost)
target_link_libraries(
xrpl_boost
INTERFACE Boost::headers
Boost::chrono
Boost::container
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::json
Boost::process
Boost::program_options
Boost::regex
Boost::thread)
if (Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)
endif ()
if (SANITIZERS_ENABLED AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else for gcc ?
if (NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif ()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts INTERFACE # ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif ()
target_link_libraries(ripple_boost
INTERFACE
Boost::headers
Boost::chrono
Boost::container
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::json
Boost::program_options
Boost::regex
Boost::system
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(ripple_boost INTERFACE Boost::disable_autolinking)
endif()
if(san AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else
# for gcc ?
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts
INTERFACE
# ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif()


@@ -37,12 +37,12 @@ include(create_symbolic_link)
# `${CMAKE_CURRENT_BINARY_DIR}/include/${target}`.
#
# isolate_headers(target A B scope)
function (isolate_headers target A B scope)
file(RELATIVE_PATH C "${A}" "${B}")
set(X "${CMAKE_CURRENT_BINARY_DIR}/modules/${target}")
set(Y "${X}/${C}")
cmake_path(GET Y PARENT_PATH parent)
file(MAKE_DIRECTORY "${parent}")
create_symbolic_link("${B}" "${Y}")
target_include_directories(${target} ${scope} "$<BUILD_INTERFACE:${X}>")
endfunction ()
function(isolate_headers target A B scope)
file(RELATIVE_PATH C "${A}" "${B}")
set(X "${CMAKE_CURRENT_BINARY_DIR}/modules/${target}")
set(Y "${X}/${C}")
cmake_path(GET Y PARENT_PATH parent)
file(MAKE_DIRECTORY "${parent}")
create_symbolic_link("${B}" "${Y}")
target_include_directories(${target} ${scope} "$<BUILD_INTERFACE:${X}>")
endfunction()
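In practice A is a source-tree root and B the one subdirectory whose headers the target is allowed to see, as in add_module above; one of those calls repeated as a sketch:

isolate_headers(
    ${target}
    "${CMAKE_CURRENT_SOURCE_DIR}/include"
    "${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}"
    PUBLIC
)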


@@ -6,19 +6,19 @@
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
# add_library(project.libparent)
# target_link_modules(parent PUBLIC a b)
function (target_link_modules parent scope)
set(library ${PROJECT_NAME}.lib${parent})
foreach (name ${ARGN})
set(module ${library}.${name})
get_target_property(sources ${library} SOURCES)
list(LENGTH sources before)
get_target_property(dupes ${module} SOURCES)
list(LENGTH dupes expected)
list(REMOVE_ITEM sources ${dupes})
list(LENGTH sources after)
math(EXPR actual "${before} - ${after}")
message(STATUS "${module} with ${expected} sources took ${actual} sources from ${library}")
set_target_properties(${library} PROPERTIES SOURCES "${sources}")
target_link_libraries(${library} ${scope} ${module})
endforeach ()
endfunction ()
function(target_link_modules parent scope)
set(library ${PROJECT_NAME}.lib${parent})
foreach(name ${ARGN})
set(module ${library}.${name})
get_target_property(sources ${library} SOURCES)
list(LENGTH sources before)
get_target_property(dupes ${module} SOURCES)
list(LENGTH dupes expected)
list(REMOVE_ITEM sources ${dupes})
list(LENGTH sources after)
math(EXPR actual "${before} - ${after}")
message(STATUS "${module} with ${expected} sources took ${actual} sources from ${library}")
set_target_properties(${library} PROPERTIES SOURCES "${sources}")
target_link_libraries(${library} ${scope} ${module})
endforeach()
endfunction()


@@ -35,20 +35,28 @@ find_package(Protobuf REQUIRED)
# This prefix should appear at the start of all your consumer includes.
# ARGN:
# A list of .proto files.
function (target_protobuf_sources target prefix)
set(dir "${CMAKE_CURRENT_BINARY_DIR}/pb-${target}")
file(MAKE_DIRECTORY "${dir}/${prefix}")
function(target_protobuf_sources target prefix)
set(dir "${CMAKE_CURRENT_BINARY_DIR}/pb-${target}")
file(MAKE_DIRECTORY "${dir}/${prefix}")
protobuf_generate(TARGET ${target} PROTOC_OUT_DIR "${dir}/${prefix}" "${ARGN}")
target_include_directories(
${target} SYSTEM
PUBLIC # Allows #include <package/path/to/file.proto> used by consumer files.
$<BUILD_INTERFACE:${dir}>
# Allows #include "path/to/file.proto" used by generated files.
$<BUILD_INTERFACE:${dir}/${prefix}>
# Allows #include <package/path/to/file.proto> used by consumer files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
# Allows #include "path/to/file.proto" used by generated files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${prefix}>)
install(DIRECTORY ${dir}/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR} FILES_MATCHING PATTERN "*.h")
endfunction ()
protobuf_generate(
TARGET ${target}
PROTOC_OUT_DIR "${dir}/${prefix}"
"${ARGN}"
)
target_include_directories(${target} SYSTEM PUBLIC
# Allows #include <package/path/to/file.proto> used by consumer files.
$<BUILD_INTERFACE:${dir}>
# Allows #include "path/to/file.proto" used by generated files.
$<BUILD_INTERFACE:${dir}/${prefix}>
# Allows #include <package/path/to/file.proto> used by consumer files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
# Allows #include "path/to/file.proto" used by generated files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${prefix}>
)
install(
DIRECTORY ${dir}/
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
FILES_MATCHING PATTERN "*.h"
)
endfunction()
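The xrpl.libpb build earlier drives this function three times; its simplest invocation repeated as a usage sketch:

target_protobuf_sources(xrpl.libpb xrpl/proto
    LANGUAGE cpp
    IMPORT_DIRS include/xrpl/proto
    PROTOS include/xrpl/proto/xrpl.proto)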

cmake/xrpl_add_test.cmake Normal file

@@ -0,0 +1,25 @@
include(isolate_headers)
function(xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name})
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
add_executable(${target} ${ARGN} ${sources})
isolate_headers(
${target}
"${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/tests/${name}"
PRIVATE
)
# Make sure the test isn't optimized away in unity builds
set_target_properties(${target} PROPERTIES
UNITY_BUILD_MODE GROUP
UNITY_BUILD_BATCH_SIZE 0) # Adjust as needed
add_test(NAME ${target} COMMAND ${target})
endfunction()
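A usage sketch (the test name is hypothetical): given tests/basics/*.cpp or tests/basics.cpp, the call below builds <project>.test.basics and registers it with CTest.

xrpl_add_test(basics)   # hypothetical test name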


@@ -1,65 +1,55 @@
 {
     "version": "0.5",
     "requires": [
-        "zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
-        "xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1765850149.987",
-        "sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1765850149.926",
-        "soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1765850149.46",
-        "snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1765850147.878",
-        "secp256k1/0.7.1#3a61e95e220062ef32c48d019e9c81f7%1770306721.686",
-        "rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1765850186.86",
-        "re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
-        "protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
-        "openssl/3.5.5#05a4ac5b7323f7a329b2db1391d9941f%1769599205.414",
-        "nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
-        "lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
-        "libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
-        "libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
-        "libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
+        "zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497",
+        "xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1756234289.683",
+        "sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1756234266.869",
+        "soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1756234262.318",
+        "snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1756234314.246",
+        "rocksdb/10.0.1#85537f46e538974d67da0c3977de48ac%1756234304.347",
+        "re2/20230301#dfd6e2bf050eb90ddd8729cfb4c844a4%1756234257.976",
+        "protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1756234251.614",
+        "openssl/1.1.1w#a8f0792d7c5121b954578a7149d23e03%1756223730.729",
+        "nudb/2.0.9#c62cfd501e57055a7e0d8ee3d5e5427d%1756234237.107",
+        "lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1756234228.999",
+        "libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1756223727.64",
+        "libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1756230911.03",
+        "libarchive/3.8.1#5cf685686322e906cb42706ab7e099a8%1756234256.696",
         "jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
-        "gtest/1.17.0#5224b3b3ff3b4ce1133cbdd27d53ee7d%1768312129.152",
-        "grpc/1.72.0#f244a57bff01e708c55a1100b12e1589%1765850193.734",
-        "ed25519/2015.03#ae761bdc52730a843f0809bdf6c1b1f6%1765850143.772",
-        "date/3.0.4#862e11e80030356b53c2c38599ceb32b%1765850143.772",
-        "c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1765850144.336",
-        "bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1765850143.837",
-        "boost/1.90.0#d5e8defe7355494953be18524a7f135b%1769454080.269",
-        "abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
+        "grpc/1.50.1#02291451d1e17200293a409410d1c4e1%1756234248.958",
+        "doctest/2.4.11#a4211dfc329a16ba9f280f9574025659%1756234220.819",
+        "date/3.0.4#f74bbba5a08fa388256688743136cb6f%1756234217.493",
+        "c-ares/1.34.5#b78b91e7cfb1f11ce777a285bbf169c6%1756234217.915",
+        "bzip2/1.0.8#00b4a4658791c1f06914e087f0e792f5%1756234261.716",
+        "boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368%1754325043.336",
+        "abseil/20230802.1#f0f91485b111dc9837a68972cb19ca7b%1756234220.907"
     ],
     "build_requires": [
-        "zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
-        "strawberryperl/5.32.1.1#707032463aa0620fa17ec0d887f5fe41%1765850165.196",
-        "protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
-        "nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1765850144.707",
-        "msys2/cci.latest#eea83308ad7e9023f7318c60d5a9e6cb%1770199879.083",
-        "m4/1.4.19#70dc8bbb33e981d119d2acc0175cf381%1763158052.846",
-        "cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1765850153.937",
-        "cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1765850153.479",
-        "b2/5.3.3#107c15377719889654eb9a162a673975%1765850144.355",
+        "zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497",
+        "strawberryperl/5.32.1.1#707032463aa0620fa17ec0d887f5fe41%1756234281.733",
+        "protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1756234251.614",
+        "nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1756234232.901",
+        "msys2/cci.latest#5b73b10144f73cc5bfe0572ed9be39e1%1751977009.857",
+        "m4/1.4.19#b38ced39a01e31fef5435bc634461fd2%1700758725.451",
+        "cmake/3.31.8#dde3bde00bb843687e55aea5afa0e220%1756234232.89",
+        "b2/5.3.3#107c15377719889654eb9a162a673975%1756234226.28",
         "automake/1.16.5#b91b7c384c3deaa9d535be02da14d04f%1755524470.56",
-        "autoconf/2.71#51077f068e61700d65bb05541ea1e4b0%1731054366.86",
-        "abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
+        "autoconf/2.71#51077f068e61700d65bb05541ea1e4b0%1731054366.86"
     ],
     "python_requires": [],
     "overrides": {
-        "boost/1.90.0#d5e8defe7355494953be18524a7f135b": [
+        "protobuf/3.21.12": [
             null,
-            "boost/1.90.0"
-        ],
-        "protobuf/5.27.0": [
-            "protobuf/6.32.1"
+            "protobuf/3.21.12"
         ],
         "lz4/1.9.4": [
             "lz4/1.10.0"
         ],
+        "boost/1.83.0": [
+            "boost/1.83.0"
+        ],
         "sqlite3/3.44.2": [
             "sqlite3/3.49.1"
         ],
-        "boost/1.83.0": [
-            "boost/1.90.0"
-        ],
-        "lz4/[>=1.9.4 <2]": [
-            "lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504"
-        ]
     },
     "config_requires": []
 }


@@ -1,12 +0,0 @@
# Conan lockfile

To achieve reproducible dependencies, we use a [Conan lockfile](https://docs.conan.io/2/tutorial/versioning/lockfiles.html).

The `conan.lock` file in the repository contains a "snapshot" of the current
dependencies. It is implicitly used when running `conan` commands, so you don't
need to specify it.

You have to update this file every time you add a new dependency or change a
revision or version of an existing dependency.

To update a lockfile, run from the repository root: `./conan/lockfile/regenerate.sh`

conan/lockfile/linux.profile

@@ -1,8 +0,0 @@
[settings]
arch=x86_64
build_type=Release
compiler=gcc
compiler.cppstd=20
compiler.libcxx=libstdc++11
compiler.version=13
os=Linux

conan/lockfile/macos.profile

@@ -1,8 +0,0 @@
[settings]
arch=armv8
build_type=Release
compiler=apple-clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=17.0
os=Macos

conan/lockfile/regenerate.sh

@@ -1,36 +0,0 @@
#!/usr/bin/env bash
set -ex

TEMP_DIR=$(mktemp -d)
trap "rm -rf $TEMP_DIR" EXIT
echo "Using temporary CONAN_HOME: $TEMP_DIR"

# We use a temporary Conan home to avoid polluting the user's existing Conan
# configuration and to not use the local cache (which leads to
# non-reproducible lockfiles).
export CONAN_HOME="$TEMP_DIR"

# Ensure that the xrplf remote is the first to be consulted, so any recipes we
# patched are used. We also add it there so as not to create a huge diff when
# the official Conan Center Index is updated.
conan remote add --force --index 0 xrplf https://conan.ripplex.io

# Delete any existing lockfile.
rm -f conan.lock

# Create a new lockfile that is compatible with Linux, macOS, and Windows. The
# first create command will create a new lockfile, while the subsequent create
# commands will merge any additional dependencies into the created lockfile.
conan lock create . \
    --options '&:jemalloc=True' \
    --options '&:rocksdb=True' \
    --profile:all=conan/lockfile/linux.profile
conan lock create . \
    --options '&:jemalloc=True' \
    --options '&:rocksdb=True' \
    --profile:all=conan/lockfile/macos.profile
conan lock create . \
    --options '&:jemalloc=True' \
    --options '&:rocksdb=True' \
    --profile:all=conan/lockfile/windows.profile

conan/lockfile/windows.profile

@@ -1,9 +0,0 @@
[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=20
compiler.runtime=dynamic
compiler.runtime_type=Release
compiler.version=194
os=Windows

Some files were not shown because too many files have changed in this diff.