Compare commits

...

50 Commits

Author SHA1 Message Date
Mayukha Vadari
6d369e0f02 docs: Update API changelog, add APIv2+APIv3 version documentation (#6308)
This change cleans up the `API-CHANGELOG.md` file. It moves the version-specific documentation to other files and fleshes out the changelog with all the API-related changes in each version.
2026-02-03 02:12:26 +00:00
Bart
b182430178 fix: Restore config changes that broke standalone mode (#6301)
When support was added for `xrpld.cfg` in addition to `rippled.cfg` in https://github.com/XRPLF/rippled/pull/6098, as part of an effort to rename occurrences of ripple(d) to xrpl(d), the clearing and creation of the data directory were modified in a way that, at the time, seemed to result in an equivalent code flow. This turned out not to be true, so this change reverts the two modifications to `Config.cpp` that currently break running the binary in standalone mode.
2026-02-03 01:15:56 +00:00
Ed Hennis
fe31cdc9f6 chore: Add upper-case match for ARM64 in CompilationEnv (#6315) 2026-02-02 23:57:10 +00:00
Ayaz Salikhov
ff4520cc45 ci: Update hashes of XRPLF/actions (#6316)
This updates the hashes of all XRPLF/actions to their latest versions.
2026-02-02 19:37:06 +00:00
Ayaz Salikhov
fe9c8d568f chore: Format all cmake files without comments (#6294) 2026-01-29 18:19:32 +00:00
Ayaz Salikhov
a0e09187b9 chore: Add cmake-format pre-commit hook (#6279)
This change adds `cmake-format` as a pre-commit hook. The style file closely matches the one in Clio, and the two will be made equivalent over time. For now, some files have been excluded, as they need some manual adjustments, which will be done in future changes.
2026-01-29 13:33:24 +00:00
Ayaz Salikhov
f3627fb5d5 chore: Remove unnecessary boost::system requirement from conanfile (#6290) 2026-01-28 19:14:31 +00:00
Ayaz Salikhov
5f638f5553 chore: Set ColumnLimit to 120 in clang-format (#6288)
This change updates the ColumnLimit from 80 to 120, and applies clang-format to reformat the code.
2026-01-28 18:09:50 +00:00
Jingchen
92046785d1 test: Fix the xrpl.net unit test using async read (#6241)
This change makes the `read` function call in `handleConnection` asynchronous, adds a new class `TestSink` to help with debugging, and adds a new target, `xrpl.tests.helpers`, to hold the helper class.
2026-01-28 15:14:35 +00:00
Bart
b90a843ddd ci: Upload Conan recipes for develop, release candidates, and releases (#6286)
To allow developers to consume the latest unstable and (near-)stable versions of our `xrpl` Conan recipe, we should export and upload it whenever a push occurs to the corresponding branch or a release tag is created. This way, developers do not have to work out for themselves which shortened commit hash is the most recent in order to determine the latest unstable recipe version (e.g. `3.2.0-b0+a1b2c3d`), or which release (candidate) is the most recent in order to determine the latest (near-)stable recipe version (e.g. `3.1.0-rc2`).

Now, pushes to the `develop` branch will produce the `develop` recipe version, pushes to the `release` branch will produce the `rc` recipe version, and creation of versioned tags will produce the `release` recipe version.
2026-01-28 10:02:34 +00:00
Jingchen
bb529d0317 fix: Stop embedded tests from hanging on ARM by using atomic_flag (#6248)
This change replaces the mutex `stoppingMutex_`, the `atomic_bool` variable `isTimeToStop`, and the conditional variable `stoppingCondition_` with an `atomic_flag` variable.

When `xrpld` is running the embedded tests as a child process, it has a control thread (the app bundle thread) that starts the application, and an application thread (the thread that executes `app_->run()`). Due to the relaxed memory ordering on ARM, it is not guaranteed that the application thread sees the value change resulting from the `isTimeToStop.exchange(true)` call before it is notified by `stoppingCondition_.notify_all()`, even though the two do happen in the right order in the app bundle thread in `ApplicationImp::signalStop`. We therefore often end up in a situation where `isTimeToStop` is `true`, but the application thread is still waiting on `stoppingCondition_`, because the app bundle thread may have already notified before the application thread actually started waiting.

Switching to a single `atomic_flag` variable ensures that there is only one synchronisation object, so the memory ordering guarantees provided by C++ ensure that `notify_all` is synchronised after `test_and_set`.

Fixing this issue stops the unit tests from hanging forever, so we should see fewer (or hopefully no) timeout errors in daily GitHub Actions runs.
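As a minimal standalone sketch of the pattern described above (names simplified and not taken from `ApplicationImp`), waiting and notifying on the same `atomic_flag` means the waiter cannot miss the stop signal:

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic_flag isTimeToStop;  // clear on construction (C++20)

void applicationThread()
{
    // Blocks until the flag becomes set; because the wait and the notification
    // go through the same atomic object, the transition cannot be missed.
    isTimeToStop.wait(false);
    std::cout << "application thread: stopping\n";
}

int main()
{
    std::thread app(applicationThread);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));

    // The equivalent of signalStop(): set the flag, then wake any waiters.
    isTimeToStop.test_and_set();
    isTimeToStop.notify_all();

    app.join();
}
```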
2026-01-26 21:39:28 +00:00
Ed Hennis
a2f1973574 fix: Remove DEFAULT fields that change to the default in associateAsset (#6259) (#6273)
- Add Vault creation tests showing the valid range for AssetsMaximum
2026-01-26 19:58:12 +00:00
Bart
847e875635 refactor: Update Boost to 1.90 (#6280)
Upcoming feature work requires functionality present in a newer Boost version. These newer versions also have improvements for sanitizers.
2026-01-26 18:54:43 +00:00
Mayukha Vadari
778da954b4 refactor: clean up uses of std::source_location (#6272)
Since the minimum Clang version we support is 16, the checks for version < 15 are no longer necessary. This change therefore removes the macros checking if the clang version is < 15 and simplifies uses of `std::source_location`.
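A standalone illustration of the simplified pattern (not the repository's actual helper), assuming a C++20 toolchain where no Clang-version macro is needed:

```cpp
#include <iostream>
#include <source_location>

// The default argument captures the caller's location directly; no
// compiler-version fallback macros are required with Clang 16 or later.
void logHere(
    char const* message,
    std::source_location loc = std::source_location::current())
{
    std::cout << loc.file_name() << ':' << loc.line() << " in "
              << loc.function_name() << ": " << message << '\n';
}

int main()
{
    logHere("hello");
}
```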
2026-01-23 14:09:00 -05:00
Bart
0586b5678e ci: Pass missing sanitizers input to actions (#6266)
The `upload-conan-deps` workflow that is triggered on push is supposed to upload the Conan dependencies to our remote, so future PR commits can pull those dependencies from the remote. However, because the `sanitizers` argument was missing, it was building different dependencies than what the PRs build for the asan/tsan/ubsan jobs, so the latter would not find anything in the remote that they could use. This change sets the missing `sanitizers` input variable when running the `build-deps` action.

Separately, the `setup-conan` action showed the default profile, even though we use the `ci` profile. To ensure the profile is printed correctly when sanitizers are enabled, the environment variable the profile uses is now set before calling the action.
2026-01-23 06:40:55 -05:00
Bart
66158d786f ci: Properly propagate Conan credentials (#6265)
The export and upload steps were initially in a separate action, where GitHub Actions does not support the `secrets` keyword, only `inputs`, for the credentials. After they were moved to a reusable workflow, only some of the references to the credentials were updated. This change correctly references the Conan credentials via `secrets` instead of `inputs`.
2026-01-22 16:05:15 -05:00
Bart
c57ffdbcb8 ci: Explicitly set version when exporting the Conan recipe (#6264)
By default the Conan recipe extracts the version from `BuildInfo.cpp`, but in some cases we want to upload a recipe whose version carries a suffix derived from the commit hash. This currently causes the upload to fail, since there is a version mismatch.

Here we explicitly set the version, and then simplify the steps in the upload workflow since we now need the recipe name (embedded within the conanfile.py but also needed when uploading), the recipe version, and the recipe ref (name/version).
2026-01-22 19:05:59 +00:00
Bart
4e3f953fc4 ci: Use plus instead of hyphen for Conan recipe version suffix (#6261)
Conan recipes use semantic versioning, and since our version already contains a hyphen, a second hyphen causes Conan to ignore the suffix. The plus sign is a valid separator we can use instead, so this change separates the version suffix (the commit hash) with a `+` instead of a `-`.
2026-01-22 16:42:53 +00:00
Pratik Mankawde
a4f8aa623f chore: Detect uninitialized variables in CMake files (#6247)
There were a few uninitialized variables in CMake files. This change makes sure we always check whether a variable has been initialized before using it, or in some cases initialize it with a default value. It also raises an error on CI if a developer introduces an uninitialized variable in a CMake file.
2026-01-22 11:16:18 -05:00
Bart
8695313565 ci: Run on-trigger and on-pr when generate-version is modified (#6257)
This change ensures that the `on-pr` and `on-trigger` workflows run when the generate-version action is modified.
2026-01-22 13:48:50 +00:00
Valentin Balaschenko
68c9d5ca0f refactor: Enforce 15-char limit and simplify labels for thread naming (#6212)
This change continues the thread naming work from #5691 and #5758, which enables more useful lock contention profiling by ensuring threads/jobs have short, stable, human-readable names (rather than being truncated/failing due to OS limits). This changes diagnostic naming only (thread names and job/load-event labels), not behavior.

Specific modifications are:
* Shortens all thread/job names used with `beast::setCurrentThreadName`, so the effective Linux thread name stays within the 15-character limit.
* Removes per-ledger sequence numbers from job/thread names to avoid long labels. This improves aggregation in lock contention profiling for short-lived job executions.
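The 15-character limit comes from the Linux kernel's 16-byte thread-name buffer (including the terminator); the standalone demonstration below uses an illustrative label, not one of the repository's actual thread names:

```cpp
#include <pthread.h>

#include <cstdio>
#include <cstring>

int main()
{
    // glibc rejects names longer than 15 characters with ERANGE.
    int rc = pthread_setname_np(pthread_self(), "LedgerClose:1234567");
    std::printf("long name:  rc=%d (%s)\n", rc, std::strerror(rc));

    // A short, stable label fits and succeeds.
    rc = pthread_setname_np(pthread_self(), "LedgerClose");
    std::printf("short name: rc=%d (%s)\n", rc, std::strerror(rc));
}
```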
2026-01-22 08:19:29 -05:00
David Fuelling
211054baff docs: Update Ripple Bug Bounty public key (#6258)
The Ripple Bug Bounty program recently changed the public keys that security researchers can use to encrypt vulnerabilities and messages for submission to the program. This information was updated on https://ripple.com/legal/bug-bounty/ and this PR updates the `SECURITY.md` to align.
2026-01-21 19:55:56 -05:00
Bart
4fd4e93b3e ci: Add missing commit hash to Conan recipe version (#6256)
During several iterations of development of https://github.com/XRPLF/rippled/pull/6235, the commit hash was supposed to be moved into the `run:` statement, but it slipped through the cracks and did not get added. This change adds the commit hash as suffix to the Conan recipe version.
2026-01-21 19:17:05 -05:00
Ayaz Salikhov
4cd6cc3e01 fix: Include <functional> header in Number.h (#6254)
The `Number.h` header file now uses `std::reference_wrapper` from `<functional>`, but the include was missing, causing downstream build problems. This change adds the header.
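A tiny standalone example of why the include matters: `std::reference_wrapper` is declared in `<functional>`, so any header that names it must include it directly instead of relying on transitive includes.

```cpp
#include <functional>  // declares std::reference_wrapper
#include <iostream>

int main()
{
    int value = 41;
    std::reference_wrapper<int> ref = value;  // would not compile without <functional>
    ref.get() += 1;
    std::cout << value << '\n';  // prints 42
}
```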
2026-01-21 18:52:22 -05:00
Bart
a37c556079 ci: Upload Conan recipe for merges into develop and commits to release (#6235)
This change uploads the `libxrpl` library as a Conan recipe to our remote when (i) merging into the `develop` branch, (ii) committing to a PR that targets a `release*` branch, and (iii) applying a versioned tag. Clio is only notified in the second case. The user and channel are no longer used when uploading the recipe.

Specific changes are:
* A `generate-version` action is added, which extracts the build version from `BuildInfo.cpp` and appends the short 7-character commit hash to it for merges into the `develop` branch and for commits to a PR that targets a `release*` branch. When a tag is applied, however, the tag itself is used as the version. This functionality has been turned into a separate action as we will use the same versioning logic for creating .rpm and .deb packages, as well as Docker images.
* An `upload-recipe` action is added, which calls the `generate-version` action and further handles the uploading of the recipe to Conan.
* This action is called by the `on-pr` and `on-trigger` workflows, as well as by a new `on-tag` workflow.

The reason for this change is that we have downstream uses for the `libxrpl` library, but currently only upload the recipe to check for compatibility with Clio when making commits to a PR that targets the release branch.
2026-01-21 17:31:44 -05:00
Pratik Mankawde
5e808794d8 Limit reply size on TMGetObjectByHash queries (#6110)
`PeerImp` processes `TMGetObjectByHash` queries with an unbounded per-request loop, which performs a `NodeStore` fetch and then appends the retrieved data to the reply for each queried object, without a local count cap or reply-byte budget. However, `NodeStore` fetches are expensive in large numbers, which can slow down the process overall. This change therefore adds an upper cap on the response size.
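A hedged standalone sketch of the general technique (the constant and the types are illustrative, not the actual `PeerImp` code): stop appending fetched objects once a reply-byte budget would be exceeded.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative budget; the real limit chosen in rippled may differ.
constexpr std::size_t maxReplyBytes = 256 * 1024;

std::vector<std::string>
buildReply(std::vector<std::string> const& fetched)
{
    std::vector<std::string> reply;
    std::size_t bytes = 0;
    for (auto const& blob : fetched)  // stand-in for per-object NodeStore fetches
    {
        if (bytes + blob.size() > maxReplyBytes)
            break;  // budget exhausted: truncate instead of growing without bound
        bytes += blob.size();
        reply.push_back(blob);
    }
    return reply;
}
```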
2026-01-21 09:19:53 -05:00
Bart
12c0d67ff6 ci: remove 'master' branch as a trigger (#6234)
This change removes the `master` branch as a trigger for the CI pipelines, and updates comments accordingly. It also fixes the pre-commit workflow, so it will run on all release branches.
2026-01-16 15:01:53 -05:00
Ed Hennis
00d3cee6cc Improve ledger_entry lookups for fee, amendments, NUNL, and hashes (#5644)
These "fixed location" objects can be found in multiple ways:

1. The lookup parameters use the same format as other ledger objects, but the only valid values are true or the correct index of the object:
  - Amendments: "amendments" : true
  - FeeSettings: "fee" : true
  - NegativeUNL: "nunl" : true
  - LedgerHashes: "hashes" : true (For the "short" list. See below.)

2. With RPC API >= 3, special-case values can be passed to "index", such as "index" : "amendments". These use the same names as above. Note that for "hashes", this option will only return the recent ledger hashes / "short" skip list.

3. LedgerHashes has two types: "short", which stores recent ledger hashes, and "long", which stores the flag ledger hashes for a particular ledger range.
  - To find a "long" LedgerHashes object, request '"hashes" : <ledger sequence>'. <ledger sequence> must be a number that evaluates to an unsigned integer.
  - To find the "short" LedgerHashes object, request "hashes": true as with the other fixed objects.

The following queries are all functionally equivalent:

  - "amendments" : true
  - "index" : "amendments" (API >=3 only)
  - "amendments" : "7DB0788C020F02780A673DC74757F23823FA3014C1866E72CC4CD8B226CD6EF4"
  - "index" : "7DB0788C020F02780A673DC74757F23823FA3014C1866E72CC4CD8B226CD6EF4"

Finally, whether the object is found or not, if a valid index is computed, that index will be returned. This can be used to confirm the query was valid, or to save the index for future use.
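As a hedged illustration, the requests above could be assembled with jsoncpp (the JSON library rippled uses); only the parameter objects are shown, and the surrounding RPC plumbing (including how the API version is selected) is omitted:

```cpp
#include <json/json.h>

#include <iostream>

int main()
{
    // The same amendments lookup expressed two ways; the "index" form
    // requires API version 3 or later.
    Json::Value byName;
    byName["amendments"] = true;

    Json::Value byIndex;
    byIndex["index"] = "amendments";

    // A "long" LedgerHashes lookup takes a ledger sequence instead of true.
    Json::Value longHashes;
    longHashes["hashes"] = 256;

    Json::StreamWriterBuilder writer;
    std::cout << Json::writeString(writer, byName) << '\n'
              << Json::writeString(writer, byIndex) << '\n'
              << Json::writeString(writer, longHashes) << '\n';
}
```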
2026-01-16 12:26:30 -05:00
Pratik Mankawde
96d17b7f66 ci: Add sanitizers to CI builds (#5996)
This change adds support for sanitizer build options in CI builds workflow. Currently `asan+ubsan` is enabled, while `tsan+ubsan` is left disabled as more changes are required.
2026-01-15 16:18:14 +00:00
Ayaz Salikhov
ec44347ffc test: Use gtest instead of doctest (#6216)
This change switches the test framework from doctest to gtest.
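For context, the shape of the conversion (a generic illustration, not a test copied from the repository): a doctest `TEST_CASE`/`CHECK` pair becomes a gtest `TEST`/`EXPECT_*` pair.

```cpp
#include <gtest/gtest.h>

// doctest equivalent:
//   TEST_CASE("Number addition") { CHECK(1 + 1 == 2); }
TEST(NumberTest, Addition)
{
    EXPECT_EQ(1 + 1, 2);
}
```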
2026-01-15 08:36:13 -05:00
Ed Hennis
c9458b72ca test: Suppress "parse failed" message in Batch tests (#6207) 2026-01-14 23:45:00 +00:00
Mayukha Vadari
ebcfd6645d test: Replace failed string in Vault test case (#6214)
The word `failed` in the test case makes it hard to search through the test logs when an actual test failure occurs, so this change replaces the word with just `fail`.
2026-01-14 14:40:07 -05:00
Ed Hennis
efa57e872b Change LendingProtocol feature and dependencies to supported (#6146) 2026-01-13 21:53:40 +00:00
Ed Hennis
33f4c92b61 Expand Number to support the full integer range (#6025)
- Refactor Number internals away from int64 to uint64 & a sign flag
  - ctors and accessors use `rep`. Very few things expose
    `internalrep`.
  - An exception is "unchecked" and the new "normalized", which explicitly
    take an internalrep. But with those special control flags, it's easier
    to distinguish and control when they are used.

- For now, skip the larger mantissas in AMM transactions and tests

- Remove trailing zeros from scientific notation Number strings
  - Update tests. This has the happy side effect of making some of the string
    representations _more_ consistent between the small and large
    mantissa ranges.

- Add semi-automatic rounding of STNumbers based on Asset types
  - Create a new SField metadata enum, sMD_NeedsAsset, which indicates
    the field should be associated with an Asset so it can be rounded.
  - Add a new STTakesAsset intermediate class to handle the Asset
    association to a derived ST class. Currently only used in STNumber,
    but could be used by other types in the future.
  - Add "associateAsset" which takes an SLE and an Asset, finds the
    sMD_NeedsAsset fields, and associates the Asset to them. In the case
    of STNumber, that both stores the Asset, and rounds the value
    immediately.
  - Transactors only need to add a call to associateAsset _after_ all of
    the STNumbers have been set. Unfortunately, the inner workings of
    STObject do not do the association correctly with uninitialized
    fields.
  - When serializing an STNumber that has an Asset, round it before
    serializing.
  - Add an override of roundToAsset, which rounds a Number value in place
    to an Asset, but without any additional scale.
  - Update and fix a bunch of Loan-related tests to accommodate the
    expanded Number class.

---------

Co-authored-by: Vito <5780819+Tapanito@users.noreply.github.com>
2026-01-13 21:01:11 +00:00
Ed Hennis
2601442e16 Improve and fix bugs in Lending Protocol (#6102)
- Spec: XLS-66

    Fix overpayment asserts (#6084)

    MPTTester::operator() parameter should be std::int64_t
    - Originally defined as uint64_t, but the testIssuerLoan() test called
      it with a negative number, causing an overflow to a very large number
      that in some circumstances could be silently cast back to an int64_t,
      but might not be. I believe this is UB, and we don't want to rely on
      that.

    Review feedback from @Tapanito: overpayment value change
    - In overpayment results, the management fee was being calculated twice:
      once as part of the value change, and as part of the fees paid.
      Exclude it from the value change.

    Fix Overpayment Calculation  (#6087)
    - Adds additional unit tests to cover math calculations.
    - Removes unused methods.

    Review feedback from @shawnxie999: even more rounding
    - Round the initial total value computation upward, unless there is
      0-interest.
    - Rename getVaultScale to getAssetsTotalScale, and convert one incorrect
      computation to use it.
    - Use adjustImpreciseNumber for LossUnrealized.
    - Add some logging to computeLoanProperties.

    Fix LoanBrokerSet debtMaximum limits (#6116)

    Fix some minor bugs in Lending Protocol (#6101)
    - add nodiscard to unimpairLoan, and check result in LoanPay
    - add a check to verify that issuer exists
    - improve LoanManage error code for dust amounts

    Check permissions in LoanSet and LoanPay (#6108)

    Disallow pseudo accounts to be Destination for LoanBrokerCoverWithdraw (#6106)

    Ensure vault asset cap is not exceeded (#6124)

    Fix Overpayment ValueChange calculation in Lending Protocol (#6114)
    - Adds loan state to LoanProperties.
    - Cleans up computeLoanProperties.
    - Fixes missing management fee from overpayment.

    fix: Enable LP Deposits when the broker is the asset issuer (#6119)
    * Replace accountHolds with accountSpendable when checking
    for account funds in VaultDeposit and LoanBrokerCoverDeposit

    Add a few minor changes (#6158)
    - Updates or fixes a couple of things I noticed while reviewing changes
      to the spec.
    - Rename sfPreviousPaymentDate to sfPreviousPaymentDueDate.
    - Make the vault asset cap check added in #6124 a little more robust:
      1. Check in preflight if the vault is _already_ over the limit.
      2. Prevent overflow when checking with the loan value. (Subtract
         instead of adding, in case the values are near maxint. Both return
         the same result.) Also add a unit test so each case is covered.

    Add minimum grace period validation (#6133)

    Fix bugs: frozen pseudo-account, and FLC cutoff (#6170)

    refactor: Rename raw state to theoretical state (#6187)

    Check if a withdrawal amount exceeds any applicable receiving limit. (#6117)

    Fix overpayment result calculation (#6195)

    Address review feedback from Lending Protocol re-review (#6161)

---------

Co-authored-by: Gregory Tsipenyuk <gregtatcam@users.noreply.github.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Vito Tumas <5780819+Tapanito@users.noreply.github.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
Co-authored-by: Jingchen <a1q123456@users.noreply.github.com>
2026-01-13 19:42:58 +00:00
Bart
9686604963 fix: Update Conan lock file with changed OpenSSL recipe (#6211)
This change updates the `conan.lock` file with a changed OpenSSL recipe that contains a fix regarding options passed to the compiler.
2026-01-13 17:29:04 +00:00
Ayaz Salikhov
0efae5d16e ci: Update actions/images to use cmake 4.2.1 and conan 2.24.0 (#6209) 2026-01-13 11:52:10 -05:00
Bart
4755bb8606 refactor: Remove unnecessary version number and options in cmake find_package (#6169)
This change removes unnecessary version numbers in the OpenSSL and Boost `find_package` CMake statements. An unnecessary OpenSSL definition is removed, while Conan options for SSL are updated to disable insecure ciphers. Moreover, the statements are now ordered alphabetically and more logically.
2026-01-12 19:14:39 -05:00
Bart
92d40de4cb chore: Pin pre-commit hooks to commit hashes (#6205)
This change updates and pins the Black and CSpell pre-commit hooks.
2026-01-12 12:53:46 -05:00
Ed Hennis
b2c5927b48 fix: Inner batch transactions never have valid signatures (#6069)
- Introduces amendment `fixBatchInnerSigs`
- Update Batch unit tests
  - Fix all the Env instantiations to _use_ the "features" parameter.
  - testInnerSubmitRPC runs with Batch enabled and disabled.
  - Add a test to testInnerSubmitRPC for a correctly signed tx incorrectly
    using the tfInnerBatchTxn flag.
  - Generalize the submitAndValidate lambda in testInnerSubmitRPC.
  - With the fix amendment, a transaction never reaches the transaction
    engine (Transactor and derived classes).
  - Test submitting a pseudo-transaction. It is stopped before reaching the
    transaction engine, but with different errors.
- The tests verify that even without the amendment, a transaction with
  tfInnerBatchTxn is immediately rejected. Without the amendment, things
  are already safe; the amendment just makes things safer and more future-proof.
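A standalone sketch of the behavior the tests check; the flag value and result codes below are stubbed placeholders, not the actual rippled definitions:

```cpp
#include <cstdint>
#include <iostream>

// Placeholder values for illustration only; the real constants live in the
// rippled protocol headers.
constexpr std::uint32_t tfInnerBatchTxn = 0x40000000;
enum Result { tesSUCCESS, temINVALID_FLAG };

// A directly submitted transaction carrying the inner-batch flag is rejected
// before it ever reaches the transaction engine.
Result preflightFlags(std::uint32_t flags)
{
    if (flags & tfInnerBatchTxn)
        return temINVALID_FLAG;
    return tesSUCCESS;
}

int main()
{
    std::cout << (preflightFlags(tfInnerBatchTxn) == temINVALID_FLAG) << '\n';  // prints 1
}
```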
2026-01-10 03:10:04 +00:00
Bart
7c1183547a chore: Change /Zi to /Z7 for ccache, remove debug symbols in CI (#6198)
As the `/Zi` compiler flag is unsupported by ccache, this change switches it to `/Z7` instead. For CI runs, all debug info is omitted.
2026-01-09 21:44:43 +00:00
Vito Tumas
14467fba5e VaultClawback: Burn shares of an empty vault (#6120)
- Adds a mechanism for the vault owner to burn user shares when the vault is stuck. If the Vault has 0 AssetsAvailable and 0 AssetsTotal, the owner may submit a VaultClawback to reclaim the worthless fees, and thus allow the Vault to be deleted. The Amount must be left off (unless the owner is the asset issuer), specified as 0 Shares, or specified as the number of Shares held.
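The Amount rule can be summarized as a small standalone predicate (simplified types; the semantics are paraphrased from the description above, not taken from the implementation):

```cpp
#include <cstdint>
#include <optional>

// True if the Amount of a share-burning VaultClawback is acceptable: omitted
// (unless the owner issues the asset), zero, or exactly the shares held.
bool
clawbackAmountOk(
    std::optional<std::uint64_t> amount,
    std::uint64_t sharesHeld,
    bool ownerIsAssetIssuer)
{
    if (!amount)
        return !ownerIsAssetIssuer;
    return *amount == 0 || *amount == sharesHeld;
}
```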
2026-01-09 14:58:02 -05:00
Zhanibek Bakin
fc00723836 fix: Truncate thread name to 15 chars on Linux (#5758)
This change:
* Truncates thread names longer than 15 chars with `snprintf`.
* Adds warnings for truncated thread names if `-DTRUNCATED_THREAD_NAME_LOGS=ON`.
* Adds a static assert for string literals to stop compilation if > 15 chars.
* Shortens `Resource::Manager` to `Resource::Mngr` to fix the static assert failure.
* Updates the `CurrentThreadName_test` unit test, specifically for Linux, to verify truncation.
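A standalone sketch of the truncate-at-runtime plus check-literals-at-compile-time approach described above (illustrative only, not the actual `CurrentThreadName` code):

```cpp
#include <cstddef>
#include <cstdio>

// Linux thread names hold at most 15 characters plus the terminating NUL.
constexpr std::size_t maxThreadName = 15;

// Rejects over-long string literals at compile time.
template <std::size_t N>
constexpr void
assertShortLiteral(char const (&)[N])
{
    static_assert(N - 1 <= maxThreadName, "thread name literal exceeds 15 characters");
}

// Truncates arbitrary runtime names so the platform call cannot fail on length.
void
setThreadName(char const* name)
{
    char buf[maxThreadName + 1];
    std::snprintf(buf, sizeof(buf), "%s", name);  // silently truncates longer names
    // pthread_setname_np(pthread_self(), buf);   // platform-specific call omitted here
}

int main()
{
    assertShortLiteral("Resource::Mngr");  // 14 characters: compiles
    setThreadName("a runtime name that is far too long");
}
```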
2026-01-09 13:37:55 -05:00
oncecelll
c24a6041f7 docs: Fix minor spelling issues in comments (#6194) 2026-01-09 13:15:05 -05:00
Bart
e1d97bea12 ci: Use updated prepare-runner in actions and workflows (#6188)
This change updates the XRPLF pre-commit workflow and prepare-runner action to their latest versions. For naming consistency, the prepare-runner action renamed the `disable_ccache` variable to `enable_ccache`, which matches our naming.
2026-01-08 15:02:59 -05:00
Mayukha Vadari
53aa5ca903 refactor: Fix typos, enable cspell pre-commit (#5719)
This change fixes the last of the spelling issues, and enables the pre-commit (and CI) check for spelling. There are no functionality changes, but it does rename some enum values.
2026-01-08 10:34:49 -05:00
Denis Angell
510c0d82e9 fix: Reorder Batch Preflight Errors (#6176)
This change fixes https://github.com/XRPLF/rippled/issues/6058.
2026-01-08 13:48:39 +00:00
Mayukha Vadari
17565d21d4 refactor: Remove unused credentials signature hash prefix (#6186)
This change removes the unused credentials signature hash prefix from `HashPrefix.h`.
2026-01-08 08:29:59 -05:00
Mayukha Vadari
07ff532d30 refactor: Fix spelling issues in all variables/functions (#6184)
This change fixes many typos in comments, variables, and public functions. There is no functionality change.
2026-01-07 21:30:35 +00:00
Mayukha Vadari
2c37ef7762 refactor: Fix spelling issues in private/local variables and functions (#6182)
This change fixes several typos in private/local variables and private functions. There is no functionality change.
2026-01-07 14:26:14 -05:00
1115 changed files with 37857 additions and 69220 deletions


@@ -37,7 +37,7 @@ BinPackParameters: false
 BreakBeforeBinaryOperators: false
 BreakBeforeTernaryOperators: true
 BreakConstructorInitializersBeforeComma: true
-ColumnLimit: 80
+ColumnLimit: 120
 CommentPragmas: "^ IWYU pragma:"
 ConstructorInitializerAllOnOneLineOrOnePerLine: true
 ConstructorInitializerIndentWidth: 4

.cmake-format.yaml (new file, 247 lines)

@@ -0,0 +1,247 @@
_help_parse: Options affecting listfile parsing
parse:
_help_additional_commands:
- Specify structure for custom cmake functions
additional_commands:
target_protobuf_sources:
pargs:
- target
- prefix
kwargs:
PROTOS: "*"
LANGUAGE: cpp
IMPORT_DIRS: "*"
GENERATE_EXTENSIONS: "*"
PLUGIN: "*"
_help_override_spec:
- Override configurations per-command where available
override_spec: {}
_help_vartags:
- Specify variable tags.
vartags: []
_help_proptags:
- Specify property tags.
proptags: []
_help_format: Options affecting formatting.
format:
_help_disable:
- Disable formatting entirely, making cmake-format a no-op
disable: false
_help_line_width:
- How wide to allow formatted cmake files
line_width: 120
_help_tab_size:
- How many spaces to tab for indent
tab_size: 4
_help_use_tabchars:
- If true, lines are indented using tab characters (utf-8
- 0x09) instead of <tab_size> space characters (utf-8 0x20).
- In cases where the layout would require a fractional tab
- character, the behavior of the fractional indentation is
- governed by <fractional_tab_policy>
use_tabchars: false
_help_fractional_tab_policy:
- If <use_tabchars> is True, then the value of this variable
- indicates how fractional indentions are handled during
- whitespace replacement. If set to 'use-space', fractional
- indentation is left as spaces (utf-8 0x20). If set to
- "`round-up` fractional indentation is replaced with a single"
- tab character (utf-8 0x09) effectively shifting the column
- to the next tabstop
fractional_tab_policy: use-space
_help_max_subgroups_hwrap:
- If an argument group contains more than this many sub-groups
- (parg or kwarg groups) then force it to a vertical layout.
max_subgroups_hwrap: 4
_help_max_pargs_hwrap:
- If a positional argument group contains more than this many
- arguments, then force it to a vertical layout.
max_pargs_hwrap: 5
_help_max_rows_cmdline:
- If a cmdline positional group consumes more than this many
- lines without nesting, then invalidate the layout (and nest)
max_rows_cmdline: 2
_help_separate_ctrl_name_with_space:
- If true, separate flow control names from their parentheses
- with a space
separate_ctrl_name_with_space: true
_help_separate_fn_name_with_space:
- If true, separate function names from parentheses with a
- space
separate_fn_name_with_space: false
_help_dangle_parens:
- If a statement is wrapped to more than one line, than dangle
- the closing parenthesis on its own line.
dangle_parens: false
_help_dangle_align:
- If the trailing parenthesis must be 'dangled' on its on
- "line, then align it to this reference: `prefix`: the start"
- "of the statement, `prefix-indent`: the start of the"
- "statement, plus one indentation level, `child`: align to"
- the column of the arguments
dangle_align: prefix
_help_min_prefix_chars:
- If the statement spelling length (including space and
- parenthesis) is smaller than this amount, then force reject
- nested layouts.
min_prefix_chars: 18
_help_max_prefix_chars:
- If the statement spelling length (including space and
- parenthesis) is larger than the tab width by more than this
- amount, then force reject un-nested layouts.
max_prefix_chars: 10
_help_max_lines_hwrap:
- If a candidate layout is wrapped horizontally but it exceeds
- this many lines, then reject the layout.
max_lines_hwrap: 2
_help_line_ending:
- What style line endings to use in the output.
line_ending: unix
_help_command_case:
- Format command names consistently as 'lower' or 'upper' case
command_case: canonical
_help_keyword_case:
- Format keywords consistently as 'lower' or 'upper' case
keyword_case: unchanged
_help_always_wrap:
- A list of command names which should always be wrapped
always_wrap: []
_help_enable_sort:
- If true, the argument lists which are known to be sortable
- will be sorted lexicographicall
enable_sort: true
_help_autosort:
- If true, the parsers may infer whether or not an argument
- list is sortable (without annotation).
autosort: true
_help_require_valid_layout:
- By default, if cmake-format cannot successfully fit
- everything into the desired linewidth it will apply the
- last, most aggressive attempt that it made. If this flag is
- True, however, cmake-format will print error, exit with non-
- zero status code, and write-out nothing
require_valid_layout: false
_help_layout_passes:
- A dictionary mapping layout nodes to a list of wrap
- decisions. See the documentation for more information.
layout_passes: {}
_help_markup: Options affecting comment reflow and formatting.
markup:
_help_bullet_char:
- What character to use for bulleted lists
bullet_char: "-"
_help_enum_char:
- What character to use as punctuation after numerals in an
- enumerated list
enum_char: .
_help_first_comment_is_literal:
- If comment markup is enabled, don't reflow the first comment
- block in each listfile. Use this to preserve formatting of
- your copyright/license statements.
first_comment_is_literal: false
_help_literal_comment_pattern:
- If comment markup is enabled, don't reflow any comment block
- which matches this (regex) pattern. Default is `None`
- (disabled).
literal_comment_pattern: null
_help_fence_pattern:
- Regular expression to match preformat fences in comments
- default= ``r'^\s*([`~]{3}[`~]*)(.*)$'``
fence_pattern: ^\s*([`~]{3}[`~]*)(.*)$
_help_ruler_pattern:
- Regular expression to match rulers in comments default=
- '``r''^\s*[^\w\s]{3}.*[^\w\s]{3}$''``'
ruler_pattern: ^\s*[^\w\s]{3}.*[^\w\s]{3}$
_help_explicit_trailing_pattern:
- If a comment line matches starts with this pattern then it
- is explicitly a trailing comment for the preceding
- argument. Default is '#<'
explicit_trailing_pattern: "#<"
_help_hashruler_min_length:
- If a comment line starts with at least this many consecutive
- hash characters, then don't lstrip() them off. This allows
- for lazy hash rulers where the first hash char is not
- separated by space
hashruler_min_length: 10
_help_canonicalize_hashrulers:
- If true, then insert a space between the first hash char and
- remaining hash chars in a hash ruler, and normalize its
- length to fill the column
canonicalize_hashrulers: true
_help_enable_markup:
- enable comment markup parsing and reflow
enable_markup: false
_help_lint: Options affecting the linter
lint:
_help_disabled_codes:
- a list of lint codes to disable
disabled_codes: []
_help_function_pattern:
- regular expression pattern describing valid function names
function_pattern: "[0-9a-z_]+"
_help_macro_pattern:
- regular expression pattern describing valid macro names
macro_pattern: "[0-9A-Z_]+"
_help_global_var_pattern:
- regular expression pattern describing valid names for
- variables with global (cache) scope
global_var_pattern: "[A-Z][0-9A-Z_]+"
_help_internal_var_pattern:
- regular expression pattern describing valid names for
- variables with global scope (but internal semantic)
internal_var_pattern: _[A-Z][0-9A-Z_]+
_help_local_var_pattern:
- regular expression pattern describing valid names for
- variables with local scope
local_var_pattern: "[a-z][a-z0-9_]+"
_help_private_var_pattern:
- regular expression pattern describing valid names for
- privatedirectory variables
private_var_pattern: _[0-9a-z_]+
_help_public_var_pattern:
- regular expression pattern describing valid names for public
- directory variables
public_var_pattern: "[A-Z][0-9A-Z_]+"
_help_argument_var_pattern:
- regular expression pattern describing valid names for
- function/macro arguments and loop variables.
argument_var_pattern: "[a-z][a-z0-9_]+"
_help_keyword_pattern:
- regular expression pattern describing valid names for
- keywords used in functions or macros
keyword_pattern: "[A-Z][0-9A-Z_]+"
_help_max_conditionals_custom_parser:
- In the heuristic for C0201, how many conditionals to match
- within a loop in before considering the loop a parser.
max_conditionals_custom_parser: 2
_help_min_statement_spacing:
- Require at least this many newlines between statements
min_statement_spacing: 1
_help_max_statement_spacing:
- Require no more than this many newlines between statements
max_statement_spacing: 2
max_returns: 6
max_branches: 12
max_arguments: 5
max_localvars: 15
max_statements: 50
_help_encode: Options affecting file encoding
encode:
_help_emit_byteorder_mark:
- If true, emit the unicode byte-order mark (BOM) at the start
- of the file
emit_byteorder_mark: false
_help_input_encoding:
- Specify the encoding of the input file. Defaults to utf-8
input_encoding: utf-8
_help_output_encoding:
- Specify the encoding of the output file. Defaults to utf-8.
- Note that cmake only claims to support utf-8 so be careful
- when using anything else
output_encoding: utf-8
_help_misc: Miscellaneous configurations options.
misc:
_help_per_command:
- A dictionary containing any per-command configuration
- overrides. Currently only `command_case` is supported.
per_command: {}


@@ -28,6 +28,7 @@ ignoreRegExpList:
 - /[\['"`]-[DWw][a-zA-Z0-9_-]+['"`\]]/g # compile flags
 suggestWords:
 - xprl->xrpl
+- xprld->xrpld
 - unsynched->unsynced
 - synched->synced
 - synch->sync
@@ -51,6 +52,7 @@ words:
 - Btrfs
 - canonicality
 - checkme
+- choco
 - chrono
 - citardauq
 - clawback
@@ -60,6 +62,7 @@ words:
 - compr
 - conanfile
 - conanrun
+- confs
 - connectability
 - coro
 - coros
@@ -68,6 +71,7 @@ words:
 - cryptoconditional
 - cryptoconditions
 - csprng
+- ctest
 - ctid
 - currenttxhash
 - daria
@@ -83,19 +87,21 @@ words:
 - doxyfile
 - dxrpl
 - endmacro
+- endpointv
 - exceptioned
 - Falco
 - finalizers
 - firewalled
 - fmtdur
+- fsanitize
 - funclets
 - gcov
 - gcovr
+- ghead
 - Gnutella
 - gpgcheck
 - gpgkey
 - hotwallet
+- hwrap
 - ifndef
 - inequation
 - insuf
@@ -103,11 +109,14 @@ words:
 - iou
 - ious
 - isrdc
+- itype
 - jemalloc
 - jlog
 - keylet
 - keylets
 - keyvadb
+- kwarg
+- kwargs
 - ledgerentry
 - ledgerhash
 - ledgerindex
@@ -123,6 +132,7 @@ words:
 - lseq
 - lsmf
 - ltype
+- mcmodel
 - MEMORYSTATUSEX
 - Merkle
 - Metafuncton
@@ -156,6 +166,7 @@ words:
 - nunl
 - Nyffenegger
 - ostr
+- pargs
 - partitioner
 - paychan
 - paychans
@@ -191,10 +202,12 @@ words:
 - roundings
 - sahyadri
 - Satoshi
+- scons
 - secp
 - sendq
 - seqit
 - sf
+- SFIELD
 - shamap
 - shamapitem
 - sidechain
@@ -221,6 +234,8 @@ words:
 - takergets
 - takerpays
 - ters
+- TMEndpointv2
+- trixie
 - tx
 - txid
 - txids
@@ -228,6 +243,8 @@ words:
 - txn
 - txns
 - txs
+- UBSAN
+- ubsan
 - umant
 - unacquired
 - unambiguity
@@ -263,6 +280,7 @@ words:
 - xbridge
 - xchain
 - ximinez
+- EXPECT_STREQ
 - XMACRO
 - xrpkuwait
 - xrpl

.gitattributes (vendored, 1 line added)

@@ -1,5 +1,6 @@
 # Set default behaviour, in case users don't have core.autocrlf set.
 #* text=auto
+# cspell: disable
 # Visual Studio
 *.sln text eol=crlf


@@ -18,6 +18,10 @@ inputs:
description: "The logging verbosity." description: "The logging verbosity."
required: false required: false
default: "verbose" default: "verbose"
sanitizers:
description: "The sanitizers to enable."
required: false
default: ""
runs: runs:
using: composite using: composite
@@ -29,9 +33,11 @@ runs:
BUILD_OPTION: ${{ inputs.force_build == 'true' && '*' || 'missing' }} BUILD_OPTION: ${{ inputs.force_build == 'true' && '*' || 'missing' }}
BUILD_TYPE: ${{ inputs.build_type }} BUILD_TYPE: ${{ inputs.build_type }}
LOG_VERBOSITY: ${{ inputs.log_verbosity }} LOG_VERBOSITY: ${{ inputs.log_verbosity }}
SANITIZERS: ${{ inputs.sanitizers }}
run: | run: |
echo 'Installing dependencies.' echo 'Installing dependencies.'
conan install \ conan install \
--profile ci \
--build="${BUILD_OPTION}" \ --build="${BUILD_OPTION}" \
--options:host='&:tests=True' \ --options:host='&:tests=True' \
--options:host='&:xrpld=True' \ --options:host='&:xrpld=True' \


@@ -0,0 +1,44 @@
name: Generate build version number
description: "Generate build version number."
outputs:
version:
description: "The generated build version number."
value: ${{ steps.version.outputs.version }}
runs:
using: composite
steps:
# When a tag is pushed, the version is used as-is.
- name: Generate version for tag event
if: ${{ github.event_name == 'tag' }}
shell: bash
env:
VERSION: ${{ github.ref_name }}
run: echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
# When a tag is not pushed, then the version (e.g. 1.2.3-b0) is extracted
# from the BuildInfo.cpp file and the shortened commit hash appended to it.
# We use a plus sign instead of a hyphen because Conan recipe versions do
# not support two hyphens.
- name: Generate version for non-tag event
if: ${{ github.event_name != 'tag' }}
shell: bash
run: |
echo 'Extracting version from BuildInfo.cpp.'
VERSION="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')"
if [[ -z "${VERSION}" ]]; then
echo 'Unable to extract version from BuildInfo.cpp.'
exit 1
fi
echo 'Appending shortened commit hash to version.'
SHA='${{ github.sha }}'
VERSION="${VERSION}+${SHA:0:7}"
echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
- name: Output version
id: version
shell: bash
run: echo "version=${VERSION}" >> "${GITHUB_OUTPUT}"


@@ -2,11 +2,11 @@ name: Setup Conan
description: "Set up Conan configuration, profile, and remote." description: "Set up Conan configuration, profile, and remote."
inputs: inputs:
conan_remote_name: remote_name:
description: "The name of the Conan remote to use." description: "The name of the Conan remote to use."
required: false required: false
default: xrplf default: xrplf
conan_remote_url: remote_url:
description: "The URL of the Conan endpoint to use." description: "The URL of the Conan endpoint to use."
required: false required: false
default: https://conan.ripplex.io default: https://conan.ripplex.io
@@ -28,19 +28,19 @@ runs:
shell: bash shell: bash
run: | run: |
echo 'Installing profile.' echo 'Installing profile.'
conan config install conan/profiles/default -tf $(conan config home)/profiles/ conan config install conan/profiles/ -tf $(conan config home)/profiles/
echo 'Conan profile:' echo 'Conan profile:'
conan profile show conan profile show --profile ci
- name: Set up Conan remote - name: Set up Conan remote
shell: bash shell: bash
env: env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }} REMOTE_NAME: ${{ inputs.remote_name }}
CONAN_REMOTE_URL: ${{ inputs.conan_remote_url }} REMOTE_URL: ${{ inputs.remote_url }}
run: | run: |
echo "Adding Conan remote '${CONAN_REMOTE_NAME}' at '${CONAN_REMOTE_URL}'." echo "Adding Conan remote '${REMOTE_NAME}' at '${REMOTE_URL}'."
conan remote add --index 0 --force "${CONAN_REMOTE_NAME}" "${CONAN_REMOTE_URL}" conan remote add --index 0 --force "${REMOTE_NAME}" "${REMOTE_URL}"
echo 'Listing Conan remotes.' echo 'Listing Conan remotes.'
conan remote list conan remote list


@@ -84,7 +84,7 @@ It generates many files of [results](results):
 to the destination module, de-duped, and with frequency counts.
 - `includes/`: A directory where each file represents a module and
 contains a list of modules and counts that the module _includes_.
-- `includedby/`: Similar to `includes/`, but the other way around. Each
+- `included_by/`: Similar to `includes/`, but the other way around. Each
 file represents a module and contains a list of modules and counts
 that _include_ the module.
 - [`loops.txt`](results/loops.txt): A list of direct loops detected


@@ -29,7 +29,7 @@ pushd results
 oldifs=${IFS}
 IFS=:
 mkdir includes
-mkdir includedby
+mkdir included_by
 echo Build levelization paths
 exec 3< ${includes} # open rawincludes.txt for input
 while read -r -u 3 file include
@@ -59,7 +59,7 @@ do
 echo $level $includelevel | tee -a paths.txt
 fi
 done
-echo Sort and dedup paths
+echo Sort and deduplicate paths
 sort -ds paths.txt | uniq -c | tee sortedpaths.txt
 mv sortedpaths.txt paths.txt
 exec 3>&- #close fd 3
@@ -71,7 +71,7 @@ exec 4<paths.txt # open paths.txt for input
 while read -r -u 4 count level include
 do
 echo ${include} ${count} | tee -a includes/${level}
-echo ${level} ${count} | tee -a includedby/${include}
+echo ${level} ${count} | tee -a included_by/${include}
 done
 exec 4>&- #close fd 4


@@ -104,6 +104,7 @@ test.overlay > xrpl.basics
 test.overlay > xrpld.app
 test.overlay > xrpld.overlay
 test.overlay > xrpld.peerfinder
+test.overlay > xrpl.nodestore
 test.overlay > xrpl.protocol
 test.overlay > xrpl.shamap
 test.peerfinder > test.beast


@@ -19,7 +19,7 @@ run from the repository root.
 1. `.github/scripts/rename/definitions.sh`: This script will rename all
 definitions, such as include guards, from `RIPPLE_XXX` and `RIPPLED_XXX` to
 `XRPL_XXX`.
-2. `.github/scripts/rename/copyright.sh`: This script will remove superflous
+2. `.github/scripts/rename/copyright.sh`: This script will remove superfluous
 copyright notices.
 3. `.github/scripts/rename/cmake.sh`: This script will rename all CMake files
 from `RippleXXX.cmake` or `RippledXXX.cmake` to `XrplXXX.cmake`, and any


@@ -56,7 +56,7 @@ for DIRECTORY in "${DIRECTORIES[@]}"; do
 done
 ${SED_COMMAND} -i 's/rippled/xrpld/g' cfg/xrpld-example.cfg
 ${SED_COMMAND} -i 's/rippled/xrpld/g' src/test/core/Config_test.cpp
-${SED_COMMAND} -i 's/ripplevalidators/xrplvalidators/g' src/test/core/Config_test.cpp
+${SED_COMMAND} -i 's/ripplevalidators/xrplvalidators/g' src/test/core/Config_test.cpp # cspell: disable-line
 ${SED_COMMAND} -i 's/rippleConfig/xrpldConfig/g' src/test/core/Config_test.cpp
 ${SED_COMMAND} -i 's@ripple/@xrpld/@g' src/test/core/Config_test.cpp
 ${SED_COMMAND} -i 's/Rippled/File/g' src/test/core/Config_test.cpp


@@ -50,11 +50,11 @@ for DIRECTORY in "${DIRECTORIES[@]}"; do
 # Handle the cases where the copyright notice is enclosed in /* ... */
 # and usually surrounded by //---- and //======.
 ${SED_COMMAND} -z -i -E 's@^//-------+\n+@@' "${FILE}"
-${SED_COMMAND} -z -i -E 's@^.*Copyright.+(Ripple|Bougalis|Falco|Hinnant|Null|Ritchford|XRPLF).+PERFORMANCE OF THIS SOFTWARE\.\n\*/\n+@@' "${FILE}"
+${SED_COMMAND} -z -i -E 's@^.*Copyright.+(Ripple|Bougalis|Falco|Hinnant|Null|Ritchford|XRPLF).+PERFORMANCE OF THIS SOFTWARE\.\n\*/\n+@@' "${FILE}" # cspell: ignore Bougalis Falco Hinnant Ritchford
 ${SED_COMMAND} -z -i -E 's@^//=======+\n+@@' "${FILE}"
 # Handle the cases where the copyright notice is commented out with //.
-${SED_COMMAND} -z -i -E 's@^//\n// Copyright.+Falco \(vinnie dot falco at gmail dot com\)\n//\n+@@' "${FILE}"
+${SED_COMMAND} -z -i -E 's@^//\n// Copyright.+Falco \(vinnie dot falco at gmail dot com\)\n//\n+@@' "${FILE}" # cspell: ignore Vinnie Falco
 done
 done
@@ -83,16 +83,16 @@ if ! grep -q 'Dev Null' src/xrpld/rpc/handlers/ValidatorInfo.cpp; then
 echo -e "// Copyright (c) 2019 Dev Null Productions\n\n$(cat src/xrpld/rpc/handlers/ValidatorInfo.cpp)" > src/xrpld/rpc/handlers/ValidatorInfo.cpp
 fi
 if ! grep -q 'Bougalis' include/xrpl/basics/SlabAllocator.h; then
-echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/SlabAllocator.h)" > include/xrpl/basics/SlabAllocator.h
+echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/SlabAllocator.h)" > include/xrpl/basics/SlabAllocator.h # cspell: ignore Nikolaos Bougalis nikb
 fi
 if ! grep -q 'Bougalis' include/xrpl/basics/spinlock.h; then
-echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/spinlock.h)" > include/xrpl/basics/spinlock.h
+echo -e "// Copyright (c) 2022, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/spinlock.h)" > include/xrpl/basics/spinlock.h # cspell: ignore Nikolaos Bougalis nikb
 fi
 if ! grep -q 'Bougalis' include/xrpl/basics/tagged_integer.h; then
-echo -e "// Copyright (c) 2014, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/tagged_integer.h)" > include/xrpl/basics/tagged_integer.h
+echo -e "// Copyright (c) 2014, Nikolaos D. Bougalis <nikb@bougalis.net>\n\n$(cat include/xrpl/basics/tagged_integer.h)" > include/xrpl/basics/tagged_integer.h # cspell: ignore Nikolaos Bougalis nikb
 fi
 if ! grep -q 'Ritchford' include/xrpl/beast/utility/Zero.h; then
-echo -e "// Copyright (c) 2014, Tom Ritchford <tom@swirly.com>\n\n$(cat include/xrpl/beast/utility/Zero.h)" > include/xrpl/beast/utility/Zero.h
+echo -e "// Copyright (c) 2014, Tom Ritchford <tom@swirly.com>\n\n$(cat include/xrpl/beast/utility/Zero.h)" > include/xrpl/beast/utility/Zero.h # cspell: ignore Ritchford
 fi

 # Restore newlines and tabs in string literals in the affected file.
View File

@@ -20,8 +20,8 @@ class Config:
 Generate a strategy matrix for GitHub Actions CI.
 On each PR commit we will build a selection of Debian, RHEL, Ubuntu, MacOS, and
-Windows configurations, while upon merge into the develop, release, or master
-branches, we will build all configurations, and test most of them.
+Windows configurations, while upon merge into the develop or release branches,
+we will build all configurations, and test most of them.
 We will further set additional CMake arguments as follows:
 - All builds will have the `tests`, `werr`, and `xrpld` options.
@@ -240,6 +240,41 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
 # Add the configuration to the list, with the most unique fields first,
 # so that they are easier to identify in the GitHub Actions UI, as long
 # names get truncated.
+# Add Address and Thread (both coupled with UB) sanitizers for specific bookworm distros.
+# GCC-Asan rippled-embedded tests are failing because of https://github.com/google/sanitizers/issues/856
+if (
+os["distro_version"] == "bookworm"
+and f"{os['compiler_name']}-{os['compiler_version']}" == "clang-20"
+):
+# Add ASAN + UBSAN configuration.
+configurations.append(
+{
+"config_name": config_name + "-asan-ubsan",
+"cmake_args": cmake_args,
+"cmake_target": cmake_target,
+"build_only": build_only,
+"build_type": build_type,
+"os": os,
+"architecture": architecture,
+"sanitizers": "address,undefinedbehavior",
+}
+)
+# TSAN is deactivated due to seg faults with latest compilers.
+activate_tsan = False
+if activate_tsan:
+configurations.append(
+{
+"config_name": config_name + "-tsan-ubsan",
+"cmake_args": cmake_args,
+"cmake_target": cmake_target,
+"build_only": build_only,
+"build_type": build_type,
+"os": os,
+"architecture": architecture,
+"sanitizers": "thread,undefinedbehavior",
+}
+)
+else:
 configurations.append(
 {
 "config_name": config_name,
@@ -249,6 +284,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
 "build_type": build_type,
 "os": os,
 "architecture": architecture,
+"sanitizers": "",
 }
 )


@@ -15,196 +15,196 @@
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "12", "compiler_version": "12",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "13", "compiler_version": "13",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "14", "compiler_version": "14",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "15", "compiler_version": "15",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "16", "compiler_version": "16",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "17", "compiler_version": "17",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "18", "compiler_version": "18",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "19", "compiler_version": "19",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "bookworm", "distro_version": "bookworm",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "20", "compiler_version": "20",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "trixie", "distro_version": "trixie",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "14", "compiler_version": "14",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "trixie", "distro_version": "trixie",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "15", "compiler_version": "15",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "trixie", "distro_version": "trixie",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "20", "compiler_version": "20",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "debian", "distro_name": "debian",
"distro_version": "trixie", "distro_version": "trixie",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "21", "compiler_version": "21",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "8", "distro_version": "8",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "14", "compiler_version": "14",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "8", "distro_version": "8",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "any", "compiler_version": "any",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "9", "distro_version": "9",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "12", "compiler_version": "12",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "9", "distro_version": "9",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "13", "compiler_version": "13",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "9", "distro_version": "9",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "14", "compiler_version": "14",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "9", "distro_version": "9",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "any", "compiler_version": "any",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "10", "distro_version": "10",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "14", "compiler_version": "14",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "rhel", "distro_name": "rhel",
"distro_version": "10", "distro_version": "10",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "any", "compiler_version": "any",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "ubuntu", "distro_name": "ubuntu",
"distro_version": "jammy", "distro_version": "jammy",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "12", "compiler_version": "12",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "ubuntu", "distro_name": "ubuntu",
"distro_version": "noble", "distro_version": "noble",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "13", "compiler_version": "13",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "ubuntu", "distro_name": "ubuntu",
"distro_version": "noble", "distro_version": "noble",
"compiler_name": "gcc", "compiler_name": "gcc",
"compiler_version": "14", "compiler_version": "14",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "ubuntu", "distro_name": "ubuntu",
"distro_version": "noble", "distro_version": "noble",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "16", "compiler_version": "16",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "ubuntu", "distro_name": "ubuntu",
"distro_version": "noble", "distro_version": "noble",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "17", "compiler_version": "17",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "ubuntu", "distro_name": "ubuntu",
"distro_version": "noble", "distro_version": "noble",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "18", "compiler_version": "18",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
}, },
{ {
"distro_name": "ubuntu", "distro_name": "ubuntu",
"distro_version": "noble", "distro_version": "noble",
"compiler_name": "clang", "compiler_name": "clang",
"compiler_version": "19", "compiler_version": "19",
"image_sha": "cc09fd3" "image_sha": "ab4d1f0"
} }
], ],
"build_type": ["Debug", "Release"], "build_type": ["Debug", "Release"],


@@ -1,7 +1,8 @@
# This workflow runs all workflows to check, build and test the project on # This workflow runs all workflows to check, build and test the project on
# various Linux flavors, as well as on MacOS and Windows, on every push to a # various Linux flavors, as well as on MacOS and Windows, on every push to a
# user branch. However, it will not run if the pull request is a draft unless it # user branch. However, it will not run if the pull request is a draft unless it
# has the 'DraftRunCI' label. # has the 'DraftRunCI' label. For commits to PRs that target a release branch,
# it also uploads the libxrpl recipe to the Conan remote.
name: PR name: PR
on: on:
@@ -53,12 +54,12 @@ jobs:
.github/scripts/rename/** .github/scripts/rename/**
.github/workflows/reusable-check-levelization.yml .github/workflows/reusable-check-levelization.yml
.github/workflows/reusable-check-rename.yml .github/workflows/reusable-check-rename.yml
.github/workflows/reusable-notify-clio.yml
.github/workflows/on-pr.yml .github/workflows/on-pr.yml
# Keep the paths below in sync with those in `on-trigger.yml`. # Keep the paths below in sync with those in `on-trigger.yml`.
.github/actions/build-deps/** .github/actions/build-deps/**
.github/actions/build-test/** .github/actions/build-test/**
.github/actions/generate-version/**
.github/actions/setup-conan/** .github/actions/setup-conan/**
.github/scripts/strategy-matrix/** .github/scripts/strategy-matrix/**
.github/workflows/reusable-build.yml .github/workflows/reusable-build.yml
@@ -66,6 +67,7 @@ jobs:
.github/workflows/reusable-build-test.yml .github/workflows/reusable-build-test.yml
.github/workflows/reusable-strategy-matrix.yml .github/workflows/reusable-strategy-matrix.yml
.github/workflows/reusable-test.yml .github/workflows/reusable-test.yml
.github/workflows/reusable-upload-recipe.yml
.codecov.yml .codecov.yml
cmake/** cmake/**
conan/** conan/**
@@ -121,22 +123,42 @@ jobs:
secrets: secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
notify-clio: upload-recipe:
needs: needs:
- should-run - should-run
- build-test - build-test
if: ${{ needs.should-run.outputs.go == 'true' && (startsWith(github.base_ref, 'release') || github.base_ref == 'master') }} # Only run when committing to a PR that targets a release branch in the
uses: ./.github/workflows/reusable-notify-clio.yml # XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && needs.should-run.outputs.go == 'true' && startsWith(github.ref, 'refs/heads/release') }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets: secrets:
clio_notify_token: ${{ secrets.CLIO_NOTIFY_TOKEN }} remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
conan_remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }} remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
conan_remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
notify-clio:
needs: upload-recipe
runs-on: ubuntu-latest
steps:
# Notify the Clio repository about the newly proposed release version, so
# it can be checked for compatibility before the release is actually made.
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.CLIO_NOTIFY_TOKEN }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[ref]=${{ needs.upload-recipe.outputs.recipe_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"
passed: passed:
if: failure() || cancelled() if: failure() || cancelled()
needs: needs:
- build-test
- check-levelization - check-levelization
- check-rename
- build-test
- upload-recipe
- notify-clio
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Fail - name: Fail

.github/workflows/on-tag.yml (new file)

@@ -0,0 +1,25 @@
# This workflow uploads the libxrpl recipe to the Conan remote when a versioned
# tag is pushed.
name: Tag
on:
push:
tags:
- "v*"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload-recipe:
# Only run when a tag is pushed to the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}


@@ -1,9 +1,7 @@
# This workflow runs all workflows to build the dependencies required for the # This workflow runs all workflows to build and test the code on various Linux
# project on various Linux flavors, as well as on MacOS and Windows, on a # flavors, as well as on MacOS and Windows, on a scheduled basis, on merge into
# scheduled basis, on merge into the 'develop', 'release', or 'master' branches, # the 'develop' or 'release*' branches, or when requested manually. Upon pushes
# or manually. The missing commits check is only run when the code is merged # to the develop branch it also uploads the libxrpl recipe to the Conan remote.
# into the 'develop' or 'release' branches, and the documentation is built when
# the code is merged into the 'develop' branch.
name: Trigger name: Trigger
on: on:
@@ -11,7 +9,6 @@ on:
branches: branches:
- "develop" - "develop"
- "release*" - "release*"
- "master"
paths: paths:
# These paths are unique to `on-trigger.yml`. # These paths are unique to `on-trigger.yml`.
- ".github/workflows/on-trigger.yml" - ".github/workflows/on-trigger.yml"
@@ -19,6 +16,7 @@ on:
# Keep the paths below in sync with those in `on-pr.yml`. # Keep the paths below in sync with those in `on-pr.yml`.
- ".github/actions/build-deps/**" - ".github/actions/build-deps/**"
- ".github/actions/build-test/**" - ".github/actions/build-test/**"
- ".github/actions/generate-version/**"
- ".github/actions/setup-conan/**" - ".github/actions/setup-conan/**"
- ".github/scripts/strategy-matrix/**" - ".github/scripts/strategy-matrix/**"
- ".github/workflows/reusable-build.yml" - ".github/workflows/reusable-build.yml"
@@ -26,6 +24,7 @@ on:
- ".github/workflows/reusable-build-test.yml" - ".github/workflows/reusable-build-test.yml"
- ".github/workflows/reusable-strategy-matrix.yml" - ".github/workflows/reusable-strategy-matrix.yml"
- ".github/workflows/reusable-test.yml" - ".github/workflows/reusable-test.yml"
- ".github/workflows/reusable-upload-recipe.yml"
- ".codecov.yml" - ".codecov.yml"
- "cmake/**" - "cmake/**"
- "conan/**" - "conan/**"
@@ -70,11 +69,20 @@ jobs:
with: with:
# Enable ccache only for events targeting the XRPLF repository, since # Enable ccache only for events targeting the XRPLF repository, since
# other accounts will not have access to our remote cache storage. # other accounts will not have access to our remote cache storage.
# However, we do not enable ccache for events targeting the master or a # However, we do not enable ccache for events targeting a release branch,
# release branch, to protect against the rare case that the output # to protect against the rare case that the output produced by ccache is
# produced by ccache is not identical to a regular compilation. # not identical to a regular compilation.
ccache_enabled: ${{ github.repository_owner == 'XRPLF' && !(github.base_ref == 'master' || startsWith(github.base_ref, 'release')) }} ccache_enabled: ${{ github.repository_owner == 'XRPLF' && !startsWith(github.ref, 'refs/heads/release') }}
os: ${{ matrix.os }} os: ${{ matrix.os }}
strategy_matrix: ${{ github.event_name == 'schedule' && 'all' || 'minimal' }} strategy_matrix: ${{ github.event_name == 'schedule' && 'all' || 'minimal' }}
secrets: secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
upload-recipe:
needs: build-test
# Only run when pushing to the develop branch in the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}


@@ -3,13 +3,15 @@ name: Run pre-commit hooks
on: on:
pull_request: pull_request:
push: push:
branches: [develop, release, master] branches:
- "develop"
- "release*"
workflow_dispatch: workflow_dispatch:
jobs: jobs:
# Call the workflow in the XRPLF/actions repo that runs the pre-commit hooks. # Call the workflow in the XRPLF/actions repo that runs the pre-commit hooks.
run-hooks: run-hooks:
uses: XRPLF/actions/.github/workflows/pre-commit.yml@34790936fae4c6c751f62ec8c06696f9c1a5753a uses: XRPLF/actions/.github/workflows/pre-commit.yml@320be44621ca2a080f05aeb15817c44b84518108
with: with:
runs_on: ubuntu-latest runs_on: ubuntu-latest
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-a8c7be1" }' container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-ab4d1f0" }'


@@ -36,7 +36,7 @@ jobs:
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0 uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Get number of processors - name: Get number of processors
uses: XRPLF/actions/get-nproc@2ece4ec6ab7de266859a6f053571425b2bd684b6 uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
id: nproc id: nproc
with: with:
subtract: ${{ env.NPROC_SUBTRACT }} subtract: ${{ env.NPROC_SUBTRACT }}


@@ -51,6 +51,12 @@ on:
type: number type: number
default: 2 default: 2
sanitizers:
description: "The sanitizers to enable."
required: false
type: string
default: ""
secrets: secrets:
CODECOV_TOKEN: CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports." description: "The Codecov token to use for uploading coverage reports."
@@ -91,18 +97,19 @@ jobs:
# Determine if coverage and voidstar should be enabled. # Determine if coverage and voidstar should be enabled.
COVERAGE_ENABLED: ${{ contains(inputs.cmake_args, '-Dcoverage=ON') }} COVERAGE_ENABLED: ${{ contains(inputs.cmake_args, '-Dcoverage=ON') }}
VOIDSTAR_ENABLED: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }} VOIDSTAR_ENABLED: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }}
SANITIZERS_ENABLED: ${{ inputs.sanitizers != '' }}
steps: steps:
- name: Cleanup workspace (macOS and Windows) - name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }} if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@2ece4ec6ab7de266859a6f053571425b2bd684b6 uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0 uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner - name: Prepare runner
uses: XRPLF/actions/prepare-runner@2ece4ec6ab7de266859a6f053571425b2bd684b6 uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
with: with:
disable_ccache: ${{ !inputs.ccache_enabled }} enable_ccache: ${{ inputs.ccache_enabled }}
- name: Set ccache log file - name: Set ccache log file
if: ${{ inputs.ccache_enabled && runner.debug == '1' }} if: ${{ inputs.ccache_enabled && runner.debug == '1' }}
@@ -112,12 +119,14 @@ jobs:
uses: ./.github/actions/print-env uses: ./.github/actions/print-env
- name: Get number of processors - name: Get number of processors
uses: XRPLF/actions/get-nproc@2ece4ec6ab7de266859a6f053571425b2bd684b6 uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
id: nproc id: nproc
with: with:
subtract: ${{ inputs.nproc_subtract }} subtract: ${{ inputs.nproc_subtract }}
- name: Setup Conan - name: Setup Conan
env:
SANITIZERS: ${{ inputs.sanitizers }}
uses: ./.github/actions/setup-conan uses: ./.github/actions/setup-conan
- name: Build dependencies - name: Build dependencies
@@ -128,11 +137,13 @@ jobs:
# Set the verbosity to "quiet" for Windows to avoid an excessive # Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful. # amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }} log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
sanitizers: ${{ inputs.sanitizers }}
- name: Configure CMake - name: Configure CMake
working-directory: ${{ env.BUILD_DIR }} working-directory: ${{ env.BUILD_DIR }}
env: env:
BUILD_TYPE: ${{ inputs.build_type }} BUILD_TYPE: ${{ inputs.build_type }}
SANITIZERS: ${{ inputs.sanitizers }}
CMAKE_ARGS: ${{ inputs.cmake_args }} CMAKE_ARGS: ${{ inputs.cmake_args }}
run: | run: |
cmake \ cmake \
@@ -174,7 +185,7 @@ jobs:
if-no-files-found: error if-no-files-found: error
- name: Check linking (Linux) - name: Check linking (Linux)
if: ${{ runner.os == 'Linux' }} if: ${{ runner.os == 'Linux' && env.SANITIZERS_ENABLED == 'false' }}
working-directory: ${{ env.BUILD_DIR }} working-directory: ${{ env.BUILD_DIR }}
run: | run: |
ldd ./xrpld ldd ./xrpld
@@ -191,6 +202,14 @@ jobs:
run: | run: |
./xrpld --version | grep libvoidstar ./xrpld --version | grep libvoidstar
- name: Set sanitizer options
if: ${{ !inputs.build_only && env.SANITIZERS_ENABLED == 'true' }}
run: |
echo "ASAN_OPTIONS=print_stacktrace=1:detect_container_overflow=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=second_deadlock_stack=1:halt_on_error=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
- name: Run the separate tests - name: Run the separate tests
if: ${{ !inputs.build_only }} if: ${{ !inputs.build_only }}
working-directory: ${{ env.BUILD_DIR }} working-directory: ${{ env.BUILD_DIR }}


@@ -57,5 +57,6 @@ jobs:
runs_on: ${{ toJSON(matrix.architecture.runner) }} runs_on: ${{ toJSON(matrix.architecture.runner) }}
image: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || '' }} image: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || '' }}
config_name: ${{ matrix.config_name }} config_name: ${{ matrix.config_name }}
sanitizers: ${{ matrix.sanitizers }}
secrets: secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
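For instance, with the new `image_sha`, the first Debian entry in the strategy matrix above would resolve to an image name along the lines of `ghcr.io/xrplf/ci/debian-bookworm:clang-16-sha-ab4d1f0` (an illustrative expansion of the `format()` expression, not a value taken from the diff).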


@@ -1,91 +0,0 @@
# This workflow exports the built libxrpl package to the Conan remote on a
# a channel named after the pull request, and notifies the Clio repository about
# the new version so it can check for compatibility.
name: Notify Clio
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
conan_remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
conan_remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
clio_notify_token:
description: "The GitHub token to notify Clio about new versions."
required: true
conan_remote_username:
description: "The username for logging into the Conan remote."
required: true
conan_remote_password:
description: "The password for logging into the Conan remote."
required: true
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-clio
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
if: ${{ github.event.pull_request.head.repo.full_name == github.repository }}
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate outputs
id: generate
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
run: |
echo 'Generating user and channel.'
echo "user=clio" >> "${GITHUB_OUTPUT}"
echo "channel=pr_${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
echo 'Extracting version.'
echo "version=$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')" >> "${GITHUB_OUTPUT}"
- name: Calculate conan reference
id: conan_ref
run: |
echo "conan_ref=${{ steps.generate.outputs.version }}@${{ steps.generate.outputs.user }}/${{ steps.generate.outputs.channel }}" >> "${GITHUB_OUTPUT}"
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
conan_remote_name: ${{ inputs.conan_remote_name }}
conan_remote_url: ${{ inputs.conan_remote_url }}
- name: Log into Conan remote
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.conan_remote_username }}" --password "${{ secrets.conan_remote_password }}"
- name: Upload package
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: |
conan export --user=${{ steps.generate.outputs.user }} --channel=${{ steps.generate.outputs.channel }} .
conan upload --confirm --check --remote="${CONAN_REMOTE_NAME}" xrpl/${{ steps.conan_ref.outputs.conan_ref }}
outputs:
conan_ref: ${{ steps.conan_ref.outputs.conan_ref }}
notify:
needs: upload
runs-on: ubuntu-latest
steps:
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.clio_notify_token }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[conan_ref]=${{ needs.upload.outputs.conan_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"


@@ -0,0 +1,97 @@
# This workflow exports the built libxrpl package to the Conan remote.
name: Upload Conan recipe
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
remote_username:
description: "The username for logging into the Conan remote."
required: true
remote_password:
description: "The password for logging into the Conan remote."
required: true
outputs:
recipe_ref:
description: "The Conan recipe reference ('name/version') that was uploaded."
value: ${{ jobs.upload.outputs.ref }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-upload-recipe
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate build version number
id: version
uses: ./.github/actions/generate-version
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
remote_name: ${{ inputs.remote_name }}
remote_url: ${{ inputs.remote_url }}
- name: Log into Conan remote
env:
REMOTE_NAME: ${{ inputs.remote_name }}
REMOTE_USERNAME: ${{ secrets.remote_username }}
REMOTE_PASSWORD: ${{ secrets.remote_password }}
run: conan remote login "${REMOTE_NAME}" "${REMOTE_USERNAME}" --password "${REMOTE_PASSWORD}"
- name: Upload Conan recipe (version)
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=${{ steps.version.outputs.version }}
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/${{ steps.version.outputs.version }}
- name: Upload Conan recipe (develop)
if: ${{ github.ref == 'refs/heads/develop' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=develop
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/develop
- name: Upload Conan recipe (rc)
if: ${{ startsWith(github.ref, 'refs/heads/release') }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=rc
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/rc
- name: Upload Conan recipe (release)
if: ${{ github.event_name == 'tag' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=release
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/release
outputs:
ref: xrpl/${{ steps.version.outputs.version }}


@@ -64,30 +64,32 @@ jobs:
steps: steps:
- name: Cleanup workspace (macOS and Windows) - name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }} if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/cleanup-workspace@2ece4ec6ab7de266859a6f053571425b2bd684b6 uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0 uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner - name: Prepare runner
uses: XRPLF/actions/prepare-runner@2ece4ec6ab7de266859a6f053571425b2bd684b6 uses: XRPLF/actions/prepare-runner@2cbf481018d930656e9276fcc20dc0e3a0be5b6d
with: with:
disable_ccache: true enable_ccache: false
- name: Print build environment - name: Print build environment
uses: ./.github/actions/print-env uses: ./.github/actions/print-env
- name: Get number of processors - name: Get number of processors
uses: XRPLF/actions/get-nproc@2ece4ec6ab7de266859a6f053571425b2bd684b6 uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
id: nproc id: nproc
with: with:
subtract: ${{ env.NPROC_SUBTRACT }} subtract: ${{ env.NPROC_SUBTRACT }}
- name: Setup Conan - name: Setup Conan
env:
SANITIZERS: ${{ matrix.sanitizers }}
uses: ./.github/actions/setup-conan uses: ./.github/actions/setup-conan
with: with:
conan_remote_name: ${{ env.CONAN_REMOTE_NAME }} remote_name: ${{ env.CONAN_REMOTE_NAME }}
conan_remote_url: ${{ env.CONAN_REMOTE_URL }} remote_url: ${{ env.CONAN_REMOTE_URL }}
- name: Build dependencies - name: Build dependencies
uses: ./.github/actions/build-deps uses: ./.github/actions/build-deps
@@ -98,6 +100,7 @@ jobs:
# Set the verbosity to "quiet" for Windows to avoid an excessive # Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful. # amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }} log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
sanitizers: ${{ matrix.sanitizers }}
- name: Log into Conan remote - name: Log into Conan remote
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }} if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}

.gitignore

@@ -1,4 +1,5 @@
# .gitignore # .gitignore
# cspell: disable
# Macintosh Desktop Services Store files. # Macintosh Desktop Services Store files.
.DS_Store .DS_Store


@@ -26,30 +26,36 @@ repos:
args: [--style=file] args: [--style=file]
"types_or": [c++, c, proto] "types_or": [c++, c, proto]
- repo: https://github.com/cheshirekow/cmake-format-precommit
rev: e2c2116d86a80e72e7146a06e68b7c228afc6319 # frozen: v0.6.13
hooks:
- id: cmake-format
additional_dependencies: [PyYAML]
- repo: https://github.com/rbubley/mirrors-prettier - repo: https://github.com/rbubley/mirrors-prettier
rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2 rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
hooks: hooks:
- id: prettier - id: prettier
- repo: https://github.com/psf/black-pre-commit-mirror - repo: https://github.com/psf/black-pre-commit-mirror
rev: 25.11.0 rev: 831207fd435b47aeffdf6af853097e64322b4d44 # frozen: v25.12.0
hooks: hooks:
- id: black - id: black
# - repo: https://github.com/streetsidesoftware/cspell-cli - repo: https://github.com/streetsidesoftware/cspell-cli
# rev: v9.2.0 rev: 1cfa010f078c354f3ffb8413616280cc28f5ba21 # frozen: v9.4.0
# hooks: hooks:
# - id: cspell # Spell check changed files - id: cspell # Spell check changed files
# - id: cspell # Spell check the commit message exclude: .config/cspell.config.yaml
# name: check commit message spelling - id: cspell # Spell check the commit message
# args: name: check commit message spelling
# - --no-must-find-files args:
# - --no-progress - --no-must-find-files
# - --no-summary - --no-progress
# - --files - --no-summary
# - .git/COMMIT_EDITMSG - --files
# stages: [commit-msg] - .git/COMMIT_EDITMSG
# always_run: true # This might not be necessary. stages: [commit-msg]
exclude: | exclude: |
(?x)^( (?x)^(


@@ -6,90 +6,85 @@ For info about how [API versioning](https://xrpl.org/request-formatting.html#api
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade. The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release. For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 3 (Beta)
API version 3 is currently a beta API. It requires enabling `[beta_rpc_api]` in the rippled configuration to use. See [API-VERSION-3.md](API-VERSION-3.md) for the full list of changes in API version 3.
## API Version 2 ## API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request. API version 2 is available in `rippled` version 2.0.0 and later. See [API-VERSION-2.md](API-VERSION-2.md) for the full list of changes in API version 2.
#### Removed methods
In API version 2, the following deprecated methods are no longer available: (https://github.com/XRPLF/rippled/pull/4759)
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
#### Modifications to JSON transaction element in V2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. (https://github.com/XRPLF/rippled/pull/4775)
This helps to unify the JSON serialization format of transactions. (https://github.com/XRPLF/clio/issues/722, https://github.com/XRPLF/rippled/issues/4727)
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
#### Modification to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. (https://github.com/XRPLF/rippled/pull/4733)
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: (https://github.com/XRPLF/rippled/pull/4733)
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
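As a minimal sketch of the aliasing rules above, a `Payment` submitted under API version 2 uses `DeliverMax` where API version 1 used `Amount`; the account addresses and drop amount below are placeholder values, not taken from the changelog:

```json
{
  "TransactionType": "Payment",
  "Account": "rSenderPlaceholder111111111111111",
  "Destination": "rReceiverPlaceholder2222222222222",
  "DeliverMax": "1000000"
}
```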
#### Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) (https://github.com/XRPLF/rippled/pull/3770)
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. (https://github.com/XRPLF/rippled/pull/4585)
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
#### Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min`, `ledger_index_max`, and `ledger_index` returns `invalidParams` because if you use `ledger_index_min` or `ledger_index_max`, then it does not make sense to also specify `ledger_index`. In API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4571)
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. (https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579)
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/issues/4288)
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
#### Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
## API Version 1 ## API Version 1
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified. This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code. ## XRP Ledger server version 3.1.0
### Inconsistency: server_info - network_id [Version 3.1.0](https://github.com/XRPLF/rippled/releases/tag/3.1.0) was released on Jan 27, 2026.
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead. ### Additions in 3.1.0
- `vault_info`: New RPC method to retrieve information about a specific vault (part of XLS-66 Lending Protocol). ([#6156](https://github.com/XRPLF/rippled/pull/6156))
## XRP Ledger server version 3.0.0
[Version 3.0.0](https://github.com/XRPLF/rippled/releases/tag/3.0.0) was released on Dec 9, 2025.
### Additions in 3.0.0
- `ledger_entry`: Supports all ledger entry types with dedicated parsers. ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `ledger_entry`: New error codes `entryNotFound` and `unexpectedLedgerType` for more specific error handling. ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `ledger_entry`: Improved error messages with more context (e.g., specifying which field is invalid or missing). ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `ledger_entry`: Assorted bug fixes in RPC processing. ([#5237](https://github.com/XRPLF/rippled/pull/5237))
- `simulate`: Supports additional metadata in the response. ([#5754](https://github.com/XRPLF/rippled/pull/5754))
## XRP Ledger server version 2.6.2
[Version 2.6.2](https://github.com/XRPLF/rippled/releases/tag/2.6.2) was released on Nov 19, 2025.
This release contains bug fixes only and no API changes.
## XRP Ledger server version 2.6.1
[Version 2.6.1](https://github.com/XRPLF/rippled/releases/tag/2.6.1) was released on Sep 30, 2025.
This release contains bug fixes only and no API changes.
## XRP Ledger server version 2.6.0
[Version 2.6.0](https://github.com/XRPLF/rippled/releases/tag/2.6.0) was released on Aug 27, 2025.
### Additions in 2.6.0
- `account_info`: Added `allowTrustLineLocking` flag in response. ([#5525](https://github.com/XRPLF/rippled/pull/5525))
- `ledger`: Removed the type filter from the RPC command. ([#4934](https://github.com/XRPLF/rippled/pull/4934))
- `subscribe` (`validations` stream): `network_id` is now included. ([#5579](https://github.com/XRPLF/rippled/pull/5579))
- `subscribe` (`transactions` stream): `nftoken_id`, `nftoken_ids`, and `offer_id` are now included in transaction metadata. ([#5230](https://github.com/XRPLF/rippled/pull/5230))
## XRP Ledger server version 2.5.1
[Version 2.5.1](https://github.com/XRPLF/rippled/releases/tag/2.5.1) was released on Sep 17, 2025.
This release contains bug fixes only and no API changes.
## XRP Ledger server version 2.5.0 ## XRP Ledger server version 2.5.0
As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu). [Version 2.5.0](https://github.com/XRPLF/rippled/releases/tag/2.5.0) was released on Jun 24, 2025.
### Additions and bugfixes in 2.5.0 ### Additions and bugfixes in 2.5.0
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled. - `tx`: Added `ctid` field to the response and improved error handling. ([#4738](https://github.com/XRPLF/rippled/pull/4738))
- `ledger_entry`: Improved error messages in `permissioned_domain`. ([#5344](https://github.com/XRPLF/rippled/pull/5344))
- `simulate`: Improved multi-sign usage. ([#5479](https://github.com/XRPLF/rippled/pull/5479))
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled. ([#5385](https://github.com/XRPLF/rippled/pull/5385))
- `subscribe` (admin): Removed webhook queue limit to prevent dropping notifications; reduced HTTP timeout from 10 minutes to 30 seconds. ([#5163](https://github.com/XRPLF/rippled/pull/5163))
- `ledger_data` (gRPC): Fixed crashing issue with some invalid markers. ([#5137](https://github.com/XRPLF/rippled/pull/5137))
- `account_lines`: Fixed error with `no_ripple` and `no_ripple_peer` sometimes showing up incorrectly. ([#5345](https://github.com/XRPLF/rippled/pull/5345))
- `account_tx`: Fixed issue with incorrect CTIDs. ([#5408](https://github.com/XRPLF/rippled/pull/5408))
## XRP Ledger server version 2.4.0 ## XRP Ledger server version 2.4.0
@@ -97,11 +92,19 @@ As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release ver
### Additions and bugfixes in 2.4.0 ### Additions and bugfixes in 2.4.0
- `ledger_entry`: `state` is added an alias for `ripple_state`. - `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate). ([#5069](https://github.com/XRPLF/rippled/pull/5069))
- `ledger_entry`: Enables case-insensitive filtering by canonical name in addition to case-sensitive filtering by RPC name. - Signing methods (`sign`, `sign_for`, `submit`): Autofill fees better, properly handle transactions without a base fee, and autofill the `NetworkID` field. ([#5069](https://github.com/XRPLF/rippled/pull/5069))
- `validators`: Added new field `validator_list_threshold` in response. - `ledger_entry`: `state` is added as an alias for `ripple_state`. ([#5199](https://github.com/XRPLF/rippled/pull/5199))
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate) - `ledger`, `ledger_data`, `account_objects`: Support filtering ledger entry types by their canonical names (case-insensitive). ([#5271](https://github.com/XRPLF/rippled/pull/5271))
- Signing methods autofill fees better and properly handle transactions that don't have a base fee, and will also autofill the `NetworkID` field. - `validators`: Added new field `validator_list_threshold` in response. ([#5112](https://github.com/XRPLF/rippled/pull/5112))
- `server_info`: Added git commit hash info on admin connection. ([#5225](https://github.com/XRPLF/rippled/pull/5225))
- `server_definitions`: Changed larger `UInt` serialized types to `Hash`. ([#5231](https://github.com/XRPLF/rippled/pull/5231))
## XRP Ledger server version 2.3.1
[Version 2.3.1](https://github.com/XRPLF/rippled/releases/tag/2.3.1) was released on Jan 29, 2025.
This release contains bug fixes only and no API changes.
## XRP Ledger server version 2.3.0 ## XRP Ledger server version 2.3.0
@@ -109,19 +112,30 @@ As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release ver
### Breaking changes in 2.3.0 ### Breaking changes in 2.3.0
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs). - `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs). Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Additions and bugfixes in 2.3.0 ### Additions and bugfixes in 2.3.0
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions. - `book_changes`: Returns a `validated` field in its response. ([#5096](https://github.com/XRPLF/rippled/pull/5096))
- `book_changes`: Accepts shortcut strings (`current`, `closed`, `validated`) for the `ledger_index` parameter. ([#5096](https://github.com/XRPLF/rippled/pull/5096))
- `server_definitions`: Include `index` in response. ([#5190](https://github.com/XRPLF/rippled/pull/5190))
- `account_nfts`: Fix issue where unassociated marker would return incorrect results. ([#5045](https://github.com/XRPLF/rippled/pull/5045))
- `account_objects`: Fix issue where invalid marker would not return an error. ([#5046](https://github.com/XRPLF/rippled/pull/5046))
- `account_objects`: Disallow filtering by ledger entry types that an account cannot hold. ([#5056](https://github.com/XRPLF/rippled/pull/5056))
- `tx`: Allow lowercase CTID. ([#5049](https://github.com/XRPLF/rippled/pull/5049))
- `feature`: Better error handling for invalid values of `feature`. ([#5063](https://github.com/XRPLF/rippled/pull/5063))
## XRP Ledger server version 2.2.0 ## XRP Ledger server version 2.2.0
[Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive): [Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive):
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781)) - `feature`: Add a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.1
[Version 2.0.1](https://github.com/XRPLF/rippled/releases/tag/2.0.1) was released on Jan 29, 2024. The following additions are non-breaking:
- `path_find`: Fixes unbounded memory growth. ([#4822](https://github.com/XRPLF/rippled/pull/4822))
## XRP Ledger server version 2.0.0 ## XRP Ledger server version 2.0.0
@@ -129,24 +143,18 @@ Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries. - `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking. - In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting). - API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting). The full list of changes is in [API-VERSION-2.md](API-VERSION-2.md).
## XRP Ledger server version 2.2.0
The following is a non-breaking addition to the API.
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 1.12.0 ## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive). [Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive):
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. (https://github.com/XRPLF/rippled/pull/4427) - `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. ([#4427](https://github.com/XRPLF/rippled/pull/4427))
- `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port. - `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port.
- This allows crawlers to build a more detailed topology without needing to port-scan nodes. - This allows crawlers to build a more detailed topology without needing to port-scan nodes.
- (For peers and other non-admin clients, the info about admin ports is excluded.) - (For peers and other non-admin clients, the info about admin ports is excluded.)
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). (https://github.com/XRPLF/rippled/pull/4553) - Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). ([#4553](https://github.com/XRPLF/rippled/pull/4553))
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback` (https://github.com/XRPLF/rippled/pull/4617) - Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback`. ([#4617](https://github.com/XRPLF/rippled/pull/4617))
- Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well. - Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well.
- Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks. - Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks.
- Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields: - Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields:
@@ -181,16 +189,16 @@ The following is a non-breaking addition to the API.
### Breaking changes in 1.11 ### Breaking changes in 1.11
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. (https://github.com/XRPLF/rippled/pull/4291) - Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. ([#4291](https://github.com/XRPLF/rippled/pull/4291))
- The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`. - The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`.
- Removed the acceptance of seeds or public keys in place of account addresses. (https://github.com/XRPLF/rippled/pull/4404) - Removed the acceptance of seeds or public keys in place of account addresses. ([#4404](https://github.com/XRPLF/rippled/pull/4404))
- This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network). - This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network).
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. (https://github.com/XRPLF/rippled/pull/4398) - For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. ([#4398](https://github.com/XRPLF/rippled/pull/4398))
- If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406)) - If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406))
- There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence. - There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence.
- This is potentially a breaking change if clients have logic for determining whether an account can be deleted. - This is potentially a breaking change if clients have logic for determining whether an account can be deleted.
- NetworkID - NetworkID
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. (https://github.com/XRPLF/rippled/pull/4370) - For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. ([#4370](https://github.com/XRPLF/rippled/pull/4370))
- This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher. - This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher.
- The field must be omitted for Mainnet, so there is no change for Mainnet users. - The field must be omitted for Mainnet, so there is no change for Mainnet users.
- There are three new local error codes: - There are three new local error codes:
@@ -200,10 +208,10 @@ The following is a non-breaking addition to the API.
### Additions and bug fixes in 1.11 ### Additions and bug fixes in 1.11
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. (https://github.com/XRPLF/rippled/pull/4447) - Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. ([#4447](https://github.com/XRPLF/rippled/pull/4447))
- Added an `account_flags` object to the `account_info` method response. (https://github.com/XRPLF/rippled/pull/4459) - Added an `account_flags` object to the `account_info` method response. ([#4459](https://github.com/XRPLF/rippled/pull/4459))
- Added `NFTokenPages` to the `account_objects` RPC. (https://github.com/XRPLF/rippled/pull/4352) - Added `NFTokenPages` to the `account_objects` RPC. ([#4352](https://github.com/XRPLF/rippled/pull/4352))
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. (https://github.com/XRPLF/rippled/pull/4361) - Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. ([#4361](https://github.com/XRPLF/rippled/pull/4361))
## XRP Ledger server version 1.10.0 ## XRP Ledger server version 1.10.0

API-VERSION-2.md (new file)

@@ -0,0 +1,66 @@
# API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
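For example (a minimal sketch; the account address is a placeholder), a WebSocket request opts into API version 2 by including the field alongside its normal parameters:

```json
{
  "id": 1,
  "command": "account_info",
  "account": "rAccountPlaceholder11111111111111",
  "api_version": 2
}
```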
## Removed methods
In API version 2, the following deprecated methods are no longer available: ([#4759](https://github.com/XRPLF/rippled/pull/4759))
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
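As a rough sketch of the suggested replacement for `tx_history` (assuming the same local JSON-RPC endpoint as above), a client could list the transactions of a validated ledger instead:
```bash
# Hypothetical example: list a validated ledger's transactions with the
# ledger method, instead of the removed tx_history method.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "ledger",
    "params": [{
      "api_version": 2,
      "ledger_index": "validated",
      "transactions": true
    }]
  }'
```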
## Modifications to JSON transaction element in API version 2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. ([#4775](https://github.com/XRPLF/rippled/pull/4775))
This helps to unify the JSON serialization format of transactions. ([clio#722](https://github.com/XRPLF/clio/issues/722), [#4727](https://github.com/XRPLF/rippled/issues/4727))
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. as sibling elements), where possible:
- `hash` - Transaction ID. In API version 1 this data was stored inside the transaction output, but in API version 2 it is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
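To see the new element names in practice, one could inspect an `account_tx` response under API version 2; the sketch below assumes a local JSON-RPC endpoint and uses a placeholder account address:
```bash
# Hypothetical example: account_tx under API version 2. Each entry in
# result.transactions is expected to carry tx_json and meta, plus the
# sibling fields hash, ledger_index, ledger_hash, close_time_iso, and
# validated described above.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "account_tx",
    "params": [{
      "api_version": 2,
      "account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
      "limit": 2
    }]
  }'
```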
## Modifications to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. ([#4733](https://github.com/XRPLF/rippled/pull/4733))
- In the `Payment` transaction type, the JSON RPC field `Amount` is renamed to `DeliverMax`. To enable a smooth client transition, `Amount` is still handled, as described below: ([#4733](https://github.com/XRPLF/rippled/pull/4733))
  - On JSON RPC input (e.g. the `submit_multisigned` method), `Amount` is recognized as an alias for `DeliverMax` for both API version 1 and version 2 clients.
  - On JSON RPC input, submitting both the `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with an `rpcINVALID_PARAMS` error.
  - On JSON RPC output (e.g. from the `subscribe` and `account_tx` methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
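A minimal sketch of a version 2 submission using `DeliverMax`; the seed and addresses below are placeholders that a real client would replace, so a live server will reject this exact request:
```bash
# Hypothetical example: sign-and-submit a Payment with DeliverMax instead
# of Amount under API version 2. Secret and addresses are placeholders.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "submit",
    "params": [{
      "api_version": 2,
      "secret": "s____PLACEHOLDER_SEED____",
      "tx_json": {
        "TransactionType": "Payment",
        "Account": "rSENDER_PLACEHOLDER",
        "Destination": "rDESTINATION_PLACEHOLDER",
        "DeliverMax": "1000000"
      }
    }]
  }'
```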
## Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) ([#3770](https://github.com/XRPLF/rippled/pull/3770))
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. ([#4585](https://github.com/XRPLF/rippled/pull/4585))
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and might return a normal response, i.e. as if `signer_lists` were `true`.)
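A sketch of a conforming `account_info` request (local endpoint and placeholder account assumed); note that `signer_lists` is a real boolean, and in API version 2 the returned `signer_lists` array appears at the root of `result` rather than under `account_data`:
```bash
# Hypothetical example: account_info with signer_lists passed as a
# boolean. A string value such as "true" is rejected with invalidParams
# in API version 2.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "account_info",
    "params": [{
      "api_version": 2,
      "account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
      "signer_lists": true
    }]
  }'
```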
## Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Specifying `ledger_index` together with `ledger_index_min` or `ledger_index_max` returns `invalidParams` (see the sketch after this list), because it does not make sense to combine the range parameters with a single `ledger_index`. In API version 1, no error was returned. ([#4571](https://github.com/XRPLF/rippled/pull/4571))
- The same applies to `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. ([#4545](https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579))
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. ([#4288](https://github.com/XRPLF/rippled/issues/4288))
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. ([#4620](https://github.com/XRPLF/rippled/pull/4620))
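The sketch below contrasts a request that the stricter API version 2 validation rejects with one it accepts; the account address is a placeholder and a local JSON-RPC endpoint is assumed:
```bash
# Hypothetical example 1: rejected with invalidParams in API version 2,
# because ledger_index is combined with ledger_index_min/ledger_index_max.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "account_tx",
    "params": [{
      "api_version": 2,
      "account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
      "ledger_index_min": -1,
      "ledger_index_max": -1,
      "ledger_index": "validated"
    }]
  }'

# Hypothetical example 2: accepted; only the range parameters are used and
# binary is a real boolean rather than a string.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "account_tx",
    "params": [{
      "api_version": 2,
      "account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
      "ledger_index_min": -1,
      "ledger_index_max": -1,
      "binary": false
    }]
  }'
```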
## Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. ([#4620](https://github.com/XRPLF/rippled/pull/4620))

API-VERSION-3.md

@@ -0,0 +1,27 @@
# API Version 3
API version 3 is currently a **beta API**; using it requires enabling `[beta_rpc_api]` in the `rippled` configuration. To use this API, clients specify `"api_version" : 3` in each request.
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
## Breaking Changes
### Modifications to `amm_info`
The order of error checks has been changed to provide more specific error messages. ([#4924](https://github.com/XRPLF/rippled/pull/4924))
- **Before (API v2)**: When sending an invalid account or asset to `amm_info` while other parameters are not set as expected, the method returns a generic `rpcINVALID_PARAMS` error.
- **After (API v3)**: The same scenario returns a more specific error: `rpcISSUE_MALFORMED` for malformed assets or `rpcACT_MALFORMED` for malformed accounts.
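As a hypothetical illustration (the exact parameters are placeholders), sending a malformed asset to `amm_info` surfaces the more specific error under API version 3:
```bash
# Hypothetical example: amm_info with a malformed asset. Under
# "api_version": 3 the expectation is rpcISSUE_MALFORMED instead of the
# generic rpcINVALID_PARAMS returned by API version 2.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "amm_info",
    "params": [{
      "api_version": 3,
      "asset": { "currency": "not-a-valid-currency-code" },
      "asset2": { "currency": "XRP" }
    }]
  }'
```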
### Modifications to `ledger_entry`
Added support for string shortcuts to look up fixed-location ledger entries using the `"index"` parameter. ([#5644](https://github.com/XRPLF/rippled/pull/5644))
In API version 3, the following string values can be used with the `"index"` parameter:
- `"index": "amendments"` - Returns the `Amendments` ledger entry
- `"index": "fee"` - Returns the `FeeSettings` ledger entry
- `"index": "nunl"` - Returns the `NegativeUNL` ledger entry
- `"index": "hashes"` - Returns the "short" `LedgerHashes` ledger entry (recent ledger hashes)
These shortcuts are only available in API version 3 and later. In API versions 1 and 2, these string values would result in an error.
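A sketch of using one of these shortcuts, assuming a local server with `[beta_rpc_api]` enabled and a JSON-RPC port on 5005:
```bash
# Hypothetical example: fetch the Amendments ledger entry by its string
# shortcut under API version 3 (beta).
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "ledger_entry",
    "params": [{
      "api_version": 3,
      "index": "amendments",
      "ledger_index": "validated"
    }]
  }'
```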


@@ -1,5 +1,5 @@
| :warning: **WARNING** :warning: | :warning: **WARNING** :warning: |
|---| | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). | | These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake. > These instructions also assume a basic familiarity with Conan and CMake.
@@ -148,7 +148,8 @@ function extract_version {
} }
# Define which recipes to export. # Define which recipes to export.
recipes=(ed25519 grpc secp256k1 snappy soci) recipes=('ed25519' 'grpc' 'nudb' 'openssl' 'secp256k1' 'snappy' 'soci')
folders=('all' 'all' 'all' '3.x.x' 'all' 'all' 'all')
# Selectively check out the recipes from our CCI fork. # Selectively check out the recipes from our CCI fork.
cd external cd external
@@ -157,20 +158,24 @@ cd conan-center-index
git init git init
git remote add origin git@github.com:XRPLF/conan-center-index.git git remote add origin git@github.com:XRPLF/conan-center-index.git
git sparse-checkout init git sparse-checkout init
for recipe in ${recipes[@]}; do for ((index = 1; index <= ${#recipes[@]}; index++)); do
echo "Checking out ${recipe}..." recipe=${recipes[index]}
git sparse-checkout add recipes/${recipe}/all folder=${folders[index]}
echo "Checking out recipe '${recipe}' from folder '${folder}'..."
git sparse-checkout add recipes/${recipe}/${folder}
done done
git fetch origin master git fetch origin master
git checkout master git checkout master
cd ../.. cd ../..
# Export the recipes into the local cache. # Export the recipes into the local cache.
for recipe in ${recipes[@]}; do for ((index = 1; index <= ${#recipes[@]}; index++)); do
recipe=${recipes[index]}
folder=${folders[index]}
version=$(extract_version ${recipe}) version=$(extract_version ${recipe})
echo "Exporting ${recipe}/${version}..." echo "Exporting '${recipe}/${version}' from '${recipe}/${folder}'..."
conan export --version $(extract_version ${recipe}) \ conan export --version $(extract_version ${recipe}) \
external/conan-center-index/recipes/${recipe}/all external/conan-center-index/recipes/${recipe}/${folder}
done done
``` ```
@@ -518,13 +523,27 @@ stored inside the build directory, as either of:
- file named `coverage.`_extension_, with a suitable extension for the report format, or - file named `coverage.`_extension_, with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats. - directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
## Sanitizers
To build the dependencies and xrpld with sanitizer instrumentation, set the `SANITIZERS` environment variable once, before running Conan and CMake, and use the `sanitizers` profile in Conan:
```bash
export SANITIZERS=address,undefinedbehavior
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dxrpld=ON -Dtests=ON ..
```
See [Sanitizers docs](./docs/build/sanitizers.md) for more details.
## Options ## Options
| Option | Default Value | Description | | Option | Default Value | Description |
| ---------- | ------------- | ------------------------------------------------------------------ | | ---------- | ------------- | -------------------------------------------------------------- |
| `assert` | OFF | Enable assertions. | | `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. | | `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. | | `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. | | `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. | | `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. |


@@ -8,7 +8,9 @@ if(POLICY CMP0077)
endif () endif ()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows. # Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
if (DEFINED CMAKE_MODULE_PATH)
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH) file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
endif ()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake") list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(xrpl) project(xrpl)
@@ -16,14 +18,16 @@ set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20) set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU") include(CompilationEnv)
if (is_gcc)
# GCC-specific fixes # GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage) add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3 # -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang") elseif (is_clang)
# Clang-specific fixes # Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif(MSVC) elseif (is_msvc)
# MSVC-specific fixes # MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas add_compile_options(/wd4068) # Ignore unknown pragmas
endif () endif ()
@@ -52,7 +56,8 @@ if(Git_FOUND)
endif () # git endif () # git
if (thread_safety_analysis) if (thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DXRPL_ENABLE_THREAD_SAFETY_ANNOTATIONS) add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS
-DXRPL_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++") add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++") add_link_options("-stdlib=libc++")
endif () endif ()
@@ -68,8 +73,7 @@ endif ()
include(XrplSanity) include(XrplSanity)
include(XrplVersion) include(XrplVersion)
include(XrplSettings) include(XrplSettings)
# this check has to remain in the top-level cmake # this check has to remain in the top-level cmake because of the early return statement
# because of the early return statement
if (packages_only) if (packages_only)
if (NOT TARGET rpm) if (NOT TARGET rpm)
message(FATAL_ERROR "packages_only requested, but targets were not created - is docker installed?") message(FATAL_ERROR "packages_only requested, but targets were not created - is docker installed?")
@@ -77,6 +81,7 @@ if (packages_only)
return() return()
endif () endif ()
include(XrplCompiler) include(XrplCompiler)
include(XrplSanitizers)
include(XrplInterface) include(XrplInterface)
option(only_docs "Include only the docs target?" FALSE) option(only_docs "Include only the docs target?" FALSE)
@@ -85,48 +90,37 @@ if(only_docs)
return() return()
endif () endif ()
###
include(deps/Boost) include(deps/Boost)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
add_subdirectory(external/antithesis-sdk) add_subdirectory(external/antithesis-sdk)
find_package(gRPC REQUIRED)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS XRPL_ROCKSDB_AVAILABLE=1
)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(date REQUIRED) find_package(date REQUIRED)
find_package(ed25519 REQUIRED) find_package(ed25519 REQUIRED)
find_package(gRPC REQUIRED)
find_package(LibArchive REQUIRED)
find_package(lz4 REQUIRED)
find_package(nudb REQUIRED) find_package(nudb REQUIRED)
find_package(OpenSSL REQUIRED)
find_package(secp256k1 REQUIRED) find_package(secp256k1 REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(xxHash REQUIRED) find_package(xxHash REQUIRED)
target_link_libraries(xrpl_libs INTERFACE target_link_libraries(
ed25519::ed25519 xrpl_libs
INTERFACE ed25519::ed25519
lz4::lz4 lz4::lz4
OpenSSL::Crypto OpenSSL::Crypto
OpenSSL::SSL OpenSSL::SSL
secp256k1::secp256k1 secp256k1::secp256k1
soci::soci soci::soci
SQLite::SQLite3 SQLite::SQLite3)
)
option(rocksdb "Enable RocksDB" ON)
if (rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES INTERFACE_COMPILE_DEFINITIONS XRPL_ROCKSDB_AVAILABLE=1)
target_link_libraries(xrpl_libs INTERFACE RocksDB::rocksdb)
endif ()
# Work around changes to Conan recipe for now. # Work around changes to Conan recipe for now.
if (TARGET nudb::core) if (TARGET nudb::core)


@@ -872,7 +872,8 @@ git push --delete upstream-push master-next
11. [Create a new release on 11. [Create a new release on
Github](https://github.com/XRPLF/rippled/releases). Be sure that Github](https://github.com/XRPLF/rippled/releases). Be sure that
"Set as the latest release" is checked. "Set as the latest release" is checked.
12. Finally [reverse merge the release into `develop`](#follow-up-reverse-merge). 12. Open a PR to update the [API-CHANGELOG](API-CHANGELOG.md) and `API-VERSION-[n].md` with the changes for this release (if any are missing).
13. Finally, [reverse merge the release into `develop`](#follow-up-reverse-merge).
#### Special cases: point releases, hotfixes, etc. #### Special cases: point releases, hotfixes, etc.


@@ -78,72 +78,61 @@ To report a qualifying bug, please send a detailed report to:
| Email Address | bugs@ripple.com | | Email Address | bugs@ripple.com |
| :-----------: | :-------------------------------------------------- | | :-----------: | :-------------------------------------------------- |
| Short Key ID | `0xC57929BE` | | Short Key ID | `0xA9F514E0` |
| Long Key ID | `0xCD49A0AFC57929BE` | | Long Key ID | `0xD900855AA9F514E0` |
| Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` | | Fingerprint | `B72C 0654 2F2A E250 2763 A268 D900 855A A9F5 14E0` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is: The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is:
``` ```
-----BEGIN PGP PUBLIC KEY BLOCK----- -----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt mQINBGkSZAQBEACprU199OhgdsOsygNjiQV4msuN3vDOUooehL+NwfsGfW79Tbqq
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN Q2u7uQ3NZjW+M2T4nsDwuhkr7pe7xSReR5W8ssaczvtUyxkvbMClilcgZ2OSCAuC
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N N9tzJsqOqkwBvXoNXkn//T2jnPz0ZU2wSF+NrEibq5FeuyGdoX3yXXBxq9pW9HzK
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV HkQll63QSl6BzVSGRQq+B6lGgaZGLwf3mzmIND9Z5VGLNK2jKynyz9z091whNG/M
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1 kV+E7/r/bujHk7WIVId07G5/COTXmSr7kFnNEkd2Umw42dkgfiNKvlmJ9M7c1wLK
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4 KbL9Eb4ADuW6rRc5k4s1e6GT8R4/VPliWbCl9SE32hXH8uTkqVIFZP2eyM5WRRHs
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls aKzitkQG9UK9gcb0kdgUkxOvvgPHAe5IuZlcHFzU4y0dBbU1VEFWVpiLU0q+IuNw
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw 5BRemeHc59YNsngkmAZ+/9zouoShRusZmC8Wzotv75C2qVBcjijPvmjWAUz0Zunm
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1 Lsr+O71vqHE73pERjD07wuD/ISjiYRYYE/bVrXtXLZijC7qAH4RE3nID+2ojcZyO
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng /2jMQvt7un56RsGH4UBHi3aBHi9bUoDGCXKiQY981cEuNaOxpou7Mh3x/ONzzSvk
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB sTV6nl1LOZHykN1JyKwaNbTSAiuyoN+7lOBqbV04DNYAHL88PrT21P83aQARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv tB1SaXBwbGUgTGFicyA8YnVnc0ByaXBwbGUuY29tPokCTgQTAQgAOBYhBLcsBlQv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ KuJQJ2OiaNkAhVqp9RTgBQJpEmQEAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheA
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY AAoJENkAhVqp9RTgBzgP/i7y+aDWl1maig1XMdyb+o0UGusumFSW4Hmj278wlKVv
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef usgLPihYgHE0PKrv6WRyKOMC1tQEcYYN93M+OeQ1vFhS2YyURq6RCMmh4zq/awXG
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ uZbG36OURB5NH8lGBOHiN/7O+nY0CgenBT2JWm+GW3nEOAVOVm4+r5GlpPlv+Dp1
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc NPBThcKXFMnH73++NpSQoDzTfRYHPxhDAX3jkLi/moXfSanOLlR6l94XNNN0jBHW
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs Quao0rzf4WSXq9g6AS224xhAA5JyIcFl8TX7hzj5HaFn3VWo3COoDu4U7H+BM0fl
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO 85yqiMQypp7EhN2gxpMMWaHY5TFM85U/bFXFYfEgihZ4/gt4uoIzsNI9jlX7mYvG
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP KFdDij+oTlRsuOxdIy60B3dKcwOH9nZZCz0SPsN/zlRWgKzK4gDKdGhFkU9OlvPu
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM 94ZqscanoiWKDoZkF96+sjgfjkuHsDK7Lwc1Xi+T4drHG/3aVpkYabXox+lrKB/S
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ yxZjeqOIQzWPhnLgCaLyvsKo5hxKzL0w3eURu8F3IS7RgOOlljv4M+Me9sEVcdNV
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF aN3/tQwbaomSX1X5D5YXqhBwC3rU3wXwamsscRTGEpkV+JCX6KUqGP7nWmxCpAly
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE FL05XuOd5SVHJjXLeuje0JqLUpN514uL+bThWwDbDTdAdwW3oK/2WbXz7IfJRLBj
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif uQINBGkSZAQBEADdI3SL2F72qkrgFqXWE6HSRBu9bsAvTE5QrRPWk7ux6at537r4
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD S4sIw2dOwLvbyIrDgKNq3LQ5wCK88NO/NeCOFm4AiCJSl3pJHXYnTDoUxTrrxx+o
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix vSRI4I3fHEql/MqzgiAb0YUezjgFdh3vYheMPp/309PFbOLhiFqEcx80Mx5h06UH
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5 gDzu1qNj3Ec+31NLic5zwkrAkvFvD54d6bqYR3SEgMau6aYEewpGHbWBi2pLqSi2
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu lQcAeOFixqGpTwDmAnYR8YtjBYepy0MojEAdTHcQQlOYSDk4q4elG+io2N8vECfU
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T rD6ORecN48GXdZINYWTAdslrUeanmBdgQrYkSpce8TSghgT9P01SNaXxmyaehVUO
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX lqI4pcg5G2oojAE8ncNS3TwDtt7daTaTC3bAdr4PXDVAzNAiewjMNZPB7xidkDGQ
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R Y4W1LxTMXyJVWxehYOH7tsbBRKninlfRnLgYzmtIbNRAAvNcsxU6ihv3AV0WFknN
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL YbSzotEv1Xq/5wk309x8zCDe+sP0cQicvbXafXmUzPAZzeqFg+VLFn7F9MP1WGlW
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4 B1u7VIvBF1Mp9Nd3EAGBAoLRdRu+0dVWIjPTQuPIuD9cCatJA0wVaKUrjYbBMl88
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh a12LixNVGeSFS9N7ADHx0/o7GNT6l88YbaLP6zggUHpUD/bR+cDN7vllIQARAQAB
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6 iQI2BBgBCAAgFiEEtywGVC8q4lAnY6Jo2QCFWqn1FOAFAmkSZAQCGwwACgkQ2QCF
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu Wqn1FOAfAA/8CYq4p0p4bobY20CKEMsZrkBTFJyPDqzFwMeTjgpzqbD7Y3Qq5QCK
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a OBbvY02GWdiIsNOzKdBxiuam2xYP9WHZj4y7/uWEvT0qlPVmDFu+HXjoJ43oxwFd
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P CUp2gMuQ4cSL3X94VRJ3BkVL+tgBm8CNY0vnTLLOO3kum/R69VsGJS1JSGUWjNM+
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS 4qwS3mz+73xJu1HmERyN2RZF/DGIZI2PyONQQ6aH85G1Dd2ohu2/DBAkQAMBrPbj
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW FrbDaBLyFhODxU3kTWqnfLlaElSm2EGdIU2yx7n4BggEa//NZRMm5kyeo4vzhtlQ
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB YIVUMLAOLZvnEqDnsLKp+22FzNR/O+htBQC4lPywl53oYSALdhz1IQlcAC1ru5KR
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB XPzhIXV6IIzkcx9xNkEclZxmsuy5ERXyKEmLbIHAlzFmnrldlt2ZgXDtzaorLmxj
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu klKibxd5tF50qOpOivz+oPtFo7n+HmFa1nlVAMxlDCUdM0pEVeYDKI5zfVwalyhZ
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO NnjpakdZSXMwgc7NP/hH9buF35hKDp7EckT2y3JNYwHsDdy1icXN2q40XZw5tSIn
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl zkPWdu3OUY8PISohN6Pw4h0RH4ZmoX97E8sEfmdKaT58U4Hf2aAv5r9IWCSrAVqY
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T u5jvac29CzQR9Kal0A+8phHAXHNFD83SwzIC0syaT9ficAguwGH8X6Q=
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0 =nGuD
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
-----END PGP PUBLIC KEY BLOCK----- -----END PGP PUBLIC KEY BLOCK-----
``` ```


@@ -21,8 +21,7 @@ function (git_branch branch_val)
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_STRIP_TRAILING_WHITESPACE ERROR_QUIET)
ERROR_QUIET)
if (_git_exit_code EQUAL 0) if (_git_exit_code EQUAL 0)
set(_branch ${_temp_branch}) set(_branch ${_temp_branch})
endif () endif ()


@@ -15,17 +15,16 @@ endif ()
# https://github.com/ccache/ccache/wiki/MS-Visual-Studio#usage-with-cmake. # https://github.com/ccache/ccache/wiki/MS-Visual-Studio#usage-with-cmake.
if ("${CCACHE_PATH}" MATCHES "chocolatey") if ("${CCACHE_PATH}" MATCHES "chocolatey")
message(DEBUG "Ccache path: ${CCACHE_PATH}") message(DEBUG "Ccache path: ${CCACHE_PATH}")
# Chocolatey uses a shim executable that we cannot use directly, in which # Chocolatey uses a shim executable that we cannot use directly, in which case we have to find the executable it
# case we have to find the executable it points to. If we cannot find the # points to. If we cannot find the target executable then we cannot use ccache.
# target executable then we cannot use ccache.
find_program(BASH_PATH "bash") find_program(BASH_PATH "bash")
if (NOT BASH_PATH) if (NOT BASH_PATH)
message(WARNING "Could not find bash.") message(WARNING "Could not find bash.")
return() return()
endif () endif ()
execute_process( execute_process(COMMAND bash -c
COMMAND bash -c "export LC_ALL='en_US.UTF-8'; ${CCACHE_PATH} --shimgen-noop | grep -oP 'path to executable: \\K.+' | head -c -1" "export LC_ALL='en_US.UTF-8'; ${CCACHE_PATH} --shimgen-noop | grep -oP 'path to executable: \\K.+' | head -c -1"
OUTPUT_VARIABLE CCACHE_PATH) OUTPUT_VARIABLE CCACHE_PATH)
if (NOT CCACHE_PATH) if (NOT CCACHE_PATH)
@@ -37,15 +36,14 @@ endif ()
message(STATUS "Found ccache: ${CCACHE_PATH}") message(STATUS "Found ccache: ${CCACHE_PATH}")
# Tell cmake to use ccache for compiling with Visual Studio. # Tell cmake to use ccache for compiling with Visual Studio.
file(COPY_FILE file(COPY_FILE ${CCACHE_PATH} ${CMAKE_BINARY_DIR}/cl.exe ONLY_IF_DIFFERENT)
${CCACHE_PATH} ${CMAKE_BINARY_DIR}/cl.exe set(CMAKE_VS_GLOBALS "CLToolExe=cl.exe" "CLToolPath=${CMAKE_BINARY_DIR}" "TrackFileAccess=false"
ONLY_IF_DIFFERENT)
set(CMAKE_VS_GLOBALS
"CLToolExe=cl.exe"
"CLToolPath=${CMAKE_BINARY_DIR}"
"TrackFileAccess=false"
"UseMultiToolTask=true") "UseMultiToolTask=true")
# By default Visual Studio generators will use /Zi, which is not compatible with # By default Visual Studio generators will use /Zi to capture debug information, which is not compatible with ccache, so
# ccache, so tell it to use /Z7 instead. # tell it to use /Z7 instead.
set(CMAKE_MSVC_DEBUG_INFORMATION_FORMAT "$<$<CONFIG:Debug,RelWithDebInfo>:Embedded>") if (MSVC)
foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
string(REPLACE "/Zi" "/Z7" ${var_} "${${var_}}")
endforeach ()
endif ()


@@ -180,10 +180,7 @@ elseif(DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}") set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}")
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang") elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if (APPLE) if (APPLE)
execute_process( COMMAND xcrun -f llvm-cov execute_process(COMMAND xcrun -f llvm-cov OUTPUT_VARIABLE LLVMCOV_PATH OUTPUT_STRIP_TRAILING_WHITESPACE)
OUTPUT_VARIABLE LLVMCOV_PATH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
else () else ()
find_program(LLVMCOV_PATH llvm-cov) find_program(LLVMCOV_PATH llvm-cov)
endif () endif ()
@@ -202,14 +199,13 @@ foreach(LANG ${LANGUAGES})
if ("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3) if ("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...") message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif () endif ()
elseif(NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU" elseif (NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU" AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES
AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(LLVM)?[Ff]lang") "(LLVM)?[Ff]lang")
message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...") message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...")
endif () endif ()
endforeach () endforeach ()
set(COVERAGE_COMPILER_FLAGS "-g --coverage" set(COVERAGE_COMPILER_FLAGS "-g --coverage" CACHE INTERNAL "")
CACHE INTERNAL "")
set(COVERAGE_CXX_COMPILER_FLAGS "") set(COVERAGE_CXX_COMPILER_FLAGS "")
set(COVERAGE_C_COMPILER_FLAGS "") set(COVERAGE_C_COMPILER_FLAGS "")
@@ -327,14 +323,12 @@ function(setup_target_for_coverage_gcovr)
if ("--output" IN_LIST GCOVR_ADDITIONAL_ARGS) if ("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...") message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...")
else () else ()
if((Coverage_FORMAT STREQUAL "html-details") if ((Coverage_FORMAT STREQUAL "html-details") OR (Coverage_FORMAT STREQUAL "html-nested"))
OR (Coverage_FORMAT STREQUAL "html-nested"))
set(GCOVR_OUTPUT_FILE ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html) set(GCOVR_OUTPUT_FILE ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html)
set(GCOVR_CREATE_FOLDER ${PROJECT_BINARY_DIR}/${Coverage_NAME}) set(GCOVR_CREATE_FOLDER ${PROJECT_BINARY_DIR}/${Coverage_NAME})
elseif (Coverage_FORMAT STREQUAL "html-single") elseif (Coverage_FORMAT STREQUAL "html-single")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.html) set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.html)
elseif((Coverage_FORMAT STREQUAL "json-summary") elseif ((Coverage_FORMAT STREQUAL "json-summary") OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls")) OR (Coverage_FORMAT STREQUAL "coveralls"))
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.json) set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.json)
elseif (Coverage_FORMAT STREQUAL "txt") elseif (Coverage_FORMAT STREQUAL "txt")
@@ -348,8 +342,7 @@ function(setup_target_for_coverage_gcovr)
endif () endif ()
endif () endif ()
if((Coverage_FORMAT STREQUAL "cobertura") if ((Coverage_FORMAT STREQUAL "cobertura") OR (Coverage_FORMAT STREQUAL "xml"))
OR (Coverage_FORMAT STREQUAL "xml"))
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}") list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}")
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty) list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty)
set(Coverage_FORMAT cobertura) # overwrite xml set(Coverage_FORMAT cobertura) # overwrite xml
@@ -408,27 +401,25 @@ function(setup_target_for_coverage_gcovr)
# If EXECUTABLE is not set, the user is expected to run the tests manually # If EXECUTABLE is not set, the user is expected to run the tests manually
# before running the coverage target NAME # before running the coverage target NAME
if (DEFINED Coverage_EXECUTABLE) if (DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD set(GCOVR_EXEC_TESTS_CMD ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS})
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
)
endif () endif ()
# Create folder # Create folder
if (DEFINED GCOVR_CREATE_FOLDER) if (DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD set(GCOVR_FOLDER_CMD ${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
endif () endif ()
# Running gcovr # Running gcovr
set(GCOVR_CMD set(GCOVR_CMD
${GCOVR_PATH} ${GCOVR_PATH}
--gcov-executable ${GCOV_TOOL} --gcov-executable
${GCOV_TOOL}
--gcov-ignore-parse-errors=negative_hits.warn_once_per_file --gcov-ignore-parse-errors=negative_hits.warn_once_per_file
-r ${BASEDIR} -r
${BASEDIR}
${GCOVR_ADDITIONAL_ARGS} ${GCOVR_ADDITIONAL_ARGS}
${GCOVR_EXCLUDE_ARGS} ${GCOVR_EXCLUDE_ARGS}
--object-directory=${PROJECT_BINARY_DIR} --object-directory=${PROJECT_BINARY_DIR})
)
if (CODE_COVERAGE_VERBOSE) if (CODE_COVERAGE_VERBOSE)
message(STATUS "Executed command report") message(STATUS "Executed command report")
@@ -454,19 +445,15 @@ function(setup_target_for_coverage_gcovr)
COMMAND ${GCOVR_EXEC_TESTS_CMD} COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD} COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD} COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE} BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR} WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES} DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report." COMMENT "Running gcovr to produce code coverage report.")
)
# Show info where to find the report # Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD add_custom_command(TARGET ${Coverage_NAME} POST_BUILD COMMAND echo
COMMAND echo COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}")
COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}"
)
endfunction () # setup_target_for_coverage_gcovr endfunction () # setup_target_for_coverage_gcovr
function (add_code_coverage_to_target name scope) function (add_code_coverage_to_target name scope)
@@ -476,12 +463,14 @@ function(add_code_coverage_to_target name scope)
separate_arguments(COVERAGE_C_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_C_LINKER_FLAGS}") separate_arguments(COVERAGE_C_LINKER_FLAGS NATIVE_COMMAND "${COVERAGE_C_LINKER_FLAGS}")
# Add compiler options to the target # Add compiler options to the target
target_compile_options(${name} ${scope} target_compile_options(${name} ${scope} $<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:CXX>:${COVERAGE_CXX_COMPILER_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>) $<$<COMPILE_LANGUAGE:C>:${COVERAGE_C_COMPILER_FLAGS}>)
target_link_libraries (${name} ${scope} target_link_libraries(
$<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS} gcov> ${name}
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS} gcov> ${scope}
) $<$<LINK_LANGUAGE:CXX>:${COVERAGE_CXX_LINKER_FLAGS}
gcov>
$<$<LINK_LANGUAGE:C>:${COVERAGE_C_LINKER_FLAGS}
gcov>)
endfunction () # add_code_coverage_to_target endfunction () # add_code_coverage_to_target


@@ -0,0 +1,58 @@
# Shared detection of compiler, operating system, and architecture.
#
# This module centralizes environment detection so that other CMake modules can use the same variables instead of
# repeating checks on CMAKE_* and built-in platform variables.
# Only run once per configure step.
include_guard(GLOBAL)
# --------------------------------------------------------------------
# Compiler detection (C++)
# --------------------------------------------------------------------
set(is_clang FALSE)
set(is_gcc FALSE)
set(is_msvc FALSE)
set(is_xcode FALSE)
if (CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
set(is_clang TRUE)
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(is_gcc TRUE)
elseif (MSVC)
set(is_msvc TRUE)
else ()
message(FATAL_ERROR "Unsupported C++ compiler: ${CMAKE_CXX_COMPILER_ID}")
endif ()
# Xcode generator detection
if (CMAKE_GENERATOR STREQUAL "Xcode")
set(is_xcode TRUE)
endif ()
# --------------------------------------------------------------------
# Operating system detection
# --------------------------------------------------------------------
set(is_linux FALSE)
set(is_windows FALSE)
set(is_macos FALSE)
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(is_linux TRUE)
elseif (CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(is_windows TRUE)
elseif (CMAKE_SYSTEM_NAME STREQUAL "Darwin")
set(is_macos TRUE)
endif ()
# --------------------------------------------------------------------
# Architecture
# --------------------------------------------------------------------
set(is_amd64 FALSE)
set(is_arm64 FALSE)
if (CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
set(is_amd64 TRUE)
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64")
set(is_arm64 TRUE)
else ()
message(FATAL_ERROR "Unknown architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif ()


@@ -3,23 +3,14 @@ include(isolate_headers)
function (xrpl_add_test name) function (xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name}) set(target ${PROJECT_NAME}.test.${name})
file(GLOB_RECURSE sources CONFIGURE_DEPENDS file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp" "${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp")
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
add_executable(${target} ${ARGN} ${sources}) add_executable(${target} ${ARGN} ${sources})
isolate_headers( isolate_headers(${target} "${CMAKE_SOURCE_DIR}" "${CMAKE_SOURCE_DIR}/tests/${name}" PRIVATE)
${target}
"${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/tests/${name}"
PRIVATE
)
# Make sure the test isn't optimized away in unity builds # Make sure the test isn't optimized away in unity builds
set_target_properties(${target} PROPERTIES set_target_properties(${target} PROPERTIES UNITY_BUILD_MODE GROUP UNITY_BUILD_BATCH_SIZE 0) # Adjust as needed
UNITY_BUILD_MODE GROUP
UNITY_BUILD_BATCH_SIZE 0) # Adjust as needed
add_test(NAME ${target} COMMAND ${target}) add_test(NAME ${target} COMMAND ${target})
endfunction () endfunction ()


@@ -2,20 +2,27 @@
setup project-wide compiler settings setup project-wide compiler settings
#]===================================================================] #]===================================================================]
include(CompilationEnv)
#[=========================================================[ #[=========================================================[
TODO some/most of these common settings belong in a TODO some/most of these common settings belong in a
toolchain file, especially the ABI-impacting ones toolchain file, especially the ABI-impacting ones
#]=========================================================] #]=========================================================]
add_library(common INTERFACE) add_library(common INTERFACE)
add_library(Xrpl::common ALIAS common) add_library(Xrpl::common ALIAS common)
include(XrplSanitizers)
# add a single global dependency on this interface lib # add a single global dependency on this interface lib
link_libraries(Xrpl::common) link_libraries(Xrpl::common)
set_target_properties (common # Respect CMAKE_POSITION_INDEPENDENT_CODE setting (may be set by Conan toolchain)
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ON) if (NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif ()
set_target_properties(common PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE})
set(CMAKE_CXX_EXTENSIONS OFF) set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions (common target_compile_definitions(
INTERFACE common
$<$<CONFIG:Debug>:DEBUG _DEBUG> INTERFACE $<$<CONFIG:Debug>:DEBUG
_DEBUG>
#[===[ #[===[
NOTE: CMAKE release builds already have NDEBUG defined, so no need to add it NOTE: CMAKE release builds already have NDEBUG defined, so no need to add it
explicitly except for the special case of (profile ON) and (assert OFF). explicitly except for the special case of (profile ON) and (assert OFF).
@@ -24,16 +31,13 @@ target_compile_definitions (common
]===] ]===]
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG> $<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x. # TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED OPENSSL_SUPPRESS_DEPRECATED)
)
if (MSVC) if (MSVC)
# remove existing exception flag since we set it to -EHa # remove existing exception flag since we set it to -EHa
string(REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}") string(REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
foreach (var_ foreach (var_ CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
# also remove dynamic runtime # also remove dynamic runtime
string(REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}") string(REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}")
@@ -44,25 +48,40 @@ if (MSVC)
# omit debug info completely under CI (not needed) # omit debug info completely under CI (not needed)
if (is_ci) if (is_ci)
string(REPLACE "/Zi" " " ${var_} "${${var_}}") string(REPLACE "/Zi" " " ${var_} "${${var_}}")
string(REPLACE "/Z7" " " ${var_} "${${var_}}")
endif () endif ()
endforeach () endforeach ()
target_compile_options (common target_compile_options(
INTERFACE common
-bigobj # Increase object file max size INTERFACE # Increase object file max size
-fp:precise # Floating point behavior -bigobj
-Gd # __cdecl calling convention # Floating point behavior
-Gm- # Minimal rebuild: disabled -fp:precise
-Gy- # Function level linking: disabled # __cdecl calling convention
-MP # Multiprocessor compilation -Gd
-openmp- # pragma omp: disabled # Minimal rebuild: disabled
-errorReport:none # No error reporting to Internet -Gm-
-nologo # Suppress login banner # Function level linking: disabled
-wd4018 # Disable signed/unsigned comparison warnings -Gy-
-wd4244 # Disable float to int possible loss of data warnings # Multiprocessor compilation
-wd4267 # Disable size_t to T possible loss of data warnings -MP
-wd4800 # Disable C4800(int to bool performance) # pragma omp: disabled
-wd4503 # Decorated name length exceeded, name was truncated -openmp-
# No error reporting to Internet
-errorReport:none
# Suppress login banner
-nologo
# Disable signed/unsigned comparison warnings
-wd4018
# Disable float to int possible loss of data warnings
-wd4244
# Disable size_t to T possible loss of data warnings
-wd4267
# Disable C4800(int to bool performance)
-wd4800
# Decorated name length exceeded, name was truncated
-wd4503
$<$<COMPILE_LANGUAGE:CXX>: $<$<COMPILE_LANGUAGE:CXX>:
-EHa -EHa
-GR -GR
@@ -75,11 +94,10 @@ if (MSVC)
# static runtime # static runtime
$<$<CONFIG:Debug>:-MTd> $<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT> $<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX> $<$<BOOL:${werr}>:-WX>)
) target_compile_definitions(
target_compile_definitions (common common
INTERFACE INTERFACE _WIN32_WINNT=0x6000
_WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS _SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS _CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE WIN32_CONSOLE
@@ -88,17 +106,15 @@ if (MSVC)
# TODO: Resolve these warnings, don't just silence them # TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>) $<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>)
target_link_libraries (common target_link_libraries(common INTERFACE -errorreport:none -machine:X64)
INTERFACE
-errorreport:none
-machine:X64)
else () else ()
target_compile_options (common target_compile_options(
INTERFACE common
-Wall INTERFACE -Wall
-Wdeprecated -Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations> $<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra -Wno-unused-parameter> $<$<BOOL:${wextra}>:-Wextra
-Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror> $<$<BOOL:${werr}>:-Werror>
-fstack-protector -fstack-protector
-Wno-sign-compare -Wno-sign-compare
@@ -108,15 +124,13 @@ else ()
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0> $<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config # Add debug symbols to release config
$<$<CONFIG:Release>:-g>) $<$<CONFIG:Release>:-g>)
target_link_libraries (common target_link_libraries(
INTERFACE common
-rdynamic INTERFACE -rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id> $<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff: # link to static libc/c++ iff: * static option set and * NOT APPLE (AppleClang does not support static
# * static option set and # libc/c++) and * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
# * NOT APPLE (AppleClang does not support static libc/c++) and $<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
# * NOT san (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${san}>>>:
-static-libstdc++ -static-libstdc++
-static-libgcc -static-libgcc
>) >)
@@ -135,21 +149,17 @@ endif ()
if (use_mold) if (use_mold)
# use mold linker if available # use mold linker if available
execute_process ( execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "mold") if ("${LD_VERSION}" MATCHES "mold")
target_link_libraries(common INTERFACE -fuse-ld=mold) target_link_libraries(common INTERFACE -fuse-ld=mold)
endif () endif ()
unset(LD_VERSION) unset(LD_VERSION)
elseif (use_gold AND is_gcc) elseif (use_gold AND is_gcc)
# use gold linker if available # use gold linker if available
execute_process ( execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
#[=========================================================[ #[=========================================================[
NOTE: THE gold linker inserts -rpath as DT_RUNPATH by NOTE: THE gold linker inserts -rpath as DT_RUNPATH by
default intead of DT_RPATH, so you might have slightly default instead of DT_RPATH, so you might have slightly
unexpected runtime ld behavior if you were expecting unexpected runtime ld behavior if you were expecting
DT_RPATH. Specify --disable-new-dtags to gold if you do DT_RPATH. Specify --disable-new-dtags to gold if you do
not want the default DT_RUNPATH behavior. This rpath not want the default DT_RUNPATH behavior. This rpath
@@ -161,9 +171,9 @@ elseif (use_gold AND is_gcc)
required to make gold play nicely with jemalloc. required to make gold play nicely with jemalloc.
#]=========================================================] #]=========================================================]
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc)) if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
target_link_libraries (common target_link_libraries(
INTERFACE common
-fuse-ld=gold INTERFACE -fuse-ld=gold
-Wl,--no-as-needed -Wl,--no-as-needed
#[=========================================================[ #[=========================================================[
see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5 see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5
@@ -176,18 +186,15 @@ elseif (use_gold AND is_gcc)
unset(LD_VERSION) unset(LD_VERSION)
elseif (use_lld) elseif (use_lld)
# use lld linker if available # use lld linker if available
execute_process ( execute_process(COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD") if ("${LD_VERSION}" MATCHES "LLD")
target_link_libraries(common INTERFACE -fuse-ld=lld) target_link_libraries(common INTERFACE -fuse-ld=lld)
endif () endif ()
unset(LD_VERSION) unset(LD_VERSION)
endif () endif ()
if (assert) if (assert)
foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE) foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
STRING (REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}") string(REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}")
endforeach () endforeach ()
endif () endif ()


@@ -33,8 +33,7 @@ if (NOT DEFINED OPENSSL_ROOT_DIR)
elseif (APPLE) elseif (APPLE)
find_program(homebrew brew) find_program(homebrew brew)
if (homebrew) if (homebrew)
execute_process (COMMAND ${homebrew} --prefix openssl execute_process(COMMAND ${homebrew} --prefix openssl OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE) OUTPUT_STRIP_TRAILING_WHITESPACE)
endif () endif ()
endif () endif ()
@@ -49,6 +48,5 @@ find_dependency (OpenSSL REQUIRED)
find_dependency(ZLIB) find_dependency(ZLIB)
find_dependency(date) find_dependency(date)
if (TARGET ZLIB::ZLIB) if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES set_target_properties(OpenSSL::Crypto PROPERTIES INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif () endif ()


@@ -10,50 +10,32 @@ include(target_protobuf_sources)
# so we just build them as a separate library. # so we just build them as a separate library.
add_library(xrpl.libpb) add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF) set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(xrpl.libpb xrpl/proto target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto
LANGUAGE cpp PROTOS include/xrpl/proto/xrpl.proto)
IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/xrpl.proto
)
file(GLOB_RECURSE protos "include/xrpl/proto/org/*.proto") file(GLOB_RECURSE protos "include/xrpl/proto/org/*.proto")
target_protobuf_sources(xrpl.libpb xrpl/proto target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto PROTOS "${protos}")
LANGUAGE cpp target_protobuf_sources(
IMPORT_DIRS include/xrpl/proto xrpl.libpb xrpl/proto
PROTOS "${protos}"
)
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE grpc LANGUAGE grpc
IMPORT_DIRS include/xrpl/proto IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}" PROTOS "${protos}"
PLUGIN protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin> PLUGIN protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc)
)
target_compile_options(xrpl.libpb target_compile_options(
PUBLIC xrpl.libpb PUBLIC $<$<BOOL:${is_msvc}>:-wd4996> $<$<BOOL:${is_xcode}>: --system-header-prefix="google/protobuf"
$<$<BOOL:${MSVC}>:-wd4996> -Wno-deprecated-dynamic-exception-spec >
$<$<BOOL:${XCODE}>: PRIVATE $<$<BOOL:${is_msvc}>:-wd4065> $<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>)
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
)
target_link_libraries(xrpl.libpb target_link_libraries(xrpl.libpb PUBLIC protobuf::libprotobuf gRPC::grpc++)
PUBLIC
protobuf::libprotobuf
gRPC::grpc++
)
# TODO: Clean up the number of library targets later. # TODO: Clean up the number of library targets later.
add_library(xrpl.imports.main INTERFACE) add_library(xrpl.imports.main INTERFACE)
target_link_libraries(xrpl.imports.main target_link_libraries(
INTERFACE xrpl.imports.main
absl::random_random INTERFACE absl::random_random
date::date date::date
ed25519::ed25519 ed25519::ed25519
LibArchive::LibArchive LibArchive::LibArchive
@@ -65,8 +47,7 @@ target_link_libraries(xrpl.imports.main
secp256k1::secp256k1 secp256k1::secp256k1
xrpl.libpb xrpl.libpb
xxHash::xxhash xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp> $<$<BOOL:${voidstar}>:antithesis-sdk-cpp>)
)
include(add_module) include(add_module)
include(target_link_modules) include(target_link_modules)
@@ -88,18 +69,11 @@ target_link_libraries(xrpl.libxrpl.crypto PUBLIC xrpl.libxrpl.basics)
# Level 04 # Level 04
add_module(xrpl protocol) add_module(xrpl protocol)
target_link_libraries(xrpl.libxrpl.protocol PUBLIC target_link_libraries(xrpl.libxrpl.protocol PUBLIC xrpl.libxrpl.crypto xrpl.libxrpl.json)
xrpl.libxrpl.crypto
xrpl.libxrpl.json
)
# Level 05 # Level 05
add_module(xrpl core) add_module(xrpl core)
target_link_libraries(xrpl.libxrpl.core PUBLIC target_link_libraries(xrpl.libxrpl.core PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
)
# Level 06 # Level 06
add_module(xrpl resource) add_module(xrpl resource)
@@ -107,49 +81,33 @@ target_link_libraries(xrpl.libxrpl.resource PUBLIC xrpl.libxrpl.protocol)
# Level 07 # Level 07
add_module(xrpl net) add_module(xrpl net)
target_link_libraries(xrpl.libxrpl.net PUBLIC target_link_libraries(xrpl.libxrpl.net PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol
xrpl.libxrpl.basics xrpl.libxrpl.resource)
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
)
add_module(xrpl server) add_module(xrpl server)
target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol) target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol)
add_module(xrpl nodestore) add_module(xrpl nodestore)
target_link_libraries(xrpl.libxrpl.nodestore PUBLIC target_link_libraries(xrpl.libxrpl.nodestore PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
)
add_module(xrpl shamap) add_module(xrpl shamap)
target_link_libraries(xrpl.libxrpl.shamap PUBLIC target_link_libraries(xrpl.libxrpl.shamap PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.crypto xrpl.libxrpl.protocol
xrpl.libxrpl.basics xrpl.libxrpl.nodestore)
xrpl.libxrpl.crypto
xrpl.libxrpl.protocol
xrpl.libxrpl.nodestore
)
add_module(xrpl ledger) add_module(xrpl ledger)
target_link_libraries(xrpl.libxrpl.ledger PUBLIC target_link_libraries(xrpl.libxrpl.ledger PUBLIC xrpl.libxrpl.basics xrpl.libxrpl.json xrpl.libxrpl.protocol)
xrpl.libxrpl.basics
xrpl.libxrpl.json
xrpl.libxrpl.protocol
)
add_library(xrpl.libxrpl) add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl) set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
add_library(xrpl::libxrpl ALIAS xrpl.libxrpl) add_library(xrpl::libxrpl ALIAS xrpl.libxrpl)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp")
"${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp"
)
target_sources(xrpl.libxrpl PRIVATE ${sources}) target_sources(xrpl.libxrpl PRIVATE ${sources})
target_link_modules(xrpl PUBLIC target_link_modules(
xrpl
PUBLIC
basics basics
beast beast
core core
@@ -161,8 +119,7 @@ target_link_modules(xrpl PUBLIC
nodestore nodestore
shamap shamap
net net
ledger ledger)
)
# All headers in libxrpl are in modules. # All headers in libxrpl are in modules.
# Uncomment this stanza if you have not yet moved new headers into a module. # Uncomment this stanza if you have not yet moved new headers into a module.
@@ -177,33 +134,19 @@ if(xrpld)
add_executable(xrpld) add_executable(xrpld)
if (tests) if (tests)
target_compile_definitions(xrpld PUBLIC ENABLE_TESTS) target_compile_definitions(xrpld PUBLIC ENABLE_TESTS)
target_compile_definitions(xrpld PRIVATE target_compile_definitions(xrpld PRIVATE UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE})
UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE}
)
endif () endif ()
target_include_directories(xrpld target_include_directories(xrpld PRIVATE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>)
PRIVATE
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp")
"${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp"
)
target_sources(xrpld PRIVATE ${sources}) target_sources(xrpld PRIVATE ${sources})
if (tests) if (tests)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp")
"${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp"
)
target_sources(xrpld PRIVATE ${sources}) target_sources(xrpld PRIVATE ${sources})
endif () endif ()
target_link_libraries(xrpld target_link_libraries(xrpld Xrpl::boost Xrpl::opts Xrpl::libs xrpl.libxrpl)
Xrpl::boost
Xrpl::opts
Xrpl::libs
xrpl.libxrpl
)
exclude_if_included(xrpld) exclude_if_included(xrpld)
# define a macro for tests that might need to # define a macro for tests that might need to
# be excluded or run differently in CI environment # be excluded or run differently in CI environment
@@ -212,24 +155,17 @@ if(xrpld)
endif () endif ()
if (voidstar) if (voidstar)
target_compile_options(xrpld target_compile_options(xrpld PRIVATE -fsanitize-coverage=trace-pc-guard)
PRIVATE
-fsanitize-coverage=trace-pc-guard
)
# xrpld requires access to antithesis-sdk-cpp implementation file # xrpld requires access to antithesis-sdk-cpp implementation file
# antithesis_instrumentation.h, which is not exported as INTERFACE # antithesis_instrumentation.h, which is not exported as INTERFACE
target_include_directories(xrpld target_include_directories(xrpld PRIVATE ${CMAKE_SOURCE_DIR}/external/antithesis-sdk)
PRIVATE
${CMAKE_SOURCE_DIR}/external/antithesis-sdk
)
endif () endif ()
# any files that don't play well with unity should be added here # any files that don't play well with unity should be added here
if (tests) if (tests)
set_source_files_properties( set_source_files_properties(
# these two seem to produce conflicts in beast teardown template methods # these two seem to produce conflicts in beast teardown template methods
src/test/rpc/ValidatorRPC_test.cpp src/test/rpc/ValidatorRPC_test.cpp src/test/ledger/Invariants_test.cpp PROPERTIES SKIP_UNITY_BUILD_INCLUSION
src/test/ledger/Invariants_test.cpp TRUE)
PROPERTIES SKIP_UNITY_BUILD_INCLUSION TRUE)
endif () endif ()
endif () endif ()

View File

@@ -16,26 +16,37 @@ ProcessorCount(PROCESSOR_COUNT)
include(CodeCoverage) include(CodeCoverage)
# The instructions for these commands come from the `CodeCoverage` module, # The instructions for these commands come from the `CodeCoverage` module, which was copied from
# which was copied from https://github.com/bilke/cmake-modules, commit fb7d2a3, # https://github.com/bilke/cmake-modules, commit fb7d2a3, then locally changed (see CHANGES: section in
# then locally changed (see CHANGES: section in `CodeCoverage.cmake`) # `CodeCoverage.cmake`)
set(GCOVR_ADDITIONAL_ARGS ${coverage_extra_args}) set(GCOVR_ADDITIONAL_ARGS ${coverage_extra_args})
if (NOT GCOVR_ADDITIONAL_ARGS STREQUAL "") if (NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
separate_arguments(GCOVR_ADDITIONAL_ARGS) separate_arguments(GCOVR_ADDITIONAL_ARGS)
endif () endif ()
list(APPEND GCOVR_ADDITIONAL_ARGS list(APPEND
GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches --exclude-throw-branches
--exclude-noncode-lines --exclude-noncode-lines
--exclude-unreachable-branches -s --exclude-unreachable-branches
-j ${PROCESSOR_COUNT}) -s
-j
${PROCESSOR_COUNT})
setup_target_for_coverage_gcovr( setup_target_for_coverage_gcovr(
NAME coverage NAME
FORMAT ${coverage_format} coverage
EXCLUDE "src/test" "src/tests" "include/xrpl/beast/test" "include/xrpl/beast/unit_test" "${CMAKE_BINARY_DIR}/pb-xrpl.libpb" FORMAT
DEPENDENCIES xrpld xrpl.tests ${coverage_format}
) EXCLUDE
"src/test"
"src/tests"
"include/xrpl/beast/test"
"include/xrpl/beast/unit_test"
"${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES
xrpld
xrpl.tests)
add_code_coverage_to_target(opts INTERFACE) add_code_coverage_to_target(opts INTERFACE)

View File

@@ -17,7 +17,8 @@ set(doxygen_include_path "${CMAKE_CURRENT_SOURCE_DIR}/src")
set(doxygen_index_file "${doxygen_output_directory}/html/index.html") set(doxygen_index_file "${doxygen_output_directory}/html/index.html")
set(doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile") set(doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile")
file(GLOB_RECURSE doxygen_input file(GLOB_RECURSE
doxygen_input
docs/*.md docs/*.md
include/*.h include/*.h
include/*.cpp include/*.cpp
@@ -27,9 +28,7 @@ file(GLOB_RECURSE doxygen_input
src/*.md src/*.md
Builds/*.md Builds/*.md
*.md) *.md)
list(APPEND doxygen_input list(APPEND doxygen_input external/README.md)
external/README.md
)
set(dependencies "${doxygen_input}" "${doxyfile}") set(dependencies "${doxygen_input}" "${doxyfile}")
function (verbose_find_path variable name) function (verbose_find_path variable name)
@@ -48,8 +47,7 @@ verbose_find_path(doxygen_dot_path dot)
# https://en.cppreference.com/w/Cppreference:Archives # https://en.cppreference.com/w/Cppreference:Archives
# https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step # https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step
set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake") set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(WRITE file(WRITE "${download_script}"
"${download_script}"
"file(DOWNLOAD \ "file(DOWNLOAD \
https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \ https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \ ${CMAKE_BINARY_DIR}/docs/cppreference.zip \
@@ -57,27 +55,18 @@ file(WRITE
)\n \ )\n \
execute_process( \ execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \ COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \
)\n" )\n")
)
set(tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml") set(tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml")
add_custom_command( add_custom_command(OUTPUT "${tagfile}" COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
OUTPUT "${tagfile}" WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs")
COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs"
)
set(doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/") set(doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/")
add_custom_command( add_custom_command(
OUTPUT "${doxygen_index_file}" OUTPUT "${doxygen_index_file}"
COMMAND "${CMAKE_COMMAND}" -E env COMMAND "${CMAKE_COMMAND}" -E env "DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}" "DOXYGEN_INCLUDE_PATH=${doxygen_include_path}" "DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}" "DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}" "DOXYGEN_DOT_PATH=${doxygen_dot_path}"
"DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}"
"DOXYGEN_DOT_PATH=${doxygen_dot_path}"
"${DOXYGEN_EXECUTABLE}" "${doxyfile}" "${DOXYGEN_EXECUTABLE}" "${doxyfile}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}" WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
DEPENDS "${dependencies}" "${tagfile}") DEPENDS "${dependencies}" "${tagfile}")
add_custom_target(docs add_custom_target(docs DEPENDS "${doxygen_index_file}" SOURCES "${dependencies}")
DEPENDS "${doxygen_index_file}"
SOURCES "${dependencies}")

View File

@@ -4,9 +4,13 @@
include(create_symbolic_link) include(create_symbolic_link)
install ( # If no suffix is defined for executables (e.g. Windows uses .exe but Linux
TARGETS # and macOS use none), then explicitly set it to the empty string.
common if (NOT DEFINED suffix)
set(suffix "")
endif ()
install(TARGETS common
opts opts
xrpl_boost xrpl_boost
xrpl_libs xrpl_libs
@@ -31,22 +35,14 @@ install (
LIBRARY DESTINATION lib LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin RUNTIME DESTINATION bin
INCLUDES DESTINATION include) INCLUDES
DESTINATION include)
install( install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl" DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}")
DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)
install (EXPORT XrplExports install(EXPORT XrplExports FILE XrplTargets.cmake NAMESPACE Xrpl:: DESTINATION lib/cmake/xrpl)
FILE XrplTargets.cmake
NAMESPACE Xrpl::
DESTINATION lib/cmake/xrpl)
include(CMakePackageConfigHelpers) include(CMakePackageConfigHelpers)
write_basic_package_version_file ( write_basic_package_version_file(XrplConfigVersion.cmake VERSION ${xrpld_version} COMPATIBILITY SameMajorVersion)
XrplConfigVersion.cmake
VERSION ${xrpld_version}
COMPATIBILITY SameMajorVersion)
if (is_root_project AND TARGET xrpld) if (is_root_project AND TARGET xrpld)
install(TARGETS xrpld RUNTIME DESTINATION bin) install(TARGETS xrpld RUNTIME DESTINATION bin)
@@ -73,8 +69,5 @@ if (is_root_project AND TARGET xrpld)
") ")
endif () endif ()
install ( install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/cmake/XrplConfig.cmake ${CMAKE_CURRENT_BINARY_DIR}/XrplConfigVersion.cmake
FILES
${CMAKE_CURRENT_SOURCE_DIR}/cmake/XrplConfig.cmake
${CMAKE_CURRENT_BINARY_DIR}/XrplConfigVersion.cmake
DESTINATION lib/cmake/xrpl) DESTINATION lib/cmake/xrpl)

View File

@@ -2,11 +2,18 @@
xrpld compile options/settings via an interface library xrpld compile options/settings via an interface library
#]===================================================================] #]===================================================================]
include(CompilationEnv)
# Set defaults for optional variables to avoid uninitialized variable warnings
if (NOT DEFINED voidstar)
set(voidstar OFF)
endif ()
add_library(opts INTERFACE) add_library(opts INTERFACE)
add_library(Xrpl::opts ALIAS opts) add_library(Xrpl::opts ALIAS opts)
target_compile_definitions (opts target_compile_definitions(
INTERFACE opts
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS INTERFACE BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1 HAS_UNCAUGHT_EXCEPTIONS=1
@@ -23,18 +30,13 @@ target_compile_definitions (opts
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1> $<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:XRPL_SINGLE_IO_SERVICE_THREAD=1> $<$<BOOL:${single_io_service_thread}>:XRPL_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>) $<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
target_compile_options (opts target_compile_options(
INTERFACE opts
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override> INTERFACE $<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized> $<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized> $<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer> $<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
target_link_libraries (opts target_link_libraries(opts INTERFACE $<$<BOOL:${profile}>:-pg> $<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
INTERFACE
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
if (jemalloc) if (jemalloc)
find_package(jemalloc REQUIRED) find_package(jemalloc REQUIRED)
@@ -42,31 +44,15 @@ if(jemalloc)
target_link_libraries(opts INTERFACE jemalloc::jemalloc) target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif () endif ()
if (san)
target_compile_options (opts
INTERFACE
# sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1>
${SAN_FLAG}
-fno-omit-frame-pointer)
target_compile_definitions (opts
INTERFACE
$<$<STREQUAL:${san},address>:SANITIZER=ASAN>
$<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN>
$<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>)
target_link_libraries (opts INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()
#[===================================================================[ #[===================================================================[
xrpld transitive library deps via an interface library xrpld transitive library deps via an interface library
#]===================================================================] #]===================================================================]
add_library(xrpl_syslibs INTERFACE) add_library(xrpl_syslibs INTERFACE)
add_library(Xrpl::syslibs ALIAS xrpl_syslibs) add_library(Xrpl::syslibs ALIAS xrpl_syslibs)
target_link_libraries (xrpl_syslibs target_link_libraries(
INTERFACE xrpl_syslibs
$<$<BOOL:${MSVC}>: INTERFACE $<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib legacy_stdio_definitions.lib
Shlwapi Shlwapi
kernel32 kernel32
@@ -83,10 +69,10 @@ target_link_libraries (xrpl_syslibs
odbccp32 odbccp32
crypt32 crypt32
> >
$<$<NOT:$<BOOL:${MSVC}>>:dl> $<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${MSVC}>,$<BOOL:${APPLE}>>>:rt>) $<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>)
if (NOT MSVC) if (NOT is_msvc)
set(THREADS_PREFER_PTHREAD_FLAG ON) set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads) find_package(Threads)
target_link_libraries(xrpl_syslibs INTERFACE Threads::Threads) target_link_libraries(xrpl_syslibs INTERFACE Threads::Threads)

cmake/XrplSanitizers.cmake Normal file
View File

@@ -0,0 +1,197 @@
#[===================================================================[
Configure sanitizers based on environment variables.
This module reads the following environment variables:
- SANITIZERS: The sanitizers to enable. Possible values:
- "address"
- "address,undefinedbehavior"
- "thread"
- "thread,undefinedbehavior"
- "undefinedbehavior"
The compiler type and platform are detected in CompilationEnv.cmake.
The sanitizer compile options are applied to the 'common' interface library
which is linked to all targets in the project.
Internal flag variables set by this module:
- SANITIZER_TYPES: List of sanitizer types to enable (e.g., "address",
"thread", "undefined"). And two more flags for undefined behavior sanitizer (e.g., "float-divide-by-zero", "unsigned-integer-overflow").
This list is joined with commas and passed to -fsanitize=<list>.
- SANITIZERS_COMPILE_FLAGS: Compiler flags for sanitizer instrumentation.
Includes:
* -fno-omit-frame-pointer: Preserves frame pointers for stack traces
* -O1: Minimum optimization for reasonable performance
* -fsanitize=<types>: Enables sanitizer instrumentation
* -fsanitize-ignorelist=<path>: (Clang only) Compile-time ignorelist
* -mcmodel=large/medium: (GCC only) Code model for large binaries
* -Wno-stringop-overflow: (GCC only) Suppresses false positive warnings
* -Wno-tsan: (For GCC TSAN combination only) Suppresses atomic_thread_fence warnings
- SANITIZERS_LINK_FLAGS: Linker flags for sanitizer runtime libraries.
Includes:
* -fsanitize=<types>: Links sanitizer runtime libraries
* -mcmodel=large/medium: (GCC only) Matches compile-time code model
- SANITIZERS_RELOCATION_FLAGS: (GCC only) Code model flags for linking.
Used to handle large instrumented binaries on x86_64:
* -mcmodel=large: For AddressSanitizer (prevents relocation errors)
* -mcmodel=medium: For ThreadSanitizer (large model is incompatible)
#]===================================================================]
include(CompilationEnv)
# Read environment variable
set(SANITIZERS "")
if (DEFINED ENV{SANITIZERS})
set(SANITIZERS "$ENV{SANITIZERS}")
endif ()
# Set SANITIZERS_ENABLED flag for use in other modules
if (SANITIZERS MATCHES "address|thread|undefinedbehavior")
set(SANITIZERS_ENABLED TRUE)
else ()
set(SANITIZERS_ENABLED FALSE)
return()
endif ()
# Sanitizers are not supported on Windows/MSVC
if (is_msvc)
message(FATAL_ERROR "Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable.")
endif ()
message(STATUS "Configuring sanitizers: ${SANITIZERS}")
# Parse SANITIZERS value to determine which sanitizers to enable
set(enable_asan FALSE)
set(enable_tsan FALSE)
set(enable_ubsan FALSE)
# Normalize SANITIZERS into a list
set(san_list "${SANITIZERS}")
string(REPLACE "," ";" san_list "${san_list}")
separate_arguments(san_list)
foreach (san IN LISTS san_list)
if (san STREQUAL "address")
set(enable_asan TRUE)
elseif (san STREQUAL "thread")
set(enable_tsan TRUE)
elseif (san STREQUAL "undefinedbehavior")
set(enable_ubsan TRUE)
else ()
message(FATAL_ERROR "Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations.")
endif ()
endforeach ()
# Validate sanitizer compatibility
if (enable_asan AND enable_tsan)
message(FATAL_ERROR "AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'.")
endif ()
# The frame pointer is required for meaningful stack traces. Sanitizers recommend a minimum of -O1 for reasonable performance.
set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
# Build the sanitizer flags list
set(SANITIZER_TYPES)
if (enable_asan)
list(APPEND SANITIZER_TYPES "address")
elseif (enable_tsan)
list(APPEND SANITIZER_TYPES "thread")
endif ()
if (enable_ubsan)
# UB sanitizer flags
list(APPEND SANITIZER_TYPES "undefined" "float-divide-by-zero")
if (is_clang)
# Clang supports additional UB checks. More info here
# https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
list(APPEND SANITIZER_TYPES "unsigned-integer-overflow")
endif ()
endif ()
# Configure the code model for GCC on amd64: use the large code model for ASAN to avoid relocation errors, and the
# medium code model for TSAN (the large model is not compatible with TSAN).
set(SANITIZERS_RELOCATION_FLAGS)
# Compiler-specific configuration
if (is_gcc)
    # Disable the mold, gold, and lld linkers for GCC with sanitizers and use the default linker (bfd/ld), which is more
    # lenient with mixed code models. This is needed since the size of the instrumented binary exceeds the limits set by
    # the mold, lld, and gold linkers.
set(use_mold OFF CACHE BOOL "Use mold linker" FORCE)
set(use_gold OFF CACHE BOOL "Use gold linker" FORCE)
set(use_lld OFF CACHE BOOL "Use lld linker" FORCE)
message(STATUS " Disabled mold, gold, and lld linkers for GCC with sanitizers")
# Suppress false positive warnings in GCC with stringop-overflow
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-stringop-overflow")
if (is_amd64 AND enable_asan)
message(STATUS " Using large code model (-mcmodel=large)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=large")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=large")
elseif (enable_tsan)
# GCC doesn't support atomic_thread_fence with tsan. Suppress warnings.
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-tsan")
message(STATUS " Using medium code model (-mcmodel=medium)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=medium")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=medium")
endif ()
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "${SANITIZERS_RELOCATION_FLAGS}" "-fsanitize=${SANITIZER_TYPES_STR}")
elseif (is_clang)
    # Add the ignorelist for Clang (GCC doesn't support this). Use CMAKE_SOURCE_DIR to get the path to the ignorelist.
set(IGNORELIST_PATH "${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt")
if (NOT EXISTS "${IGNORELIST_PATH}")
message(FATAL_ERROR "Sanitizer ignorelist not found: ${IGNORELIST_PATH}")
endif ()
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize-ignorelist=${IGNORELIST_PATH}")
message(STATUS " Using sanitizer ignorelist: ${IGNORELIST_PATH}")
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
endif ()
message(STATUS " Compile flags: ${SANITIZERS_COMPILE_FLAGS}")
message(STATUS " Link flags: ${SANITIZERS_LINK_FLAGS}")
# Apply the sanitizer flags to the 'common' interface library. This is the same library used by XrplCompiler.cmake.
target_compile_options(common INTERFACE $<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>)
# Apply linker flags
target_link_options(common INTERFACE ${SANITIZERS_LINK_FLAGS})
# Define SANITIZERS macro for BuildInfo.cpp
set(sanitizers_list)
if (enable_asan)
list(APPEND sanitizers_list "ASAN")
endif ()
if (enable_tsan)
list(APPEND sanitizers_list "TSAN")
endif ()
if (enable_ubsan)
list(APPEND sanitizers_list "UBSAN")
endif ()
if (sanitizers_list)
list(JOIN sanitizers_list "." sanitizers_str)
target_compile_definitions(common INTERFACE SANITIZERS=${sanitizers_str})
endif ()
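In practical terms, the module above enforces a simple environment contract; the sketch below only restates the behavior documented in its header comment, with the full configure steps covered in docs/build/sanitizers.md further down:

```bash
# Illustrative only: XrplSanitizers.cmake reads the SANITIZERS environment variable at configure time.
unset SANITIZERS                  # the module returns early and adds no sanitizer flags
export SANITIZERS=thread          # adds ThreadSanitizer instrumentation via the 'common' interface library
export SANITIZERS=address,thread  # configuration fails: ASan and TSan cannot be combined
```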

View File

@@ -2,6 +2,8 @@
sanity checks sanity checks
#]===================================================================] #]===================================================================]
include(CompilationEnv)
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG) get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE) set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
@@ -10,20 +12,17 @@ if (NOT is_multiconfig)
message(STATUS "Build type not specified - defaulting to Release") message(STATUS "Build type not specified - defaulting to Release")
set(CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE) set(CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE)
elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release)) elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release))
# for simplicity, these are the only two config types we care about. Limiting # for simplicity, these are the only two config types we care about. Limiting the build types simplifies dealing
# the build types simplifies dealing with external project builds especially # with external project builds especially
message(FATAL_ERROR " *** Only Debug or Release build types are currently supported ***") message(FATAL_ERROR " *** Only Debug or Release build types are currently supported ***")
endif () endif ()
endif () endif ()
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang if (is_clang) # both Clang and AppleClang
set (is_clang TRUE) if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
message(FATAL_ERROR "This project requires clang 16 or later") message(FATAL_ERROR "This project requires clang 16 or later")
endif () endif ()
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU") elseif (is_gcc)
set (is_gcc TRUE)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0) if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message(FATAL_ERROR "This project requires GCC 12 or later") message(FATAL_ERROR "This project requires GCC 12 or later")
endif () endif ()
@@ -40,11 +39,6 @@ if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message(FATAL_ERROR "Visual Studio 32-bit build is not supported.") message(FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif () endif ()
if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
message (FATAL_ERROR "Xrpld requires a 64 bit target architecture.\n"
"The most likely cause of this warning is trying to build xrpld with a 32-bit OS.")
endif ()
if (APPLE AND NOT HOMEBREW) if (APPLE AND NOT HOMEBREW)
find_program(HOMEBREW brew) find_program(HOMEBREW brew)
endif () endif ()

View File

@@ -2,16 +2,13 @@
declare options and variables declare options and variables
#]===================================================================] #]===================================================================]
if(CMAKE_SYSTEM_NAME STREQUAL "Linux") include(CompilationEnv)
set (is_linux TRUE)
else()
set(is_linux FALSE)
endif()
if("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set(is_ci TRUE)
else()
set(is_ci FALSE) set(is_ci FALSE)
if (DEFINED ENV{CI})
if ("$ENV{CI}" STREQUAL "true")
set(is_ci TRUE)
endif ()
endif () endif ()
get_directory_property(has_parent PARENT_DIRECTORY) get_directory_property(has_parent PARENT_DIRECTORY)
@@ -51,10 +48,8 @@ if(is_gcc OR is_clang)
option(coverage "Generates coverage info." OFF) option(coverage "Generates coverage info." OFF)
option(profile "Add profiling flags" OFF) option(profile "Add profiling flags" OFF)
set(coverage_format "html-details" CACHE STRING set(coverage_format "html-details" CACHE STRING "Output format of the coverage report.")
"Output format of the coverage report.") set(coverage_extra_args "" CACHE STRING "Additional arguments to pass to gcovr.")
set(coverage_extra_args "" CACHE STRING
"Additional arguments to pass to gcovr.")
option(wextra "compile with extra gcc/clang warnings enabled" ON) option(wextra "compile with extra gcc/clang warnings enabled" ON)
else () else ()
set(profile OFF CACHE BOOL "gcc/clang only" FORCE) set(profile OFF CACHE BOOL "gcc/clang only" FORCE)
@@ -62,16 +57,26 @@ else()
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE) set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif () endif ()
if(is_linux) if (is_linux AND NOT SANITIZER)
option(BUILD_SHARED_LIBS "build shared xrpl libraries" OFF) option(BUILD_SHARED_LIBS "build shared xrpl libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON) option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF) option(perf "Enables flags that assist with perf recording" OFF)
option(use_gold "enables detection of gold (binutils) linker" ON) option(use_gold "enables detection of gold (binutils) linker" ON)
option(use_mold "enables detection of mold (binutils) linker" ON) option(use_mold "enables detection of mold (binutils) linker" ON)
# Set a default value for the log flag based on the build type. This provides a sensible default (on for debug, off
# for release) while still allowing the user to override it for any build.
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(TRUNCATED_LOGS_DEFAULT ON)
else () else ()
# we are not ready to allow shared-libs on windows because it would require set(TRUNCATED_LOGS_DEFAULT OFF)
# export declarations. On macos it's more feasible, but static openssl endif ()
# produces odd linker errors, thus we disable shared lib builds for now. option(TRUNCATED_THREAD_NAME_LOGS "Show warnings about truncated thread names on Linux." ${TRUNCATED_LOGS_DEFAULT})
if (TRUNCATED_THREAD_NAME_LOGS)
add_compile_definitions(TRUNCATED_THREAD_NAME_LOGS)
endif ()
else ()
# we are not ready to allow shared-libs on windows because it would require export declarations. On macos it's more
# feasible, but static openssl produces odd linker errors, thus we disable shared lib builds for now.
set(BUILD_SHARED_LIBS OFF CACHE BOOL "build shared xrpl libraries - OFF for win/macos" FORCE) set(BUILD_SHARED_LIBS OFF CACHE BOOL "build shared xrpl libraries - OFF for win/macos" FORCE)
set(static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE) set(static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set(perf OFF CACHE BOOL "perf flags, linux only" FORCE) set(perf OFF CACHE BOOL "perf flags, linux only" FORCE)
@@ -87,50 +92,15 @@ endif()
option(jemalloc "Enables jemalloc for heap profiling" OFF) option(jemalloc "Enables jemalloc for heap profiling" OFF)
option(werr "treat warnings as errors" OFF) option(werr "treat warnings as errors" OFF)
option(local_protobuf option(local_protobuf "Force a local build of protobuf instead of looking for an installed version." OFF)
"Force a local build of protobuf instead of looking for an installed version." OFF) option(local_grpc "Force a local build of gRPC instead of looking for an installed version." OFF)
option(local_grpc
"Force a local build of gRPC instead of looking for an installed version." OFF)
# this one is a string and therefore can't be an option
set(san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property(CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if(san)
string(TOLOWER ${san} san)
set(SAN_FLAG "-fsanitize=${san}")
set(SAN_LIB "")
if(is_gcc)
if(san STREQUAL "address")
set(SAN_LIB "asan")
elseif(san STREQUAL "thread")
set(SAN_LIB "tsan")
elseif(san STREQUAL "memory")
set(SAN_LIB "msan")
elseif(san STREQUAL "undefined")
set(SAN_LIB "ubsan")
endif()
endif()
set(_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set(CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag(${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set(CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if(NOT COMPILER_SUPPORTS_SAN)
message(FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif()
endif()
# the remaining options are obscure and rarely used # the remaining options are obscure and rarely used
option(beast_no_unit_test_inline option(beast_no_unit_test_inline "Prevents unit test definitions from being inserted into global table" OFF)
"Prevents unit test definitions from being inserted into global table" option(single_io_service_thread "Restricts the number of threads calling io_context::run to one. \
OFF) This can be useful when debugging." OFF)
option(single_io_service_thread option(boost_show_deprecated "Allow boost to fail on deprecated usage. Only useful if you're trying\
"Restricts the number of threads calling io_context::run to one. \ to find deprecated calls." OFF)
This can be useful when debugging."
OFF)
option(boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF)
if (WIN32) if (WIN32)
option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF) option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)

View File

@@ -8,11 +8,8 @@ if (validator_keys)
endif () endif ()
message(STATUS "Tracking ValidatorKeys branch: ${current_branch}") message(STATUS "Tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare ( FetchContent_Declare(validator_keys GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
validator_keys GIT_TAG "${current_branch}")
GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}"
)
FetchContent_MakeAvailable(validator_keys) FetchContent_MakeAvailable(validator_keys)
set_target_properties(validator-keys PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}") set_target_properties(validator-keys PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}")
install(TARGETS validator-keys RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}) install(TARGETS validator-keys RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})

View File

@@ -15,23 +15,11 @@ include(isolate_headers)
function (add_module parent name) function (add_module parent name)
set(target ${PROJECT_NAME}.lib${parent}.${name}) set(target ${PROJECT_NAME}.lib${parent}.${name})
add_library(${target} OBJECT) add_library(${target} OBJECT)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS file(GLOB_RECURSE sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp")
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp"
)
target_sources(${target} PRIVATE ${sources}) target_sources(${target} PRIVATE ${sources})
target_include_directories(${target} PUBLIC target_include_directories(${target} PUBLIC "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>")
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>" isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/include"
) "${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}" PUBLIC)
isolate_headers( isolate_headers(${target} "${CMAKE_CURRENT_SOURCE_DIR}/src" "${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
${target} PRIVATE)
"${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}"
PUBLIC
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/src"
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE
)
endfunction () endfunction ()

View File

@@ -1,5 +1,4 @@
# file(CREATE_SYMLINK) only works on Windows with administrator privileges. # file(CREATE_SYMLINK) only works on Windows with administrator privileges. https://stackoverflow.com/a/61244115/618906
# https://stackoverflow.com/a/61244115/618906
function (create_symbolic_link target link) function (create_symbolic_link target link)
if (WIN32) if (WIN32)
if (NOT IS_SYMLINK "${link}") if (NOT IS_SYMLINK "${link}")

View File

@@ -1,6 +1,8 @@
find_package(Boost 1.82 REQUIRED include(CompilationEnv)
COMPONENTS include(XrplSanitizers)
chrono
find_package(Boost REQUIRED
COMPONENTS chrono
container container
coroutine coroutine
date_time date_time
@@ -9,15 +11,14 @@ find_package(Boost 1.82 REQUIRED
program_options program_options
regex regex
system system
thread thread)
)
add_library(xrpl_boost INTERFACE) add_library(xrpl_boost INTERFACE)
add_library(Xrpl::boost ALIAS xrpl_boost) add_library(Xrpl::boost ALIAS xrpl_boost)
target_link_libraries(xrpl_boost target_link_libraries(
INTERFACE xrpl_boost
Boost::headers INTERFACE Boost::headers
Boost::chrono Boost::chrono
Boost::container Boost::container
Boost::coroutine Boost::coroutine
@@ -27,21 +28,17 @@ target_link_libraries(xrpl_boost
Boost::process Boost::process
Boost::program_options Boost::program_options
Boost::regex Boost::regex
Boost::system
Boost::thread) Boost::thread)
if (Boost_COMPILER) if (Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking) target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)
endif () endif ()
if(san AND is_clang) if (SANITIZERS_ENABLED AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else # TODO: gcc does not support -fsanitize-blacklist...can we do something else for gcc ?
# for gcc ?
if (NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers) if (NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES) get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif () endif ()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist") message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*") file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts target_compile_options(opts INTERFACE # ignore boost headers for sanitizing
INTERFACE
# ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt) -fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif () endif ()

View File

@@ -39,24 +39,16 @@ function(target_protobuf_sources target prefix)
set(dir "${CMAKE_CURRENT_BINARY_DIR}/pb-${target}") set(dir "${CMAKE_CURRENT_BINARY_DIR}/pb-${target}")
file(MAKE_DIRECTORY "${dir}/${prefix}") file(MAKE_DIRECTORY "${dir}/${prefix}")
protobuf_generate( protobuf_generate(TARGET ${target} PROTOC_OUT_DIR "${dir}/${prefix}" "${ARGN}")
TARGET ${target} target_include_directories(
PROTOC_OUT_DIR "${dir}/${prefix}" ${target} SYSTEM
"${ARGN}" PUBLIC # Allows #include <package/path/to/file.proto> used by consumer files.
)
target_include_directories(${target} SYSTEM PUBLIC
# Allows #include <package/path/to/file.proto> used by consumer files.
$<BUILD_INTERFACE:${dir}> $<BUILD_INTERFACE:${dir}>
# Allows #include "path/to/file.proto" used by generated files. # Allows #include "path/to/file.proto" used by generated files.
$<BUILD_INTERFACE:${dir}/${prefix}> $<BUILD_INTERFACE:${dir}/${prefix}>
# Allows #include <package/path/to/file.proto> used by consumer files. # Allows #include <package/path/to/file.proto> used by consumer files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}> $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
# Allows #include "path/to/file.proto" used by generated files. # Allows #include "path/to/file.proto" used by generated files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${prefix}> $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${prefix}>)
) install(DIRECTORY ${dir}/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR} FILES_MATCHING PATTERN "*.h")
install(
DIRECTORY ${dir}/
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
FILES_MATCHING PATTERN "*.h"
)
endfunction () endfunction ()

View File

@@ -1,59 +1,63 @@
{ {
"version": "0.5", "version": "0.5",
"requires": [ "requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497", "zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
"xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1756234289.683", "xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1765850149.987",
"sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1756234266.869", "sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1765850149.926",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1756234262.318", "soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1765850149.46",
"snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1756234314.246", "snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1765850147.878",
"secp256k1/0.7.0#9c4ab67bdc3860c16ea5b36aed8f74ea%1765202256.763", "secp256k1/0.7.0#9c4ab67bdc3860c16ea5b36aed8f74ea%1765850147.928",
"rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1762797952.535", "rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1765850186.86",
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1764175362.029", "re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1764863245.83", "protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"openssl/3.5.4#a1d5835cc6ed5c5b8f3cd5b9b5d24205%1760106486.594", "openssl/3.5.4#1b986e61b38fdfda3b40bebc1b234393%1768312656.257",
"nudb/2.0.9#fb8dfd1a5557f5e0528114c2da17721e%1763150366.909", "nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1756234228.999", "lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1756223727.64", "libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1756230911.03", "libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1764175360.142", "libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
"jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244", "jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
"grpc/1.72.0#f244a57bff01e708c55a1100b12e1589%1763158050.628", "gtest/1.17.0#5224b3b3ff3b4ce1133cbdd27d53ee7d%1768312129.152",
"ed25519/2015.03#ae761bdc52730a843f0809bdf6c1b1f6%1764270189.893", "grpc/1.72.0#f244a57bff01e708c55a1100b12e1589%1765850193.734",
"doctest/2.4.12#eb9fb352fb2fdfc8abb17ec270945165%1762797941.757", "ed25519/2015.03#ae761bdc52730a843f0809bdf6c1b1f6%1765850143.772",
"date/3.0.4#862e11e80030356b53c2c38599ceb32b%1763584497.32", "date/3.0.4#862e11e80030356b53c2c38599ceb32b%1765850143.772",
"c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1764175359.429", "c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1765850144.336",
"bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1764175359.429", "bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1765850143.837",
"boost/1.88.0#8852c0b72ce8271fb8ff7c53456d4983%1756223752.326", "boost/1.90.0#d5e8defe7355494953be18524a7f135b%1765955095.179",
"abseil/20250127.0#9e8e8cfc89a1324139fc0ee3bd4d8c8c%1753819045.301" "abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
], ],
"build_requires": [ "build_requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497", "zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
"strawberryperl/5.32.1.1#707032463aa0620fa17ec0d887f5fe41%1756234281.733", "strawberryperl/5.32.1.1#707032463aa0620fa17ec0d887f5fe41%1765850165.196",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1764863245.83", "protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1756234232.901", "nasm/2.16.01#31e26f2ee3c4346ecd347911bd126904%1765850144.707",
"msys2/cci.latest#1996656c3c98e5765b25b60ff5cf77b4%1764840888.758", "msys2/cci.latest#1996656c3c98e5765b25b60ff5cf77b4%1764840888.758",
"m4/1.4.19#70dc8bbb33e981d119d2acc0175cf381%1763158052.846", "m4/1.4.19#70dc8bbb33e981d119d2acc0175cf381%1763158052.846",
"cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1764175359.44", "cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1765850153.937",
"cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1764175359.429", "cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1765850153.479",
"b2/5.3.3#107c15377719889654eb9a162a673975%1756234226.28", "b2/5.3.3#107c15377719889654eb9a162a673975%1765850144.355",
"automake/1.16.5#b91b7c384c3deaa9d535be02da14d04f%1755524470.56", "automake/1.16.5#b91b7c384c3deaa9d535be02da14d04f%1755524470.56",
"autoconf/2.71#51077f068e61700d65bb05541ea1e4b0%1731054366.86", "autoconf/2.71#51077f068e61700d65bb05541ea1e4b0%1731054366.86",
"abseil/20250127.0#9e8e8cfc89a1324139fc0ee3bd4d8c8c%1753819045.301" "abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
], ],
"python_requires": [], "python_requires": [],
"overrides": { "overrides": {
"boost/1.90.0#d5e8defe7355494953be18524a7f135b": [
null,
"boost/1.90.0"
],
"protobuf/5.27.0": [ "protobuf/5.27.0": [
"protobuf/6.32.1" "protobuf/6.32.1"
], ],
"lz4/1.9.4": [ "lz4/1.9.4": [
"lz4/1.10.0" "lz4/1.10.0"
], ],
"boost/1.83.0": [
"boost/1.88.0"
],
"sqlite3/3.44.2": [ "sqlite3/3.44.2": [
"sqlite3/3.49.1" "sqlite3/3.49.1"
], ],
"boost/1.83.0": [
"boost/1.90.0"
],
"lz4/[>=1.9.4 <2]": [ "lz4/[>=1.9.4 <2]": [
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504" "lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504"
] ]

conan/profiles/ci Normal file
View File

@@ -0,0 +1 @@
include(sanitizers)

conan/profiles/sanitizers Normal file
View File

@@ -0,0 +1,59 @@
include(default)
{% set compiler, version, compiler_exe = detect_api.detect_default_compiler() %}
{% set sanitizers = os.getenv("SANITIZERS") %}
[conf]
{% if sanitizers %}
{% if compiler == "gcc" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set model_code = "" %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1", "-Wno-stringop-overflow"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% set model_code = "-mcmodel=large" %}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% set model_code = "-mcmodel=medium" %}
{% set _ = extra_cxxflags.append("-Wno-tsan") %}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
{% set _ = sanitizer_list.append("undefined") %}
{% set _ = sanitizer_list.append("float-divide-by-zero") %}
{% endif %}
{% set sanitizer_flags = "-fsanitize=" ~ ",".join(sanitizer_list) ~ " " ~ model_code %}
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
{% endif %}
{% elif compiler == "apple-clang" or compiler == "clang" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
{% set _ = sanitizer_list.append("undefined") %}
{% set _ = sanitizer_list.append("float-divide-by-zero") %}
{% set _ = sanitizer_list.append("unsigned-integer-overflow") %}
{% endif %}
{% set sanitizer_flags = "-fsanitize=" ~ ",".join(sanitizer_list) %}
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
{% endif %}
{% endif %}
{% endif %}
tools.info.package_id:confs+=["tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags"]
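For illustration only (this is not part of the profile), with a Clang compiler and `SANITIZERS=address,undefinedbehavior` the template above should inject roughly the flags sketched in the comments below:

```bash
# Illustrative: applying the 'sanitizers' profile; the commented lines approximate the rendered [conf] entries for Clang.
export SANITIZERS=address,undefinedbehavior
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
# tools.build:cxxflags+=['-fsanitize=address,undefined,float-divide-by-zero,unsigned-integer-overflow -fno-omit-frame-pointer -O1']
# tools.build:sharedlinkflags+=['-fsanitize=address,undefined,float-divide-by-zero,unsigned-integer-overflow']
# tools.build:exelinkflags+=['-fsanitize=address,undefined,float-divide-by-zero,unsigned-integer-overflow']
```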

View File

@@ -39,7 +39,7 @@ class Xrpl(ConanFile):
] ]
test_requires = [ test_requires = [
"doctest/2.4.12", "gtest/1.17.0",
] ]
tool_requires = [ tool_requires = [
@@ -87,7 +87,13 @@ class Xrpl(ConanFile):
"libarchive/*:with_xattr": False, "libarchive/*:with_xattr": False,
"libarchive/*:with_zlib": False, "libarchive/*:with_zlib": False,
"lz4/*:shared": False, "lz4/*:shared": False,
"openssl/*:no_dtls": True,
"openssl/*:no_ssl": True,
"openssl/*:no_ssl3": True,
"openssl/*:no_tls1": True,
"openssl/*:no_tls1_1": True,
"openssl/*:shared": False, "openssl/*:shared": False,
"openssl/*:tls_security_level": 2,
"protobuf/*:shared": False, "protobuf/*:shared": False,
"protobuf/*:with_zlib": True, "protobuf/*:with_zlib": True,
"rocksdb/*:enable_sse": False, "rocksdb/*:enable_sse": False,
@@ -125,7 +131,7 @@ class Xrpl(ConanFile):
transitive_headers_opt = ( transitive_headers_opt = (
{"transitive_headers": True} if conan_version.split(".")[0] == "2" else {} {"transitive_headers": True} if conan_version.split(".")[0] == "2" else {}
) )
self.requires("boost/1.88.0", force=True, **transitive_headers_opt) self.requires("boost/1.90.0", force=True, **transitive_headers_opt)
self.requires("date/3.0.4", **transitive_headers_opt) self.requires("date/3.0.4", **transitive_headers_opt)
self.requires("lz4/1.10.0", force=True) self.requires("lz4/1.10.0", force=True)
self.requires("protobuf/6.32.1", force=True) self.requires("protobuf/6.32.1", force=True)
@@ -197,7 +203,6 @@ class Xrpl(ConanFile):
"boost::program_options", "boost::program_options",
"boost::process", "boost::process",
"boost::regex", "boost::regex",
"boost::system",
"boost::thread", "boost::thread",
"date::date", "date::date",
"ed25519::ed25519", "ed25519::ed25519",

docs/build/sanitizers.md vendored Normal file
View File

@@ -0,0 +1,207 @@
# Sanitizer Configuration for Rippled
This document explains how to configure and run sanitizers (AddressSanitizer, UndefinedBehaviorSanitizer, ThreadSanitizer) with the xrpld project.
Corresponding suppression files are located in the `sanitizers/suppressions` directory.
- [Sanitizer Configuration for Rippled](#sanitizer-configuration-for-rippled)
- [Building with Sanitizers](#building-with-sanitizers)
- [Summary](#summary)
- [Build steps:](#build-steps)
- [Install dependencies](#install-dependencies)
- [Call CMake](#call-cmake)
- [Build](#build)
- [Running Tests with Sanitizers](#running-tests-with-sanitizers)
- [AddressSanitizer (ASAN)](#addresssanitizer-asan)
- [ThreadSanitizer (TSan)](#threadsanitizer-tsan)
- [LeakSanitizer (LSan)](#leaksanitizer-lsan)
- [UndefinedBehaviorSanitizer (UBSan)](#undefinedbehaviorsanitizer-ubsan)
- [Suppression Files](#suppression-files)
- [`asan.supp`](#asansupp)
- [`lsan.supp`](#lsansupp)
- [`ubsan.supp`](#ubsansupp)
- [`tsan.supp`](#tsansupp)
- [`sanitizer-ignorelist.txt`](#sanitizer-ignorelisttxt)
- [Troubleshooting](#troubleshooting)
- ["ASAN is ignoring requested \_\_asan_handle_no_return" warnings](#asan-is-ignoring-requested-__asan_handle_no_return-warnings)
- [Sanitizer Mismatch Errors](#sanitizer-mismatch-errors)
- [References](#references)
## Building with Sanitizers
### Summary
Follow the same instructions as mentioned in [BUILD.md](../../BUILD.md) but with the following changes:
1. Make sure you have a clean build directory.
2. Set the `SANITIZERS` environment variable before calling `conan install` and `cmake`. Set it only once, so that both Conan and CMake read the same value.
Example: `export SANITIZERS=address,undefinedbehavior`
3. Optionally use `--profile:all sanitizers` with Conan to build dependencies with sanitizer instrumentation. Note: building with sanitizer-instrumented dependencies is slower but produces fewer false positives.
4. Set `ASAN_OPTIONS`, `LSAN_OPTIONS`, `UBSAN_OPTIONS` and `TSAN_OPTIONS` environment variables to configure sanitizer behavior when running executables. [More details below](#running-tests-with-sanitizers).
---
### Build steps:
```bash
cd /path/to/rippled
rm -rf .build
mkdir .build
cd .build
```
#### Install dependencies
The `SANITIZERS` environment variable is used by both Conan and CMake.
```bash
export SANITIZERS=address,undefinedbehavior
# Standard build (without instrumenting dependencies)
conan install .. --output-folder . --build missing --settings build_type=Debug
# Or with sanitizer-instrumented dependencies (takes longer but fewer false positives)
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
```
> [!CAUTION]
> Do not mix Address and Thread sanitizers - they are incompatible.
Since you already set the `SANITIZERS` environment variable when running Conan, the same value will be read by CMake in the next step.
#### Call CMake
```bash
cmake .. -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Debug \
-Dtests=ON -Dxrpld=ON
```
#### Build
```bash
cmake --build . --parallel 4
```
## Running Tests with Sanitizers
### AddressSanitizer (ASAN)
**IMPORTANT**: ASAN with Boost produces many false positives. Use these options:
```bash
export ASAN_OPTIONS="print_stacktrace=1:detect_container_overflow=0:suppressions=path/to/asan.supp:halt_on_error=0:log_path=asan.log"
export LSAN_OPTIONS="suppressions=path/to/lsan.supp:halt_on_error=0:log_path=lsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
**Why `detect_container_overflow=0`?**
- Boost intrusive containers (used in `aged_unordered_container`) trigger false positives
- Boost context switching (used in `Workers.cpp`) confuses ASAN's stack tracking
- Since we usually do not build Boost with ASAN (we neither want to instrument Boost nor detect issues in Boost code), but do use Boost containers in ASAN-instrumented rippled code, container-overflow checks produce false positives.
- Building dependencies with ASAN instrumentation reduces false positives, but instrumenting dependencies like Boost is slow (both to compile and to run tests) and not necessary.
- See: https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow
- More such flags are detailed [here](https://github.com/google/sanitizers/wiki/AddressSanitizerFlags)
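With `log_path=asan.log` set as above, reports are written to per-process files rather than stderr; a quick way to check for findings after a run (a sketch, the exact file names depend on the process IDs):

```bash
# Each sanitized process writes to asan.log.<pid>; list any files that contain ASAN reports.
grep -l "ERROR: AddressSanitizer" asan.log.* 2>/dev/null || echo "no ASAN reports found"
```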
### ThreadSanitizer (TSan)
```bash
export TSAN_OPTIONS="suppressions=path/to/tsan.supp halt_on_error=0 log_path=tsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
More details [here](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual).
### LeakSanitizer (LSan)
LSan is automatically enabled with ASAN. To disable it:
```bash
export ASAN_OPTIONS="detect_leaks=0"
```
More details [here](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer).
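Note that re-exporting `ASAN_OPTIONS` replaces any previously set value, so to disable leak detection while keeping the suppression settings from the ASAN section above, combine the flags (illustrative):

```bash
# Same options as in the ASAN section, with leak detection (LSan) switched off via detect_leaks=0.
export ASAN_OPTIONS="print_stacktrace=1:detect_container_overflow=0:suppressions=path/to/asan.supp:halt_on_error=0:log_path=asan.log:detect_leaks=0"
```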
### UndefinedBehaviorSanitizer (UBSan)
```bash
export UBSAN_OPTIONS="suppressions=path/to/ubsan.supp:print_stacktrace=1:halt_on_error=0:log_path=ubsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
More details [here](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html).
## Suppression Files
> [!NOTE]
> The suppression files linked below contain more details.
### [`asan.supp`](../../sanitizers/suppressions/asan.supp)
- **Purpose**: Suppress AddressSanitizer (ASAN) errors only
- **Format**: `interceptor_name:<pattern>` where the pattern matches the relevant function, library, or symbol name. Supported suppression types are:
- interceptor_name
- interceptor_via_fun
- interceptor_via_lib
- odr_violation
- **More info**: [AddressSanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer)
- **Note**: Cannot suppress stack-buffer-overflow, container-overflow, etc.
### [`lsan.supp`](../../sanitizers/suppressions/lsan.supp)
- **Purpose**: Suppress LeakSanitizer (LSan) errors only
- **Format**: `leak:<pattern>` where pattern matches function/file names
- **More info**: [LeakSanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer)
### [`ubsan.supp`](../../sanitizers/suppressions/ubsan.supp)
- **Purpose**: Suppress UndefinedBehaviorSanitizer (UBSan) errors
- **Format**: `<error_type>:<pattern>` (e.g., `unsigned-integer-overflow:protobuf`)
- **Covers**: Intentional overflows in third-party libraries (protobuf, gRPC, stdlib)
- **More info**: [UBSan suppressions](https://clang.llvm.org/docs/SanitizerSpecialCaseList.html)
### [`tsan.supp`](../../sanitizers/suppressions/tsan.supp)
- **Purpose**: Suppress ThreadSanitizer data race warnings
- **Format**: `race:<pattern>` where pattern matches function/file names
- **More info**: [ThreadSanitizer suppressions](https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions)
### [`sanitizer-ignorelist.txt`](../../sanitizers/suppressions/sanitizer-ignorelist.txt)
- **Purpose**: Compile-time ignorelist for all sanitizers
- **Usage**: Passed via `-fsanitize-ignorelist=absolute/path/to/sanitizer-ignorelist.txt`
- **Format**: `<level>:<pattern>` (e.g., `src:Workers.cpp`)
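As a concrete illustration of these formats (the pattern below is made up for the example and is not an entry from the repository's suppression files), a minimal TSan suppression file could be created and wired up like this:

```bash
# Hypothetical TSan suppression: silence data-race reports whose stack matches the pattern.
cat > /tmp/tsan-example.supp <<'EOF'
race:boost::asio::detail::*
EOF
export TSAN_OPTIONS="suppressions=/tmp/tsan-example.supp halt_on_error=0 log_path=tsan.log"
```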
## Troubleshooting
### "ASAN is ignoring requested \_\_asan_handle_no_return" warnings
These warnings appear when using Boost context switching and are harmless in themselves; they indicate that subsequent reports may be false positives.
### Sanitizer Mismatch Errors
If you see undefined symbols like `___tsan_atomic_load` when building with ASAN:
**Problem**: Dependencies were built with a different sanitizer than the main project.
**Solution**: Rebuild everything with the same sanitizer:
```bash
rm -rf .build
# Then follow the build instructions above
```
Then review the log files: `asan.log.*`, `ubsan.log.*`, `tsan.log.*`
## References
- [AddressSanitizer Wiki](https://github.com/google/sanitizers/wiki/AddressSanitizer)
- [AddressSanitizer Flags](https://github.com/google/sanitizers/wiki/AddressSanitizerFlags)
- [Container Overflow Detection](https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow)
- [UndefinedBehavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html)
- [ThreadSanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)

View File

@@ -13,9 +13,7 @@ namespace xrpl {
@throws runtime_error @throws runtime_error
*/ */
void void
extractTarLz4( extractTarLz4(boost::filesystem::path const& src, boost::filesystem::path const& dst);
boost::filesystem::path const& src,
boost::filesystem::path const& dst);
} // namespace xrpl } // namespace xrpl

View File

@@ -14,8 +14,7 @@
namespace xrpl { namespace xrpl {
using IniFileSections = using IniFileSections = std::unordered_map<std::string, std::vector<std::string>>;
std::unordered_map<std::string, std::vector<std::string>>;
//------------------------------------------------------------------------------ //------------------------------------------------------------------------------
@@ -86,8 +85,7 @@ public:
if (lines_.empty()) if (lines_.empty())
return ""; return "";
if (lines_.size() > 1) if (lines_.size() > 1)
Throw<std::runtime_error>( Throw<std::runtime_error>("A legacy value must have exactly one line. Section: " + name_);
"A legacy value must have exactly one line. Section: " + name_);
return lines_[0]; return lines_[0];
} }
@@ -233,10 +231,7 @@ public:
The previous value, if any, is overwritten. The previous value, if any, is overwritten.
*/ */
void void
overwrite( overwrite(std::string const& section, std::string const& key, std::string const& value);
std::string const& section,
std::string const& key,
std::string const& value);
/** Remove all the key/value pairs from the section. /** Remove all the key/value pairs from the section.
*/ */
@@ -274,9 +269,7 @@ public:
bool bool
had_trailing_comments() const had_trailing_comments() const
{ {
return std::any_of(map_.cbegin(), map_.cend(), [](auto s) { return std::any_of(map_.cbegin(), map_.cend(), [](auto s) { return s.second.had_trailing_comments(); });
return s.second.had_trailing_comments();
});
} }
protected: protected:
@@ -315,10 +308,7 @@ set(T& target, std::string const& name, Section const& section)
*/ */
template <class T> template <class T>
bool bool
set(T& target, set(T& target, T const& defaultValue, std::string const& name, Section const& section)
T const& defaultValue,
std::string const& name,
Section const& section)
{ {
bool found_and_valid = set<T>(target, name, section); bool found_and_valid = set<T>(target, name, section);
if (!found_and_valid) if (!found_and_valid)
@@ -333,9 +323,7 @@ set(T& target,
// NOTE This routine might be more clumsy than the previous two // NOTE This routine might be more clumsy than the previous two
template <class T = std::string> template <class T = std::string>
T T
get(Section const& section, get(Section const& section, std::string const& name, T const& defaultValue = T{})
std::string const& name,
T const& defaultValue = T{})
{ {
try try
{ {

View File

@@ -25,8 +25,7 @@ public:
Buffer() = default; Buffer() = default;
/** Create an uninitialized buffer with the given size. */ /** Create an uninitialized buffer with the given size. */
explicit Buffer(std::size_t size) explicit Buffer(std::size_t size) : p_(size ? new std::uint8_t[size] : nullptr), size_(size)
: p_(size ? new std::uint8_t[size] : nullptr), size_(size)
{ {
} }
@@ -62,8 +61,7 @@ public:
/** Move-construct. /** Move-construct.
The other buffer is reset. The other buffer is reset.
*/ */
Buffer(Buffer&& other) noexcept Buffer(Buffer&& other) noexcept : p_(std::move(other.p_)), size_(other.size_)
: p_(std::move(other.p_)), size_(other.size_)
{ {
other.size_ = 0; other.size_ = 0;
} }
@@ -94,8 +92,7 @@ public:
{ {
// Ensure the slice isn't a subset of the buffer. // Ensure the slice isn't a subset of the buffer.
XRPL_ASSERT( XRPL_ASSERT(
s.size() == 0 || size_ == 0 || s.data() < p_.get() || s.size() == 0 || size_ == 0 || s.data() < p_.get() || s.data() >= p_.get() + size_,
s.data() >= p_.get() + size_,
"xrpl::Buffer::operator=(Slice) : input not a subset"); "xrpl::Buffer::operator=(Slice) : input not a subset");
if (auto p = alloc(s.size())) if (auto p = alloc(s.size()))

View File

@@ -36,10 +36,7 @@ lz4Compress(void const* in, std::size_t inSize, BufferFactory&& bf)
auto compressed = bf(outCapacity); auto compressed = bf(outCapacity);
auto compressedSize = LZ4_compress_default( auto compressedSize = LZ4_compress_default(
reinterpret_cast<char const*>(in), reinterpret_cast<char const*>(in), reinterpret_cast<char*>(compressed), inSize, outCapacity);
reinterpret_cast<char*>(compressed),
inSize,
outCapacity);
if (compressedSize == 0) if (compressedSize == 0)
Throw<std::runtime_error>("lz4 compress: failed"); Throw<std::runtime_error>("lz4 compress: failed");
@@ -70,10 +67,8 @@ lz4Decompress(
Throw<std::runtime_error>("lz4Decompress: integer overflow (output)"); Throw<std::runtime_error>("lz4Decompress: integer overflow (output)");
if (LZ4_decompress_safe( if (LZ4_decompress_safe(
reinterpret_cast<char const*>(in), reinterpret_cast<char const*>(in), reinterpret_cast<char*>(decompressed), inSize, decompressedSize) !=
reinterpret_cast<char*>(decompressed), decompressedSize)
inSize,
decompressedSize) != decompressedSize)
Throw<std::runtime_error>("lz4Decompress: failed"); Throw<std::runtime_error>("lz4Decompress: failed");
return decompressedSize; return decompressedSize;
@@ -89,11 +84,7 @@ lz4Decompress(
*/ */
template <typename InputStream> template <typename InputStream>
std::size_t std::size_t
lz4Decompress( lz4Decompress(InputStream& in, std::size_t inSize, std::uint8_t* decompressed, std::size_t decompressedSize)
InputStream& in,
std::size_t inSize,
std::uint8_t* decompressed,
std::size_t decompressedSize)
{ {
std::vector<std::uint8_t> compressed; std::vector<std::uint8_t> compressed;
std::uint8_t const* chunk = nullptr; std::uint8_t const* chunk = nullptr;
@@ -116,9 +107,7 @@ lz4Decompress(
compressed.resize(inSize); compressed.resize(inSize);
} }
chunkSize = chunkSize < (inSize - copiedInSize) chunkSize = chunkSize < (inSize - copiedInSize) ? chunkSize : (inSize - copiedInSize);
? chunkSize
: (inSize - copiedInSize);
std::copy(chunk, chunk + chunkSize, compressed.data() + copiedInSize); std::copy(chunk, chunk + chunkSize, compressed.data() + copiedInSize);
@@ -135,8 +124,7 @@ lz4Decompress(
if (in.ByteCount() > (currentBytes + copiedInSize)) if (in.ByteCount() > (currentBytes + copiedInSize))
in.BackUp(in.ByteCount() - currentBytes - copiedInSize); in.BackUp(in.ByteCount() - currentBytes - copiedInSize);
if ((copiedInSize == 0 && chunkSize < inSize) || if ((copiedInSize == 0 && chunkSize < inSize) || (copiedInSize > 0 && copiedInSize != inSize))
(copiedInSize > 0 && copiedInSize != inSize))
Throw<std::runtime_error>("lz4 decompress: insufficient input size"); Throw<std::runtime_error>("lz4 decompress: insufficient input size");
return lz4Decompress(chunk, inSize, decompressed, decompressedSize); return lz4Decompress(chunk, inSize, decompressed, decompressedSize);

View File

@@ -56,9 +56,7 @@ private:
if (m_value != value_type()) if (m_value != value_type())
{ {
std::size_t elapsed = std::size_t elapsed = std::chrono::duration_cast<std::chrono::seconds>(now - m_when).count();
std::chrono::duration_cast<std::chrono::seconds>(now - m_when)
.count();
// A span larger than four times the window decays the // A span larger than four times the window decays the
// value to an insignificant amount so just reset it. // value to an insignificant amount so just reset it.

View File

@@ -108,23 +108,20 @@ Unexpected(E (&)[N]) -> Unexpected<E const*>;
// Definition of Expected. All of the machinery comes from boost::result. // Definition of Expected. All of the machinery comes from boost::result.
template <class T, class E> template <class T, class E>
class [[nodiscard]] Expected class [[nodiscard]] Expected : private boost::outcome_v2::result<T, E, detail::throw_policy>
: private boost::outcome_v2::result<T, E, detail::throw_policy>
{ {
using Base = boost::outcome_v2::result<T, E, detail::throw_policy>; using Base = boost::outcome_v2::result<T, E, detail::throw_policy>;
public: public:
template <typename U> template <typename U>
requires std::convertible_to<U, T> requires std::convertible_to<U, T>
constexpr Expected(U&& r) constexpr Expected(U&& r) : Base(boost::outcome_v2::in_place_type_t<T>{}, std::forward<U>(r))
: Base(boost::outcome_v2::in_place_type_t<T>{}, std::forward<U>(r))
{ {
} }
template <typename U> template <typename U>
requires std::convertible_to<U, E> && (!std::is_reference_v<U>) requires std::convertible_to<U, E> && (!std::is_reference_v<U>)
constexpr Expected(Unexpected<U> e) constexpr Expected(Unexpected<U> e) : Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
: Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
{ {
} }
@@ -195,8 +192,7 @@ public:
// Specialization of Expected<void, E>. Allows returning either success // Specialization of Expected<void, E>. Allows returning either success
// (without a value) or the reason for the failure. // (without a value) or the reason for the failure.
template <class E> template <class E>
class [[nodiscard]] Expected<void, E> class [[nodiscard]] Expected<void, E> : private boost::outcome_v2::result<void, E, detail::throw_policy>
: private boost::outcome_v2::result<void, E, detail::throw_policy>
{ {
using Base = boost::outcome_v2::result<void, E, detail::throw_policy>; using Base = boost::outcome_v2::result<void, E, detail::throw_policy>;

View File

@@ -15,10 +15,7 @@ getFileContents(
std::optional<std::size_t> maxSize = std::nullopt); std::optional<std::size_t> maxSize = std::nullopt);
void void
writeFileContents( writeFileContents(boost::system::error_code& ec, boost::filesystem::path const& destPath, std::string const& contents);
boost::system::error_code& ec,
boost::filesystem::path const& destPath,
std::string const& contents);
} // namespace xrpl } // namespace xrpl

View File

@@ -45,8 +45,8 @@ struct SharedIntrusiveAdoptNoIncrementTag
// //
template <class T> template <class T>
concept CAdoptTag = std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> || concept CAdoptTag =
std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>; std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> || std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
//------------------------------------------------------------------------------ //------------------------------------------------------------------------------
@@ -58,7 +58,7 @@ concept CAdoptTag = std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> ||
When the strong pointer count goes to zero, the "partialDestructor" is When the strong pointer count goes to zero, the "partialDestructor" is
called. This can be used to destroy as much of the object as possible while called. This can be used to destroy as much of the object as possible while
still retaining the reference counts. For example, for SHAMapInnerNodes the still retaining the reference counts. For example, for SHAMapInnerNodes the
children may be reset in that function. Note that std::shared_poiner WILL children may be reset in that function. Note that std::shared_pointer WILL
run the destructor when the strong count reaches zero, but may not free the run the destructor when the strong count reaches zero, but may not free the
memory used by the object until the weak count reaches zero. In rippled, we memory used by the object until the weak count reaches zero. In rippled, we
typically allocate shared pointers with the `make_shared` function. When typically allocate shared pointers with the `make_shared` function. When
@@ -122,9 +122,7 @@ public:
controlled by the rhs param. controlled by the rhs param.
*/ */
template <class TT> template <class TT>
SharedIntrusive( SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs);
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by statically casting the pointer /** Create a new SharedIntrusive by statically casting the pointer
controlled by the rhs param. controlled by the rhs param.
@@ -136,9 +134,7 @@ public:
controlled by the rhs param. controlled by the rhs param.
*/ */
template <class TT> template <class TT>
SharedIntrusive( SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs);
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by dynamically casting the pointer /** Create a new SharedIntrusive by dynamically casting the pointer
controlled by the rhs param. controlled by the rhs param.
@@ -304,9 +300,7 @@ class SharedWeakUnion
// Tagged pointer. Low bit determines if this is a strong or a weak // Tagged pointer. Low bit determines if this is a strong or a weak
// pointer. The low bit must be masked to zero when converting back to a // pointer. The low bit must be masked to zero when converting back to a
// pointer. If the low bit is '1', this is a weak pointer. // pointer. If the low bit is '1', this is a weak pointer.
static_assert( static_assert(alignof(T) >= 2, "Bad alignment: Combo pointer requires low bit to be zero");
alignof(T) >= 2,
"Bad alignment: Combo pointer requires low bit to be zero");
public: public:
SharedWeakUnion() = default; SharedWeakUnion() = default;
@@ -450,9 +444,7 @@ make_SharedIntrusive(Args&&... args)
auto p = new TT(std::forward<Args>(args)...); auto p = new TT(std::forward<Args>(args)...);
static_assert( static_assert(
noexcept(SharedIntrusive<TT>( noexcept(SharedIntrusive<TT>(std::declval<TT*>(), std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
std::declval<TT*>(),
std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
"SharedIntrusive constructor should not throw or this can leak " "SharedIntrusive constructor should not throw or this can leak "
"memory"); "memory");

View File

@@ -12,9 +12,7 @@ template <class T>
template <CAdoptTag TAdoptTag> template <CAdoptTag TAdoptTag>
SharedIntrusive<T>::SharedIntrusive(T* p, TAdoptTag) noexcept : ptr_{p} SharedIntrusive<T>::SharedIntrusive(T* p, TAdoptTag) noexcept : ptr_{p}
{ {
if constexpr (std::is_same_v< if constexpr (std::is_same_v<TAdoptTag, SharedIntrusiveAdoptIncrementStrongTag>)
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
{ {
if (p) if (p)
p->addStrongRef(); p->addStrongRef();
@@ -46,16 +44,14 @@ SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT> const& rhs)
} }
template <class T> template <class T>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs) SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs) : ptr_{rhs.unsafeExchange(nullptr)}
: ptr_{rhs.unsafeExchange(nullptr)}
{ {
} }
template <class T> template <class T>
template <class TT> template <class TT>
requires std::convertible_to<TT*, T*> requires std::convertible_to<TT*, T*>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs) SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs) : ptr_{rhs.unsafeExchange(nullptr)}
: ptr_{rhs.unsafeExchange(nullptr)}
{ {
} }
template <class T> template <class T>
@@ -112,9 +108,7 @@ requires std::convertible_to<TT*, T*>
SharedIntrusive<T>& SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive<TT>&& rhs) SharedIntrusive<T>::operator=(SharedIntrusive<TT>&& rhs)
{ {
static_assert( static_assert(!std::is_same_v<T, TT>, "This overload should not be instantiated for T == TT");
!std::is_same_v<T, TT>,
"This overload should not be instantiated for T == TT");
unsafeReleaseAndStore(rhs.unsafeExchange(nullptr)); unsafeReleaseAndStore(rhs.unsafeExchange(nullptr));
return *this; return *this;
@@ -139,9 +133,7 @@ template <CAdoptTag TAdoptTag>
void void
SharedIntrusive<T>::adopt(T* p) SharedIntrusive<T>::adopt(T* p)
{ {
if constexpr (std::is_same_v< if constexpr (std::is_same_v<TAdoptTag, SharedIntrusiveAdoptIncrementStrongTag>)
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
{ {
if (p) if (p)
p->addStrongRef(); p->addStrongRef();
@@ -157,9 +149,7 @@ SharedIntrusive<T>::~SharedIntrusive()
template <class T> template <class T>
template <class TT> template <class TT>
SharedIntrusive<T>::SharedIntrusive( SharedIntrusive<T>::SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs)
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
: ptr_{[&] { : ptr_{[&] {
auto p = static_cast<T*>(rhs.unsafeGetRawPtr()); auto p = static_cast<T*>(rhs.unsafeGetRawPtr());
if (p) if (p)
@@ -171,18 +161,14 @@ SharedIntrusive<T>::SharedIntrusive(
template <class T> template <class T>
template <class TT> template <class TT>
SharedIntrusive<T>::SharedIntrusive( SharedIntrusive<T>::SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs)
StaticCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
: ptr_{static_cast<T*>(rhs.unsafeExchange(nullptr))} : ptr_{static_cast<T*>(rhs.unsafeExchange(nullptr))}
{ {
} }
template <class T> template <class T>
template <class TT> template <class TT>
SharedIntrusive<T>::SharedIntrusive( SharedIntrusive<T>::SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs)
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
: ptr_{[&] { : ptr_{[&] {
auto p = dynamic_cast<T*>(rhs.unsafeGetRawPtr()); auto p = dynamic_cast<T*>(rhs.unsafeGetRawPtr());
if (p) if (p)
@@ -194,9 +180,7 @@ SharedIntrusive<T>::SharedIntrusive(
template <class T> template <class T>
template <class TT> template <class TT>
SharedIntrusive<T>::SharedIntrusive( SharedIntrusive<T>::SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs)
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
{ {
// This can be simplified without the `exchange`, but the `exchange` is kept // This can be simplified without the `exchange`, but the `exchange` is kept
// in anticipation of supporting atomic operations. // in anticipation of supporting atomic operations.
@@ -315,8 +299,7 @@ WeakIntrusive<T>::WeakIntrusive(WeakIntrusive&& rhs) : ptr_{rhs.ptr_}
} }
template <class T> template <class T>
WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs) WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs) : ptr_{rhs.unsafeGetRawPtr()}
: ptr_{rhs.unsafeGetRawPtr()}
{ {
if (ptr_) if (ptr_)
ptr_->addWeakRef(); ptr_->addWeakRef();

View File

@@ -160,22 +160,19 @@ private:
See description of the `refCounts` field for a fuller description of See description of the `refCounts` field for a fuller description of
this field. this field.
*/ */
static constexpr FieldType partialDestroyStartedMask = static constexpr FieldType partialDestroyStartedMask = (one << (FieldTypeBits - 1));
(one << (FieldTypeBits - 1));
/** Flag that is set when the partialDestroy function has finished running /** Flag that is set when the partialDestroy function has finished running
See description of the `refCounts` field for a fuller description of See description of the `refCounts` field for a fuller description of
this field. this field.
*/ */
static constexpr FieldType partialDestroyFinishedMask = static constexpr FieldType partialDestroyFinishedMask = (one << (FieldTypeBits - 2));
(one << (FieldTypeBits - 2));
/** Mask that will zero out all the `count` bits and leave the tag bits /** Mask that will zero out all the `count` bits and leave the tag bits
unchanged. unchanged.
*/ */
static constexpr FieldType tagMask = static constexpr FieldType tagMask = partialDestroyStartedMask | partialDestroyFinishedMask;
partialDestroyStartedMask | partialDestroyFinishedMask;
/** Mask that will zero out the `tag` bits and leave the count bits /** Mask that will zero out the `tag` bits and leave the count bits
unchanged. unchanged.
@@ -184,13 +181,11 @@ private:
/** Mask that will zero out everything except the strong count. /** Mask that will zero out everything except the strong count.
*/ */
static constexpr FieldType strongMask = static constexpr FieldType strongMask = ((one << StrongCountNumBits) - 1) & valueMask;
((one << StrongCountNumBits) - 1) & valueMask;
/** Mask that will zero out everything except the weak count. /** Mask that will zero out everything except the weak count.
*/ */
static constexpr FieldType weakMask = static constexpr FieldType weakMask = (((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
(((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
/** Unpack the count and tag fields from the packed atomic integer form. */ /** Unpack the count and tag fields from the packed atomic integer form. */
struct RefCountPair struct RefCountPair
@@ -215,10 +210,8 @@ private:
FieldType FieldType
combinedValue() const noexcept; combinedValue() const noexcept;
static constexpr CountType maxStrongValue = static constexpr CountType maxStrongValue = static_cast<CountType>((one << StrongCountNumBits) - 1);
static_cast<CountType>((one << StrongCountNumBits) - 1); static constexpr CountType maxWeakValue = static_cast<CountType>((one << WeakCountNumBits) - 1);
static constexpr CountType maxWeakValue =
static_cast<CountType>((one << WeakCountNumBits) - 1);
/** Put an extra margin to detect when running up against limits. /** Put an extra margin to detect when running up against limits.
This is only used in debug code, and is useful if we reduce the This is only used in debug code, and is useful if we reduce the
number of bits in the strong and weak counts (to 16 and 14 bits). number of bits in the strong and weak counts (to 16 and 14 bits).
@@ -274,8 +267,7 @@ IntrusiveRefCounts::releaseStrongRef() const
} }
} }
if (refCounts.compare_exchange_weak( if (refCounts.compare_exchange_weak(prevIntVal, nextIntVal, std::memory_order_acq_rel))
prevIntVal, nextIntVal, std::memory_order_acq_rel))
{ {
// Can't be in partial destroy because only decrementing the strong // Can't be in partial destroy because only decrementing the strong
// count to zero can start a partial destroy, and that can't happen // count to zero can start a partial destroy, and that can't happen
@@ -331,8 +323,7 @@ IntrusiveRefCounts::addWeakReleaseStrongRef() const
action = partialDestroy; action = partialDestroy;
} }
} }
if (refCounts.compare_exchange_weak( if (refCounts.compare_exchange_weak(prevIntVal, nextIntVal, std::memory_order_acq_rel))
prevIntVal, nextIntVal, std::memory_order_acq_rel))
{ {
XRPL_ASSERT( XRPL_ASSERT(
(!(prevIntVal & partialDestroyStartedMask)), (!(prevIntVal & partialDestroyStartedMask)),
@@ -376,8 +367,7 @@ IntrusiveRefCounts::checkoutStrongRefFromWeak() const noexcept
auto curValue = RefCountPair{1, 1}.combinedValue(); auto curValue = RefCountPair{1, 1}.combinedValue();
auto desiredValue = RefCountPair{2, 1}.combinedValue(); auto desiredValue = RefCountPair{2, 1}.combinedValue();
while (!refCounts.compare_exchange_weak( while (!refCounts.compare_exchange_weak(curValue, desiredValue, std::memory_order_acq_rel))
curValue, desiredValue, std::memory_order_acq_rel))
{ {
RefCountPair const prev{curValue}; RefCountPair const prev{curValue};
if (!prev.strong) if (!prev.strong)
@@ -406,20 +396,15 @@ inline IntrusiveRefCounts::~IntrusiveRefCounts() noexcept
{ {
#ifndef NDEBUG #ifndef NDEBUG
auto v = refCounts.load(std::memory_order_acquire); auto v = refCounts.load(std::memory_order_acquire);
XRPL_ASSERT( XRPL_ASSERT((!(v & valueMask)), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
(!(v & valueMask)),
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
auto t = v & tagMask; auto t = v & tagMask;
XRPL_ASSERT( XRPL_ASSERT((!t || t == tagMask), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
(!t || t == tagMask),
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
#endif #endif
} }
//------------------------------------------------------------------------------ //------------------------------------------------------------------------------
inline IntrusiveRefCounts::RefCountPair::RefCountPair( inline IntrusiveRefCounts::RefCountPair::RefCountPair(IntrusiveRefCounts::FieldType v) noexcept
IntrusiveRefCounts::FieldType v) noexcept
: strong{static_cast<CountType>(v & strongMask)} : strong{static_cast<CountType>(v & strongMask)}
, weak{static_cast<CountType>((v & weakMask) >> StrongCountNumBits)} , weak{static_cast<CountType>((v & weakMask) >> StrongCountNumBits)}
, partialDestroyStartedBit{v & partialDestroyStartedMask} , partialDestroyStartedBit{v & partialDestroyStartedMask}
@@ -449,10 +434,8 @@ IntrusiveRefCounts::RefCountPair::combinedValue() const noexcept
(strong < checkStrongMaxValue && weak < checkWeakMaxValue), (strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"xrpl::IntrusiveRefCounts::RefCountPair::combinedValue : inputs " "xrpl::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"inside range"); "inside range");
return (static_cast<IntrusiveRefCounts::FieldType>(weak) return (static_cast<IntrusiveRefCounts::FieldType>(weak) << IntrusiveRefCounts::StrongCountNumBits) |
<< IntrusiveRefCounts::StrongCountNumBits) | static_cast<IntrusiveRefCounts::FieldType>(strong) | partialDestroyStartedBit | partialDestroyFinishedBit;
static_cast<IntrusiveRefCounts::FieldType>(strong) |
partialDestroyStartedBit | partialDestroyFinishedBit;
} }
template <class T> template <class T>
@@ -460,11 +443,9 @@ inline void
partialDestructorFinished(T** o) partialDestructorFinished(T** o)
{ {
T& self = **o; T& self = **o;
IntrusiveRefCounts::RefCountPair p = IntrusiveRefCounts::RefCountPair p = self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
XRPL_ASSERT( XRPL_ASSERT(
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit && (!p.partialDestroyFinishedBit && p.partialDestroyStartedBit && !p.strong),
!p.strong),
"xrpl::partialDestructorFinished : not a weak ref"); "xrpl::partialDestructorFinished : not a weak ref");
if (!p.weak) if (!p.weak)
{ {

View File

@@ -55,8 +55,7 @@ template <class = void>
boost::thread_specific_ptr<detail::LocalValues>& boost::thread_specific_ptr<detail::LocalValues>&
getLocalValues() getLocalValues()
{ {
static boost::thread_specific_ptr<detail::LocalValues> tsp( static boost::thread_specific_ptr<detail::LocalValues> tsp(&detail::LocalValues::cleanup);
&detail::LocalValues::cleanup);
return tsp; return tsp;
} }
@@ -105,9 +104,7 @@ LocalValue<T>::operator*()
} }
return *reinterpret_cast<T*>( return *reinterpret_cast<T*>(
lvs->values lvs->values.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_)).first->second->get());
.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_))
.first->second->get());
} }
} // namespace xrpl } // namespace xrpl

View File

@@ -39,22 +39,17 @@ private:
std::string partition_; std::string partition_;
public: public:
Sink( Sink(std::string const& partition, beast::severities::Severity thresh, Logs& logs);
std::string const& partition,
beast::severities::Severity thresh,
Logs& logs);
Sink(Sink const&) = delete; Sink(Sink const&) = delete;
Sink& Sink&
operator=(Sink const&) = delete; operator=(Sink const&) = delete;
void void
write(beast::severities::Severity level, std::string const& text) write(beast::severities::Severity level, std::string const& text) override;
override;
void void
writeAlways(beast::severities::Severity level, std::string const& text) writeAlways(beast::severities::Severity level, std::string const& text) override;
override;
}; };
/** Manages a system file containing logged output. /** Manages a system file containing logged output.
@@ -140,11 +135,7 @@ private:
}; };
std::mutex mutable mutex_; std::mutex mutable mutex_;
std::map< std::map<std::string, std::unique_ptr<beast::Journal::Sink>, boost::beast::iless> sinks_;
std::string,
std::unique_ptr<beast::Journal::Sink>,
boost::beast::iless>
sinks_;
beast::severities::Severity thresh_; beast::severities::Severity thresh_;
File file_; File file_;
bool silent_ = false; bool silent_ = false;
@@ -180,11 +171,7 @@ public:
partition_severities() const; partition_severities() const;
void void
write( write(beast::severities::Severity level, std::string const& partition, std::string const& text, bool console);
beast::severities::Severity level,
std::string const& partition,
std::string const& text,
bool console);
std::string std::string
rotate(); rotate();
@@ -201,9 +188,7 @@ public:
} }
virtual std::unique_ptr<beast::Journal::Sink> virtual std::unique_ptr<beast::Journal::Sink>
makeSink( makeSink(std::string const& partition, beast::severities::Severity startingLevel);
std::string const& partition,
beast::severities::Severity startingLevel);
public: public:
static LogSeverity static LogSeverity

View File

@@ -1,8 +1,12 @@
#ifndef XRPL_BASICS_NUMBER_H_INCLUDED #ifndef XRPL_BASICS_NUMBER_H_INCLUDED
#define XRPL_BASICS_NUMBER_H_INCLUDED #define XRPL_BASICS_NUMBER_H_INCLUDED
#include <xrpl/beast/utility/instrumentation.h>
#include <cstdint> #include <cstdint>
#include <functional>
#include <limits> #include <limits>
#include <optional>
#include <ostream> #include <ostream>
#include <string> #include <string>
@@ -13,42 +17,237 @@ class Number;
std::string std::string
to_string(Number const& amount); to_string(Number const& amount);
template <typename T>
constexpr std::optional<int>
logTen(T value)
{
int log = 0;
while (value >= 10 && value % 10 == 0)
{
value /= 10;
++log;
}
if (value == 1)
return log;
return std::nullopt;
}
template <typename T> template <typename T>
constexpr bool constexpr bool
isPowerOfTen(T value) isPowerOfTen(T value)
{ {
while (value >= 10 && value % 10 == 0) return logTen(value).has_value();
value /= 10;
return value == 1;
} }
/** MantissaRange defines a range for the mantissa of a normalized Number.
*
* The mantissa is in the range [min, max], where
* * min is a power of 10, and
* * max = min * 10 - 1.
*
* The mantissa_scale enum indicates whether the range is "small" or "large".
* This intentionally restricts the number of MantissaRanges that can be
* instantiated to two: one for each scale.
*
* The "small" scale is based on the behavior of STAmount for IOUs. It has a min
* value of 10^15, and a max value of 10^16-1. This was sufficient for
* uses before Lending Protocol was implemented, mostly related to AMM.
*
* However, it does not have sufficient precision to represent the full integer
* range of int64_t values (-2^63 to 2^63-1), which are needed for XRP and MPT
* values. The implementation of SingleAssetVault, and LendingProtocol need to
* represent those integer values accurately and precisely, both for the
* STNumber field type, and for internal calculations. That necessitated the
* "large" scale.
*
* The "large" scale is intended to represent all values that can be represented
* by an STAmount - IOUs, XRP, and MPTs. It has a min value of 10^18, and a max
* value of 10^19-1.
*
* Note that if the mentioned amendments are eventually retired, this class
* should be left in place, but the "small" scale option should be removed. This
* will allow for future expansion beyond 64-bits if it is ever needed.
*/
struct MantissaRange
{
using rep = std::uint64_t;
enum mantissa_scale { small, large };
explicit constexpr MantissaRange(mantissa_scale scale_)
: min(getMin(scale_)), max(min * 10 - 1), log(logTen(min).value_or(-1)), scale(scale_)
{
}
rep min;
rep max;
int log;
mantissa_scale scale;
private:
static constexpr rep
getMin(mantissa_scale scale_)
{
switch (scale_)
{
case small:
return 1'000'000'000'000'000ULL;
case large:
return 1'000'000'000'000'000'000ULL;
default:
            // Since this can never be called outside a constexpr
// context, this throw assures that the build fails if an
// invalid scale is used.
throw std::runtime_error("Unknown mantissa scale");
}
}
};
// Like std::integral, but only 64-bit integral types.
template <class T>
concept Integral64 = std::is_same_v<T, std::int64_t> || std::is_same_v<T, std::uint64_t>;
/** Number is a floating point type that can represent a wide range of values.
*
* It can represent all values that can be represented by an STAmount -
* regardless of asset type - XRPAmount, MPTAmount, and IOUAmount, with at least
* as much precision as those types require.
*
* ---- Internal Representation ----
*
* Internally, Number is represented with three values:
* 1. a bool sign flag,
* 2. a std::uint64_t mantissa,
* 3. an int exponent.
*
* The internal mantissa is an unsigned integer in the range defined by the
* current MantissaRange. The exponent is an integer in the range
* [minExponent, maxExponent].
*
* See the description of MantissaRange for more details on the ranges.
*
* A non-zero mantissa is (almost) always normalized, meaning it and the
* exponent are grown or shrunk until the mantissa is in the range
* [MantissaRange.min, MantissaRange.max].
*
* Note:
* 1. Normalization can be disabled by using the "unchecked" ctor tag. This
* should only be used at specific conversion points, some constexpr
* values, and in unit tests.
* 2. The max of the "large" range, 10^19-1, is the largest 10^X-1 value that
* fits in an unsigned 64-bit number. (10^19-1 < 2^64-1 and
* 10^20-1 > 2^64-1). This avoids under- and overflows.
*
* ---- External Interface ----
*
* The external interface of Number consists of a std::int64_t mantissa, which
* is restricted to 63-bits, and an int exponent, which must be in the range
* [minExponent, maxExponent]. The range of the mantissa depends on which
 * MantissaRange is currently active. For the "small" range, the mantissa will
* be between 10^15 and 10^16-1. For the "large" range, the mantissa will be
* between -(2^63-1) and 2^63-1. As noted above, the "large" range is needed to
* represent the full range of valid XRP and MPT integer values accurately.
*
* Note:
* 1. 2^63-1 is between 10^18 and 10^19-1, which are the limits of the "large"
* mantissa range.
* 2. The functions mantissa() and exponent() return the external view of the
* Number value, specifically using a signed 63-bit mantissa. This may
* require altering the internal representation to fit into that range
* before the value is returned. The interface guarantees consistency of
* the two values.
* 3. Number cannot represent -2^63 (std::numeric_limits<std::int64_t>::min())
* as an exact integer, but it doesn't need to, because all asset values
* on-ledger are non-negative. This is due to implementation details of
* several operations which use unsigned arithmetic internally. This is
* sufficient to represent all valid XRP values (where the absolute value
* can not exceed INITIAL_XRP: 10^17), and MPT values (where the absolute
* value can not exceed maxMPTokenAmount: 2^63-1).
*
* ---- Mantissa Range Switching ----
*
* The mantissa range may be changed at runtime via setMantissaScale(). The
* default mantissa range is "large". The range is updated whenever transaction
* processing begins, based on whether SingleAssetVault or LendingProtocol are
* enabled. If either is enabled, the mantissa range is set to "large". If not,
* it is set to "small", preserving backward compatibility and correct
* "amendment-gating".
*
* It is extremely unlikely that any more calls to setMantissaScale() will be
* needed outside of unit tests.
*
* ---- Usage With Different Ranges ----
*
* Outside of unit tests, and existing checks, code that uses Number should not
* know or care which mantissa range is active.
*
* The results of computations using Numbers with a small mantissa may differ
* from computations using Numbers with a large mantissa, specifically as it
 * affects the results after rounding. That is why the large mantissa range is
* amendment gated in transaction processing.
*
* It is extremely unlikely that any more calls to getMantissaScale() will be
* needed outside of unit tests.
*
* Code that uses Number should not assume or check anything about the
* mantissa() or exponent() except that they fit into the "large" range
* specified in the "External Interface" section.
*
* ----- Unit Tests -----
*
* Within unit tests, it may be useful to explicitly switch between the two
* ranges, or to check which range is active when checking the results of
* computations. If the test is doing the math directly, the
* set/getMantissaScale() functions may be most appropriate. However, if the
* test has anything to do with transaction processing, it should enable or
* disable the amendments that control the mantissa range choice
* (SingleAssetVault and LendingProtocol), and/or check if either of those
* amendments are enabled to determine which result to expect.
*
*/
class Number class Number
{ {
using rep = std::int64_t; using rep = std::int64_t;
rep mantissa_{0}; using internalrep = MantissaRange::rep;
bool negative_{false};
internalrep mantissa_{0};
int exponent_{std::numeric_limits<int>::lowest()}; int exponent_{std::numeric_limits<int>::lowest()};
public: public:
// The range for the mantissa when normalized
constexpr static std::int64_t minMantissa = 1'000'000'000'000'000LL;
static_assert(isPowerOfTen(minMantissa));
constexpr static std::int64_t maxMantissa = minMantissa * 10 - 1;
static_assert(maxMantissa == 9'999'999'999'999'999LL);
// The range for the exponent when normalized // The range for the exponent when normalized
constexpr static int minExponent = -32768; constexpr static int minExponent = -32768;
constexpr static int maxExponent = 32768; constexpr static int maxExponent = 32768;
constexpr static internalrep maxRep = std::numeric_limits<rep>::max();
static_assert(maxRep == 9'223'372'036'854'775'807);
static_assert(-maxRep == std::numeric_limits<rep>::min() + 1);
// May need to make unchecked private
struct unchecked struct unchecked
{ {
explicit unchecked() = default; explicit unchecked() = default;
}; };
// Like unchecked, normalized is used with the ctors that take an
// internalrep mantissa. Unlike unchecked, those ctors will normalize the
// value.
// Only unit tests are expected to use this class
struct normalized
{
explicit normalized() = default;
};
explicit constexpr Number() = default; explicit constexpr Number() = default;
Number(rep mantissa); Number(rep mantissa);
explicit Number(rep mantissa, int exponent); explicit Number(rep mantissa, int exponent);
explicit constexpr Number(rep mantissa, int exponent, unchecked) noexcept; explicit constexpr Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept;
// Assume unsigned values are... unsigned. i.e. positive
explicit constexpr Number(internalrep mantissa, int exponent, unchecked) noexcept;
// Only unit tests are expected to use this ctor
explicit Number(bool negative, internalrep mantissa, int exponent, normalized);
// Assume unsigned values are... unsigned. i.e. positive
explicit Number(internalrep mantissa, int exponent, normalized);
constexpr rep constexpr rep
mantissa() const noexcept; mantissa() const noexcept;
@@ -78,11 +277,11 @@ public:
Number& Number&
operator/=(Number const& x); operator/=(Number const& x);
static constexpr Number static Number
min() noexcept; min() noexcept;
static constexpr Number static Number
max() noexcept; max() noexcept;
static constexpr Number static Number
lowest() noexcept; lowest() noexcept;
/** Conversions to Number are implicit and conversions away from Number /** Conversions to Number are implicit and conversions away from Number
@@ -96,7 +295,7 @@ public:
friend constexpr bool friend constexpr bool
operator==(Number const& x, Number const& y) noexcept operator==(Number const& x, Number const& y) noexcept
{ {
return x.mantissa_ == y.mantissa_ && x.exponent_ == y.exponent_; return x.negative_ == y.negative_ && x.mantissa_ == y.mantissa_ && x.exponent_ == y.exponent_;
} }
friend constexpr bool friend constexpr bool
@@ -110,8 +309,8 @@ public:
{ {
// If the two amounts have different signs (zero is treated as positive) // If the two amounts have different signs (zero is treated as positive)
// then the comparison is true iff the left is negative. // then the comparison is true iff the left is negative.
bool const lneg = x.mantissa_ < 0; bool const lneg = x.negative_;
bool const rneg = y.mantissa_ < 0; bool const rneg = y.negative_;
if (lneg != rneg) if (lneg != rneg)
return lneg; return lneg;
@@ -139,7 +338,7 @@ public:
constexpr int constexpr int
signum() const noexcept signum() const noexcept
{ {
return (mantissa_ < 0) ? -1 : (mantissa_ ? 1 : 0); return negative_ ? -1 : (mantissa_ ? 1 : 0);
} }
Number Number
@@ -169,6 +368,15 @@ public:
return os << to_string(x); return os << to_string(x);
} }
friend std::string
to_string(Number const& amount);
friend Number
root(Number f, unsigned d);
friend Number
root2(Number f);
// Thread local rounding control. Default is to_nearest // Thread local rounding control. Default is to_nearest
enum rounding_mode { to_nearest, towards_zero, downward, upward }; enum rounding_mode { to_nearest, towards_zero, downward, upward };
static rounding_mode static rounding_mode
@@ -177,44 +385,194 @@ public:
static rounding_mode static rounding_mode
setround(rounding_mode mode); setround(rounding_mode mode);
/** Returns which mantissa scale is currently in use for normalization.
*
* If you think you need to call this outside of unit tests, no you don't.
*/
static MantissaRange::mantissa_scale
getMantissaScale();
/** Changes which mantissa scale is used for normalization.
*
* If you think you need to call this outside of unit tests, no you don't.
*/
static void
setMantissaScale(MantissaRange::mantissa_scale scale);
inline static internalrep
minMantissa()
{
return range_.get().min;
}
inline static internalrep
maxMantissa()
{
return range_.get().max;
}
inline static int
mantissaLog()
{
return range_.get().log;
}
/// oneSmall is needed because the ranges are private
constexpr static Number
oneSmall();
/// oneLarge is needed because the ranges are private
constexpr static Number
oneLarge();
// And one is needed because it needs to choose between oneSmall and
// oneLarge based on the current range
static Number
one();
template <Integral64 T>
[[nodiscard]]
std::pair<T, int>
normalizeToRange(T minMantissa, T maxMantissa) const;
private: private:
static thread_local rounding_mode mode_; static thread_local rounding_mode mode_;
// The available ranges for mantissa
constexpr static MantissaRange smallRange{MantissaRange::small};
static_assert(isPowerOfTen(smallRange.min));
static_assert(smallRange.min == 1'000'000'000'000'000LL);
static_assert(smallRange.max == 9'999'999'999'999'999LL);
static_assert(smallRange.log == 15);
static_assert(smallRange.min < maxRep);
static_assert(smallRange.max < maxRep);
constexpr static MantissaRange largeRange{MantissaRange::large};
static_assert(isPowerOfTen(largeRange.min));
static_assert(largeRange.min == 1'000'000'000'000'000'000ULL);
static_assert(largeRange.max == internalrep(9'999'999'999'999'999'999ULL));
static_assert(largeRange.log == 18);
static_assert(largeRange.min < maxRep);
static_assert(largeRange.max > maxRep);
// The range for the mantissa when normalized.
// Use reference_wrapper to avoid making copies, and prevent accidentally
// changing the values inside the range.
static thread_local std::reference_wrapper<MantissaRange const> range_;
void void
normalize(); normalize();
constexpr bool
/** Normalize Number components to an arbitrary range.
*
* min/maxMantissa are parameters because this function is used by both
* normalize(), which reads from range_, and by normalizeToRange,
* which is public and can accept an arbitrary range from the caller.
*/
template <class T>
static void
normalize(
bool& negative,
T& mantissa,
int& exponent,
internalrep const& minMantissa,
internalrep const& maxMantissa);
template <class T>
friend void
doNormalize(
bool& negative,
T& mantissa_,
int& exponent_,
MantissaRange::rep const& minMantissa,
MantissaRange::rep const& maxMantissa);
bool
isnormal() const noexcept; isnormal() const noexcept;
// Copy the number, but modify the exponent by "exponentDelta". Because the
// mantissa doesn't change, the result will be "mostly" normalized, but the
// exponent could go out of range, so it will be checked.
Number
shiftExponent(int exponentDelta) const;
// Safely convert rep (int64) mantissa to internalrep (uint64). If the rep
// is negative, returns the positive value. This takes a little extra work
// because converting std::numeric_limits<std::int64_t>::min() flirts with
// UB, and can vary across compilers.
static internalrep
externalToInternal(rep mantissa);
class Guard; class Guard;
}; };
inline constexpr Number::Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept
: negative_(negative), mantissa_{mantissa}, exponent_{exponent}
{
}
inline constexpr Number::Number(internalrep mantissa, int exponent, unchecked) noexcept
: Number(false, mantissa, exponent, unchecked{})
{
}
constexpr static Number numZero{}; constexpr static Number numZero{};
inline constexpr Number::Number(rep mantissa, int exponent, unchecked) noexcept inline Number::Number(bool negative, internalrep mantissa, int exponent, normalized)
: mantissa_{mantissa}, exponent_{exponent} : Number(negative, mantissa, exponent, unchecked{})
{
normalize();
}
inline Number::Number(internalrep mantissa, int exponent, normalized) : Number(false, mantissa, exponent, normalized{})
{ {
} }
inline Number::Number(rep mantissa, int exponent) inline Number::Number(rep mantissa, int exponent)
: mantissa_{mantissa}, exponent_{exponent} : Number(mantissa < 0, externalToInternal(mantissa), exponent, normalized{})
{ {
normalize();
} }
inline Number::Number(rep mantissa) : Number{mantissa, 0} inline Number::Number(rep mantissa) : Number{mantissa, 0}
{ {
} }
/** Returns the mantissa of the external view of the Number.
*
* Please see the "---- External Interface ----" section of the class
* documentation for an explanation of why the internal value may be modified.
*/
inline constexpr Number::rep inline constexpr Number::rep
Number::mantissa() const noexcept Number::mantissa() const noexcept
{ {
return mantissa_; auto m = mantissa_;
if (m > maxRep)
{
XRPL_ASSERT_PARTS(
!isnormal() || (m % 10 == 0 && m / 10 <= maxRep),
"xrpl::Number::mantissa",
"large normalized mantissa has no remainder");
m /= 10;
}
auto const sign = negative_ ? -1 : 1;
return sign * static_cast<Number::rep>(m);
} }
/** Returns the exponent of the external view of the Number.
*
* Please see the "---- External Interface ----" section of the class
* documentation for an explanation of why the internal value may be modified.
*/
inline constexpr int inline constexpr int
Number::exponent() const noexcept Number::exponent() const noexcept
{ {
return exponent_; auto e = exponent_;
if (mantissa_ > maxRep)
{
XRPL_ASSERT_PARTS(
!isnormal() || (mantissa_ % 10 == 0 && mantissa_ / 10 <= maxRep),
"xrpl::Number::exponent",
"large normalized mantissa has no remainder");
++e;
}
return e;
} }
inline constexpr Number inline constexpr Number
@@ -226,15 +584,17 @@ Number::operator+() const noexcept
inline constexpr Number inline constexpr Number
Number::operator-() const noexcept Number::operator-() const noexcept
{ {
if (mantissa_ == 0)
return Number{};
auto x = *this; auto x = *this;
x.mantissa_ = -x.mantissa_; x.negative_ = !x.negative_;
return x; return x;
} }
inline Number& inline Number&
Number::operator++() Number::operator++()
{ {
*this += Number{1000000000000000, -15, unchecked{}}; *this += one();
return *this; return *this;
} }
@@ -249,7 +609,7 @@ Number::operator++(int)
inline Number& inline Number&
Number::operator--() Number::operator--()
{ {
*this -= Number{1000000000000000, -15, unchecked{}}; *this -= one();
return *this; return *this;
} }
@@ -299,30 +659,48 @@ operator/(Number const& x, Number const& y)
return z; return z;
} }
inline constexpr Number inline Number
Number::min() noexcept Number::min() noexcept
{ {
return Number{minMantissa, minExponent, unchecked{}}; return Number{false, range_.get().min, minExponent, unchecked{}};
} }
inline constexpr Number inline Number
Number::max() noexcept Number::max() noexcept
{ {
return Number{maxMantissa, maxExponent, unchecked{}}; return Number{false, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
} }
inline constexpr Number inline Number
Number::lowest() noexcept Number::lowest() noexcept
{ {
return -Number{maxMantissa, maxExponent, unchecked{}}; return Number{true, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
} }
inline constexpr bool inline bool
Number::isnormal() const noexcept Number::isnormal() const noexcept
{ {
auto const abs_m = mantissa_ < 0 ? -mantissa_ : mantissa_; MantissaRange const& range = range_;
return minMantissa <= abs_m && abs_m <= maxMantissa && auto const abs_m = mantissa_;
minExponent <= exponent_ && exponent_ <= maxExponent; return *this == Number{} ||
(range.min <= abs_m && abs_m <= range.max && (abs_m <= maxRep || abs_m % 10 == 0) && minExponent <= exponent_ &&
exponent_ <= maxExponent);
}
template <Integral64 T>
std::pair<T, int>
Number::normalizeToRange(T minMantissa, T maxMantissa) const
{
bool negative = negative_;
internalrep mantissa = mantissa_;
int exponent = exponent_;
if constexpr (std::is_unsigned_v<T>)
XRPL_ASSERT_PARTS(!negative, "xrpl::Number::normalizeToRange", "Number is non-negative for unsigned range.");
Number::normalize(negative, mantissa, exponent, minMantissa, maxMantissa);
auto const sign = negative ? -1 : 1;
return std::make_pair(static_cast<T>(sign * mantissa), exponent);
} }
inline constexpr Number inline constexpr Number
@@ -364,6 +742,20 @@ squelch(Number const& x, Number const& limit) noexcept
return x; return x;
} }
inline std::string
to_string(MantissaRange::mantissa_scale const& scale)
{
switch (scale)
{
case MantissaRange::small:
return "small";
case MantissaRange::large:
return "large";
default:
throw std::runtime_error("Bad scale");
}
}
class saveNumberRoundMode class saveNumberRoundMode
{ {
Number::rounding_mode mode_; Number::rounding_mode mode_;
@@ -373,8 +765,7 @@ public:
{ {
Number::setround(mode_); Number::setround(mode_);
} }
explicit saveNumberRoundMode(Number::rounding_mode mode) noexcept explicit saveNumberRoundMode(Number::rounding_mode mode) noexcept : mode_{mode}
: mode_{mode}
{ {
} }
saveNumberRoundMode(saveNumberRoundMode const&) = delete; saveNumberRoundMode(saveNumberRoundMode const&) = delete;
@@ -391,8 +782,7 @@ class NumberRoundModeGuard
saveNumberRoundMode saved_; saveNumberRoundMode saved_;
public: public:
explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept : saved_{Number::setround(mode)}
: saved_{Number::setround(mode)}
{ {
} }
@@ -402,6 +792,32 @@ public:
operator=(NumberRoundModeGuard const&) = delete; operator=(NumberRoundModeGuard const&) = delete;
}; };
/** Sets the new scale and restores the old scale when it leaves scope.
*
* If you think you need to use this class outside of unit tests, no you don't.
*
*/
class NumberMantissaScaleGuard
{
MantissaRange::mantissa_scale const saved_;
public:
explicit NumberMantissaScaleGuard(MantissaRange::mantissa_scale scale) noexcept : saved_{Number::getMantissaScale()}
{
Number::setMantissaScale(scale);
}
~NumberMantissaScaleGuard()
{
Number::setMantissaScale(saved_);
}
NumberMantissaScaleGuard(NumberMantissaScaleGuard const&) = delete;
NumberMantissaScaleGuard&
operator=(NumberMantissaScaleGuard const&) = delete;
};
} // namespace xrpl } // namespace xrpl
#endif // XRPL_BASICS_NUMBER_H_INCLUDED #endif // XRPL_BASICS_NUMBER_H_INCLUDED

View File

@@ -11,8 +11,7 @@ namespace xrpl {
class Resolver class Resolver
{ {
public: public:
using HandlerType = using HandlerType = std::function<void(std::string, std::vector<beast::IP::Endpoint>)>;
std::function<void(std::string, std::vector<beast::IP::Endpoint>)>;
virtual ~Resolver() = 0; virtual ~Resolver() = 0;
@@ -41,9 +40,7 @@ public:
} }
virtual void virtual void
resolve( resolve(std::vector<std::string> const& names, HandlerType const& handler) = 0;
std::vector<std::string> const& names,
HandlerType const& handler) = 0;
/** @} */ /** @} */
}; };

View File

@@ -5,34 +5,28 @@
namespace xrpl { namespace xrpl {
template <class T> template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer( SharedWeakCachePointer<T>::SharedWeakCachePointer(SharedWeakCachePointer const& rhs) = default;
SharedWeakCachePointer const& rhs) = default;
template <class T> template <class T>
template <class TT> template <class TT>
requires std::convertible_to<TT*, T*> requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer( SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT> const& rhs) : combo_{rhs}
std::shared_ptr<TT> const& rhs)
: combo_{rhs}
{ {
} }
template <class T> template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer( SharedWeakCachePointer<T>::SharedWeakCachePointer(SharedWeakCachePointer&& rhs) = default;
SharedWeakCachePointer&& rhs) = default;
template <class T> template <class T>
template <class TT> template <class TT>
requires std::convertible_to<TT*, T*> requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs) SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs) : combo_{std::move(rhs)}
: combo_{std::move(rhs)}
{ {
} }
template <class T> template <class T>
SharedWeakCachePointer<T>& SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) = SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) = default;
default;
template <class T> template <class T>
template <class TT> template <class TT>

View File

@@ -51,11 +51,7 @@ class SlabAllocator
// The extent of the underlying memory block: // The extent of the underlying memory block:
std::size_t const size_; std::size_t const size_;
SlabBlock( SlabBlock(SlabBlock* next, std::uint8_t* data, std::size_t size, std::size_t item)
SlabBlock* next,
std::uint8_t* data,
std::size_t size,
std::size_t item)
: next_(next), p_(data), size_(size) : next_(next), p_(data), size_(size)
{ {
// We don't need to grab the mutex here, since we're the only // We don't need to grab the mutex here, since we're the only
@@ -126,9 +122,7 @@ class SlabAllocator
void void
deallocate(std::uint8_t* ptr) noexcept deallocate(std::uint8_t* ptr) noexcept
{ {
XRPL_ASSERT( XRPL_ASSERT(own(ptr), "xrpl::SlabAllocator::SlabBlock::deallocate : own input");
own(ptr),
"xrpl::SlabAllocator::SlabBlock::deallocate : own input");
std::lock_guard l(m_); std::lock_guard l(m_);
@@ -162,18 +156,13 @@ public:
contexts (e.g. when minimal memory usage is needed) and contexts (e.g. when minimal memory usage is needed) and
allows for graceful failure. allows for graceful failure.
*/ */
constexpr explicit SlabAllocator( constexpr explicit SlabAllocator(std::size_t extra, std::size_t alloc = 0, std::size_t align = 0)
std::size_t extra,
std::size_t alloc = 0,
std::size_t align = 0)
: itemAlignment_(align ? align : alignof(Type)) : itemAlignment_(align ? align : alignof(Type))
, itemSize_( , itemSize_(boost::alignment::align_up(sizeof(Type) + extra, itemAlignment_))
boost::alignment::align_up(sizeof(Type) + extra, itemAlignment_))
, slabSize_(alloc) , slabSize_(alloc)
{ {
XRPL_ASSERT( XRPL_ASSERT(
(itemAlignment_ & (itemAlignment_ - 1)) == 0, (itemAlignment_ & (itemAlignment_ - 1)) == 0, "xrpl::SlabAllocator::SlabAllocator : valid alignment");
"xrpl::SlabAllocator::SlabAllocator : valid alignment");
} }
SlabAllocator(SlabAllocator const& other) = delete; SlabAllocator(SlabAllocator const& other) = delete;
@@ -222,8 +211,7 @@ public:
// We want to allocate the memory at a 2 MiB boundary, to make it // We want to allocate the memory at a 2 MiB boundary, to make it
// possible to use hugepage mappings on Linux: // possible to use hugepage mappings on Linux:
auto buf = auto buf = boost::alignment::aligned_alloc(megabytes(std::size_t(2)), size);
boost::alignment::aligned_alloc(megabytes(std::size_t(2)), size);
// clang-format off // clang-format off
if (!buf) [[unlikely]] if (!buf) [[unlikely]]
@@ -241,31 +229,21 @@ public:
// We need to carve out a bit of memory for the slab header // We need to carve out a bit of memory for the slab header
// and then align the rest appropriately: // and then align the rest appropriately:
auto slabData = reinterpret_cast<void*>( auto slabData = reinterpret_cast<void*>(reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
auto slabSize = size - sizeof(SlabBlock); auto slabSize = size - sizeof(SlabBlock);
// This operation is essentially guaranteed not to fail but // This operation is essentially guaranteed not to fail but
// let's be careful anyways. // let's be careful anyways.
if (!boost::alignment::align( if (!boost::alignment::align(itemAlignment_, itemSize_, slabData, slabSize))
itemAlignment_, itemSize_, slabData, slabSize))
{ {
boost::alignment::aligned_free(buf); boost::alignment::aligned_free(buf);
return nullptr; return nullptr;
} }
slab = new (buf) SlabBlock( slab = new (buf) SlabBlock(slabs_.load(), reinterpret_cast<std::uint8_t*>(slabData), slabSize, itemSize_);
slabs_.load(),
reinterpret_cast<std::uint8_t*>(slabData),
slabSize,
itemSize_);
// Link the new slab // Link the new slab
while (!slabs_.compare_exchange_weak( while (!slabs_.compare_exchange_weak(slab->next_, slab, std::memory_order_release, std::memory_order_relaxed))
slab->next_,
slab,
std::memory_order_release,
std::memory_order_relaxed))
{ {
; // Nothing to do ; // Nothing to do
} }
@@ -322,10 +300,7 @@ public:
std::size_t align; std::size_t align;
public: public:
constexpr SlabConfig( constexpr SlabConfig(std::size_t extra_, std::size_t alloc_ = 0, std::size_t align_ = alignof(Type))
std::size_t extra_,
std::size_t alloc_ = 0,
std::size_t align_ = alignof(Type))
: extra(extra_), alloc(alloc_), align(align_) : extra(extra_), alloc(alloc_), align(align_)
{ {
} }
@@ -336,23 +311,14 @@ public:
// Ensure that the specified allocators are sorted from smallest to // Ensure that the specified allocators are sorted from smallest to
// largest by size: // largest by size:
std::sort( std::sort(
std::begin(cfg), std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) { return a.extra < b.extra; });
std::end(cfg),
[](SlabConfig const& a, SlabConfig const& b) {
return a.extra < b.extra;
});
// We should never have two slabs of the same size // We should never have two slabs of the same size
if (std::adjacent_find( if (std::adjacent_find(std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) {
std::begin(cfg),
std::end(cfg),
[](SlabConfig const& a, SlabConfig const& b) {
return a.extra == b.extra; return a.extra == b.extra;
}) != cfg.end()) }) != cfg.end())
{ {
throw std::runtime_error( throw std::runtime_error("SlabAllocatorSet<" + beast::type_name<Type>() + ">: duplicate slab size");
"SlabAllocatorSet<" + beast::type_name<Type>() +
">: duplicate slab size");
} }
for (auto const& c : cfg) for (auto const& c : cfg)

View File

@@ -41,8 +41,7 @@ public:
operator=(Slice const&) noexcept = default;
/** Create a slice pointing to existing memory. */
Slice(void const* data, std::size_t size) noexcept : data_(reinterpret_cast<std::uint8_t const*>(data)), size_(size)
{
}
@@ -85,9 +84,7 @@ public:
std::uint8_t
operator[](std::size_t i) const noexcept
{
XRPL_ASSERT(i < size_, "xrpl::Slice::operator[](std::size_t) const : valid input");
return data_[i];
}
@@ -152,8 +149,8 @@ public:
/** Return a "sub slice" of given length starting at the given position
Note that the subslice encompasses the range [pos, pos + rCount),
where rCount is the smaller of count and size() - pos.
@param pos position of the first character
@count requested length
@@ -162,9 +159,7 @@ public:
@throws std::out_of_range if pos > size()
*/
Slice
substr(std::size_t pos, std::size_t count = std::numeric_limits<std::size_t>::max()) const
{
if (pos > size())
throw std::out_of_range("Requested sub-slice is out of bounds");
@@ -203,11 +198,7 @@ operator!=(Slice const& lhs, Slice const& rhs) noexcept
inline bool
operator<(Slice const& lhs, Slice const& rhs) noexcept
{
return std::lexicographical_compare(lhs.data(), lhs.data() + lhs.size(), rhs.data(), rhs.data() + rhs.size());
}
template <class Stream>
@@ -219,18 +210,14 @@ operator<<(Stream& s, Slice const& v)
}
template <class T, std::size_t N>
std::enable_if_t<std::is_same<T, char>::value || std::is_same<T, unsigned char>::value, Slice>
makeSlice(std::array<T, N> const& a)
{
return Slice(a.data(), a.size());
}
template <class T, class Alloc>
std::enable_if_t<std::is_same<T, char>::value || std::is_same<T, unsigned char>::value, Slice>
makeSlice(std::vector<T, Alloc> const& v)
{
return Slice(v.data(), v.size());
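A rough usage sketch of the Slice interface shown above, relying only on the declarations visible in this hunk. The header path and the clamping behavior of substr (from the rCount comment) are assumptions, and this is not taken from the repository's tests:

#include <stdexcept>
#include <vector>
// Assumed include path for the header diffed above.
// #include <xrpl/basics/Slice.h>

void
sliceExample(std::vector<unsigned char> const& buf)
{
    auto s = xrpl::makeSlice(buf);   // wraps buf.data()/buf.size()
    auto head = s.substr(0, 4);      // first four bytes, or fewer if buf is smaller
    (void)head;
    try
    {
        s.substr(s.size() + 1);      // pos > size() throws std::out_of_range
    }
    catch (std::out_of_range const&)
    {
        // expected for an out-of-bounds position
    }
}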

View File

@@ -31,7 +31,7 @@ template <class Iterator>
std::optional<Blob>
strUnHex(std::size_t strSize, Iterator begin, Iterator end)
{
static constexpr std::array<int, 256> const digitLookupTable = []() {
std::array<int, 256> t{};
for (auto& x : t)
@@ -57,7 +57,7 @@ strUnHex(std::size_t strSize, Iterator begin, Iterator end)
if (strSize & 1)
{
int c = digitLookupTable[*iter++];
if (c < 0)
return {};
@@ -67,12 +67,12 @@ strUnHex(std::size_t strSize, Iterator begin, Iterator end)
while (iter != end)
{
int cHigh = digitLookupTable[*iter++];
if (cHigh < 0)
return {};
int cLow = digitLookupTable[*iter++];
if (cLow < 0)
return {};
@@ -109,8 +109,7 @@ struct parsedURL
bool
operator==(parsedURL const& other) const
{
return scheme == other.scheme && domain == other.domain && port == other.port && path == other.path;
}
};
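The digitLookupTable above is built once, at compile time, by an immediately-invoked constexpr lambda. A cut-down standalone sketch of that table-building pattern (values follow ordinary hex semantics; this is not the library's exact table):

#include <array>

// Every byte maps to its hex value, or -1 when it is not a hex digit.
static constexpr std::array<int, 256> hexValue = []() {
    std::array<int, 256> t{};
    for (auto& x : t)
        x = -1;
    for (int c = '0'; c <= '9'; ++c)
        t[c] = c - '0';
    for (int c = 'a'; c <= 'f'; ++c)
        t[c] = c - 'a' + 10;
    for (int c = 'A'; c <= 'F'; ++c)
        t[c] = c - 'A' + 10;
    return t;
}();

static_assert(hexValue['7'] == 7 && hexValue['b'] == 11 && hexValue['G'] == -1);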

View File

@@ -56,8 +56,7 @@ public:
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector = beast::insight::NullCollector::New());
public:
/** Return the clock associated with the cache. */
@@ -114,15 +113,10 @@ public:
*/
template <class R>
bool
canonicalize(key_type const& key, SharedPointerType& data, R&& replaceCallback);
bool
canonicalize_replace_cache(key_type const& key, SharedPointerType const& data);
bool
canonicalize_replace_client(key_type const& key, SharedPointerType& data);
@@ -136,8 +130,7 @@ public:
*/
template <class ReturnType = bool>
auto
insert(key_type const& key, T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>;
template <class ReturnType = bool>
auto
@@ -183,10 +176,7 @@ private:
struct Stats
{
template <class Handler>
Stats(std::string const& prefix, Handler const& handler, beast::insight::Collector::ptr const& collector)
: hook(collector->make_hook(handler))
, size(collector->make_gauge(prefix, "size"))
, hit_rate(collector->make_gauge(prefix, "hit_rate"))
@@ -208,8 +198,7 @@ private:
public:
clock_type::time_point last_access;
explicit KeyOnlyEntry(clock_type::time_point const& last_access_) : last_access(last_access_)
{
}
@@ -226,9 +215,7 @@ private:
shared_weak_combo_pointer_type ptr;
clock_type::time_point last_access;
ValueEntry(clock_type::time_point const& last_access_, shared_pointer_type const& ptr_)
: ptr(ptr_), last_access(last_access_)
{
}
@@ -262,18 +249,13 @@ private:
}
};
typedef typename std::conditional<IsKeyCache, KeyOnlyEntry, ValueEntry>::type Entry;
using KeyOnlyCacheType = hardened_partitioned_hash_map<key_type, KeyOnlyEntry, Hash, KeyEqual>;
using KeyValueCacheType = hardened_partitioned_hash_map<key_type, ValueEntry, Hash, KeyEqual>;
using cache_type = hardened_partitioned_hash_map<key_type, Entry, Hash, KeyEqual>;
[[nodiscard]] std::thread
sweepHelper(

View File

@@ -15,16 +15,7 @@ template <
class Hash,
class KeyEqual,
class Mutex>
inline TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::TaggedCache(
std::string const& name,
int size,
clock_type::duration expiration,
@@ -53,15 +44,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clock()
    -> clock_type&
{
return m_clock;
}
@@ -76,15 +60,7 @@ template <
class KeyEqual,
class Mutex>
inline std::size_t
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -100,15 +76,7 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
@@ -124,15 +92,7 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -148,15 +108,7 @@ template <
class KeyEqual,
class Mutex>
inline float
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
@@ -173,15 +125,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clear()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -198,15 +142,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -226,15 +162,8 @@ template <
class Mutex>
template <class KeyComparable>
inline bool
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::touch_if_exists(
    KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
auto const iter(m_cache.find(key));
@@ -258,15 +187,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweep()
{
// Keep references to all the stuff we sweep
// For performance, each worker thread should exit before the swept data
@@ -280,8 +201,7 @@ TaggedCache<
{
std::lock_guard lock(m_mutex);
if (m_target_size == 0 || (static_cast<int>(m_cache.size()) <= m_target_size))
{
when_expire = now - m_target_age;
}
@@ -293,10 +213,8 @@ TaggedCache<
if (when_expire > (now - minimumAge))
when_expire = now - minimumAge;
JLOG(m_journal.trace()) << m_name << " is growing fast " << m_cache.size() << " of " << m_target_size
    << " aging at " << (now - when_expire).count() << " of " << m_target_age.count();
}
std::vector<std::thread> workers;
@@ -305,13 +223,7 @@ TaggedCache<
for (std::size_t p = 0; p < m_cache.partitions(); ++p)
{
workers.push_back(sweepHelper(when_expire, now, m_cache.map()[p], allStuffToSweep[p], allRemovals, lock));
}
for (std::thread& worker : workers)
worker.join();
@@ -322,9 +234,7 @@ TaggedCache<
// and decrement the reference count on each strong pointer.
JLOG(m_journal.debug())
<< m_name << " TaggedCache sweep lock duration "
<< std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - start).count()
<< "ms";
}
@@ -338,15 +248,9 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::del(
    key_type const& key,
    bool valid)
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
@@ -385,16 +289,7 @@ template <
class Mutex>
template <class R>
inline bool
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback)
@@ -408,9 +303,7 @@ TaggedCache<
if (cit == m_cache.end())
{
m_cache.emplace(
    std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
return false;
}
@@ -480,21 +373,10 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
    canonicalize_replace_cache(key_type const& key, SharedPointerType const& data)
{
return canonicalize(key, const_cast<SharedPointerType&>(data), []() { return true; });
}
template <
@@ -507,15 +389,7 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
    canonicalize_replace_client(key_type const& key, SharedPointerType& data)
{
return canonicalize(key, data, []() { return false; });
@@ -531,15 +405,8 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
    key_type const& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
@@ -559,16 +426,9 @@ template <
class Mutex>
template <class ReturnType>
inline auto
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
    key_type const& key,
    T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>
{
static_assert(
std::is_same_v<std::shared_ptr<T>, SharedPointerType> ||
@@ -597,23 +457,13 @@ template <
class Mutex>
template <class ReturnType>
inline auto
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
    key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
auto [it, inserted] =
    m_cache.emplace(std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(now));
if (!inserted)
it->second.last_access = now;
return inserted;
@@ -629,15 +479,9 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::retrieve(
    key_type const& key,
    T& data)
{
// retrieve the value of the stored data
auto entry = fetch(key);
@@ -659,15 +503,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::peekMutex()
    -> mutex_type&
{
return m_mutex;
}
@@ -682,15 +519,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getKeys() const
    -> std::vector<key_type>
{
std::vector<key_type> v;
@@ -714,15 +544,7 @@ template <
class KeyEqual,
class Mutex>
inline double
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
@@ -742,15 +564,9 @@ template <
class Mutex>
template <class Handler>
inline SharedPointerType
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
    key_type const& digest,
    Handler const& h)
{
{
std::lock_guard l(m_mutex);
@@ -764,8 +580,7 @@ TaggedCache<
std::lock_guard l(m_mutex);
++m_misses;
auto const [it, inserted] = m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
if (!inserted)
it->second.touch(m_clock.now());
return it->second.ptr.getStrong();
@@ -782,16 +597,9 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::initialFetch(
    key_type const& key,
    std::lock_guard<mutex_type> const& l)
{
auto cit = m_cache.find(key);
if (cit == m_cache.end())
@@ -827,15 +635,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::collect_metrics()
{
m_stats.size.set(getCacheSize());
@@ -861,16 +661,7 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
clock_type::time_point const& when_expire,
[[maybe_unused]] clock_type::time_point const& now,
typename KeyValueCacheType::map_type& partition,
@@ -930,10 +721,8 @@ TaggedCache<
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
    << "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
@@ -950,16 +739,7 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
clock_type::time_point const& when_expire,
clock_type::time_point const& now,
typename KeyOnlyCacheType::map_type& partition,
@@ -995,10 +775,8 @@ TaggedCache<
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
    << "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
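The canonicalize() contract visible above is "insert if absent, otherwise hand the caller the already-cached object", with the return value reporting which of the two happened. A simplified, single-threaded stand-in that illustrates the semantics only; it is not the thread-safe TaggedCache implementation:

#include <memory>
#include <unordered_map>

// If the key is absent, store the caller's pointer and return false (as the
// real canonicalize does after its emplace). If the key is present, replace
// the caller's pointer with the cached one and return true.
template <class Key, class T>
bool
canonicalizeSketch(std::unordered_map<Key, std::shared_ptr<T>>& cache, Key const& key, std::shared_ptr<T>& data)
{
    auto [it, inserted] = cache.try_emplace(key, data);
    if (inserted)
        return false;       // newly cached; caller keeps its own object
    data = it->second;      // already present; caller now shares the cached copy
    return true;
}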

View File

@@ -40,8 +40,7 @@ template <
class Hash = beast::uhash<>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hash_multimap = std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
template <
class Value,
@@ -75,8 +74,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hardened_partitioned_hash_map = partitioned_unordered_map<Key, Value, Hash, Pred, Allocator>;
template <
class Key,
@@ -84,8 +82,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hardened_hash_multimap = std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
template <
class Value,
@@ -99,8 +96,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Value>,
class Allocator = std::allocator<Value>>
using hardened_hash_multiset = std::unordered_multiset<Value, Hash, Pred, Allocator>;
} // namespace xrpl

View File

@@ -52,13 +52,7 @@ generalized_set_intersection(
// std::set_intersection.
template <class FwdIter1, class InputIter2, class Pred, class Comp>
FwdIter1
remove_if_intersect_or_match(FwdIter1 first1, FwdIter1 last1, InputIter2 first2, InputIter2 last2, Pred pred, Comp comp)
{
// [original-first1, current-first1) is the set of elements to be preserved.
// [current-first1, i) is the set of elements that have been removed.

View File

@@ -46,8 +46,7 @@ base64_encode(std::uint8_t const* data, std::size_t len);
inline std::string
base64_encode(std::string const& s)
{
return base64_encode(reinterpret_cast<std::uint8_t const*>(s.data()), s.size());
}
std::string
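The string overload above simply forwards the string's bytes to the pointer/length overload. A hypothetical call site; the xrpl namespace and the header path are assumptions, not shown in this hunk:

#include <string>
// Assumed include path: <xrpl/basics/base64.h>

std::string
encodeGreeting()
{
    std::string const payload = "hello world";
    // Forwards payload.data()/payload.size() to the pointer/length overload.
    return xrpl::base64_encode(payload);
}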

View File

@@ -65,13 +65,9 @@ struct is_contiguous_container<Slice> : std::true_type
template <std::size_t Bits, class Tag = void>
class base_uint
{
static_assert((Bits % 32) == 0, "The length of a base_uint in bits must be a multiple of 32.");
static_assert(Bits >= 64, "The length of a base_uint in bits must be at least 64.");
static constexpr std::size_t WIDTH = Bits / 32;
@@ -182,9 +178,7 @@ private:
{
// Local lambda that converts a single hex char to four bits and
// ORs those bits into a uint32_t.
auto hexCharToUInt = [](char c, std::uint32_t shift, std::uint32_t& accum) -> ParseResult {
std::uint32_t nibble = 0xFFu;
if (c < '0' || c > 'f')
return ParseResult::badChar;
@@ -221,8 +215,7 @@ private:
std::uint32_t accum = {};
for (std::uint32_t shift : {4u, 0u, 12u, 8u, 20u, 16u, 28u, 24u})
{
if (auto const result = hexCharToUInt(*in++, shift, accum); result != ParseResult::okay)
return Unexpected(result);
}
ret[i++] = accum;
@@ -261,8 +254,7 @@ public:
// This constructor is intended to be used at compile time since it might
// throw at runtime. Consider declaring this constructor consteval once
// we get to C++23.
explicit constexpr base_uint(std::string_view sv) noexcept(false) : data_(parseFromStringViewThrows(sv))
{
}
@@ -387,8 +379,7 @@ public:
// prefix operator
for (int i = WIDTH - 1; i >= 0; --i)
{
data_[i] = boost::endian::native_to_big(boost::endian::big_to_native(data_[i]) + 1);
if (data_[i] != 0)
break;
}
@@ -412,8 +403,7 @@ public:
for (int i = WIDTH - 1; i >= 0; --i)
{
auto prev = data_[i];
data_[i] = boost::endian::native_to_big(boost::endian::big_to_native(data_[i]) - 1);
if (prev != 0)
break;
@@ -453,11 +443,9 @@ public:
for (int i = WIDTH; i--;)
{
std::uint64_t n = carry + boost::endian::big_to_native(data_[i]) + boost::endian::big_to_native(b.data_[i]);
data_[i] = boost::endian::native_to_big(static_cast<std::uint32_t>(n));
carry = n >> 32;
}
@@ -557,8 +545,7 @@ operator<=>(base_uint<Bits, Tag> const& lhs, base_uint<Bits, Tag> const& rhs)
if (ret.first == lhs.cend())
return std::strong_ordering::equivalent;
return (*ret.first > *ret.second) ? std::strong_ordering::greater : std::strong_ordering::less;
}
template <std::size_t Bits, typename Tag>
@@ -617,9 +604,7 @@ template <std::size_t Bits, class Tag>
inline std::string
to_short_string(base_uint<Bits, Tag> const& a)
{
static_assert(base_uint<Bits, Tag>::bytes > 4, "For 4 bytes or less, use a native type");
return strHex(a.cbegin(), a.cbegin() + 4) + "...";
}
@@ -653,8 +638,7 @@ static_assert(sizeof(uint256) == 256 / 8, "There should be no padding bytes");
namespace beast {
template <std::size_t Bits, class Tag>
struct is_uniquely_represented<xrpl::base_uint<Bits, Tag>> : public std::true_type
{
explicit is_uniquely_represented() = default;
};
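The operator+= hunk above stores each 32-bit word in big-endian order and propagates the carry through a 64-bit accumulator. A standalone sketch of that add-with-carry step over native-order words (the real code wraps the arithmetic in big_to_native/native_to_big conversions):

#include <array>
#include <cstdint>

template <std::size_t WIDTH>
void
addWords(std::array<std::uint32_t, WIDTH>& a, std::array<std::uint32_t, WIDTH> const& b)
{
    std::uint64_t carry = 0;
    for (std::size_t i = WIDTH; i--;)
    {
        std::uint64_t const n = carry + a[i] + b[i];  // cannot overflow 64 bits
        a[i] = static_cast<std::uint32_t>(n);         // keep the low 32 bits in place
        carry = n >> 32;                              // high bits feed the next (more significant) word
    }
}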

View File

@@ -16,12 +16,9 @@ namespace xrpl {
// A few handy aliases
using days = std::chrono::duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
using weeks = std::chrono::duration<int, std::ratio_multiply<days::period, std::ratio<7>>>;
/** Clock for measuring the network time.
@@ -34,8 +31,7 @@ using weeks = std::chrono::
*/
constexpr static std::chrono::seconds epoch_offset =
    date::sys_days{date::year{2000} / 1 / 1} - date::sys_days{date::year{1970} / 1 / 1};
static_assert(epoch_offset.count() == 946684800);
@@ -64,8 +60,7 @@ to_string(NetClock::time_point tp)
{
// 2000-01-01 00:00:00 UTC is 946684800s from 1970-01-01 00:00:00 UTC
using namespace std::chrono;
return to_string(system_clock::time_point{tp.time_since_epoch() + epoch_offset});
}
template <class Duration>
@@ -82,8 +77,7 @@ to_string_iso(NetClock::time_point tp)
// 2000-01-01 00:00:00 UTC is 946684800s from 1970-01-01 00:00:00 UTC
// Note, NetClock::duration is seconds, as checked by static_assert
static_assert(std::is_same_v<NetClock::duration::period, std::ratio<1>>);
return to_string_iso(date::sys_time<NetClock::duration>{tp.time_since_epoch() + epoch_offset});
}
/** A clock for measuring elapsed time.
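epoch_offset above is the 946684800-second gap between the Unix epoch and the network's 2000-01-01 epoch, and the to_string helpers just add it to shift a NetClock count into system_clock space. A small arithmetic sketch using plain integers instead of the library's NetClock type:

#include <cstdint>

// A NetClock count (seconds since 2000-01-01) becomes a Unix timestamp by
// adding the 946684800 s offset.
constexpr std::int64_t
netToUnixSeconds(std::int64_t netClockSeconds)
{
    constexpr std::int64_t epochOffset = 946684800;  // 2000-01-01 minus 1970-01-01
    return netClockSeconds + epochOffset;
}

static_assert(netToUnixSeconds(0) == 946684800);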

View File

@@ -36,15 +36,10 @@ template <class E, class... Args>
[[noreturn]] inline void
Throw(Args&&... args)
{
static_assert(std::is_convertible<E*, std::exception*>::value, "Exception must derive from std::exception.");
E e(std::forward<Args>(args)...);
LogThrow(std::string("Throwing exception of type " + beast::type_name<E>() + ": ") + e.what());
throw e;
}
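A hedged call-site sketch for the Throw<E>() helper above: the static_assert restricts E to std::exception derivatives, and the throw site is logged before unwinding. The enclosing namespace and the chosen exception type are assumptions for illustration:

#include <stdexcept>
#include <string>

// Namespace assumed to be xrpl; any type derived from std::exception works,
// std::runtime_error is just an example.
void
requirePositive(int value)
{
    if (value <= 0)
        xrpl::Throw<std::runtime_error>("value must be positive, got " + std::to_string(value));
}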

View File

@@ -24,8 +24,7 @@ public:
Collection const& collection;
std::string const delimiter;
explicit CollectionAndDelimiter(Collection const& c, std::string delim) : collection(c), delimiter(std::move(delim))
{
}
@@ -33,11 +32,7 @@ public:
friend Stream&
operator<<(Stream& s, CollectionAndDelimiter const& cd)
{
return join(s, std::begin(cd.collection), std::end(cd.collection), cd.delimiter);
}
};
@@ -69,8 +64,7 @@ public:
char const* collection;
std::string const delimiter;
explicit CollectionAndDelimiter(char const c[N], std::string delim) : collection(c), delimiter(std::move(delim))
{
}

View File

@@ -51,8 +51,7 @@ public:
using const_reference = value_type const&;
using pointer = value_type*;
using const_pointer = value_type const*;
using map_type = std::unordered_map<key_type, mapped_type, hasher, key_equal, allocator_type>;
using partition_map_type = std::vector<map_type>;
struct iterator
@@ -113,8 +112,7 @@ public:
friend bool
operator==(iterator const& lhs, iterator const& rhs)
{
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ && lhs.mit_ == rhs.mit_;
}
friend bool
@@ -190,8 +188,7 @@ public:
friend bool
operator==(const_iterator const& lhs, const_iterator const& rhs)
{
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ && lhs.mit_ == rhs.mit_;
}
friend bool
@@ -231,14 +228,11 @@ private:
}
public:
partitioned_unordered_map(std::optional<std::size_t> partitions = std::nullopt)
{
// Set partitions to the number of hardware threads if the parameter
// is either empty or set to 0.
partitions_ = partitions && *partitions ? *partitions : std::thread::hardware_concurrency();
map_.resize(partitions_);
XRPL_ASSERT(
partitions_,
@@ -337,10 +331,8 @@ public:
auto const& key = std::get<0>(keyTuple);
iterator it(&map_);
it.ait_ = it.map_->begin() + partitioner(key);
auto [eit, inserted] =
    it.ait_->emplace(std::piecewise_construct, std::forward<T>(keyTuple), std::forward<U>(valueTuple));
it.mit_ = eit;
return {it, inserted};
}
@@ -351,8 +343,7 @@ public:
{
iterator it(&map_);
it.ait_ = it.map_->begin() + partitioner(key);
auto [eit, inserted] = it.ait_->emplace(std::forward<T>(key), std::forward<U>(val));
it.mit_ = eit;
return {it, inserted};
}
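The emplace overloads above always route the key through partitioner(key) to pick one sub-map before doing an ordinary emplace into it. A cut-down stand-in that illustrates that routing with plain std::unordered_map partitions; the partition-selection formula here is an assumption and the real container's iterator machinery is omitted:

#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

template <class Key, class Value, class Hash = std::hash<Key>>
struct PartitionedSketch
{
    std::vector<std::unordered_map<Key, Value, Hash>> maps;

    // partitions must be non-zero.
    explicit PartitionedSketch(std::size_t partitions) : maps(partitions)
    {
    }

    bool
    insert(Key const& key, Value const& value)
    {
        // Hash the key, pick a partition, then emplace into that one map.
        auto& partition = maps[Hash{}(key) % maps.size()];
        return partition.emplace(key, value).second;
    }
};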

View File

@@ -20,8 +20,7 @@ static_assert(
"The Ripple default PRNG engine must return an unsigned integral type.");
static_assert(
std::numeric_limits<beast::xor_shift_engine::result_type>::max() >= std::numeric_limits<std::uint64_t>::max(),
"The Ripple default PRNG engine return must be at least 64 bits wide.");
#endif
@@ -90,9 +89,7 @@ default_prng()
*/
/** @{ */
template <class Engine, class Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine, Integral min, Integral max)
{
XRPL_ASSERT(max > min, "xrpl::rand_int : max over min inputs");
@@ -111,9 +108,7 @@ rand_int(Integral min, Integral max)
}
template <class Engine, class Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine, Integral max)
{
return rand_int(engine, Integral(0), max);
@@ -127,9 +122,7 @@ rand_int(Integral max)
}
template <class Integral, class Engine>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine)
{
return rand_int(engine, std::numeric_limits<Integral>::max());
@@ -147,23 +140,17 @@ rand_int()
/** @{ */
template <class Byte, class Engine>
std::enable_if_t<
(std::is_same<Byte, unsigned char>::value || std::is_same<Byte, std::uint8_t>::value) &&
detail::is_engine<Engine>::value,
Byte>
rand_byte(Engine& engine)
{
return static_cast<Byte>(
    rand_int<Engine, std::uint32_t>(engine, std::numeric_limits<Byte>::min(), std::numeric_limits<Byte>::max()));
}
template <class Byte = std::uint8_t>
std::enable_if_t<(std::is_same<Byte, unsigned char>::value || std::is_same<Byte, std::uint8_t>::value), Byte>
rand_byte()
{
return rand_byte<Byte>(default_prng());
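A hedged usage sketch for the helpers declared above. The xrpl namespace is taken from the assertion strings; that rand_int(min, max) draws from the closed range [min, max] is assumed from the max > min precondition:

#include <cstdint>

std::uint8_t
rollDice()
{
    auto const face = xrpl::rand_int(1, 6);  // convenience overload, uses default_prng() internally
    auto const noise = xrpl::rand_byte();    // uniform std::uint8_t
    return static_cast<std::uint8_t>(face) ^ noise;
}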

View File

@@ -12,37 +12,28 @@ namespace xrpl {
template <class Src, class Dest>
concept SafeToCast = (std::is_integral_v<Src> && std::is_integral_v<Dest>) &&
(std::is_signed<Src>::value || std::is_unsigned<Dest>::value) &&
(std::is_signed<Src>::value != std::is_signed<Dest>::value ? sizeof(Dest) > sizeof(Src)
                                                            : sizeof(Dest) >= sizeof(Src));
template <class Dest, class Src>
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
{
static_assert(std::is_signed_v<Dest> || std::is_unsigned_v<Src>, "Cannot cast signed to unsigned");
constexpr unsigned not_same = std::is_signed_v<Dest> != std::is_signed_v<Src>;
static_assert(sizeof(Dest) >= sizeof(Src) + not_same, "Destination is too small to hold all values of source");
return static_cast<Dest>(s);
}
template <class Dest, class Src>
inline constexpr std::enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
{
return static_cast<Dest>(safe_cast<std::underlying_type_t<Dest>>(s));
}
template <class Dest, class Src>
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
safe_cast(Src s) noexcept
{
return safe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
@@ -53,8 +44,7 @@ inline constexpr std::
// underlying types become safe, it can be converted to a safe_cast.
template <class Dest, class Src>
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
static_assert(
@@ -65,16 +55,14 @@ inline constexpr std::
}
template <class Dest, class Src>
inline constexpr std::enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
return static_cast<Dest>(unsafe_cast<std::underlying_type_t<Dest>>(s));
}
template <class Dest, class Src>
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
return unsafe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
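safe_cast above only compiles when every value of Src fits in Dest; the not_same trick demands one extra byte of width when the signedness differs. A small illustration of those rules (the commented-out line is the case the static_assert rejects):

#include <cstdint>

// Widening an unsigned 32-bit value into a signed 64-bit one is always safe.
std::int64_t
widen(std::uint32_t v)
{
    return xrpl::safe_cast<std::int64_t>(v);    // ok: int64_t holds every uint32_t value
    // return xrpl::safe_cast<std::int32_t>(v); // would not compile: 4 >= 4 + 1 fails
}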

View File

@@ -36,10 +36,8 @@ public:
}
scope_exit(scope_exit&& rhs) noexcept(
    std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
    : exit_function_{std::forward<EF>(rhs.exit_function_)}, execute_on_destruction_{rhs.execute_on_destruction_}
{
rhs.release();
}
@@ -50,14 +48,11 @@ public:
template <class EFP>
explicit scope_exit(
EFP&& f,
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_exit> && std::is_constructible_v<EF, EFP>>* =
    0) noexcept
: exit_function_{std::forward<EFP>(f)}
{
static_assert(std::is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
}
void
@@ -80,14 +75,12 @@ class scope_fail
public:
~scope_fail()
{
if (execute_on_destruction_ && std::uncaught_exceptions() > uncaught_on_creation_)
exit_function_();
}
scope_fail(scope_fail&& rhs) noexcept(
    std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
, uncaught_on_creation_{rhs.uncaught_on_creation_}
@@ -101,14 +94,11 @@ public:
template <class EFP>
explicit scope_fail(
EFP&& f,
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_fail> && std::is_constructible_v<EF, EFP>>* =
    0) noexcept
: exit_function_{std::forward<EFP>(f)}
{
static_assert(std::is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
}
void
@@ -131,14 +121,12 @@ class scope_success
public:
~scope_success() noexcept(noexcept(exit_function_()))
{
if (execute_on_destruction_ && std::uncaught_exceptions() <= uncaught_on_creation_)
exit_function_();
}
scope_success(scope_success&& rhs) noexcept(
    std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
, uncaught_on_creation_{rhs.uncaught_on_creation_}
@@ -152,9 +140,7 @@ public:
template <class EFP>
explicit scope_success(
EFP&& f,
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_success> && std::is_constructible_v<EF, EFP>>* =
    0) noexcept(std::is_nothrow_constructible_v<EF, EFP> || std::is_nothrow_constructible_v<EF, EFP&>)
: exit_function_{std::forward<EFP>(f)}
{
@@ -213,12 +199,9 @@ class scope_unlock
std::unique_lock<Mutex>* plock;
public:
explicit scope_unlock(std::unique_lock<Mutex>& lock) noexcept(true) : plock(&lock)
{
XRPL_ASSERT(plock->owns_lock(), "xrpl::scope_unlock::scope_unlock : mutex must be locked");
plock->unlock();
}
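A hedged usage sketch of the scope_exit guard defined above: the exit function runs when the guard leaves scope, whether control leaves normally or by exception, unless release() is called first. The xrpl namespace comes from the assertion string; the class-template deduction guide needed for the declaration below is assumed, as it is not visible in this hunk:

#include <cstdio>

void
writeWithGuard(char const* path)
{
    std::FILE* f = std::fopen(path, "w");
    if (!f)
        return;
    // Deduction guide assumed; the noexcept lambda satisfies the constructor's static_assert.
    xrpl::scope_exit closeFile{[f]() noexcept { std::fclose(f); }};
    std::fputs("payload\n", f);  // closeFile fires after this, even if fputs throws
}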

View File

@@ -100,12 +100,9 @@ public:
@note For performance reasons, you should strive to have `lock` be
on a cacheline by itself.
*/
packed_spinlock(std::atomic<T>& lock, int index) : bits_(lock), mask_(static_cast<T>(1) << index)
{
XRPL_ASSERT(index >= 0 && (mask_ != 0), "xrpl::packed_spinlock::packed_spinlock : valid index and mask");
}
[[nodiscard]] bool
@@ -178,10 +175,7 @@ public:
T expected = 0;
return lock_.compare_exchange_weak(
    expected, std::numeric_limits<T>::max(), std::memory_order_acquire, std::memory_order_relaxed);
}
void
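The try-acquire above is a single compare_exchange_weak from zero to the all-ones pattern, acquiring on success. A minimal standalone stand-in showing the same acquire/release discipline; it is not the library's spinlock implementation:

#include <atomic>

class TinySpinlock
{
    std::atomic<bool> locked_{false};

public:
    bool
    try_lock()
    {
        bool expected = false;
        // Acquire on success so the critical section cannot be reordered above the lock.
        // Weak CAS may fail spuriously, which is acceptable for a try_lock.
        return locked_.compare_exchange_weak(expected, true, std::memory_order_acquire, std::memory_order_relaxed);
    }

    void
    unlock()
    {
        // Release so the critical section's writes are visible to the next owner.
        locked_.store(false, std::memory_order_release);
    }
};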

View File

@@ -11,9 +11,7 @@ std::string
strHex(FwdIt begin, FwdIt end)
{
static_assert(
    std::is_convertible<typename std::iterator_traits<FwdIt>::iterator_category, std::forward_iterator_tag>::value,
    "FwdIt must be a forward iterator");
std::string result;
result.reserve(2 * std::distance(begin, end));

View File

@@ -31,9 +31,7 @@ class tagged_integer
tagged_integer<Int, Tag>,
boost::bitwise<
tagged_integer<Int, Tag>,
boost::unit_steppable<tagged_integer<Int, Tag>, boost::shiftable<tagged_integer<Int, Tag>>>>>>
{
private:
Int m_value;
@@ -46,14 +44,10 @@ public:
template <
class OtherInt,
class = typename std::enable_if<std::is_integral<OtherInt>::value && sizeof(OtherInt) <= sizeof(Int)>::type>
explicit constexpr tagged_integer(OtherInt value) noexcept : m_value(value)
{
static_assert(sizeof(tagged_integer) == sizeof(Int), "tagged_integer is adding padding");
}
bool

View File

@@ -32,11 +32,7 @@ private:
public: public:
io_latency_probe(duration const& period, boost::asio::io_context& ios) io_latency_probe(duration const& period, boost::asio::io_context& ios)
: m_count(1) : m_count(1), m_period(period), m_ios(ios), m_timer(m_ios), m_cancel(false)
, m_period(period)
, m_ios(ios)
, m_timer(m_ios)
, m_cancel(false)
{ {
} }
@@ -91,10 +87,7 @@ public:
std::lock_guard lock(m_mutex); std::lock_guard lock(m_mutex);
if (m_cancel) if (m_cancel)
throw std::logic_error("io_latency_probe is canceled"); throw std::logic_error("io_latency_probe is canceled");
boost::asio::post( boost::asio::post(m_ios, sample_op<Handler>(std::forward<Handler>(handler), Clock::now(), false, this));
m_ios,
sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), false, this));
} }
/** Initiate continuous i/o latency sampling. /** Initiate continuous i/o latency sampling.
@@ -108,10 +101,7 @@ public:
         std::lock_guard lock(m_mutex);
         if (m_cancel)
            throw std::logic_error("io_latency_probe is canceled");
-        boost::asio::post(
-            m_ios,
-            sample_op<Handler>(
-                std::forward<Handler>(handler), Clock::now(), true, this));
+        boost::asio::post(m_ios, sample_op<Handler>(std::forward<Handler>(handler), Clock::now(), true, this));
     }
 
 private:
@@ -151,15 +141,8 @@ private:
         bool m_repeat;
         io_latency_probe* m_probe;
 
-        sample_op(
-            Handler const& handler,
-            time_point const& start,
-            bool repeat,
-            io_latency_probe* probe)
-            : m_handler(handler)
-            , m_start(start)
-            , m_repeat(repeat)
-            , m_probe(probe)
+        sample_op(Handler const& handler, time_point const& start, bool repeat, io_latency_probe* probe)
+            : m_handler(handler), m_start(start), m_repeat(repeat), m_probe(probe)
         {
             XRPL_ASSERT(
                 m_probe,
@@ -214,23 +197,19 @@ private:
                 // Calculate when we want to sample again, and
                 // adjust for the expected latency.
                 //
-                typename Clock::time_point const when(
-                    now + m_probe->m_period - 2 * elapsed);
+                typename Clock::time_point const when(now + m_probe->m_period - 2 * elapsed);
 
                 if (when <= now)
                 {
                     // The latency is too high to maintain the desired
                     // period so don't bother with a timer.
                     //
-                    boost::asio::post(
-                        m_probe->m_ios,
-                        sample_op<Handler>(m_handler, now, m_repeat, m_probe));
+                    boost::asio::post(m_probe->m_ios, sample_op<Handler>(m_handler, now, m_repeat, m_probe));
                 }
                 else
                 {
                     m_probe->m_timer.expires_after(when - now);
-                    m_probe->m_timer.async_wait(
-                        sample_op<Handler>(m_handler, now, m_repeat, m_probe));
+                    m_probe->m_timer.async_wait(sample_op<Handler>(m_handler, now, m_repeat, m_probe));
                 }
             }
         }
@@ -241,9 +220,7 @@ private:
             if (!m_probe)
                 return;
             typename Clock::time_point const now(Clock::now());
-            boost::asio::post(
-                m_probe->m_ios,
-                sample_op<Handler>(m_handler, now, m_repeat, m_probe));
+            boost::asio::post(m_probe->m_ios, sample_op<Handler>(m_handler, now, m_repeat, m_probe));
         }
     };
 };
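The substantive logic in these hunks is the rescheduling arithmetic: the next sample is aimed at `now + period - 2 * elapsed` to compensate for the latency just measured, and if that target is already in the past the work is posted immediately instead of being armed on a timer. Below is a small sketch of just that decision, assuming `std::chrono::steady_clock`; `NextSample` and `scheduleNext` are invented names, not the beast implementation.

```cpp
#include <chrono>

// Illustrative only: decide when the next latency sample should run,
// compensating for the latency just measured, as in the hunks above.
struct NextSample
{
    bool immediate;                             // post right away, skip the timer
    std::chrono::steady_clock::duration delay;  // timer delay when not immediate
};

NextSample
scheduleNext(
    std::chrono::steady_clock::time_point now,
    std::chrono::steady_clock::duration period,
    std::chrono::steady_clock::duration elapsed)
{
    auto const when = now + period - 2 * elapsed;
    if (when <= now)
        return {true, {}};       // latency too high to keep the desired period
    return {false, when - now};  // arm a timer for the remaining time
}

int
main()
{
    using namespace std::chrono_literals;
    auto const next = scheduleNext(std::chrono::steady_clock::now(), 1s, 100ms);
    return next.immediate ? 1 : 0;  // 100ms of latency still leaves time for a timer
}
```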


@@ -29,8 +29,7 @@ private:
     time_point now_;
 
 public:
-    explicit manual_clock(time_point const& now = time_point(duration(0)))
-        : now_(now)
+    explicit manual_clock(time_point const& now = time_point(duration(0))) : now_(now)
     {
     }
@@ -44,9 +43,7 @@ public:
     void
     set(time_point const& when)
     {
-        XRPL_ASSERT(
-            !Clock::is_steady || when >= now_,
-            "beast::manual_clock::set(time_point) : forward input");
+        XRPL_ASSERT(!Clock::is_steady || when >= now_, "beast::manual_clock::set(time_point) : forward input");
         now_ = when;
     }
@@ -64,8 +61,7 @@ public:
     advance(std::chrono::duration<Rep, Period> const& elapsed)
     {
         XRPL_ASSERT(
-            !Clock::is_steady || (now_ + elapsed) >= now_,
-            "beast::manual_clock::advance(duration) : forward input");
+            !Clock::is_steady || (now_ + elapsed) >= now_, "beast::manual_clock::advance(duration) : forward input");
         now_ += elapsed;
     }


@@ -10,14 +10,12 @@ namespace beast {
 /** Expire aged container items past the specified age. */
 template <class AgedContainer, class Rep, class Period>
-typename std::enable_if<is_aged_container<AgedContainer>::value, std::size_t>::
-    type
+typename std::enable_if<is_aged_container<AgedContainer>::value, std::size_t>::type
 expire(AgedContainer& c, std::chrono::duration<Rep, Period> const& age)
 {
     std::size_t n(0);
     auto const expired(c.clock().now() - age);
-    for (auto iter(c.chronological.cbegin());
-         iter != c.chronological.cend() && iter.when() <= expired;)
+    for (auto iter(c.chronological.cbegin()); iter != c.chronological.cend() && iter.when() <= expired;)
     {
         iter = c.erase(iter);
         ++n;
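`expire()` walks the container's chronological index from oldest to newest and erases everything older than the cutoff, stopping at the first sufficiently recent entry. The same sweep over a plain `std::multimap` keyed by timestamp is shown below as a hedged illustration, rather than the beast aged-container API.

```cpp
#include <chrono>
#include <cstddef>
#include <map>
#include <string>

using clock_type = std::chrono::steady_clock;

// Illustrative: the same "erase everything older than the cutoff" sweep shown
// above, over a std::multimap ordered by insertion time.
std::size_t
expireOlderThan(std::multimap<clock_type::time_point, std::string>& byTime, std::chrono::seconds age)
{
    std::size_t n = 0;
    auto const expired = clock_type::now() - age;
    for (auto iter = byTime.begin(); iter != byTime.end() && iter->first <= expired;)
    {
        iter = byTime.erase(iter);
        ++n;
    }
    return n;
}

int
main()
{
    std::multimap<clock_type::time_point, std::string> byTime;
    byTime.emplace(clock_type::now() - std::chrono::minutes(10), "stale");
    byTime.emplace(clock_type::now(), "fresh");
    return expireOlderThan(byTime, std::chrono::seconds(60)) == 1 ? 0 : 1;  // removes only "stale"
}
```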


@@ -15,8 +15,7 @@ template <
     class Clock = std::chrono::steady_clock,
     class Compare = std::less<Key>,
     class Allocator = std::allocator<std::pair<Key const, T>>>
-using aged_map = detail::
-    aged_ordered_container<false, true, Key, T, Clock, Compare, Allocator>;
+using aged_map = detail::aged_ordered_container<false, true, Key, T, Clock, Compare, Allocator>;
 
 }


@@ -15,8 +15,7 @@ template <
     class Clock = std::chrono::steady_clock,
     class Compare = std::less<Key>,
     class Allocator = std::allocator<std::pair<Key const, T>>>
-using aged_multimap = detail::
-    aged_ordered_container<true, true, Key, T, Clock, Compare, Allocator>;
+using aged_multimap = detail::aged_ordered_container<true, true, Key, T, Clock, Compare, Allocator>;
 
 }


@@ -14,8 +14,7 @@ template <
     class Clock = std::chrono::steady_clock,
     class Compare = std::less<Key>,
     class Allocator = std::allocator<Key>>
-using aged_multiset = detail::
-    aged_ordered_container<true, false, Key, void, Clock, Compare, Allocator>;
+using aged_multiset = detail::aged_ordered_container<true, false, Key, void, Clock, Compare, Allocator>;
 
 }


@@ -14,8 +14,7 @@ template <
     class Clock = std::chrono::steady_clock,
     class Compare = std::less<Key>,
     class Allocator = std::allocator<Key>>
-using aged_set = detail::
-    aged_ordered_container<false, false, Key, void, Clock, Compare, Allocator>;
+using aged_set = detail::aged_ordered_container<false, false, Key, void, Clock, Compare, Allocator>;
 
 }

Some files were not shown because too many files have changed in this diff.