Compare commits

...

62 Commits

Author SHA1 Message Date
Ed Hennis
4f98033c84 Fix formatting 2026-01-28 19:42:06 -05:00
Ed Hennis
0f4b201379 Merge branch 'develop' into ximinez/fix-getledger 2026-01-28 18:44:48 -04:00
Ed Hennis
561f2a29d0 Merge commit '5f638f55536def0d88b970d1018a465a238e55f4' into ximinez/fix-getledger
* commit '5f638f55536def0d88b970d1018a465a238e55f4':
  chore: Set ColumnLimit to 120 in clang-format (6288)
2026-01-28 17:43:47 -05:00
Ed Hennis
2e14e453ae Merge commit '92046785d1fea5f9efe5a770d636792ea6cab78b' into ximinez/fix-getledger
* commit '92046785d1fea5f9efe5a770d636792ea6cab78b':
  test: Fix the `xrpl.net` unit test using async read (6241)
  ci: Upload Conan recipes for develop, release candidates, and releases (6286)
  fix: Stop embedded tests from hanging on ARM by using `atomic_flag` (6248)
  fix: Remove DEFAULT fields that change to the default in associateAsset (6259) (6273)
  refactor: Update Boost to 1.90 (6280)
  refactor: clean up uses of `std::source_location` (6272)
  ci: Pass missing sanitizers input to actions (6266)
  ci: Properly propagate Conan credentials (6265)
  ci: Explicitly set version when exporting the Conan recipe (6264)
  ci: Use plus instead of hyphen for Conan recipe version suffix (6261)
  chore: Detect uninitialized variables in CMake files (6247)
  ci: Run on-trigger and on-pr when generate-version is modified (6257)
  refactor: Enforce 15-char limit and simplify labels for thread naming (6212)
  docs: Update Ripple Bug Bounty public key (6258)
  ci: Add missing commit hash to Conan recipe version (6256)
  fix: Include `<functional>` header in `Number.h` (6254)
  ci: Upload Conan recipe for merges into develop and commits to release (6235)
  Limit reply size on `TMGetObjectByHash` queries (6110)
  ci: remove 'master' branch as a trigger (6234)
  Improve ledger_entry lookups for fee, amendments, NUNL, and hashes (5644)
2026-01-28 17:43:35 -05:00
Ayaz Salikhov
f3627fb5d5 chore: Remove unnecessary boost::system requirement from conanfile (#6290) 2026-01-28 19:14:31 +00:00
Ayaz Salikhov
5f638f5553 chore: Set ColumnLimit to 120 in clang-format (#6288)
This change updates the ColumnLimit from 80 to 120, and applies clang-format to reformat the code.
2026-01-28 18:09:50 +00:00
Jingchen
92046785d1 test: Fix the xrpl.net unit test using async read (#6241)
This change makes the `read` function call in `handleConnection` async, adds a new class `TestSink` to help with debugging, and adds a new target `xrpl.tests.helpers` to hold the helper class.
2026-01-28 15:14:35 +00:00
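As context for the change above, a minimal sketch of replacing a blocking read with `boost::asio::async_read` on a plain TCP socket; `handleConnection`, the buffer size, and the logging are illustrative assumptions, not the actual test code.

```cpp
#include <array>
#include <boost/asio.hpp>
#include <iostream>

// Illustrative: start an asynchronous read so the caller is not blocked.
// The handler runs on the io_context once the buffer is filled or an error occurs.
void handleConnection(
    boost::asio::ip::tcp::socket& socket,
    std::array<char, 1024>& buffer)
{
    boost::asio::async_read(
        socket,
        boost::asio::buffer(buffer),
        [](boost::system::error_code const& ec, std::size_t bytesRead) {
            if (ec)
                std::cerr << "read failed: " << ec.message() << '\n';
            else
                std::cout << "read " << bytesRead << " bytes\n";
        });
}
```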
Bart
b90a843ddd ci: Upload Conan recipes for develop, release candidates, and releases (#6286)
To allow developers to consume the latest unstable and (near-)stable versions of our `xrpl` Conan recipe, we should export and upload it whenever a push occurs to the corresponding branch or a release tag has been created. This way, developers do not have to figure out themselves what the most recent shortened commit hash was to determine the latest unstable recipe version (e.g. `3.2.0-b0+a1b2c3d`) or what the most recent release (candidate) was to determine the latest (near-)stable recipe version (e.g. `3.1.0-rc2`).

Now, pushes to the `develop` branch will produce the `develop` recipe version, pushes to the `release` branch will produce the `rc` recipe version, and creation of versioned tags will produce the `release` recipe version.
2026-01-28 10:02:34 +00:00
Jingchen
bb529d0317 fix: Stop embedded tests from hanging on ARM by using atomic_flag (#6248)
This change replaces the mutex `stoppingMutex_`, the `atomic_bool` variable `isTimeToStop`, and the condition variable `stoppingCondition_` with an `atomic_flag` variable.

When `xrpld` runs the embedded tests as a child process, it has a control thread (the app bundle thread) that starts the application, and an application thread (the thread that executes `app_->run()`). Due to the relaxed memory ordering on ARM, it is not guaranteed that the application thread sees the value written by the `isTimeToStop.exchange(true)` call before it is notified by `stoppingCondition_.notify_all()`, even though those calls happen in the right order on the app bundle thread in `ApplicationImp::signalStop`. We therefore often end up in a situation where `isTimeToStop` is `true` but the application thread is still waiting on `stoppingCondition_`, because the app bundle thread may have already notified before the application thread actually started waiting.

Switching to a single `atomic_flag` variable ensures that there is only one synchronisation object, so the memory-ordering guarantees provided by C++ ensure that `notify_all` is synchronised after `test_and_set`.

Fixing this issue stops the unit tests from hanging forever, so we should see fewer (or hopefully no) timeout errors in daily GitHub Actions runs.
2026-01-26 21:39:28 +00:00
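A minimal sketch of the single-synchronisation-object pattern described above, assuming C++20 `std::atomic_flag::wait`/`notify_all`; the names are illustrative, not the actual `ApplicationImp` members.

```cpp
#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    // One flag is both the "stop requested" state and the wakeup mechanism,
    // so the store and the notification cannot be observed out of order.
    std::atomic_flag stopRequested;  // clear on construction since C++20

    std::thread app([&stopRequested] {
        // Application thread: block while the flag is still false.
        stopRequested.wait(false);
        std::cout << "application thread: stopping\n";
    });

    // Control thread: set the flag, then wake any waiters.
    stopRequested.test_and_set();
    stopRequested.notify_all();

    app.join();
}
```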
Ed Hennis
a2f1973574 fix: Remove DEFAULT fields that change to the default in associateAsset (#6259) (#6273)
- Add Vault creation tests for showing valid range for AssetsMaximum
2026-01-26 19:58:12 +00:00
Bart
847e875635 refactor: Update Boost to 1.90 (#6280)
Upcoming feature work requires functionality present in a newer Boost version. These newer versions also have improvements for sanitizers.
2026-01-26 18:54:43 +00:00
Mayukha Vadari
778da954b4 refactor: clean up uses of std::source_location (#6272)
Since the minimum Clang version we support is 16, the checks for version < 15 are no longer necessary. This change therefore removes the macros that check whether the Clang version is < 15 and simplifies uses of `std::source_location`.
2026-01-23 14:09:00 -05:00
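For reference, a minimal C++20 sketch of using `std::source_location` directly, with no compiler-version macros; the logging helper is illustrative, not code from the change.

```cpp
#include <iostream>
#include <source_location>
#include <string_view>

// The default argument captures the caller's location, so no macro is needed.
void logHere(
    std::string_view message,
    std::source_location where = std::source_location::current())
{
    std::cout << where.file_name() << ':' << where.line() << ' '
              << where.function_name() << ": " << message << '\n';
}

int main()
{
    logHere("no version checks required");
}
```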
Bart
0586b5678e ci: Pass missing sanitizers input to actions (#6266)
The `upload-conan-deps` workflow that's triggered on push is supposed to upload the Conan dependencies to our remote, so future PR commits can pull those dependencies from the remote. However, because the `sanitizers` argument was missing, it was building different dependencies than the PRs build for the asan/tsan/ubsan jobs, so those jobs would not find anything in the remote that they could use. This change sets the missing `sanitizers` input variable when running the `build-deps` action.

Separately, the `setup-conan` action showed the default profile, while we are using the `ci` profile. To ensure the profile is correctly printed when sanitizers are enabled, the environment variable the profile uses is set before calling the action.
2026-01-23 06:40:55 -05:00
Bart
66158d786f ci: Properly propagate Conan credentials (#6265)
The export and upload steps were initially in a separate action, where GitHub Actions does not support the `secrets` keyword, but only `inputs` for the credentials. After they were moved to a reusable workflow, only some of the references to the credentials were updated. This change correctly references the Conan credentials via `secrets` instead of `inputs`.
2026-01-22 16:05:15 -05:00
Bart
c57ffdbcb8 ci: Explicitly set version when exporting the Conan recipe (#6264)
By default the Conan recipe extracts the version from `BuildInfo.cpp`, but in some cases we want to upload a recipe with a suffix derived from the commit hash. This currently causes the upload to fail, since there is a version mismatch.

Here we explicitly set the version, and then simplify the steps in the upload workflow since we now need the recipe name (embedded within the conanfile.py but also needed when uploading), the recipe version, and the recipe ref (name/version).
2026-01-22 19:05:59 +00:00
Bart
4e3f953fc4 ci: Use plus instead of hyphen for Conan recipe version suffix (#6261)
Conan recipes use semantic versioning, and since our version already contains a hyphen, a second hyphen causes Conan to ignore it. The plus sign is a valid separator we can use instead, so this change uses a `+` instead of a `-` to separate the version suffix (commit hash).
2026-01-22 16:42:53 +00:00
Pratik Mankawde
a4f8aa623f chore: Detect uninitialized variables in CMake files (#6247)
There were a few uninitialized variables in CMake files. This change makes sure we always check whether a variable has been initialized before using it, or in some cases initialize it by default. It also raises an error on CI if a developer introduces an uninitialized variable in a CMake file.
2026-01-22 11:16:18 -05:00
Bart
8695313565 ci: Run on-trigger and on-pr when generate-version is modified (#6257)
This change ensures that the `on-pr` and `on-trigger` workflows run when the generate-version action is modified.
2026-01-22 13:48:50 +00:00
Valentin Balaschenko
68c9d5ca0f refactor: Enforce 15-char limit and simplify labels for thread naming (#6212)
This change continues the thread naming work from #5691 and #5758, which enables more useful lock contention profiling by ensuring threads/jobs have short, stable, human-readable names (rather than being truncated/failing due to OS limits). This changes diagnostic naming only (thread names and job/load-event labels), not behavior.

Specific modifications are:
* Shortens all thread/job names used with `beast::setCurrentThreadName`, so the effective Linux thread name stays within the 15-character limit.
* Removes per-ledger sequence numbers from job/thread names to avoid long labels. This improves aggregation in lock contention profiling for short-lived job executions.
2026-01-22 08:19:29 -05:00
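On Linux, thread names are limited to 15 characters (16 bytes including the terminator), and `pthread_setname_np` rejects longer names. A hedged sketch of enforcing that limit before setting the name, not the actual `beast::setCurrentThreadName` implementation:

```cpp
#include <pthread.h>

#include <cstddef>
#include <string>

// Truncate to 15 visible characters so pthread_setname_np (glibc/Linux)
// accepts the name instead of failing with ERANGE.
void setThreadNameTruncated(std::string name)
{
    constexpr std::size_t maxLen = 15;
    if (name.size() > maxLen)
        name.resize(maxLen);
    pthread_setname_np(pthread_self(), name.c_str());
}
```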
David Fuelling
211054baff docs: Update Ripple Bug Bounty public key (#6258)
The Ripple Bug Bounty program recently changed the public keys that security researchers can use to encrypt vulnerabilities and messages for submission to the program. This information was updated on https://ripple.com/legal/bug-bounty/ and this PR updates the `SECURITY.md` to align.
2026-01-21 19:55:56 -05:00
Bart
4fd4e93b3e ci: Add missing commit hash to Conan recipe version (#6256)
During several iterations of development of https://github.com/XRPLF/rippled/pull/6235, the commit hash was supposed to be moved into the `run:` statement, but it slipped through the cracks and did not get added. This change adds the commit hash as suffix to the Conan recipe version.
2026-01-21 19:17:05 -05:00
Ayaz Salikhov
4cd6cc3e01 fix: Include <functional> header in Number.h (#6254)
The `Number.h` header file now uses `std::reference_wrapper` from `<functional>`, but the include was missing, causing downstream build problems. This change adds the include.
2026-01-21 18:52:22 -05:00
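As a quick illustration of why the include matters: `std::reference_wrapper` is declared in `<functional>`, so any header that names it must include `<functional>` itself rather than relying on transitive includes.

```cpp
#include <functional>  // std::reference_wrapper, std::ref
#include <iostream>

int main()
{
    int value = 41;
    std::reference_wrapper<int> ref = std::ref(value);
    ref.get() += 1;                // modifies the referenced int
    std::cout << value << '\n';    // prints 42
}
```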
Bart
a37c556079 ci: Upload Conan recipe for merges into develop and commits to release (#6235)
This change uploads the `libxrpl` library as a Conan recipe to our remote when (i) merging into the `develop` branch, (ii) committing to a PR that targets a `release*` branch, and (iii) applying a versioned tag. Clio is only notified in the second case. The user and channel are no longer used when uploading the recipe.

Specific changes are:
* A `generate-version` action is added, which extracts the build version from `BuildInfo.cpp` and appends the short 7-character commit hash to it for merges into the `develop` branch and for commits to a PR that targets a `release*` branch. When a tag is applied, however, the tag itself is used as the version. This functionality has been turned into a separate action as we will use the same versioning logic for creating .rpm and .deb packages, as well as Docker images.
* An `upload-recipe` action is added, which calls the `generate-version` action and further handles the uploading of the recipe to Conan.
* This action is called by both the `on-pr` and `on-trigger` workflows, and a new `on-tag` workflow.

The reason for this change is that we have downstream uses for the `libxrpl` library, but currently only upload the recipe to check for compatibility with Clio when making commits to a PR that targets the release branch.
2026-01-21 17:31:44 -05:00
Pratik Mankawde
5e808794d8 Limit reply size on TMGetObjectByHash queries (#6110)
`PeerImp` processes `TMGetObjectByHash` queries with an unbounded per-request loop, which performs a `NodeStore` fetch and then appends the retrieved data to the reply for each queried object, without a local count cap or reply-byte budget. However, `NodeStore` fetches are expensive in large numbers, which can slow down the process overall. Hence this change adds an upper cap on the response size.
2026-01-21 09:19:53 -05:00
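A hedged sketch of the kind of cap described above, with illustrative types and budget parameters rather than the actual `PeerImp` code:

```cpp
#include <cstddef>
#include <vector>

struct Object { std::vector<unsigned char> data; };
struct Reply  { std::vector<Object> objects; std::size_t bytes = 0; };

// Stop appending once either the object count or the byte budget is reached.
void appendWithBudget(
    Reply& reply,
    std::vector<Object> const& fetched,
    std::size_t maxObjects,
    std::size_t maxBytes)
{
    for (auto const& obj : fetched)
    {
        if (reply.objects.size() >= maxObjects ||
            reply.bytes + obj.data.size() > maxBytes)
            break;
        reply.bytes += obj.data.size();
        reply.objects.push_back(obj);
    }
}
```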
Bart
12c0d67ff6 ci: remove 'master' branch as a trigger (#6234)
This change removes the `master` branch as a trigger for the CI pipelines, and updates comments accordingly. It also fixes the pre-commit workflow, so it will run on all release branches.
2026-01-16 15:01:53 -05:00
Ed Hennis
00d3cee6cc Improve ledger_entry lookups for fee, amendments, NUNL, and hashes (#5644)
These "fixed location" objects can be found in multiple ways:

1. The lookup parameters use the same format as for other ledger objects, but the only valid values are true or the correct index of the object:
  - Amendments: "amendments" : true
  - FeeSettings: "fee" : true
  - NegativeUNL: "nunl" : true
  - LedgerHashes: "hashes" : true (For the "short" list. See below.)

2. With RPC API >= 3, using special-case values for "index", such as "index" : "amendments". This uses the same names as above. Note that for "hashes", this option will only return the recent ledger hashes / "short" skip list.

3. LedgerHashes has two types: "short", which stores recent ledger hashes, and "long", which stores the flag ledger hashes for a particular ledger range.
  - To find a "long" LedgerHashes object, request '"hashes" : <ledger sequence>'. <ledger sequence> must be a number that evaluates to an unsigned integer.
  - To find the "short" LedgerHashes object, request "hashes": true as with the other fixed objects.

The following queries are all functionally equivalent:

  - "amendments" : true
  - "index" : "amendments" (API >=3 only)
  - "amendments" : "7DB0788C020F02780A673DC74757F23823FA3014C1866E72CC4CD8B226CD6EF4"
  - "index" : "7DB0788C020F02780A673DC74757F23823FA3014C1866E72CC4CD8B226CD6EF4"

Finally, whether the object is found or not, if a valid index is computed, that index will be returned. This can be used to confirm the query was valid, or to save the index for future use.
2026-01-16 12:26:30 -05:00
Ed Hennis
b54539fb59 Merge branch 'develop' into ximinez/fix-getledger 2026-01-15 13:03:22 -04:00
Pratik Mankawde
96d17b7f66 ci: Add sanitizers to CI builds (#5996)
This change adds support for sanitizer build options in CI builds workflow. Currently `asan+ubsan` is enabled, while `tsan+ubsan` is left disabled as more changes are required.
2026-01-15 16:18:14 +00:00
Ed Hennis
83036e4f77 Merge branch 'develop' into ximinez/fix-getledger 2026-01-15 12:05:51 -04:00
Ayaz Salikhov
ec44347ffc test: Use gtest instead of doctest (#6216)
This change switches over the doctest framework to the gtest framework.
2026-01-15 08:36:13 -05:00
Ed Hennis
c9458b72ca test: Suppress "parse failed" message in Batch tests (#6207) 2026-01-14 23:45:00 +00:00
Mayukha Vadari
ebcfd6645d test: Replace failed string in Vault test case (#6214)
The word `failed` in the test case makes it hard to search through the test logs when an actual test failure occurs, so this change renames it to just `fail` instead.
2026-01-14 14:40:07 -05:00
Ed Hennis
0f231c3cd2 Merge branch 'develop' into ximinez/fix-getledger 2026-01-13 18:19:04 -04:00
Ed Hennis
efa57e872b Change LendingProtocol feature and dependencies to supported (#6146) 2026-01-13 21:53:40 +00:00
Ed Hennis
58febcb683 Merge branch 'develop' into ximinez/fix-getledger 2026-01-13 15:27:53 -04:00
Ed Hennis
754757bac9 Merge branch 'develop' into ximinez/fix-getledger 2026-01-12 14:52:07 -04:00
Ed Hennis
859467455c Merge branch 'develop' into ximinez/fix-getledger 2026-01-11 00:50:36 -04:00
Ed Hennis
60dc20042c Merge branch 'develop' into ximinez/fix-getledger 2026-01-08 17:06:02 -04:00
Ed Hennis
108b3e5e2c Merge branch 'develop' into ximinez/fix-getledger 2026-01-08 13:04:12 -04:00
Ed Hennis
599f197109 Merge branch 'develop' into ximinez/fix-getledger 2026-01-06 14:02:05 -05:00
Ed Hennis
36000188b1 Merge branch 'develop' into ximinez/fix-getledger 2025-12-22 17:39:51 -05:00
Ed Hennis
944d758a39 Merge branch 'develop' into ximinez/fix-getledger 2025-12-18 19:59:45 -05:00
Ed Hennis
de661f8ef8 Merge branch 'develop' into ximinez/fix-getledger 2025-12-12 20:34:51 -05:00
Ed Hennis
8b70664d4c Merge remote-tracking branch 'XRPLF/develop' into ximinez/fix-getledger
* XRPLF/develop:
  refactor: Rename `ripple` namespace to `xrpl` (5982)
  refactor: Move JobQueue and related classes into xrpl.core module (6121)
  refactor: Rename `rippled` binary to `xrpld` (5983)
2025-12-11 15:30:34 -05:00
Ed Hennis
4f03625b75 Merge remote-tracking branch 'XRPLF/develop' into ximinez/fix-getledger
* XRPLF/develop:
  refactor: rename info() to header() (6138)
  refactor: rename `LedgerInfo` to `LedgerHeader` (6136)
  refactor: clean up `RPCHelpers` (5684)
  chore: Fix docs readme and cmake (6122)
  chore: Clean up .gitignore and .gitattributes (6001)
  chore: Use updated secp256k1 recipe (6118)
2025-12-11 14:28:37 -05:00
Ed Hennis
fb0379f93d Merge branch 'develop' into ximinez/fix-getledger 2025-12-05 21:13:02 -05:00
Ed Hennis
8e795197c9 Merge branch 'develop' into ximinez/fix-getledger 2025-12-02 17:37:21 -05:00
Ed Hennis
cf7665137e Merge branch 'develop' into ximinez/fix-getledger 2025-12-01 14:40:37 -05:00
Ed Hennis
0aa43e1772 Merge branch 'develop' into ximinez/fix-getledger 2025-11-28 15:46:37 -05:00
Ed Hennis
0834c23f27 Merge branch 'develop' into ximinez/fix-getledger 2025-11-27 01:48:49 -05:00
Ed Hennis
cd62ba13ab Merge branch 'develop' into ximinez/fix-getledger 2025-11-26 00:25:08 -05:00
Ed Hennis
31fc348446 Merge branch 'develop' into ximinez/fix-getledger 2025-11-25 14:54:57 -05:00
Ed Hennis
009f17b9bf Merge branch 'develop' into ximinez/fix-getledger 2025-11-24 21:49:03 -05:00
Ed Hennis
b0872bae95 Merge branch 'develop' into ximinez/fix-getledger 2025-11-24 21:30:14 -05:00
Ed Hennis
0c69b23b93 Merge branch 'develop' into ximinez/fix-getledger 2025-11-21 12:47:49 -05:00
Ed Hennis
a2e93188fc Merge branch 'develop' into ximinez/fix-getledger 2025-11-18 22:39:22 -05:00
Ed Hennis
5406d28357 Merge branch 'develop' into ximinez/fix-getledger 2025-11-15 03:08:34 -05:00
Ed Hennis
7804c09494 Merge branch 'develop' into ximinez/fix-getledger 2025-11-13 12:18:58 -05:00
Ed Hennis
79d294bd2d Merge branch 'develop' into ximinez/fix-getledger 2025-11-12 14:12:47 -05:00
Ed Hennis
acace507d0 Fix formatting 2025-11-10 19:52:59 -05:00
Ed Hennis
98732100fb Reduce duplicate peer traffic for ledger data (#5126)
- Drop duplicate outgoing TMGetLedger messages per peer
  - Allow a retry after 30s in case of peer or network congestion.
  - Addresses RIPD-1870
  - (Changes levelization. That is not desirable, and will need to be fixed.)
- Drop duplicate incoming TMGetLedger messages per peer
  - Allow a retry after 15s in case of peer or network congestion.
  - The requestCookie is ignored when computing the hash, thus increasing
    the chances of detecting duplicate messages.
  - With duplicate messages, keep track of the different requestCookies
    (or lack of cookie). When work is finally done for a given request,
    send the response to all the peers that are waiting on the request,
    sending one message per peer, including all the cookies and
    a "directResponse" flag indicating the data is intended for the
    sender, too.
  - Addresses RIPD-1871
- Drop duplicate incoming TMLedgerData messages
  - Addresses RIPD-1869
2025-11-10 19:52:59 -05:00
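A hedged sketch of the dedup idea: key requests by a hash that excludes the request cookie, remember all cookies waiting on the same work, and allow a retry once the window expires. Types and names are illustrative, not the actual overlay code.

```cpp
#include <chrono>
#include <cstdint>
#include <map>
#include <optional>
#include <set>

using Clock = std::chrono::steady_clock;

struct PendingRequest
{
    Clock::time_point firstSeen;
    // All cookies (or the no-cookie case) waiting on the same work, so one
    // reply can be fanned out per peer when the work finally completes.
    std::set<std::optional<std::uint64_t>> cookies;
};

// Returns true if the request should be processed, false if it is a
// duplicate seen within the retry window.
bool shouldProcess(
    std::map<std::uint64_t, PendingRequest>& pending,
    std::uint64_t hashWithoutCookie,
    std::optional<std::uint64_t> cookie,
    std::chrono::seconds retryAfter)
{
    auto const now = Clock::now();
    auto [it, inserted] =
        pending.try_emplace(hashWithoutCookie, PendingRequest{now, {}});
    it->second.cookies.insert(cookie);
    if (inserted)
        return true;
    if (now - it->second.firstSeen >= retryAfter)
    {
        it->second.firstSeen = now;  // allow a retry after the window expires
        return true;
    }
    return false;
}
```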
Ed Hennis
b186516d0a Improve job queue collision checks and logging
- Improve logging related to ledger acquisition and operating mode
  changes
- Class "CanProcess" to keep track of processing of distinct items
2025-11-10 19:52:59 -05:00
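A hedged sketch of what a `CanProcess`-style guard might look like: it claims a distinct item for the duration of its scope and releases the claim on destruction. Names and locking are illustrative assumptions, not the actual class.

```cpp
#include <mutex>
#include <set>

// Tracks distinct items currently being processed; the guard below claims
// one item for its lifetime if nobody else is already processing it.
class InFlightItems
{
public:
    bool tryClaim(int item)
    {
        std::lock_guard lock(mutex_);
        return items_.insert(item).second;
    }

    void release(int item)
    {
        std::lock_guard lock(mutex_);
        items_.erase(item);
    }

private:
    std::mutex mutex_;
    std::set<int> items_;
};

class CanProcess
{
public:
    CanProcess(InFlightItems& tracker, int item)
        : tracker_(tracker), item_(item), claimed_(tracker.tryClaim(item))
    {
    }

    ~CanProcess()
    {
        if (claimed_)
            tracker_.release(item_);
    }

    explicit operator bool() const { return claimed_; }

private:
    InFlightItems& tracker_;
    int item_;
    bool claimed_;
};
```

A caller would construct the guard and only proceed when it converts to `true`, so concurrent work on the same item is skipped rather than duplicated.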
1072 changed files with 31175 additions and 67799 deletions

View File

@@ -37,7 +37,7 @@ BinPackParameters: false
BreakBeforeBinaryOperators: false
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 80
ColumnLimit: 120
CommentPragmas: "^ IWYU pragma:"
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4

View File

@@ -28,6 +28,7 @@ ignoreRegExpList:
- /[\['"`]-[DWw][a-zA-Z0-9_-]+['"`\]]/g # compile flags
suggestWords:
- xprl->xrpl
- xprld->xrpld
- unsynched->unsynced
- synched->synced
- synch->sync
@@ -61,6 +62,7 @@ words:
- compr
- conanfile
- conanrun
- confs
- connectability
- coro
- coros
@@ -90,6 +92,7 @@ words:
- finalizers
- firewalled
- fmtdur
- fsanitize
- funclets
- gcov
- gcovr
@@ -126,6 +129,7 @@ words:
- lseq
- lsmf
- ltype
- mcmodel
- MEMORYSTATUSEX
- Merkle
- Metafuncton
@@ -235,6 +239,8 @@ words:
- txn
- txns
- txs
- UBSAN
- ubsan
- umant
- unacquired
- unambiguity
@@ -270,6 +276,7 @@ words:
- xbridge
- xchain
- ximinez
- EXPECT_STREQ
- XMACRO
- xrpkuwait
- xrpl

View File

@@ -18,6 +18,10 @@ inputs:
description: "The logging verbosity."
required: false
default: "verbose"
sanitizers:
description: "The sanitizers to enable."
required: false
default: ""
runs:
using: composite
@@ -29,9 +33,11 @@ runs:
BUILD_OPTION: ${{ inputs.force_build == 'true' && '*' || 'missing' }}
BUILD_TYPE: ${{ inputs.build_type }}
LOG_VERBOSITY: ${{ inputs.log_verbosity }}
SANITIZERS: ${{ inputs.sanitizers }}
run: |
echo 'Installing dependencies.'
conan install \
--profile ci \
--build="${BUILD_OPTION}" \
--options:host='&:tests=True' \
--options:host='&:xrpld=True' \

View File

@@ -0,0 +1,44 @@
name: Generate build version number
description: "Generate build version number."
outputs:
version:
description: "The generated build version number."
value: ${{ steps.version.outputs.version }}
runs:
using: composite
steps:
# When a tag is pushed, the version is used as-is.
- name: Generate version for tag event
if: ${{ github.event_name == 'tag' }}
shell: bash
env:
VERSION: ${{ github.ref_name }}
run: echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
# When a tag is not pushed, then the version (e.g. 1.2.3-b0) is extracted
# from the BuildInfo.cpp file and the shortened commit hash appended to it.
# We use a plus sign instead of a hyphen because Conan recipe versions do
# not support two hyphens.
- name: Generate version for non-tag event
if: ${{ github.event_name != 'tag' }}
shell: bash
run: |
echo 'Extracting version from BuildInfo.cpp.'
VERSION="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')"
if [[ -z "${VERSION}" ]]; then
echo 'Unable to extract version from BuildInfo.cpp.'
exit 1
fi
echo 'Appending shortened commit hash to version.'
SHA='${{ github.sha }}'
VERSION="${VERSION}+${SHA:0:7}"
echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
- name: Output version
id: version
shell: bash
run: echo "version=${VERSION}" >> "${GITHUB_OUTPUT}"

View File

@@ -2,11 +2,11 @@ name: Setup Conan
description: "Set up Conan configuration, profile, and remote."
inputs:
conan_remote_name:
remote_name:
description: "The name of the Conan remote to use."
required: false
default: xrplf
conan_remote_url:
remote_url:
description: "The URL of the Conan endpoint to use."
required: false
default: https://conan.ripplex.io
@@ -28,19 +28,19 @@ runs:
shell: bash
run: |
echo 'Installing profile.'
conan config install conan/profiles/default -tf $(conan config home)/profiles/
conan config install conan/profiles/ -tf $(conan config home)/profiles/
echo 'Conan profile:'
conan profile show
conan profile show --profile ci
- name: Set up Conan remote
shell: bash
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
CONAN_REMOTE_URL: ${{ inputs.conan_remote_url }}
REMOTE_NAME: ${{ inputs.remote_name }}
REMOTE_URL: ${{ inputs.remote_url }}
run: |
echo "Adding Conan remote '${CONAN_REMOTE_NAME}' at '${CONAN_REMOTE_URL}'."
conan remote add --index 0 --force "${CONAN_REMOTE_NAME}" "${CONAN_REMOTE_URL}"
echo "Adding Conan remote '${REMOTE_NAME}' at '${REMOTE_URL}'."
conan remote add --index 0 --force "${REMOTE_NAME}" "${REMOTE_URL}"
echo 'Listing Conan remotes.'
conan remote list

View File

@@ -104,6 +104,7 @@ test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpl.nodestore
test.overlay > xrpl.protocol
test.overlay > xrpl.shamap
test.peerfinder > test.beast

View File

@@ -20,8 +20,8 @@ class Config:
Generate a strategy matrix for GitHub Actions CI.
On each PR commit we will build a selection of Debian, RHEL, Ubuntu, MacOS, and
Windows configurations, while upon merge into the develop, release, or master
branches, we will build all configurations, and test most of them.
Windows configurations, while upon merge into the develop or release branches,
we will build all configurations, and test most of them.
We will further set additional CMake arguments as follows:
- All builds will have the `tests`, `werr`, and `xrpld` options.
@@ -229,7 +229,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
if (n := os["compiler_version"]) != "":
config_name += f"-{n}"
config_name += (
f"-{architecture['platform'][architecture['platform'].find('/') + 1 :]}"
f"-{architecture['platform'][architecture['platform'].find('/')+1:]}"
)
config_name += f"-{build_type.lower()}"
if "-Dcoverage=ON" in cmake_args:
@@ -240,17 +240,53 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# Add the configuration to the list, with the most unique fields first,
# so that they are easier to identify in the GitHub Actions UI, as long
# names get truncated.
configurations.append(
{
"config_name": config_name,
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
}
)
# Add Address and Thread (both coupled with UB) sanitizers for specific bookworm distros.
# GCC-Asan rippled-embedded tests are failing because of https://github.com/google/sanitizers/issues/856
if (
os["distro_version"] == "bookworm"
and f"{os['compiler_name']}-{os['compiler_version']}" == "clang-20"
):
# Add ASAN + UBSAN configuration.
configurations.append(
{
"config_name": config_name + "-asan-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "address,undefinedbehavior",
}
)
# TSAN is deactivated due to seg faults with latest compilers.
activate_tsan = False
if activate_tsan:
configurations.append(
{
"config_name": config_name + "-tsan-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "thread,undefinedbehavior",
}
)
else:
configurations.append(
{
"config_name": config_name,
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "",
}
)
return configurations

View File

@@ -1,7 +1,8 @@
# This workflow runs all workflows to check, build and test the project on
# various Linux flavors, as well as on MacOS and Windows, on every push to a
# user branch. However, it will not run if the pull request is a draft unless it
# has the 'DraftRunCI' label.
# has the 'DraftRunCI' label. For commits to PRs that target a release branch,
# it also uploads the libxrpl recipe to the Conan remote.
name: PR
on:
@@ -53,12 +54,12 @@ jobs:
.github/scripts/rename/**
.github/workflows/reusable-check-levelization.yml
.github/workflows/reusable-check-rename.yml
.github/workflows/reusable-notify-clio.yml
.github/workflows/on-pr.yml
# Keep the paths below in sync with those in `on-trigger.yml`.
.github/actions/build-deps/**
.github/actions/build-test/**
.github/actions/generate-version/**
.github/actions/setup-conan/**
.github/scripts/strategy-matrix/**
.github/workflows/reusable-build.yml
@@ -66,6 +67,7 @@ jobs:
.github/workflows/reusable-build-test.yml
.github/workflows/reusable-strategy-matrix.yml
.github/workflows/reusable-test.yml
.github/workflows/reusable-upload-recipe.yml
.codecov.yml
cmake/**
conan/**
@@ -121,22 +123,42 @@ jobs:
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
notify-clio:
upload-recipe:
needs:
- should-run
- build-test
if: ${{ needs.should-run.outputs.go == 'true' && (startsWith(github.base_ref, 'release') || github.base_ref == 'master') }}
uses: ./.github/workflows/reusable-notify-clio.yml
# Only run when committing to a PR that targets a release branch in the
# XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && needs.should-run.outputs.go == 'true' && startsWith(github.ref, 'refs/heads/release') }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
clio_notify_token: ${{ secrets.CLIO_NOTIFY_TOKEN }}
conan_remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
conan_remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
notify-clio:
needs: upload-recipe
runs-on: ubuntu-latest
steps:
# Notify the Clio repository about the newly proposed release version, so
# it can be checked for compatibility before the release is actually made.
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.CLIO_NOTIFY_TOKEN }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[ref]=${{ needs.upload-recipe.outputs.recipe_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"
passed:
if: failure() || cancelled()
needs:
- build-test
- check-levelization
- check-rename
- build-test
- upload-recipe
- notify-clio
runs-on: ubuntu-latest
steps:
- name: Fail

.github/workflows/on-tag.yml vendored Normal file
View File

@@ -0,0 +1,25 @@
# This workflow uploads the libxrpl recipe to the Conan remote when a versioned
# tag is pushed.
name: Tag
on:
push:
tags:
- "v*"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload-recipe:
# Only run when a tag is pushed to the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}

View File

@@ -1,9 +1,7 @@
# This workflow runs all workflows to build the dependencies required for the
# project on various Linux flavors, as well as on MacOS and Windows, on a
# scheduled basis, on merge into the 'develop', 'release', or 'master' branches,
# or manually. The missing commits check is only run when the code is merged
# into the 'develop' or 'release' branches, and the documentation is built when
# the code is merged into the 'develop' branch.
# This workflow runs all workflows to build and test the code on various Linux
# flavors, as well as on MacOS and Windows, on a scheduled basis, on merge into
# the 'develop' or 'release*' branches, or when requested manually. Upon pushes
# to the develop branch it also uploads the libxrpl recipe to the Conan remote.
name: Trigger
on:
@@ -11,7 +9,6 @@ on:
branches:
- "develop"
- "release*"
- "master"
paths:
# These paths are unique to `on-trigger.yml`.
- ".github/workflows/on-trigger.yml"
@@ -19,6 +16,7 @@ on:
# Keep the paths below in sync with those in `on-pr.yml`.
- ".github/actions/build-deps/**"
- ".github/actions/build-test/**"
- ".github/actions/generate-version/**"
- ".github/actions/setup-conan/**"
- ".github/scripts/strategy-matrix/**"
- ".github/workflows/reusable-build.yml"
@@ -26,6 +24,7 @@ on:
- ".github/workflows/reusable-build-test.yml"
- ".github/workflows/reusable-strategy-matrix.yml"
- ".github/workflows/reusable-test.yml"
- ".github/workflows/reusable-upload-recipe.yml"
- ".codecov.yml"
- "cmake/**"
- "conan/**"
@@ -70,11 +69,20 @@ jobs:
with:
# Enable ccache only for events targeting the XRPLF repository, since
# other accounts will not have access to our remote cache storage.
# However, we do not enable ccache for events targeting the master or a
# release branch, to protect against the rare case that the output
# produced by ccache is not identical to a regular compilation.
ccache_enabled: ${{ github.repository_owner == 'XRPLF' && !(github.base_ref == 'master' || startsWith(github.base_ref, 'release')) }}
# However, we do not enable ccache for events targeting a release branch,
# to protect against the rare case that the output produced by ccache is
# not identical to a regular compilation.
ccache_enabled: ${{ github.repository_owner == 'XRPLF' && !startsWith(github.ref, 'refs/heads/release') }}
os: ${{ matrix.os }}
strategy_matrix: ${{ github.event_name == 'schedule' && 'all' || 'minimal' }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
upload-recipe:
needs: build-test
# Only run when pushing to the develop branch in the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}

View File

@@ -3,7 +3,9 @@ name: Run pre-commit hooks
on:
pull_request:
push:
branches: [develop, release, master]
branches:
- "develop"
- "release*"
workflow_dispatch:
jobs:

View File

@@ -51,6 +51,12 @@ on:
type: number
default: 2
sanitizers:
description: "The sanitizers to enable."
required: false
type: string
default: ""
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
@@ -91,6 +97,7 @@ jobs:
# Determine if coverage and voidstar should be enabled.
COVERAGE_ENABLED: ${{ contains(inputs.cmake_args, '-Dcoverage=ON') }}
VOIDSTAR_ENABLED: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }}
SANITIZERS_ENABLED: ${{ inputs.sanitizers != '' }}
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
@@ -118,6 +125,8 @@ jobs:
subtract: ${{ inputs.nproc_subtract }}
- name: Setup Conan
env:
SANITIZERS: ${{ inputs.sanitizers }}
uses: ./.github/actions/setup-conan
- name: Build dependencies
@@ -128,11 +137,13 @@ jobs:
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
sanitizers: ${{ inputs.sanitizers }}
- name: Configure CMake
working-directory: ${{ env.BUILD_DIR }}
env:
BUILD_TYPE: ${{ inputs.build_type }}
SANITIZERS: ${{ inputs.sanitizers }}
CMAKE_ARGS: ${{ inputs.cmake_args }}
run: |
cmake \
@@ -174,7 +185,7 @@ jobs:
if-no-files-found: error
- name: Check linking (Linux)
if: ${{ runner.os == 'Linux' }}
if: ${{ runner.os == 'Linux' && env.SANITIZERS_ENABLED == 'false' }}
working-directory: ${{ env.BUILD_DIR }}
run: |
ldd ./xrpld
@@ -191,6 +202,14 @@ jobs:
run: |
./xrpld --version | grep libvoidstar
- name: Set sanitizer options
if: ${{ !inputs.build_only && env.SANITIZERS_ENABLED == 'true' }}
run: |
echo "ASAN_OPTIONS=print_stacktrace=1:detect_container_overflow=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=second_deadlock_stack=1:halt_on_error=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
- name: Run the separate tests
if: ${{ !inputs.build_only }}
working-directory: ${{ env.BUILD_DIR }}

View File

@@ -57,5 +57,6 @@ jobs:
runs_on: ${{ toJSON(matrix.architecture.runner) }}
image: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || '' }}
config_name: ${{ matrix.config_name }}
sanitizers: ${{ matrix.sanitizers }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -1,91 +0,0 @@
# This workflow exports the built libxrpl package to the Conan remote on a
# a channel named after the pull request, and notifies the Clio repository about
# the new version so it can check for compatibility.
name: Notify Clio
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
conan_remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
conan_remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
clio_notify_token:
description: "The GitHub token to notify Clio about new versions."
required: true
conan_remote_username:
description: "The username for logging into the Conan remote."
required: true
conan_remote_password:
description: "The password for logging into the Conan remote."
required: true
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-clio
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
if: ${{ github.event.pull_request.head.repo.full_name == github.repository }}
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate outputs
id: generate
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
run: |
echo 'Generating user and channel.'
echo "user=clio" >> "${GITHUB_OUTPUT}"
echo "channel=pr_${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
echo 'Extracting version.'
echo "version=$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')" >> "${GITHUB_OUTPUT}"
- name: Calculate conan reference
id: conan_ref
run: |
echo "conan_ref=${{ steps.generate.outputs.version }}@${{ steps.generate.outputs.user }}/${{ steps.generate.outputs.channel }}" >> "${GITHUB_OUTPUT}"
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
conan_remote_name: ${{ inputs.conan_remote_name }}
conan_remote_url: ${{ inputs.conan_remote_url }}
- name: Log into Conan remote
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.conan_remote_username }}" --password "${{ secrets.conan_remote_password }}"
- name: Upload package
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: |
conan export --user=${{ steps.generate.outputs.user }} --channel=${{ steps.generate.outputs.channel }} .
conan upload --confirm --check --remote="${CONAN_REMOTE_NAME}" xrpl/${{ steps.conan_ref.outputs.conan_ref }}
outputs:
conan_ref: ${{ steps.conan_ref.outputs.conan_ref }}
notify:
needs: upload
runs-on: ubuntu-latest
steps:
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.clio_notify_token }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[conan_ref]=${{ needs.upload.outputs.conan_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"

View File

@@ -0,0 +1,97 @@
# This workflow exports the built libxrpl package to the Conan remote.
name: Upload Conan recipe
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
remote_username:
description: "The username for logging into the Conan remote."
required: true
remote_password:
description: "The password for logging into the Conan remote."
required: true
outputs:
recipe_ref:
description: "The Conan recipe reference ('name/version') that was uploaded."
value: ${{ jobs.upload.outputs.ref }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-upload-recipe
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate build version number
id: version
uses: ./.github/actions/generate-version
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
remote_name: ${{ inputs.remote_name }}
remote_url: ${{ inputs.remote_url }}
- name: Log into Conan remote
env:
REMOTE_NAME: ${{ inputs.remote_name }}
REMOTE_USERNAME: ${{ secrets.remote_username }}
REMOTE_PASSWORD: ${{ secrets.remote_password }}
run: conan remote login "${REMOTE_NAME}" "${REMOTE_USERNAME}" --password "${REMOTE_PASSWORD}"
- name: Upload Conan recipe (version)
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=${{ steps.version.outputs.version }}
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/${{ steps.version.outputs.version }}
- name: Upload Conan recipe (develop)
if: ${{ github.ref == 'refs/heads/develop' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=develop
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/develop
- name: Upload Conan recipe (rc)
if: ${{ startsWith(github.ref, 'refs/heads/release') }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=rc
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/rc
- name: Upload Conan recipe (release)
if: ${{ github.event_name == 'tag' }}
env:
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export . --version=release
conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/release
outputs:
ref: xrpl/${{ steps.version.outputs.version }}

View File

@@ -84,10 +84,12 @@ jobs:
subtract: ${{ env.NPROC_SUBTRACT }}
- name: Setup Conan
env:
SANITIZERS: ${{ matrix.sanitizers }}
uses: ./.github/actions/setup-conan
with:
conan_remote_name: ${{ env.CONAN_REMOTE_NAME }}
conan_remote_url: ${{ env.CONAN_REMOTE_URL }}
remote_name: ${{ env.CONAN_REMOTE_NAME }}
remote_url: ${{ env.CONAN_REMOTE_URL }}
- name: Build dependencies
uses: ./.github/actions/build-deps
@@ -98,6 +100,7 @@ jobs:
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
sanitizers: ${{ matrix.sanitizers }}
- name: Log into Conan remote
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}

View File

@@ -1,5 +1,5 @@
| :warning: **WARNING** :warning:
|---|
| :warning: **WARNING** :warning: |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
@@ -148,8 +148,8 @@ function extract_version {
}
# Define which recipes to export.
recipes=('ed25519' 'grpc' 'openssl' 'secp256k1' 'snappy' 'soci')
folders=('all' 'all' '3.x.x' 'all' 'all' 'all')
recipes=('ed25519' 'grpc' 'nudb' 'openssl' 'secp256k1' 'snappy' 'soci')
folders=('all' 'all' 'all' '3.x.x' 'all' 'all' 'all')
# Selectively check out the recipes from our CCI fork.
cd external
@@ -523,18 +523,32 @@ stored inside the build directory, as either of:
- file named `coverage.`_extension_, with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
## Sanitizers
To build dependencies and xrpld with sanitizer instrumentation, set the
`SANITIZERS` environment variable (only once before running conan and cmake) and use the `sanitizers` profile in conan:
```bash
export SANITIZERS=address,undefinedbehavior
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dxrpld=ON -Dtests=ON ..
```
See [Sanitizers docs](./docs/build/sanitizers.md) for more details.
## Options
| Option | Default Value | Description |
| ---------- | ------------- | ------------------------------------------------------------------ |
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
| Option | Default Value | Description |
| ---------- | ------------- | -------------------------------------------------------------- |
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer

View File

@@ -8,7 +8,9 @@ if(POLICY CMP0077)
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
if(DEFINED CMAKE_MODULE_PATH)
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(xrpl)
@@ -16,14 +18,16 @@ set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
include(CompilationEnv)
if(is_gcc)
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
elseif(is_clang)
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif(MSVC)
elseif(is_msvc)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif()
@@ -77,6 +81,7 @@ if (packages_only)
return ()
endif ()
include(XrplCompiler)
include(XrplSanitizers)
include(XrplInterface)
option(only_docs "Include only the docs target?" FALSE)

View File

@@ -78,72 +78,61 @@ To report a qualifying bug, please send a detailed report to:
| Email Address | bugs@ripple.com |
| :-----------: | :-------------------------------------------------- |
| Short Key ID | `0xC57929BE` |
| Long Key ID | `0xCD49A0AFC57929BE` |
| Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` |
| Short Key ID | `0xA9F514E0` |
| Long Key ID | `0xD900855AA9F514E0` |
| Fingerprint | `B72C 0654 2F2A E250 2763 A268 D900 855A A9F5 14E0` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
mQINBGkSZAQBEACprU199OhgdsOsygNjiQV4msuN3vDOUooehL+NwfsGfW79Tbqq
Q2u7uQ3NZjW+M2T4nsDwuhkr7pe7xSReR5W8ssaczvtUyxkvbMClilcgZ2OSCAuC
N9tzJsqOqkwBvXoNXkn//T2jnPz0ZU2wSF+NrEibq5FeuyGdoX3yXXBxq9pW9HzK
HkQll63QSl6BzVSGRQq+B6lGgaZGLwf3mzmIND9Z5VGLNK2jKynyz9z091whNG/M
kV+E7/r/bujHk7WIVId07G5/COTXmSr7kFnNEkd2Umw42dkgfiNKvlmJ9M7c1wLK
KbL9Eb4ADuW6rRc5k4s1e6GT8R4/VPliWbCl9SE32hXH8uTkqVIFZP2eyM5WRRHs
aKzitkQG9UK9gcb0kdgUkxOvvgPHAe5IuZlcHFzU4y0dBbU1VEFWVpiLU0q+IuNw
5BRemeHc59YNsngkmAZ+/9zouoShRusZmC8Wzotv75C2qVBcjijPvmjWAUz0Zunm
Lsr+O71vqHE73pERjD07wuD/ISjiYRYYE/bVrXtXLZijC7qAH4RE3nID+2ojcZyO
/2jMQvt7un56RsGH4UBHi3aBHi9bUoDGCXKiQY981cEuNaOxpou7Mh3x/ONzzSvk
sTV6nl1LOZHykN1JyKwaNbTSAiuyoN+7lOBqbV04DNYAHL88PrT21P83aQARAQAB
tB1SaXBwbGUgTGFicyA8YnVnc0ByaXBwbGUuY29tPokCTgQTAQgAOBYhBLcsBlQv
KuJQJ2OiaNkAhVqp9RTgBQJpEmQEAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheA
AAoJENkAhVqp9RTgBzgP/i7y+aDWl1maig1XMdyb+o0UGusumFSW4Hmj278wlKVv
usgLPihYgHE0PKrv6WRyKOMC1tQEcYYN93M+OeQ1vFhS2YyURq6RCMmh4zq/awXG
uZbG36OURB5NH8lGBOHiN/7O+nY0CgenBT2JWm+GW3nEOAVOVm4+r5GlpPlv+Dp1
NPBThcKXFMnH73++NpSQoDzTfRYHPxhDAX3jkLi/moXfSanOLlR6l94XNNN0jBHW
Quao0rzf4WSXq9g6AS224xhAA5JyIcFl8TX7hzj5HaFn3VWo3COoDu4U7H+BM0fl
85yqiMQypp7EhN2gxpMMWaHY5TFM85U/bFXFYfEgihZ4/gt4uoIzsNI9jlX7mYvG
KFdDij+oTlRsuOxdIy60B3dKcwOH9nZZCz0SPsN/zlRWgKzK4gDKdGhFkU9OlvPu
94ZqscanoiWKDoZkF96+sjgfjkuHsDK7Lwc1Xi+T4drHG/3aVpkYabXox+lrKB/S
yxZjeqOIQzWPhnLgCaLyvsKo5hxKzL0w3eURu8F3IS7RgOOlljv4M+Me9sEVcdNV
aN3/tQwbaomSX1X5D5YXqhBwC3rU3wXwamsscRTGEpkV+JCX6KUqGP7nWmxCpAly
FL05XuOd5SVHJjXLeuje0JqLUpN514uL+bThWwDbDTdAdwW3oK/2WbXz7IfJRLBj
uQINBGkSZAQBEADdI3SL2F72qkrgFqXWE6HSRBu9bsAvTE5QrRPWk7ux6at537r4
S4sIw2dOwLvbyIrDgKNq3LQ5wCK88NO/NeCOFm4AiCJSl3pJHXYnTDoUxTrrxx+o
vSRI4I3fHEql/MqzgiAb0YUezjgFdh3vYheMPp/309PFbOLhiFqEcx80Mx5h06UH
gDzu1qNj3Ec+31NLic5zwkrAkvFvD54d6bqYR3SEgMau6aYEewpGHbWBi2pLqSi2
lQcAeOFixqGpTwDmAnYR8YtjBYepy0MojEAdTHcQQlOYSDk4q4elG+io2N8vECfU
rD6ORecN48GXdZINYWTAdslrUeanmBdgQrYkSpce8TSghgT9P01SNaXxmyaehVUO
lqI4pcg5G2oojAE8ncNS3TwDtt7daTaTC3bAdr4PXDVAzNAiewjMNZPB7xidkDGQ
Y4W1LxTMXyJVWxehYOH7tsbBRKninlfRnLgYzmtIbNRAAvNcsxU6ihv3AV0WFknN
YbSzotEv1Xq/5wk309x8zCDe+sP0cQicvbXafXmUzPAZzeqFg+VLFn7F9MP1WGlW
B1u7VIvBF1Mp9Nd3EAGBAoLRdRu+0dVWIjPTQuPIuD9cCatJA0wVaKUrjYbBMl88
a12LixNVGeSFS9N7ADHx0/o7GNT6l88YbaLP6zggUHpUD/bR+cDN7vllIQARAQAB
iQI2BBgBCAAgFiEEtywGVC8q4lAnY6Jo2QCFWqn1FOAFAmkSZAQCGwwACgkQ2QCF
Wqn1FOAfAA/8CYq4p0p4bobY20CKEMsZrkBTFJyPDqzFwMeTjgpzqbD7Y3Qq5QCK
OBbvY02GWdiIsNOzKdBxiuam2xYP9WHZj4y7/uWEvT0qlPVmDFu+HXjoJ43oxwFd
CUp2gMuQ4cSL3X94VRJ3BkVL+tgBm8CNY0vnTLLOO3kum/R69VsGJS1JSGUWjNM+
4qwS3mz+73xJu1HmERyN2RZF/DGIZI2PyONQQ6aH85G1Dd2ohu2/DBAkQAMBrPbj
FrbDaBLyFhODxU3kTWqnfLlaElSm2EGdIU2yx7n4BggEa//NZRMm5kyeo4vzhtlQ
YIVUMLAOLZvnEqDnsLKp+22FzNR/O+htBQC4lPywl53oYSALdhz1IQlcAC1ru5KR
XPzhIXV6IIzkcx9xNkEclZxmsuy5ERXyKEmLbIHAlzFmnrldlt2ZgXDtzaorLmxj
klKibxd5tF50qOpOivz+oPtFo7n+HmFa1nlVAMxlDCUdM0pEVeYDKI5zfVwalyhZ
NnjpakdZSXMwgc7NP/hH9buF35hKDp7EckT2y3JNYwHsDdy1icXN2q40XZw5tSIn
zkPWdu3OUY8PISohN6Pw4h0RH4ZmoX97E8sEfmdKaT58U4Hf2aAv5r9IWCSrAVqY
u5jvac29CzQR9Kal0A+8phHAXHNFD83SwzIC0syaT9ficAguwGH8X6Q=
=nGuD
-----END PGP PUBLIC KEY BLOCK-----
```

View File

@@ -0,0 +1,60 @@
# Shared detection of compiler, operating system, and architecture.
#
# This module centralizes environment detection so that other
# CMake modules can use the same variables instead of repeating
# checks on CMAKE_* and built-in platform variables.
# Only run once per configure step.
include_guard(GLOBAL)
# --------------------------------------------------------------------
# Compiler detection (C++)
# --------------------------------------------------------------------
set(is_clang FALSE)
set(is_gcc FALSE)
set(is_msvc FALSE)
set(is_xcode FALSE)
if(CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
set(is_clang TRUE)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(is_gcc TRUE)
elseif(MSVC)
set(is_msvc TRUE)
else()
message(FATAL_ERROR "Unsupported C++ compiler: ${CMAKE_CXX_COMPILER_ID}")
endif()
# Xcode generator detection
if(CMAKE_GENERATOR STREQUAL "Xcode")
set(is_xcode TRUE)
endif()
# --------------------------------------------------------------------
# Operating system detection
# --------------------------------------------------------------------
set(is_linux FALSE)
set(is_windows FALSE)
set(is_macos FALSE)
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(is_linux TRUE)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(is_windows TRUE)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
set(is_macos TRUE)
endif()
# --------------------------------------------------------------------
# Architecture
# --------------------------------------------------------------------
set(is_amd64 FALSE)
set(is_arm64 FALSE)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
set(is_amd64 TRUE)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64")
set(is_arm64 TRUE)
else()
message(FATAL_ERROR "Unknown architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()

View File

@@ -2,16 +2,23 @@
setup project-wide compiler settings
#]===================================================================]
include(CompilationEnv)
#[=========================================================[
TODO some/most of these common settings belong in a
toolchain file, especially the ABI-impacting ones
#]=========================================================]
add_library (common INTERFACE)
add_library (Xrpl::common ALIAS common)
include(XrplSanitizers)
# add a single global dependency on this interface lib
link_libraries (Xrpl::common)
# Respect CMAKE_POSITION_INDEPENDENT_CODE setting (may be set by Conan toolchain)
if(NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif()
set_target_properties (common
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ON)
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE})
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions (common
INTERFACE
@@ -116,8 +123,8 @@ else ()
# link to static libc/c++ iff:
# * static option set and
# * NOT APPLE (AppleClang does not support static libc/c++) and
# * NOT san (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${san}>>>:
# * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
-static-libstdc++
-static-libgcc
>)

View File

@@ -32,14 +32,14 @@ target_protobuf_sources(xrpl.libpb xrpl/proto
target_compile_options(xrpl.libpb
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${XCODE}>:
$<$<BOOL:${is_msvc}>:-wd4996>
$<$<BOOL:${is_xcode}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
$<$<BOOL:${is_msvc}>:-wd4065>
$<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>
)
target_link_libraries(xrpl.libpb

View File

@@ -4,6 +4,12 @@
include(create_symbolic_link)
# If no suffix is defined for executables (e.g. Windows uses .exe but Linux
# and macOS use none), then explicitly set it to the empty string.
if(NOT DEFINED suffix)
set(suffix "")
endif()
install (
TARGETS
common

View File

@@ -2,6 +2,13 @@
xrpld compile options/settings via an interface library
#]===================================================================]
include(CompilationEnv)
# Set defaults for optional variables to avoid uninitialized variable warnings
if(NOT DEFINED voidstar)
set(voidstar OFF)
endif()
add_library (opts INTERFACE)
add_library (Xrpl::opts ALIAS opts)
target_compile_definitions (opts
@@ -42,22 +49,6 @@ if(jemalloc)
target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif ()
if (san)
target_compile_options (opts
INTERFACE
# sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1>
${SAN_FLAG}
-fno-omit-frame-pointer)
target_compile_definitions (opts
INTERFACE
$<$<STREQUAL:${san},address>:SANITIZER=ASAN>
$<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN>
$<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>)
target_link_libraries (opts INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()
#[===================================================================[
xrpld transitive library deps via an interface library
#]===================================================================]
@@ -66,7 +57,7 @@ add_library (xrpl_syslibs INTERFACE)
add_library (Xrpl::syslibs ALIAS xrpl_syslibs)
target_link_libraries (xrpl_syslibs
INTERFACE
$<$<BOOL:${MSVC}>:
$<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
@@ -83,10 +74,10 @@ target_link_libraries (xrpl_syslibs
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${MSVC}>>:dl>
$<$<NOT:$<OR:$<BOOL:${MSVC}>,$<BOOL:${APPLE}>>>:rt>)
$<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>)
if (NOT MSVC)
if (NOT is_msvc)
set (THREADS_PREFER_PTHREAD_FLAG ON)
find_package (Threads)
target_link_libraries (xrpl_syslibs INTERFACE Threads::Threads)

201
cmake/XrplSanitizers.cmake Normal file
View File

@@ -0,0 +1,201 @@
#[===================================================================[
Configure sanitizers based on environment variables.
This module reads the following environment variable:
- SANITIZERS: The sanitizers to enable. Possible values:
- "address"
- "address,undefinedbehavior"
- "thread"
- "thread,undefinedbehavior"
- "undefinedbehavior"
The compiler type and platform are detected in CompilationEnv.cmake.
The sanitizer compile options are applied to the 'common' interface library
which is linked to all targets in the project.
Internal flag variables set by this module:
- SANITIZER_TYPES: List of sanitizer types to enable (e.g., "address",
"thread", "undefined"), plus the additional undefined-behavior checks
"float-divide-by-zero" and, on Clang, "unsigned-integer-overflow".
This list is joined with commas and passed to -fsanitize=<list>.
- SANITIZERS_COMPILE_FLAGS: Compiler flags for sanitizer instrumentation.
Includes:
* -fno-omit-frame-pointer: Preserves frame pointers for stack traces
* -O1: Minimum optimization for reasonable performance
* -fsanitize=<types>: Enables sanitizer instrumentation
* -fsanitize-ignorelist=<path>: (Clang only) Compile-time ignorelist
* -mcmodel=large/medium: (GCC only) Code model for large binaries
* -Wno-stringop-overflow: (GCC only) Suppresses false positive warnings
* -Wno-tsan: (GCC with TSAN only) Suppresses atomic_thread_fence warnings
- SANITIZERS_LINK_FLAGS: Linker flags for sanitizer runtime libraries.
Includes:
* -fsanitize=<types>: Links sanitizer runtime libraries
* -mcmodel=large/medium: (GCC only) Matches compile-time code model
- SANITIZERS_RELOCATION_FLAGS: (GCC only) Code model flags for linking.
Used to handle large instrumented binaries on x86_64:
* -mcmodel=large: For AddressSanitizer (prevents relocation errors)
* -mcmodel=medium: For ThreadSanitizer (large model is incompatible)
#]===================================================================]
include(CompilationEnv)
# Read environment variable
set(SANITIZERS "")
if(DEFINED ENV{SANITIZERS})
set(SANITIZERS "$ENV{SANITIZERS}")
endif()
# Set SANITIZERS_ENABLED flag for use in other modules
if(SANITIZERS MATCHES "address|thread|undefinedbehavior")
set(SANITIZERS_ENABLED TRUE)
else()
set(SANITIZERS_ENABLED FALSE)
return()
endif()
# Sanitizers are not supported on Windows/MSVC
if(is_msvc)
message(FATAL_ERROR "Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable.")
endif()
message(STATUS "Configuring sanitizers: ${SANITIZERS}")
# Parse SANITIZERS value to determine which sanitizers to enable
set(enable_asan FALSE)
set(enable_tsan FALSE)
set(enable_ubsan FALSE)
# Normalize SANITIZERS into a list
set(san_list "${SANITIZERS}")
string(REPLACE "," ";" san_list "${san_list}")
separate_arguments(san_list)
foreach(san IN LISTS san_list)
if(san STREQUAL "address")
set(enable_asan TRUE)
elseif(san STREQUAL "thread")
set(enable_tsan TRUE)
elseif(san STREQUAL "undefinedbehavior")
set(enable_ubsan TRUE)
else()
message(FATAL_ERROR "Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations.")
endif()
endforeach()
# Validate sanitizer compatibility
if(enable_asan AND enable_tsan)
message(FATAL_ERROR "AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'.")
endif()
# Frame pointer is required for meaningful stack traces. Sanitizers recommend a minimum of -O1 for reasonable performance
set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
# Build the sanitizer flags list
set(SANITIZER_TYPES)
if(enable_asan)
list(APPEND SANITIZER_TYPES "address")
elseif(enable_tsan)
list(APPEND SANITIZER_TYPES "thread")
endif()
if(enable_ubsan)
# UB sanitizer flags
list(APPEND SANITIZER_TYPES "undefined" "float-divide-by-zero")
if(is_clang)
# Clang supports additional UB checks. More info here https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
list(APPEND SANITIZER_TYPES "unsigned-integer-overflow")
endif()
endif()
# Configure code model for GCC on amd64
# Use large code model for ASAN to avoid relocation errors
# Use medium code model for TSAN (large is not compatible with TSAN)
set(SANITIZERS_RELOCATION_FLAGS)
# Compiler-specific configuration
if(is_gcc)
# Disable mold, gold and lld linkers for GCC with sanitizers
# Use default linker (bfd/ld) which is more lenient with mixed code models
# This is needed because the size of the instrumented binary exceeds the limits imposed by the mold, lld, and gold linkers
set(use_mold OFF CACHE BOOL "Use mold linker" FORCE)
set(use_gold OFF CACHE BOOL "Use gold linker" FORCE)
set(use_lld OFF CACHE BOOL "Use lld linker" FORCE)
message(STATUS " Disabled mold, gold, and lld linkers for GCC with sanitizers")
# Suppress false positive warnings in GCC with stringop-overflow
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-stringop-overflow")
if(is_amd64 AND enable_asan)
message(STATUS " Using large code model (-mcmodel=large)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=large")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=large")
elseif(enable_tsan)
# GCC doesn't support atomic_thread_fence with tsan. Suppress warnings.
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-tsan")
message(STATUS " Using medium code model (-mcmodel=medium)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=medium")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=medium")
endif()
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "${SANITIZERS_RELOCATION_FLAGS}" "-fsanitize=${SANITIZER_TYPES_STR}")
elseif(is_clang)
# Add ignorelist for Clang (GCC doesn't support this)
# Use CMAKE_SOURCE_DIR to get the path to the ignorelist
set(IGNORELIST_PATH "${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt")
if(NOT EXISTS "${IGNORELIST_PATH}")
message(FATAL_ERROR "Sanitizer ignorelist not found: ${IGNORELIST_PATH}")
endif()
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize-ignorelist=${IGNORELIST_PATH}")
message(STATUS " Using sanitizer ignorelist: ${IGNORELIST_PATH}")
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
endif()
message(STATUS " Compile flags: ${SANITIZERS_COMPILE_FLAGS}")
message(STATUS " Link flags: ${SANITIZERS_LINK_FLAGS}")
# Apply the sanitizer flags to the 'common' interface library
# This is the same library used by XrplCompiler.cmake
target_compile_options(common INTERFACE
$<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>
)
# Apply linker flags
target_link_options(common INTERFACE ${SANITIZERS_LINK_FLAGS})
# Define SANITIZERS macro for BuildInfo.cpp
set(sanitizers_list)
if(enable_asan)
list(APPEND sanitizers_list "ASAN")
endif()
if(enable_tsan)
list(APPEND sanitizers_list "TSAN")
endif()
if(enable_ubsan)
list(APPEND sanitizers_list "UBSAN")
endif()
if(sanitizers_list)
list(JOIN sanitizers_list "." sanitizers_str)
target_compile_definitions(common INTERFACE SANITIZERS=${sanitizers_str})
endif()
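
For reference, the module above is driven entirely by the `SANITIZERS` environment variable. Below is a minimal configure sketch using only the values listed in its header comment; the build directory and generator flags are placeholders, not something the module prescribes.

```bash
# Sketch only: export one of the supported combinations before configuring, so that
# CompilationEnv.cmake and XrplSanitizers.cmake can read it during the configure step.
# Supported values: address, thread, undefinedbehavior,
#                   address,undefinedbehavior, thread,undefinedbehavior
export SANITIZERS=address,undefinedbehavior

# Placeholder invocation; any build directory or generator works as long as the
# variable is present in the environment of the cmake process.
cmake -S . -B .build -DCMAKE_BUILD_TYPE=Debug
```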

View File

@@ -2,6 +2,8 @@
sanity checks
#]===================================================================]
include(CompilationEnv)
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
@@ -16,14 +18,12 @@ if (NOT is_multiconfig)
endif ()
endif ()
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
set (is_clang TRUE)
if (is_clang) # both Clang and AppleClang
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
message (FATAL_ERROR "This project requires clang 16 or later")
endif ()
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set (is_gcc TRUE)
elseif (is_gcc)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message (FATAL_ERROR "This project requires GCC 12 or later")
endif ()
@@ -40,11 +40,6 @@ if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message (FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif ()
if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
message (FATAL_ERROR "Xrpld requires a 64 bit target architecture.\n"
"The most likely cause of this warning is trying to build xrpld with a 32-bit OS.")
endif ()
if (APPLE AND NOT HOMEBREW)
find_program (HOMEBREW brew)
endif ()

View File

@@ -2,16 +2,13 @@
declare options and variables
#]===================================================================]
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
set (is_linux TRUE)
else()
set(is_linux FALSE)
endif()
include(CompilationEnv)
if("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set(is_ci TRUE)
else()
set(is_ci FALSE)
set(is_ci FALSE)
if(DEFINED ENV{CI})
if("$ENV{CI}" STREQUAL "true")
set(is_ci TRUE)
endif()
endif()
get_directory_property(has_parent PARENT_DIRECTORY)
@@ -62,7 +59,7 @@ else()
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif()
if(is_linux)
if(is_linux AND NOT SANITIZER)
option(BUILD_SHARED_LIBS "build shared xrpl libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF)
@@ -107,33 +104,6 @@ option(local_protobuf
option(local_grpc
"Force a local build of gRPC instead of looking for an installed version." OFF)
# this one is a string and therefore can't be an option
set(san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property(CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if(san)
string(TOLOWER ${san} san)
set(SAN_FLAG "-fsanitize=${san}")
set(SAN_LIB "")
if(is_gcc)
if(san STREQUAL "address")
set(SAN_LIB "asan")
elseif(san STREQUAL "thread")
set(SAN_LIB "tsan")
elseif(san STREQUAL "memory")
set(SAN_LIB "msan")
elseif(san STREQUAL "undefined")
set(SAN_LIB "ubsan")
endif()
endif()
set(_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set(CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag(${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set(CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if(NOT COMPILER_SUPPORTS_SAN)
message(FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif()
endif()
# the remaining options are obscure and rarely used
option(beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"

View File

@@ -1,3 +1,6 @@
include(CompilationEnv)
include(XrplSanitizers)
find_package(Boost REQUIRED
COMPONENTS
chrono
@@ -27,12 +30,11 @@ target_link_libraries(xrpl_boost
Boost::process
Boost::program_options
Boost::regex
Boost::system
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)
endif()
if(san AND is_clang)
if(SANITIZERS_ENABLED AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else
# for gcc ?
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)

View File

@@ -11,19 +11,19 @@
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"openssl/3.5.4#1b986e61b38fdfda3b40bebc1b234393%1768312656.257",
"nudb/2.0.9#fb8dfd1a5557f5e0528114c2da17721e%1765850143.957",
"nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
"jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
"gtest/1.17.0#5224b3b3ff3b4ce1133cbdd27d53ee7d%1768312129.152",
"grpc/1.72.0#f244a57bff01e708c55a1100b12e1589%1765850193.734",
"ed25519/2015.03#ae761bdc52730a843f0809bdf6c1b1f6%1765850143.772",
"doctest/2.4.12#eb9fb352fb2fdfc8abb17ec270945165%1765850143.95",
"date/3.0.4#862e11e80030356b53c2c38599ceb32b%1765850143.772",
"c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1765850144.336",
"bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1765850143.837",
"boost/1.88.0#8852c0b72ce8271fb8ff7c53456d4983%1765850172.862",
"boost/1.90.0#d5e8defe7355494953be18524a7f135b%1765955095.179",
"abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
],
"build_requires": [
@@ -42,18 +42,22 @@
],
"python_requires": [],
"overrides": {
"boost/1.90.0#d5e8defe7355494953be18524a7f135b": [
null,
"boost/1.90.0"
],
"protobuf/5.27.0": [
"protobuf/6.32.1"
],
"lz4/1.9.4": [
"lz4/1.10.0"
],
"boost/1.83.0": [
"boost/1.88.0"
],
"sqlite3/3.44.2": [
"sqlite3/3.49.1"
],
"boost/1.83.0": [
"boost/1.90.0"
],
"lz4/[>=1.9.4 <2]": [
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504"
]

1
conan/profiles/ci Normal file
View File

@@ -0,0 +1 @@
include(sanitizers)

59
conan/profiles/sanitizers Normal file
View File

@@ -0,0 +1,59 @@
include(default)
{% set compiler, version, compiler_exe = detect_api.detect_default_compiler() %}
{% set sanitizers = os.getenv("SANITIZERS") %}
[conf]
{% if sanitizers %}
{% if compiler == "gcc" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set model_code = "" %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1", "-Wno-stringop-overflow"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% set model_code = "-mcmodel=large" %}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% set model_code = "-mcmodel=medium" %}
{% set _ = extra_cxxflags.append("-Wno-tsan") %}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
{% set _ = sanitizer_list.append("undefined") %}
{% set _ = sanitizer_list.append("float-divide-by-zero") %}
{% endif %}
{% set sanitizer_flags = "-fsanitize=" ~ ",".join(sanitizer_list) ~ " " ~ model_code %}
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
{% endif %}
{% elif compiler == "apple-clang" or compiler == "clang" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
{% set _ = sanitizer_list.append("undefined") %}
{% set _ = sanitizer_list.append("float-divide-by-zero") %}
{% set _ = sanitizer_list.append("unsigned-integer-overflow") %}
{% endif %}
{% set sanitizer_flags = "-fsanitize=" ~ ",".join(sanitizer_list) %}
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
{% endif %}
{% endif %}
{% endif %}
tools.info.package_id:confs+=["tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags"]

View File

@@ -39,7 +39,7 @@ class Xrpl(ConanFile):
]
test_requires = [
"doctest/2.4.12",
"gtest/1.17.0",
]
tool_requires = [
@@ -131,7 +131,7 @@ class Xrpl(ConanFile):
transitive_headers_opt = (
{"transitive_headers": True} if conan_version.split(".")[0] == "2" else {}
)
self.requires("boost/1.88.0", force=True, **transitive_headers_opt)
self.requires("boost/1.90.0", force=True, **transitive_headers_opt)
self.requires("date/3.0.4", **transitive_headers_opt)
self.requires("lz4/1.10.0", force=True)
self.requires("protobuf/6.32.1", force=True)
@@ -203,7 +203,6 @@ class Xrpl(ConanFile):
"boost::program_options",
"boost::process",
"boost::regex",
"boost::system",
"boost::thread",
"date::date",
"ed25519::ed25519",

207
docs/build/sanitizers.md vendored Normal file
View File

@@ -0,0 +1,207 @@
# Sanitizer Configuration for Rippled
This document explains how to configure and run sanitizers (AddressSanitizer, UndefinedBehaviorSanitizer, ThreadSanitizer) with the xrpld project.
Corresponding suppression files are located in the `sanitizers/suppressions` directory.
- [Sanitizer Configuration for Rippled](#sanitizer-configuration-for-rippled)
- [Building with Sanitizers](#building-with-sanitizers)
- [Summary](#summary)
- [Build steps:](#build-steps)
- [Install dependencies](#install-dependencies)
- [Call CMake](#call-cmake)
- [Build](#build)
- [Running Tests with Sanitizers](#running-tests-with-sanitizers)
- [AddressSanitizer (ASAN)](#addresssanitizer-asan)
- [ThreadSanitizer (TSan)](#threadsanitizer-tsan)
- [LeakSanitizer (LSan)](#leaksanitizer-lsan)
- [UndefinedBehaviorSanitizer (UBSan)](#undefinedbehaviorsanitizer-ubsan)
- [Suppression Files](#suppression-files)
- [`asan.supp`](#asansupp)
- [`lsan.supp`](#lsansupp)
- [`ubsan.supp`](#ubsansupp)
- [`tsan.supp`](#tsansupp)
- [`sanitizer-ignorelist.txt`](#sanitizer-ignorelisttxt)
- [Troubleshooting](#troubleshooting)
- ["ASAN is ignoring requested \_\_asan_handle_no_return" warnings](#asan-is-ignoring-requested-__asan_handle_no_return-warnings)
- [Sanitizer Mismatch Errors](#sanitizer-mismatch-errors)
- [References](#references)
## Building with Sanitizers
### Summary
Follow the instructions in [BUILD.md](../../BUILD.md), with the following changes:
1. Make sure you have a clean build directory.
2. Set the `SANITIZERS` environment variable before calling `conan install` and `cmake`. Set it once so that both Conan and CMake read the same value.
Example: `export SANITIZERS=address,undefinedbehavior`
3. Optionally use `--profile:all sanitizers` with Conan to build dependencies with sanitizer instrumentation. Note that building with sanitizer-instrumented dependencies is slower but produces fewer false positives.
4. Set `ASAN_OPTIONS`, `LSAN_OPTIONS`, `UBSAN_OPTIONS` and `TSAN_OPTIONS` environment variables to configure sanitizer behavior when running executables. [More details below](#running-tests-with-sanitizers).
---
### Build steps:
```bash
cd /path/to/rippled
rm -rf .build
mkdir .build
cd .build
```
#### Install dependencies
The `SANITIZERS` environment variable is used by both Conan and CMake.
```bash
export SANITIZERS=address,undefinedbehavior
# Standard build (without instrumenting dependencies)
conan install .. --output-folder . --build missing --settings build_type=Debug
# Or with sanitizer-instrumented dependencies (takes longer but fewer false positives)
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
```
> [!CAUTION]
> Do not mix AddressSanitizer and ThreadSanitizer; they are incompatible.
Since you already set the `SANITIZERS` environment variable when running Conan, the same value will be read by CMake in the next step.
#### Call CMake
```bash
cmake .. -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Debug \
-Dtests=ON -Dxrpld=ON
```
#### Build
```bash
cmake --build . --parallel 4
```
## Running Tests with Sanitizers
### AddressSanitizer (ASAN)
**IMPORTANT**: ASAN with Boost produces many false positives. Use these options:
```bash
export ASAN_OPTIONS="print_stacktrace=1:detect_container_overflow=0:suppressions=path/to/asan.supp:halt_on_error=0:log_path=asan.log"
export LSAN_OPTIONS="suppressions=path/to/lsan.supp:halt_on_error=0:log_path=lsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
**Why `detect_container_overflow=0`?**
- Boost intrusive containers (used in `aged_unordered_container`) trigger false positives
- Boost context switching (used in `Workers.cpp`) confuses ASAN's stack tracking
- Because we usually do not build Boost with ASAN (we do not want to instrument Boost or report issues in Boost code) yet use Boost containers from ASAN-instrumented rippled code, container-overflow checks produce false positives.
- Building dependencies with ASAN instrumentation reduces false positives, but instrumenting dependencies such as Boost is slow (both to compile and to run tests) and usually unnecessary.
- See: https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow
- More such flags are detailed [here](https://github.com/google/sanitizers/wiki/AddressSanitizerFlags)
### ThreadSanitizer (TSan)
```bash
export TSAN_OPTIONS="suppressions=path/to/tsan.supp halt_on_error=0 log_path=tsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
More details [here](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual).
### LeakSanitizer (LSan)
LSan is automatically enabled with ASAN. To disable it:
```bash
export ASAN_OPTIONS="detect_leaks=0"
```
More details [here](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer).
### UndefinedBehaviorSanitizer (UBSan)
```bash
export UBSAN_OPTIONS="suppressions=path/to/ubsan.supp:print_stacktrace=1:halt_on_error=0:log_path=ubsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
More details [here](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html).
## Suppression Files
> [!NOTE]
> The linked suppression files contain more details; a format sketch follows at the end of this section.
### [`asan.supp`](../../sanitizers/suppressions/asan.supp)
- **Purpose**: Suppress AddressSanitizer (ASAN) errors only
- **Format**: `<suppression_type>:<pattern>`, where the pattern matches function or library names. Supported suppression types are:
- interceptor_name
- interceptor_via_fun
- interceptor_via_lib
- odr_violation
- **More info**: [AddressSanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer)
- **Note**: Cannot suppress stack-buffer-overflow, container-overflow, etc.
### [`lsan.supp`](../../sanitizers/suppressions/lsan.supp)
- **Purpose**: Suppress LeakSanitizer (LSan) errors only
- **Format**: `leak:<pattern>` where pattern matches function/file names
- **More info**: [LeakSanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer)
### [`ubsan.supp`](../../sanitizers/suppressions/ubsan.supp)
- **Purpose**: Suppress UndefinedBehaviorSanitizer errors
- **Format**: `<error_type>:<pattern>` (e.g., `unsigned-integer-overflow:protobuf`)
- **Covers**: Intentional overflows in third-party libraries (protobuf, gRPC, the standard library)
- **More info**: [UBSan suppressions](https://clang.llvm.org/docs/SanitizerSpecialCaseList.html)
### [`tsan.supp`](../../sanitizers/suppressions/tsan.supp)
- **Purpose**: Suppress ThreadSanitizer data race warnings
- **Format**: `race:<pattern>` where pattern matches function/file names
- **More info**: [ThreadSanitizer suppressions](https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions)
### [`sanitizer-ignorelist.txt`](../../sanitizers/suppressions/sanitizer-ignorelist.txt)
- **Purpose**: Compile-time ignorelist for all sanitizers
- **Usage**: Passed via `-fsanitize-ignorelist=absolute/path/to/sanitizer-ignorelist.txt`
- **Format**: `<level>:<pattern>` (e.g., `src:Workers.cpp`)
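To make the entry formats above concrete, here is a hedged sketch that writes a throwaway file with one illustrative entry per format. The symbol names are hypothetical and are not suppressions the project actually ships; those live in the linked files.

```bash
# Illustration only: each pair of lines shows the entry format used by one of the files above.
# third_party_init and SomeNamespace::worker are hypothetical names.
cat > /tmp/suppression-format-examples.txt <<'EOF'
# asan.supp style: <suppression_type>:<pattern>
interceptor_via_fun:third_party_init
# lsan.supp style: leak:<pattern>
leak:third_party_init
# ubsan.supp style: <error_type>:<pattern>
unsigned-integer-overflow:protobuf
# tsan.supp style: race:<pattern>
race:SomeNamespace::worker
# sanitizer-ignorelist.txt style (compile time): <level>:<pattern>
src:Workers.cpp
EOF
```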
## Troubleshooting
### "ASAN is ignoring requested \_\_asan_handle_no_return" warnings
These warnings appear when Boost context switching is in use and are harmless; they mark places where ASAN may report false positives.
### Sanitizer Mismatch Errors
If you see undefined symbols like `___tsan_atomic_load` when building with ASAN:
**Problem**: Dependencies were built with a different sanitizer than the main project.
**Solution**: Rebuild everything with the same sanitizer:
```bash
rm -rf .build
# Then follow the build instructions above
```
Then review the log files: `asan.log.*`, `ubsan.log.*`, `tsan.log.*`
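Because the `log_path` options above make each process write its own file (the sanitizer runtime appends the PID, e.g. `asan.log.12345`), an optional quick way to find runs that reported anything is to grep for the `SUMMARY:` line that reports typically contain:

```bash
# Optional helper: list only the log files that contain at least one sanitizer report.
# Relies on reports typically including a "SUMMARY:" line; adjust the pattern if needed.
grep -l "SUMMARY:" asan.log.* lsan.log.* ubsan.log.* tsan.log.* 2>/dev/null
```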
## References
- [AddressSanitizer Wiki](https://github.com/google/sanitizers/wiki/AddressSanitizer)
- [AddressSanitizer Flags](https://github.com/google/sanitizers/wiki/AddressSanitizerFlags)
- [Container Overflow Detection](https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow)
- [UndefinedBehavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html)
- [ThreadSanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)

View File

@@ -13,9 +13,7 @@ namespace xrpl {
@throws runtime_error
*/
void
extractTarLz4(
boost::filesystem::path const& src,
boost::filesystem::path const& dst);
extractTarLz4(boost::filesystem::path const& src, boost::filesystem::path const& dst);
} // namespace xrpl

View File

@@ -14,8 +14,7 @@
namespace xrpl {
using IniFileSections =
std::unordered_map<std::string, std::vector<std::string>>;
using IniFileSections = std::unordered_map<std::string, std::vector<std::string>>;
//------------------------------------------------------------------------------
@@ -86,8 +85,7 @@ public:
if (lines_.empty())
return "";
if (lines_.size() > 1)
Throw<std::runtime_error>(
"A legacy value must have exactly one line. Section: " + name_);
Throw<std::runtime_error>("A legacy value must have exactly one line. Section: " + name_);
return lines_[0];
}
@@ -233,10 +231,7 @@ public:
The previous value, if any, is overwritten.
*/
void
overwrite(
std::string const& section,
std::string const& key,
std::string const& value);
overwrite(std::string const& section, std::string const& key, std::string const& value);
/** Remove all the key/value pairs from the section.
*/
@@ -274,9 +269,7 @@ public:
bool
had_trailing_comments() const
{
return std::any_of(map_.cbegin(), map_.cend(), [](auto s) {
return s.second.had_trailing_comments();
});
return std::any_of(map_.cbegin(), map_.cend(), [](auto s) { return s.second.had_trailing_comments(); });
}
protected:
@@ -315,10 +308,7 @@ set(T& target, std::string const& name, Section const& section)
*/
template <class T>
bool
set(T& target,
T const& defaultValue,
std::string const& name,
Section const& section)
set(T& target, T const& defaultValue, std::string const& name, Section const& section)
{
bool found_and_valid = set<T>(target, name, section);
if (!found_and_valid)
@@ -333,9 +323,7 @@ set(T& target,
// NOTE This routine might be more clumsy than the previous two
template <class T = std::string>
T
get(Section const& section,
std::string const& name,
T const& defaultValue = T{})
get(Section const& section, std::string const& name, T const& defaultValue = T{})
{
try
{

View File

@@ -25,8 +25,7 @@ public:
Buffer() = default;
/** Create an uninitialized buffer with the given size. */
explicit Buffer(std::size_t size)
: p_(size ? new std::uint8_t[size] : nullptr), size_(size)
explicit Buffer(std::size_t size) : p_(size ? new std::uint8_t[size] : nullptr), size_(size)
{
}
@@ -62,8 +61,7 @@ public:
/** Move-construct.
The other buffer is reset.
*/
Buffer(Buffer&& other) noexcept
: p_(std::move(other.p_)), size_(other.size_)
Buffer(Buffer&& other) noexcept : p_(std::move(other.p_)), size_(other.size_)
{
other.size_ = 0;
}
@@ -94,8 +92,7 @@ public:
{
// Ensure the slice isn't a subset of the buffer.
XRPL_ASSERT(
s.size() == 0 || size_ == 0 || s.data() < p_.get() ||
s.data() >= p_.get() + size_,
s.size() == 0 || size_ == 0 || s.data() < p_.get() || s.data() >= p_.get() + size_,
"xrpl::Buffer::operator=(Slice) : input not a subset");
if (auto p = alloc(s.size()))

View File

@@ -0,0 +1,133 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2024 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_CANPROCESS_H_INCLUDED
#define RIPPLE_BASICS_CANPROCESS_H_INCLUDED
#include <functional>
#include <mutex>
#include <set>
/** RAII class to check if an Item is already being processed on another thread,
* as indicated by its presence in a Collection.
*
* If the Item is not in the Collection, it will be added under lock in the
* ctor, and removed under lock in the dtor. The object will be considered
* "usable" and evaluate to `true`.
*
* If the Item is in the Collection, no changes will be made to the collection,
* and the CanProcess object will be considered "unusable".
*
* It's up to the caller to decide what "usable" and "unusable" mean. (e.g.
* Process or skip a block of code, or set a flag.)
*
* The current use is to avoid lock contention that would be involved in
* processing something associated with the Item.
*
* Examples:
*
* void IncomingLedgers::acquireAsync(LedgerHash const& hash, ...)
* {
* if (CanProcess check{acquiresMutex_, pendingAcquires_, hash})
* {
* acquire(hash, ...);
* }
* }
*
* bool
* NetworkOPsImp::recvValidation(
* std::shared_ptr<STValidation> const& val,
* std::string const& source)
* {
* CanProcess check(
* validationsMutex_, pendingValidations_, val->getLedgerHash());
* BypassAccept bypassAccept =
* check ? BypassAccept::no : BypassAccept::yes;
* handleNewValidation(app_, val, source, bypassAccept, m_journal);
* }
*
*/
class CanProcess
{
public:
template <class Mutex, class Collection, class Item>
CanProcess(Mutex& mtx, Collection& collection, Item const& item) : cleanup_(insert(mtx, collection, item))
{
}
~CanProcess()
{
if (cleanup_)
cleanup_();
}
explicit
operator bool() const
{
return static_cast<bool>(cleanup_);
}
private:
template <bool useIterator, class Mutex, class Collection, class Item>
std::function<void()>
doInsert(Mutex& mtx, Collection& collection, Item const& item)
{
std::unique_lock<Mutex> lock(mtx);
// TODO: Use structured binding once LLVM 16 is the minimum supported
// version. See also: https://github.com/llvm/llvm-project/issues/48582
// https://github.com/llvm/llvm-project/commit/127bf44385424891eb04cff8e52d3f157fc2cb7c
auto const insertResult = collection.insert(item);
auto const it = insertResult.first;
if (!insertResult.second)
return {};
if constexpr (useIterator)
return [&, it]() {
std::unique_lock<Mutex> lock(mtx);
collection.erase(it);
};
else
return [&]() {
std::unique_lock<Mutex> lock(mtx);
collection.erase(item);
};
}
// Generic insert() function doesn't use iterators because they may get
// invalidated
template <class Mutex, class Collection, class Item>
std::function<void()>
insert(Mutex& mtx, Collection& collection, Item const& item)
{
return doInsert<false>(mtx, collection, item);
}
// Specialize insert() for std::set, which does not invalidate iterators for
// insert and erase
template <class Mutex, class Item>
std::function<void()>
insert(Mutex& mtx, std::set<Item>& collection, Item const& item)
{
return doInsert<true>(mtx, collection, item);
}
// If set, then the item is "usable"
std::function<void()> cleanup_;
};
#endif

View File

@@ -36,10 +36,7 @@ lz4Compress(void const* in, std::size_t inSize, BufferFactory&& bf)
auto compressed = bf(outCapacity);
auto compressedSize = LZ4_compress_default(
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(compressed),
inSize,
outCapacity);
reinterpret_cast<char const*>(in), reinterpret_cast<char*>(compressed), inSize, outCapacity);
if (compressedSize == 0)
Throw<std::runtime_error>("lz4 compress: failed");
@@ -70,10 +67,8 @@ lz4Decompress(
Throw<std::runtime_error>("lz4Decompress: integer overflow (output)");
if (LZ4_decompress_safe(
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(decompressed),
inSize,
decompressedSize) != decompressedSize)
reinterpret_cast<char const*>(in), reinterpret_cast<char*>(decompressed), inSize, decompressedSize) !=
decompressedSize)
Throw<std::runtime_error>("lz4Decompress: failed");
return decompressedSize;
@@ -89,11 +84,7 @@ lz4Decompress(
*/
template <typename InputStream>
std::size_t
lz4Decompress(
InputStream& in,
std::size_t inSize,
std::uint8_t* decompressed,
std::size_t decompressedSize)
lz4Decompress(InputStream& in, std::size_t inSize, std::uint8_t* decompressed, std::size_t decompressedSize)
{
std::vector<std::uint8_t> compressed;
std::uint8_t const* chunk = nullptr;
@@ -116,9 +107,7 @@ lz4Decompress(
compressed.resize(inSize);
}
chunkSize = chunkSize < (inSize - copiedInSize)
? chunkSize
: (inSize - copiedInSize);
chunkSize = chunkSize < (inSize - copiedInSize) ? chunkSize : (inSize - copiedInSize);
std::copy(chunk, chunk + chunkSize, compressed.data() + copiedInSize);
@@ -135,8 +124,7 @@ lz4Decompress(
if (in.ByteCount() > (currentBytes + copiedInSize))
in.BackUp(in.ByteCount() - currentBytes - copiedInSize);
if ((copiedInSize == 0 && chunkSize < inSize) ||
(copiedInSize > 0 && copiedInSize != inSize))
if ((copiedInSize == 0 && chunkSize < inSize) || (copiedInSize > 0 && copiedInSize != inSize))
Throw<std::runtime_error>("lz4 decompress: insufficient input size");
return lz4Decompress(chunk, inSize, decompressed, decompressedSize);

View File

@@ -56,9 +56,7 @@ private:
if (m_value != value_type())
{
std::size_t elapsed =
std::chrono::duration_cast<std::chrono::seconds>(now - m_when)
.count();
std::size_t elapsed = std::chrono::duration_cast<std::chrono::seconds>(now - m_when).count();
// A span larger than four times the window decays the
// value to an insignificant amount so just reset it.

View File

@@ -108,23 +108,20 @@ Unexpected(E (&)[N]) -> Unexpected<E const*>;
// Definition of Expected. All of the machinery comes from boost::result.
template <class T, class E>
class [[nodiscard]] Expected
: private boost::outcome_v2::result<T, E, detail::throw_policy>
class [[nodiscard]] Expected : private boost::outcome_v2::result<T, E, detail::throw_policy>
{
using Base = boost::outcome_v2::result<T, E, detail::throw_policy>;
public:
template <typename U>
requires std::convertible_to<U, T>
constexpr Expected(U&& r)
: Base(boost::outcome_v2::in_place_type_t<T>{}, std::forward<U>(r))
constexpr Expected(U&& r) : Base(boost::outcome_v2::in_place_type_t<T>{}, std::forward<U>(r))
{
}
template <typename U>
requires std::convertible_to<U, E> && (!std::is_reference_v<U>)
constexpr Expected(Unexpected<U> e)
: Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
constexpr Expected(Unexpected<U> e) : Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
{
}
@@ -195,8 +192,7 @@ public:
// Specialization of Expected<void, E>. Allows returning either success
// (without a value) or the reason for the failure.
template <class E>
class [[nodiscard]] Expected<void, E>
: private boost::outcome_v2::result<void, E, detail::throw_policy>
class [[nodiscard]] Expected<void, E> : private boost::outcome_v2::result<void, E, detail::throw_policy>
{
using Base = boost::outcome_v2::result<void, E, detail::throw_policy>;

View File

@@ -15,10 +15,7 @@ getFileContents(
std::optional<std::size_t> maxSize = std::nullopt);
void
writeFileContents(
boost::system::error_code& ec,
boost::filesystem::path const& destPath,
std::string const& contents);
writeFileContents(boost::system::error_code& ec, boost::filesystem::path const& destPath, std::string const& contents);
} // namespace xrpl

View File

@@ -45,8 +45,8 @@ struct SharedIntrusiveAdoptNoIncrementTag
//
template <class T>
concept CAdoptTag = std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> ||
std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
concept CAdoptTag =
std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> || std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
//------------------------------------------------------------------------------
@@ -122,9 +122,7 @@ public:
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by statically casting the pointer
controlled by the rhs param.
@@ -136,9 +134,7 @@ public:
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by dynamically casting the pointer
controlled by the rhs param.
@@ -304,9 +300,7 @@ class SharedWeakUnion
// Tagged pointer. Low bit determines if this is a strong or a weak
// pointer. The low bit must be masked to zero when converting back to a
// pointer. If the low bit is '1', this is a weak pointer.
static_assert(
alignof(T) >= 2,
"Bad alignment: Combo pointer requires low bit to be zero");
static_assert(alignof(T) >= 2, "Bad alignment: Combo pointer requires low bit to be zero");
public:
SharedWeakUnion() = default;
@@ -450,9 +444,7 @@ make_SharedIntrusive(Args&&... args)
auto p = new TT(std::forward<Args>(args)...);
static_assert(
noexcept(SharedIntrusive<TT>(
std::declval<TT*>(),
std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
noexcept(SharedIntrusive<TT>(std::declval<TT*>(), std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
"SharedIntrusive constructor should not throw or this can leak "
"memory");

View File

@@ -12,9 +12,7 @@ template <class T>
template <CAdoptTag TAdoptTag>
SharedIntrusive<T>::SharedIntrusive(T* p, TAdoptTag) noexcept : ptr_{p}
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
if constexpr (std::is_same_v<TAdoptTag, SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
@@ -46,16 +44,14 @@ SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT> const& rhs)
}
template <class T>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs) : ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs) : ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
@@ -112,9 +108,7 @@ requires std::convertible_to<TT*, T*>
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive<TT>&& rhs)
{
static_assert(
!std::is_same_v<T, TT>,
"This overload should not be instantiated for T == TT");
static_assert(!std::is_same_v<T, TT>, "This overload should not be instantiated for T == TT");
unsafeReleaseAndStore(rhs.unsafeExchange(nullptr));
return *this;
@@ -139,9 +133,7 @@ template <CAdoptTag TAdoptTag>
void
SharedIntrusive<T>::adopt(T* p)
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
if constexpr (std::is_same_v<TAdoptTag, SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
@@ -157,9 +149,7 @@ SharedIntrusive<T>::~SharedIntrusive()
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
SharedIntrusive<T>::SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = static_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
@@ -171,18 +161,14 @@ SharedIntrusive<T>::SharedIntrusive(
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
SharedIntrusive<T>::SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs)
: ptr_{static_cast<T*>(rhs.unsafeExchange(nullptr))}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
SharedIntrusive<T>::SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = dynamic_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
@@ -194,9 +180,7 @@ SharedIntrusive<T>::SharedIntrusive(
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
SharedIntrusive<T>::SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs)
{
// This can be simplified without the `exchange`, but the `exchange` is kept
// in anticipation of supporting atomic operations.
@@ -315,8 +299,7 @@ WeakIntrusive<T>::WeakIntrusive(WeakIntrusive&& rhs) : ptr_{rhs.ptr_}
}
template <class T>
WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs)
: ptr_{rhs.unsafeGetRawPtr()}
WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs) : ptr_{rhs.unsafeGetRawPtr()}
{
if (ptr_)
ptr_->addWeakRef();

View File

@@ -160,22 +160,19 @@ private:
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyStartedMask =
(one << (FieldTypeBits - 1));
static constexpr FieldType partialDestroyStartedMask = (one << (FieldTypeBits - 1));
/** Flag that is set when the partialDestroy function has finished running
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyFinishedMask =
(one << (FieldTypeBits - 2));
static constexpr FieldType partialDestroyFinishedMask = (one << (FieldTypeBits - 2));
/** Mask that will zero out all the `count` bits and leave the tag bits
unchanged.
*/
static constexpr FieldType tagMask =
partialDestroyStartedMask | partialDestroyFinishedMask;
static constexpr FieldType tagMask = partialDestroyStartedMask | partialDestroyFinishedMask;
/** Mask that will zero out the `tag` bits and leave the count bits
unchanged.
@@ -184,13 +181,11 @@ private:
/** Mask that will zero out everything except the strong count.
*/
static constexpr FieldType strongMask =
((one << StrongCountNumBits) - 1) & valueMask;
static constexpr FieldType strongMask = ((one << StrongCountNumBits) - 1) & valueMask;
/** Mask that will zero out everything except the weak count.
*/
static constexpr FieldType weakMask =
(((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
static constexpr FieldType weakMask = (((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
/** Unpack the count and tag fields from the packed atomic integer form. */
struct RefCountPair
@@ -215,10 +210,8 @@ private:
FieldType
combinedValue() const noexcept;
static constexpr CountType maxStrongValue =
static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue =
static_cast<CountType>((one << WeakCountNumBits) - 1);
static constexpr CountType maxStrongValue = static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue = static_cast<CountType>((one << WeakCountNumBits) - 1);
/** Put an extra margin to detect when running up against limits.
This is only used in debug code, and is useful if we reduce the
number of bits in the strong and weak counts (to 16 and 14 bits).
@@ -274,8 +267,7 @@ IntrusiveRefCounts::releaseStrongRef() const
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
if (refCounts.compare_exchange_weak(prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
// Can't be in partial destroy because only decrementing the strong
// count to zero can start a partial destroy, and that can't happen
@@ -331,8 +323,7 @@ IntrusiveRefCounts::addWeakReleaseStrongRef() const
action = partialDestroy;
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
if (refCounts.compare_exchange_weak(prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
XRPL_ASSERT(
(!(prevIntVal & partialDestroyStartedMask)),
@@ -376,8 +367,7 @@ IntrusiveRefCounts::checkoutStrongRefFromWeak() const noexcept
auto curValue = RefCountPair{1, 1}.combinedValue();
auto desiredValue = RefCountPair{2, 1}.combinedValue();
while (!refCounts.compare_exchange_weak(
curValue, desiredValue, std::memory_order_acq_rel))
while (!refCounts.compare_exchange_weak(curValue, desiredValue, std::memory_order_acq_rel))
{
RefCountPair const prev{curValue};
if (!prev.strong)
@@ -406,20 +396,15 @@ inline IntrusiveRefCounts::~IntrusiveRefCounts() noexcept
{
#ifndef NDEBUG
auto v = refCounts.load(std::memory_order_acquire);
XRPL_ASSERT(
(!(v & valueMask)),
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
XRPL_ASSERT((!(v & valueMask)), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
auto t = v & tagMask;
XRPL_ASSERT(
(!t || t == tagMask),
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
XRPL_ASSERT((!t || t == tagMask), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
#endif
}
//------------------------------------------------------------------------------
inline IntrusiveRefCounts::RefCountPair::RefCountPair(
IntrusiveRefCounts::FieldType v) noexcept
inline IntrusiveRefCounts::RefCountPair::RefCountPair(IntrusiveRefCounts::FieldType v) noexcept
: strong{static_cast<CountType>(v & strongMask)}
, weak{static_cast<CountType>((v & weakMask) >> StrongCountNumBits)}
, partialDestroyStartedBit{v & partialDestroyStartedMask}
@@ -449,10 +434,8 @@ IntrusiveRefCounts::RefCountPair::combinedValue() const noexcept
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"xrpl::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"inside range");
return (static_cast<IntrusiveRefCounts::FieldType>(weak)
<< IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) |
partialDestroyStartedBit | partialDestroyFinishedBit;
return (static_cast<IntrusiveRefCounts::FieldType>(weak) << IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) | partialDestroyStartedBit | partialDestroyFinishedBit;
}
template <class T>
@@ -460,11 +443,9 @@ inline void
partialDestructorFinished(T** o)
{
T& self = **o;
IntrusiveRefCounts::RefCountPair p =
self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
IntrusiveRefCounts::RefCountPair p = self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
XRPL_ASSERT(
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit &&
!p.strong),
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit && !p.strong),
"xrpl::partialDestructorFinished : not a weak ref");
if (!p.weak)
{

View File

@@ -55,8 +55,7 @@ template <class = void>
boost::thread_specific_ptr<detail::LocalValues>&
getLocalValues()
{
static boost::thread_specific_ptr<detail::LocalValues> tsp(
&detail::LocalValues::cleanup);
static boost::thread_specific_ptr<detail::LocalValues> tsp(&detail::LocalValues::cleanup);
return tsp;
}
@@ -105,9 +104,7 @@ LocalValue<T>::operator*()
}
return *reinterpret_cast<T*>(
lvs->values
.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_))
.first->second->get());
lvs->values.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_)).first->second->get());
}
} // namespace xrpl

View File

@@ -39,22 +39,17 @@ private:
std::string partition_;
public:
Sink(
std::string const& partition,
beast::severities::Severity thresh,
Logs& logs);
Sink(std::string const& partition, beast::severities::Severity thresh, Logs& logs);
Sink(Sink const&) = delete;
Sink&
operator=(Sink const&) = delete;
void
write(beast::severities::Severity level, std::string const& text)
override;
write(beast::severities::Severity level, std::string const& text) override;
void
writeAlways(beast::severities::Severity level, std::string const& text)
override;
writeAlways(beast::severities::Severity level, std::string const& text) override;
};
/** Manages a system file containing logged output.
@@ -140,11 +135,7 @@ private:
};
std::mutex mutable mutex_;
std::map<
std::string,
std::unique_ptr<beast::Journal::Sink>,
boost::beast::iless>
sinks_;
std::map<std::string, std::unique_ptr<beast::Journal::Sink>, boost::beast::iless> sinks_;
beast::severities::Severity thresh_;
File file_;
bool silent_ = false;
@@ -180,11 +171,7 @@ public:
partition_severities() const;
void
write(
beast::severities::Severity level,
std::string const& partition,
std::string const& text,
bool console);
write(beast::severities::Severity level, std::string const& partition, std::string const& text, bool console);
std::string
rotate();
@@ -201,9 +188,7 @@ public:
}
virtual std::unique_ptr<beast::Journal::Sink>
makeSink(
std::string const& partition,
beast::severities::Severity startingLevel);
makeSink(std::string const& partition, beast::severities::Severity startingLevel);
public:
static LogSeverity

View File

@@ -4,6 +4,7 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <cstdint>
#include <functional>
#include <limits>
#include <optional>
#include <ostream>
@@ -73,10 +74,7 @@ struct MantissaRange
enum mantissa_scale { small, large };
explicit constexpr MantissaRange(mantissa_scale scale_)
: min(getMin(scale_))
, max(min * 10 - 1)
, log(logTen(min).value_or(-1))
, scale(scale_)
: min(getMin(scale_)), max(min * 10 - 1), log(logTen(min).value_or(-1)), scale(scale_)
{
}
@@ -106,8 +104,7 @@ private:
// Like std::integral, but only 64-bit integral types.
template <class T>
concept Integral64 =
std::is_same_v<T, std::int64_t> || std::is_same_v<T, std::uint64_t>;
concept Integral64 = std::is_same_v<T, std::int64_t> || std::is_same_v<T, std::uint64_t>;
/** Number is a floating point type that can represent a wide range of values.
*
@@ -244,22 +241,11 @@ public:
Number(rep mantissa);
explicit Number(rep mantissa, int exponent);
explicit constexpr Number(
bool negative,
internalrep mantissa,
int exponent,
unchecked) noexcept;
explicit constexpr Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept;
// Assume unsigned values are... unsigned. i.e. positive
explicit constexpr Number(
internalrep mantissa,
int exponent,
unchecked) noexcept;
explicit constexpr Number(internalrep mantissa, int exponent, unchecked) noexcept;
// Only unit tests are expected to use this ctor
explicit Number(
bool negative,
internalrep mantissa,
int exponent,
normalized);
explicit Number(bool negative, internalrep mantissa, int exponent, normalized);
// Assume unsigned values are... unsigned. i.e. positive
explicit Number(internalrep mantissa, int exponent, normalized);
@@ -309,8 +295,7 @@ public:
friend constexpr bool
operator==(Number const& x, Number const& y) noexcept
{
return x.negative_ == y.negative_ && x.mantissa_ == y.mantissa_ &&
x.exponent_ == y.exponent_;
return x.negative_ == y.negative_ && x.mantissa_ == y.mantissa_ && x.exponent_ == y.exponent_;
}
friend constexpr bool
@@ -518,37 +503,25 @@ private:
class Guard;
};
inline constexpr Number::Number(
bool negative,
internalrep mantissa,
int exponent,
unchecked) noexcept
inline constexpr Number::Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept
: negative_(negative), mantissa_{mantissa}, exponent_{exponent}
{
}
inline constexpr Number::Number(
internalrep mantissa,
int exponent,
unchecked) noexcept
inline constexpr Number::Number(internalrep mantissa, int exponent, unchecked) noexcept
: Number(false, mantissa, exponent, unchecked{})
{
}
constexpr static Number numZero{};
inline Number::Number(
bool negative,
internalrep mantissa,
int exponent,
normalized)
inline Number::Number(bool negative, internalrep mantissa, int exponent, normalized)
: Number(negative, mantissa, exponent, unchecked{})
{
normalize();
}
inline Number::Number(internalrep mantissa, int exponent, normalized)
: Number(false, mantissa, exponent, normalized{})
inline Number::Number(internalrep mantissa, int exponent, normalized) : Number(false, mantissa, exponent, normalized{})
{
}
@@ -695,15 +668,13 @@ Number::min() noexcept
inline Number
Number::max() noexcept
{
return Number{
false, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
return Number{false, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
}
inline Number
Number::lowest() noexcept
{
return Number{
true, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
return Number{true, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
}
inline bool
@@ -712,8 +683,7 @@ Number::isnormal() const noexcept
MantissaRange const& range = range_;
auto const abs_m = mantissa_;
return *this == Number{} ||
(range.min <= abs_m && abs_m <= range.max &&
(abs_m <= maxRep || abs_m % 10 == 0) && minExponent <= exponent_ &&
(range.min <= abs_m && abs_m <= range.max && (abs_m <= maxRep || abs_m % 10 == 0) && minExponent <= exponent_ &&
exponent_ <= maxExponent);
}
@@ -726,10 +696,7 @@ Number::normalizeToRange(T minMantissa, T maxMantissa) const
int exponent = exponent_;
if constexpr (std::is_unsigned_v<T>)
XRPL_ASSERT_PARTS(
!negative,
"xrpl::Number::normalizeToRange",
"Number is non-negative for unsigned range.");
XRPL_ASSERT_PARTS(!negative, "xrpl::Number::normalizeToRange", "Number is non-negative for unsigned range.");
Number::normalize(negative, mantissa, exponent, minMantissa, maxMantissa);
auto const sign = negative ? -1 : 1;
@@ -798,8 +765,7 @@ public:
{
Number::setround(mode_);
}
explicit saveNumberRoundMode(Number::rounding_mode mode) noexcept
: mode_{mode}
explicit saveNumberRoundMode(Number::rounding_mode mode) noexcept : mode_{mode}
{
}
saveNumberRoundMode(saveNumberRoundMode const&) = delete;
@@ -816,8 +782,7 @@ class NumberRoundModeGuard
saveNumberRoundMode saved_;
public:
explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept
: saved_{Number::setround(mode)}
explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept : saved_{Number::setround(mode)}
{
}
@@ -837,9 +802,7 @@ class NumberMantissaScaleGuard
MantissaRange::mantissa_scale const saved_;
public:
explicit NumberMantissaScaleGuard(
MantissaRange::mantissa_scale scale) noexcept
: saved_{Number::getMantissaScale()}
explicit NumberMantissaScaleGuard(MantissaRange::mantissa_scale scale) noexcept : saved_{Number::getMantissaScale()}
{
Number::setMantissaScale(scale);
}

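The `saveNumberRoundMode` and `NumberRoundModeGuard` classes above are save-and-restore RAII guards around a process-wide setting. A minimal sketch of the same pattern, using a stand-in global `rounding` mode instead of the real `Number` API (all names below are illustrative only, not part of the header above):

#include <iostream>

enum class rounding { to_nearest, towards_zero, downward, upward };

// Stand-in for a process-wide mode such as Number's rounding mode.
static rounding g_mode = rounding::to_nearest;

rounding setround(rounding m)  // returns the previous mode, like Number::setround
{
    auto const old = g_mode;
    g_mode = m;
    return old;
}

class RoundModeGuard
{
    rounding const saved_;

public:
    explicit RoundModeGuard(rounding m) noexcept : saved_{setround(m)} {}
    ~RoundModeGuard() { setround(saved_); }
    RoundModeGuard(RoundModeGuard const&) = delete;
    RoundModeGuard& operator=(RoundModeGuard const&) = delete;
};

int main()
{
    {
        RoundModeGuard g{rounding::downward};
        // ... computations that need downward rounding ...
    }
    // The previous mode is restored when the guard leaves scope.
    std::cout << static_cast<int>(g_mode) << '\n';  // prints 0 (to_nearest)
}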

@@ -11,8 +11,7 @@ namespace xrpl {
class Resolver
{
public:
using HandlerType =
std::function<void(std::string, std::vector<beast::IP::Endpoint>)>;
using HandlerType = std::function<void(std::string, std::vector<beast::IP::Endpoint>)>;
virtual ~Resolver() = 0;
@@ -41,9 +40,7 @@ public:
}
virtual void
resolve(
std::vector<std::string> const& names,
HandlerType const& handler) = 0;
resolve(std::vector<std::string> const& names, HandlerType const& handler) = 0;
/** @} */
};

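`Resolver::resolve` above reports results through a `HandlerType` callback rather than a return value. A rough sketch of driving such an interface, with plain `std::string` endpoints standing in for `beast::IP::Endpoint` and an invented `StubResolver` in place of a real asynchronous implementation:

#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Simplified stand-in for the handler signature in the header above.
using Handler = std::function<void(std::string, std::vector<std::string>)>;

struct StubResolver
{
    void resolve(std::vector<std::string> const& names, Handler const& handler)
    {
        // A real resolver would perform asynchronous DNS lookups; this stub
        // just echoes one fixed endpoint per name.
        for (auto const& name : names)
            handler(name, {name + ":51235"});
    }
};

int main()
{
    StubResolver r;
    r.resolve({"s1.example.net", "s2.example.net"}, [](std::string name, std::vector<std::string> eps) {
        for (auto const& ep : eps)
            std::cout << name << " -> " << ep << '\n';
    });
}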

@@ -5,34 +5,28 @@
namespace xrpl {
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer const& rhs) = default;
SharedWeakCachePointer<T>::SharedWeakCachePointer(SharedWeakCachePointer const& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
std::shared_ptr<TT> const& rhs)
: combo_{rhs}
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT> const& rhs) : combo_{rhs}
{
}
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer&& rhs) = default;
SharedWeakCachePointer<T>::SharedWeakCachePointer(SharedWeakCachePointer&& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs)
: combo_{std::move(rhs)}
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs) : combo_{std::move(rhs)}
{
}
template <class T>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) =
default;
SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) = default;
template <class T>
template <class TT>

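The converting constructors above are constrained with `requires std::convertible_to<TT*, T*>`, so a pointer wrapper over a base type accepts a `std::shared_ptr` to a derived type but rejects unrelated types. A tiny standalone illustration of that constraint (the `Holder` type is invented for the example):

#include <concepts>
#include <memory>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};
struct Unrelated {};

template <class T>
class Holder
{
    std::shared_ptr<T> p_;

public:
    template <class TT>
    requires std::convertible_to<TT*, T*>
    Holder(std::shared_ptr<TT> const& rhs) : p_{rhs}
    {
    }
};

int main()
{
    Holder<Base> ok{std::make_shared<Derived>()};      // TT* = Derived* converts to Base*
    // Holder<Base> bad{std::make_shared<Unrelated>()};  // would not compile
}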

@@ -51,11 +51,7 @@ class SlabAllocator
// The extent of the underlying memory block:
std::size_t const size_;
SlabBlock(
SlabBlock* next,
std::uint8_t* data,
std::size_t size,
std::size_t item)
SlabBlock(SlabBlock* next, std::uint8_t* data, std::size_t size, std::size_t item)
: next_(next), p_(data), size_(size)
{
// We don't need to grab the mutex here, since we're the only
@@ -126,9 +122,7 @@ class SlabAllocator
void
deallocate(std::uint8_t* ptr) noexcept
{
XRPL_ASSERT(
own(ptr),
"xrpl::SlabAllocator::SlabBlock::deallocate : own input");
XRPL_ASSERT(own(ptr), "xrpl::SlabAllocator::SlabBlock::deallocate : own input");
std::lock_guard l(m_);
@@ -162,18 +156,13 @@ public:
contexts (e.g. when minimal memory usage is needed) and
allows for graceful failure.
*/
constexpr explicit SlabAllocator(
std::size_t extra,
std::size_t alloc = 0,
std::size_t align = 0)
constexpr explicit SlabAllocator(std::size_t extra, std::size_t alloc = 0, std::size_t align = 0)
: itemAlignment_(align ? align : alignof(Type))
, itemSize_(
boost::alignment::align_up(sizeof(Type) + extra, itemAlignment_))
, itemSize_(boost::alignment::align_up(sizeof(Type) + extra, itemAlignment_))
, slabSize_(alloc)
{
XRPL_ASSERT(
(itemAlignment_ & (itemAlignment_ - 1)) == 0,
"xrpl::SlabAllocator::SlabAllocator : valid alignment");
(itemAlignment_ & (itemAlignment_ - 1)) == 0, "xrpl::SlabAllocator::SlabAllocator : valid alignment");
}
SlabAllocator(SlabAllocator const& other) = delete;
@@ -222,8 +211,7 @@ public:
// We want to allocate the memory at a 2 MiB boundary, to make it
// possible to use hugepage mappings on Linux:
auto buf =
boost::alignment::aligned_alloc(megabytes(std::size_t(2)), size);
auto buf = boost::alignment::aligned_alloc(megabytes(std::size_t(2)), size);
// clang-format off
if (!buf) [[unlikely]]
@@ -241,31 +229,21 @@ public:
// We need to carve out a bit of memory for the slab header
// and then align the rest appropriately:
auto slabData = reinterpret_cast<void*>(
reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
auto slabData = reinterpret_cast<void*>(reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
auto slabSize = size - sizeof(SlabBlock);
// This operation is essentially guaranteed not to fail but
// let's be careful anyways.
if (!boost::alignment::align(
itemAlignment_, itemSize_, slabData, slabSize))
if (!boost::alignment::align(itemAlignment_, itemSize_, slabData, slabSize))
{
boost::alignment::aligned_free(buf);
return nullptr;
}
slab = new (buf) SlabBlock(
slabs_.load(),
reinterpret_cast<std::uint8_t*>(slabData),
slabSize,
itemSize_);
slab = new (buf) SlabBlock(slabs_.load(), reinterpret_cast<std::uint8_t*>(slabData), slabSize, itemSize_);
// Link the new slab
while (!slabs_.compare_exchange_weak(
slab->next_,
slab,
std::memory_order_release,
std::memory_order_relaxed))
while (!slabs_.compare_exchange_weak(slab->next_, slab, std::memory_order_release, std::memory_order_relaxed))
{
; // Nothing to do
}
@@ -322,10 +300,7 @@ public:
std::size_t align;
public:
constexpr SlabConfig(
std::size_t extra_,
std::size_t alloc_ = 0,
std::size_t align_ = alignof(Type))
constexpr SlabConfig(std::size_t extra_, std::size_t alloc_ = 0, std::size_t align_ = alignof(Type))
: extra(extra_), alloc(alloc_), align(align_)
{
}
@@ -336,23 +311,14 @@ public:
// Ensure that the specified allocators are sorted from smallest to
// largest by size:
std::sort(
std::begin(cfg),
std::end(cfg),
[](SlabConfig const& a, SlabConfig const& b) {
return a.extra < b.extra;
});
std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) { return a.extra < b.extra; });
// We should never have two slabs of the same size
if (std::adjacent_find(
std::begin(cfg),
std::end(cfg),
[](SlabConfig const& a, SlabConfig const& b) {
return a.extra == b.extra;
}) != cfg.end())
if (std::adjacent_find(std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) {
return a.extra == b.extra;
}) != cfg.end())
{
throw std::runtime_error(
"SlabAllocatorSet<" + beast::type_name<Type>() +
">: duplicate slab size");
throw std::runtime_error("SlabAllocatorSet<" + beast::type_name<Type>() + ">: duplicate slab size");
}
for (auto const& c : cfg)

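The slab-creation path above publishes a new `SlabBlock` by prepending it to an atomic singly-linked list with a `compare_exchange_weak` loop. A minimal sketch of that publish step in isolation, with a plain `Node` standing in for the real slab header:

#include <atomic>
#include <cassert>

struct Node
{
    Node* next;
    int value;
};

std::atomic<Node*> head{nullptr};

// Prepend a node, retrying until the head has not moved under us.
void push(Node* n)
{
    n->next = head.load(std::memory_order_relaxed);
    while (!head.compare_exchange_weak(n->next, n, std::memory_order_release, std::memory_order_relaxed))
    {
        ;  // compare_exchange_weak reloaded n->next for us; just retry
    }
}

int main()
{
    Node a{nullptr, 1}, b{nullptr, 2};
    push(&a);
    push(&b);
    assert(head.load() == &b && b.next == &a);
}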

@@ -41,8 +41,7 @@ public:
operator=(Slice const&) noexcept = default;
/** Create a slice pointing to existing memory. */
Slice(void const* data, std::size_t size) noexcept
: data_(reinterpret_cast<std::uint8_t const*>(data)), size_(size)
Slice(void const* data, std::size_t size) noexcept : data_(reinterpret_cast<std::uint8_t const*>(data)), size_(size)
{
}
@@ -85,9 +84,7 @@ public:
std::uint8_t
operator[](std::size_t i) const noexcept
{
XRPL_ASSERT(
i < size_,
"xrpl::Slice::operator[](std::size_t) const : valid input");
XRPL_ASSERT(i < size_, "xrpl::Slice::operator[](std::size_t) const : valid input");
return data_[i];
}
@@ -162,9 +159,7 @@ public:
@throws std::out_of_range if pos > size()
*/
Slice
substr(
std::size_t pos,
std::size_t count = std::numeric_limits<std::size_t>::max()) const
substr(std::size_t pos, std::size_t count = std::numeric_limits<std::size_t>::max()) const
{
if (pos > size())
throw std::out_of_range("Requested sub-slice is out of bounds");
@@ -203,11 +198,7 @@ operator!=(Slice const& lhs, Slice const& rhs) noexcept
inline bool
operator<(Slice const& lhs, Slice const& rhs) noexcept
{
return std::lexicographical_compare(
lhs.data(),
lhs.data() + lhs.size(),
rhs.data(),
rhs.data() + rhs.size());
return std::lexicographical_compare(lhs.data(), lhs.data() + lhs.size(), rhs.data(), rhs.data() + rhs.size());
}
template <class Stream>
@@ -219,18 +210,14 @@ operator<<(Stream& s, Slice const& v)
}
template <class T, std::size_t N>
std::enable_if_t<
std::is_same<T, char>::value || std::is_same<T, unsigned char>::value,
Slice>
std::enable_if_t<std::is_same<T, char>::value || std::is_same<T, unsigned char>::value, Slice>
makeSlice(std::array<T, N> const& a)
{
return Slice(a.data(), a.size());
}
template <class T, class Alloc>
std::enable_if_t<
std::is_same<T, char>::value || std::is_same<T, unsigned char>::value,
Slice>
std::enable_if_t<std::is_same<T, char>::value || std::is_same<T, unsigned char>::value, Slice>
makeSlice(std::vector<T, Alloc> const& v)
{
return Slice(v.data(), v.size());

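`Slice` is a non-owning view over contiguous bytes, close in spirit to a read-only `std::span`. The sketch below expresses the `makeSlice`/`substr` usage pattern with the standard span type; it is an analogy, not the Slice implementation itself:

#include <cstdint>
#include <iostream>
#include <span>
#include <vector>

// Analogous to makeSlice: view a container's bytes without copying them.
std::span<std::uint8_t const>
viewBytes(std::vector<std::uint8_t> const& v)
{
    return {v.data(), v.size()};
}

int main()
{
    std::vector<std::uint8_t> buf{0xDE, 0xAD, 0xBE, 0xEF};
    auto s = viewBytes(buf);
    std::cout << "size=" << s.size() << " first=" << int(s[0]) << '\n';  // size=4 first=222
    auto sub = s.subspan(1, 2);  // comparable in intent to Slice::substr(1, 2)
    std::cout << "sub size=" << sub.size() << '\n';                      // 2
}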

@@ -109,8 +109,7 @@ struct parsedURL
bool
operator==(parsedURL const& other) const
{
return scheme == other.scheme && domain == other.domain &&
port == other.port && path == other.path;
return scheme == other.scheme && domain == other.domain && port == other.port && path == other.path;
}
};


@@ -56,8 +56,7 @@ public:
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector =
beast::insight::NullCollector::New());
beast::insight::Collector::ptr const& collector = beast::insight::NullCollector::New());
public:
/** Return the clock associated with the cache. */
@@ -114,15 +113,10 @@ public:
*/
template <class R>
bool
canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback);
canonicalize(key_type const& key, SharedPointerType& data, R&& replaceCallback);
bool
canonicalize_replace_cache(
key_type const& key,
SharedPointerType const& data);
canonicalize_replace_cache(key_type const& key, SharedPointerType const& data);
bool
canonicalize_replace_client(key_type const& key, SharedPointerType& data);
@@ -136,8 +130,7 @@ public:
*/
template <class ReturnType = bool>
auto
insert(key_type const& key, T const& value)
-> std::enable_if_t<!IsKeyCache, ReturnType>;
insert(key_type const& key, T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>;
template <class ReturnType = bool>
auto
@@ -183,10 +176,7 @@ private:
struct Stats
{
template <class Handler>
Stats(
std::string const& prefix,
Handler const& handler,
beast::insight::Collector::ptr const& collector)
Stats(std::string const& prefix, Handler const& handler, beast::insight::Collector::ptr const& collector)
: hook(collector->make_hook(handler))
, size(collector->make_gauge(prefix, "size"))
, hit_rate(collector->make_gauge(prefix, "hit_rate"))
@@ -208,8 +198,7 @@ private:
public:
clock_type::time_point last_access;
explicit KeyOnlyEntry(clock_type::time_point const& last_access_)
: last_access(last_access_)
explicit KeyOnlyEntry(clock_type::time_point const& last_access_) : last_access(last_access_)
{
}
@@ -226,9 +215,7 @@ private:
shared_weak_combo_pointer_type ptr;
clock_type::time_point last_access;
ValueEntry(
clock_type::time_point const& last_access_,
shared_pointer_type const& ptr_)
ValueEntry(clock_type::time_point const& last_access_, shared_pointer_type const& ptr_)
: ptr(ptr_), last_access(last_access_)
{
}
@@ -262,18 +249,13 @@ private:
}
};
typedef
typename std::conditional<IsKeyCache, KeyOnlyEntry, ValueEntry>::type
Entry;
typedef typename std::conditional<IsKeyCache, KeyOnlyEntry, ValueEntry>::type Entry;
using KeyOnlyCacheType =
hardened_partitioned_hash_map<key_type, KeyOnlyEntry, Hash, KeyEqual>;
using KeyOnlyCacheType = hardened_partitioned_hash_map<key_type, KeyOnlyEntry, Hash, KeyEqual>;
using KeyValueCacheType =
hardened_partitioned_hash_map<key_type, ValueEntry, Hash, KeyEqual>;
using KeyValueCacheType = hardened_partitioned_hash_map<key_type, ValueEntry, Hash, KeyEqual>;
using cache_type =
hardened_partitioned_hash_map<key_type, Entry, Hash, KeyEqual>;
using cache_type = hardened_partitioned_hash_map<key_type, Entry, Hash, KeyEqual>;
[[nodiscard]] std::thread
sweepHelper(

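The `Entry` typedef above uses `std::conditional` to select a key-only record or a key/value record from the same class template, depending on `IsKeyCache`. A tiny illustration of that compile-time selection (types invented for the example):

#include <type_traits>

struct KeyOnlyEntry { long last_access; };
struct ValueEntry   { long last_access; int value; };

template <bool IsKeyCache>
using Entry = typename std::conditional<IsKeyCache, KeyOnlyEntry, ValueEntry>::type;

static_assert(std::is_same_v<Entry<true>, KeyOnlyEntry>);
static_assert(std::is_same_v<Entry<false>, ValueEntry>);

int main() {}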

@@ -15,22 +15,13 @@ template <
class Hash,
class KeyEqual,
class Mutex>
inline TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
TaggedCache(
std::string const& name,
int size,
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector)
inline TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::TaggedCache(
std::string const& name,
int size,
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector)
: m_journal(journal)
, m_clock(clock)
, m_stats(name, std::bind(&TaggedCache::collect_metrics, this), collector)
@@ -53,15 +44,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::clock() -> clock_type&
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clock()
-> clock_type&
{
return m_clock;
}
@@ -76,15 +60,7 @@ template <
class KeyEqual,
class Mutex>
inline std::size_t
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::size() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -100,15 +76,7 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getCacheSize() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
@@ -124,15 +92,7 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getTrackSize() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -148,15 +108,7 @@ template <
class KeyEqual,
class Mutex>
inline float
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getHitRate()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
@@ -173,15 +125,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::clear()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clear()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -198,15 +142,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::reset()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -226,15 +162,8 @@ template <
class Mutex>
template <class KeyComparable>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::touch_if_exists(KeyComparable const& key)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::touch_if_exists(
KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
auto const iter(m_cache.find(key));
@@ -258,15 +187,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::sweep()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweep()
{
// Keep references to all the stuff we sweep
// For performance, each worker thread should exit before the swept data
@@ -280,8 +201,7 @@ TaggedCache<
{
std::lock_guard lock(m_mutex);
if (m_target_size == 0 ||
(static_cast<int>(m_cache.size()) <= m_target_size))
if (m_target_size == 0 || (static_cast<int>(m_cache.size()) <= m_target_size))
{
when_expire = now - m_target_age;
}
@@ -293,10 +213,8 @@ TaggedCache<
if (when_expire > (now - minimumAge))
when_expire = now - minimumAge;
JLOG(m_journal.trace())
<< m_name << " is growing fast " << m_cache.size() << " of "
<< m_target_size << " aging at " << (now - when_expire).count()
<< " of " << m_target_age.count();
JLOG(m_journal.trace()) << m_name << " is growing fast " << m_cache.size() << " of " << m_target_size
<< " aging at " << (now - when_expire).count() << " of " << m_target_age.count();
}
std::vector<std::thread> workers;
@@ -305,13 +223,7 @@ TaggedCache<
for (std::size_t p = 0; p < m_cache.partitions(); ++p)
{
workers.push_back(sweepHelper(
when_expire,
now,
m_cache.map()[p],
allStuffToSweep[p],
allRemovals,
lock));
workers.push_back(sweepHelper(when_expire, now, m_cache.map()[p], allStuffToSweep[p], allRemovals, lock));
}
for (std::thread& worker : workers)
worker.join();
@@ -322,9 +234,7 @@ TaggedCache<
// and decrement the reference count on each strong pointer.
JLOG(m_journal.debug())
<< m_name << " TaggedCache sweep lock duration "
<< std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start)
.count()
<< std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - start).count()
<< "ms";
}
@@ -338,15 +248,9 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::del(key_type const& key, bool valid)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::del(
key_type const& key,
bool valid)
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
@@ -385,19 +289,10 @@ template <
class Mutex>
template <class R>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback)
{
// Return canonical value, store if needed, refresh in cache
// Return values: true=we had the data already
@@ -408,9 +303,7 @@ TaggedCache<
if (cit == m_cache.end())
{
m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(m_clock.now(), data));
std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
return false;
}
@@ -480,21 +373,10 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
canonicalize_replace_cache(
key_type const& key,
SharedPointerType const& data)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
canonicalize_replace_cache(key_type const& key, SharedPointerType const& data)
{
return canonicalize(
key, const_cast<SharedPointerType&>(data), []() { return true; });
return canonicalize(key, const_cast<SharedPointerType&>(data), []() { return true; });
}
template <
@@ -507,15 +389,7 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
canonicalize_replace_client(key_type const& key, SharedPointerType& data)
{
return canonicalize(key, data, []() { return false; });
@@ -531,15 +405,8 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::fetch(key_type const& key)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
key_type const& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
@@ -559,16 +426,9 @@ template <
class Mutex>
template <class ReturnType>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::insert(key_type const& key, T const& value)
-> std::enable_if_t<!IsKeyCache, ReturnType>
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
key_type const& key,
T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>
{
static_assert(
std::is_same_v<std::shared_ptr<T>, SharedPointerType> ||
@@ -597,23 +457,13 @@ template <
class Mutex>
template <class ReturnType>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::insert(key_type const& key)
-> std::enable_if_t<IsKeyCache, ReturnType>
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
auto [it, inserted] = m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(now));
auto [it, inserted] =
m_cache.emplace(std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(now));
if (!inserted)
it->second.last_access = now;
return inserted;
@@ -629,15 +479,9 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::retrieve(key_type const& key, T& data)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::retrieve(
key_type const& key,
T& data)
{
// retrieve the value of the stored data
auto entry = fetch(key);
@@ -659,15 +503,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::peekMutex() -> mutex_type&
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::peekMutex()
-> mutex_type&
{
return m_mutex;
}
@@ -682,15 +519,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getKeys() const -> std::vector<key_type>
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getKeys() const
-> std::vector<key_type>
{
std::vector<key_type> v;
@@ -714,15 +544,7 @@ template <
class KeyEqual,
class Mutex>
inline double
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::rate() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
@@ -742,15 +564,9 @@ template <
class Mutex>
template <class Handler>
inline SharedPointerType
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::fetch(key_type const& digest, Handler const& h)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
key_type const& digest,
Handler const& h)
{
{
std::lock_guard l(m_mutex);
@@ -764,8 +580,7 @@ TaggedCache<
std::lock_guard l(m_mutex);
++m_misses;
auto const [it, inserted] =
m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
auto const [it, inserted] = m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
if (!inserted)
it->second.touch(m_clock.now());
return it->second.ptr.getStrong();
@@ -782,16 +597,9 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::initialFetch(
key_type const& key,
std::lock_guard<mutex_type> const& l)
{
auto cit = m_cache.find(key);
if (cit == m_cache.end())
@@ -827,15 +635,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::collect_metrics()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::collect_metrics()
{
m_stats.size.set(getCacheSize());
@@ -861,22 +661,13 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
sweepHelper(
clock_type::time_point const& when_expire,
[[maybe_unused]] clock_type::time_point const& now,
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
clock_type::time_point const& when_expire,
[[maybe_unused]] clock_type::time_point const& now,
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
@@ -930,10 +721,8 @@ TaggedCache<
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
@@ -950,22 +739,13 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
sweepHelper(
clock_type::time_point const& when_expire,
clock_type::time_point const& now,
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
clock_type::time_point const& when_expire,
clock_type::time_point const& now,
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
@@ -995,10 +775,8 @@ TaggedCache<
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;

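`canonicalize` above returns the cached shared pointer when a live one is already held and otherwise stores the caller's pointer, so every caller ends up sharing one canonical object. A much-reduced sketch of that idea with a plain map of weak pointers; the partitioning, statistics, and expiration of the real cache are omitted and the names are invented:

#include <iostream>
#include <map>
#include <memory>
#include <string>

std::map<int, std::weak_ptr<std::string>> cache;

// Returns true if the cache already had a live entry (and rewrites `data` to point at it).
bool canonicalize(int key, std::shared_ptr<std::string>& data)
{
    auto& slot = cache[key];
    if (auto existing = slot.lock())
    {
        data = existing;  // caller now shares the canonical object
        return true;
    }
    slot = data;  // nothing usable cached; remember the caller's object
    return false;
}

int main()
{
    auto a = std::make_shared<std::string>("ledger A");
    auto b = std::make_shared<std::string>("ledger A (duplicate)");
    std::cout << canonicalize(7, a) << '\n';  // 0: first sighting, a is stored
    std::cout << canonicalize(7, b) << '\n';  // 1: b is replaced by the canonical a
    std::cout << (a == b) << '\n';            // 1: both now point at the same object
}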

@@ -40,8 +40,7 @@ template <
class Hash = beast::uhash<>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hash_multimap =
std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
using hash_multimap = std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
template <
class Value,
@@ -75,8 +74,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hardened_partitioned_hash_map =
partitioned_unordered_map<Key, Value, Hash, Pred, Allocator>;
using hardened_partitioned_hash_map = partitioned_unordered_map<Key, Value, Hash, Pred, Allocator>;
template <
class Key,
@@ -84,8 +82,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hardened_hash_multimap =
std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
using hardened_hash_multimap = std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
template <
class Value,
@@ -99,8 +96,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Value>,
class Allocator = std::allocator<Value>>
using hardened_hash_multiset =
std::unordered_multiset<Value, Hash, Pred, Allocator>;
using hardened_hash_multiset = std::unordered_multiset<Value, Hash, Pred, Allocator>;
} // namespace xrpl


@@ -52,13 +52,7 @@ generalized_set_intersection(
// std::set_intersection.
template <class FwdIter1, class InputIter2, class Pred, class Comp>
FwdIter1
remove_if_intersect_or_match(
FwdIter1 first1,
FwdIter1 last1,
InputIter2 first2,
InputIter2 last2,
Pred pred,
Comp comp)
remove_if_intersect_or_match(FwdIter1 first1, FwdIter1 last1, InputIter2 first2, InputIter2 last2, Pred pred, Comp comp)
{
// [original-first1, current-first1) is the set of elements to be preserved.
// [current-first1, i) is the set of elements that have been removed.


@@ -46,8 +46,7 @@ base64_encode(std::uint8_t const* data, std::size_t len);
inline std::string
base64_encode(std::string const& s)
{
return base64_encode(
reinterpret_cast<std::uint8_t const*>(s.data()), s.size());
return base64_encode(reinterpret_cast<std::uint8_t const*>(s.data()), s.size());
}
std::string


@@ -65,13 +65,9 @@ struct is_contiguous_container<Slice> : std::true_type
template <std::size_t Bits, class Tag = void>
class base_uint
{
static_assert(
(Bits % 32) == 0,
"The length of a base_uint in bits must be a multiple of 32.");
static_assert((Bits % 32) == 0, "The length of a base_uint in bits must be a multiple of 32.");
static_assert(
Bits >= 64,
"The length of a base_uint in bits must be at least 64.");
static_assert(Bits >= 64, "The length of a base_uint in bits must be at least 64.");
static constexpr std::size_t WIDTH = Bits / 32;
@@ -182,9 +178,7 @@ private:
{
// Local lambda that converts a single hex char to four bits and
// ORs those bits into a uint32_t.
auto hexCharToUInt = [](char c,
std::uint32_t shift,
std::uint32_t& accum) -> ParseResult {
auto hexCharToUInt = [](char c, std::uint32_t shift, std::uint32_t& accum) -> ParseResult {
std::uint32_t nibble = 0xFFu;
if (c < '0' || c > 'f')
return ParseResult::badChar;
@@ -221,8 +215,7 @@ private:
std::uint32_t accum = {};
for (std::uint32_t shift : {4u, 0u, 12u, 8u, 20u, 16u, 28u, 24u})
{
if (auto const result = hexCharToUInt(*in++, shift, accum);
result != ParseResult::okay)
if (auto const result = hexCharToUInt(*in++, shift, accum); result != ParseResult::okay)
return Unexpected(result);
}
ret[i++] = accum;
@@ -261,8 +254,7 @@ public:
// This constructor is intended to be used at compile time since it might
// throw at runtime. Consider declaring this constructor consteval once
// we get to C++23.
explicit constexpr base_uint(std::string_view sv) noexcept(false)
: data_(parseFromStringViewThrows(sv))
explicit constexpr base_uint(std::string_view sv) noexcept(false) : data_(parseFromStringViewThrows(sv))
{
}
@@ -387,8 +379,7 @@ public:
// prefix operator
for (int i = WIDTH - 1; i >= 0; --i)
{
data_[i] = boost::endian::native_to_big(
boost::endian::big_to_native(data_[i]) + 1);
data_[i] = boost::endian::native_to_big(boost::endian::big_to_native(data_[i]) + 1);
if (data_[i] != 0)
break;
}
@@ -412,8 +403,7 @@ public:
for (int i = WIDTH - 1; i >= 0; --i)
{
auto prev = data_[i];
data_[i] = boost::endian::native_to_big(
boost::endian::big_to_native(data_[i]) - 1);
data_[i] = boost::endian::native_to_big(boost::endian::big_to_native(data_[i]) - 1);
if (prev != 0)
break;
@@ -453,11 +443,9 @@ public:
for (int i = WIDTH; i--;)
{
std::uint64_t n = carry + boost::endian::big_to_native(data_[i]) +
boost::endian::big_to_native(b.data_[i]);
std::uint64_t n = carry + boost::endian::big_to_native(data_[i]) + boost::endian::big_to_native(b.data_[i]);
data_[i] =
boost::endian::native_to_big(static_cast<std::uint32_t>(n));
data_[i] = boost::endian::native_to_big(static_cast<std::uint32_t>(n));
carry = n >> 32;
}
@@ -557,8 +545,7 @@ operator<=>(base_uint<Bits, Tag> const& lhs, base_uint<Bits, Tag> const& rhs)
if (ret.first == lhs.cend())
return std::strong_ordering::equivalent;
return (*ret.first > *ret.second) ? std::strong_ordering::greater
: std::strong_ordering::less;
return (*ret.first > *ret.second) ? std::strong_ordering::greater : std::strong_ordering::less;
}
template <std::size_t Bits, typename Tag>
@@ -617,9 +604,7 @@ template <std::size_t Bits, class Tag>
inline std::string
to_short_string(base_uint<Bits, Tag> const& a)
{
static_assert(
base_uint<Bits, Tag>::bytes > 4,
"For 4 bytes or less, use a native type");
static_assert(base_uint<Bits, Tag>::bytes > 4, "For 4 bytes or less, use a native type");
return strHex(a.cbegin(), a.cbegin() + 4) + "...";
}
@@ -653,8 +638,7 @@ static_assert(sizeof(uint256) == 256 / 8, "There should be no padding bytes");
namespace beast {
template <std::size_t Bits, class Tag>
struct is_uniquely_represented<xrpl::base_uint<Bits, Tag>>
: public std::true_type
struct is_uniquely_represented<xrpl::base_uint<Bits, Tag>> : public std::true_type
{
explicit is_uniquely_represented() = default;
};

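The `hexCharToUInt` lambda above converts one hex character to four bits and ORs them into a 32-bit accumulator at a given shift. A standalone sketch of that conversion step, with error handling reduced to a bool and the shift order simplified (the real parser also accounts for the big-endian word layout of `base_uint`):

#include <cstdint>
#include <iostream>

// Convert one hex digit and merge it into `accum` at bit offset `shift`.
// Returns false for characters that are not hex digits.
bool hexCharToUInt(char c, std::uint32_t shift, std::uint32_t& accum)
{
    std::uint32_t nibble;
    if (c >= '0' && c <= '9')
        nibble = c - '0';
    else if (c >= 'a' && c <= 'f')
        nibble = c - 'a' + 10;
    else if (c >= 'A' && c <= 'F')
        nibble = c - 'A' + 10;
    else
        return false;
    accum |= nibble << shift;
    return true;
}

int main()
{
    std::uint32_t accum = 0;
    // Assemble 0xDEADBEEF four bits at a time, most significant digit first.
    char const* hex = "DEADBEEF";
    for (int i = 0; i < 8; ++i)
        hexCharToUInt(hex[i], 4u * (7 - i), accum);
    std::cout << std::hex << accum << '\n';  // deadbeef
}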

@@ -16,12 +16,9 @@ namespace xrpl {
// A few handy aliases
using days = std::chrono::duration<
int,
std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
using days = std::chrono::duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
using weeks = std::chrono::
duration<int, std::ratio_multiply<days::period, std::ratio<7>>>;
using weeks = std::chrono::duration<int, std::ratio_multiply<days::period, std::ratio<7>>>;
/** Clock for measuring the network time.
@@ -34,8 +31,7 @@ using weeks = std::chrono::
*/
constexpr static std::chrono::seconds epoch_offset =
date::sys_days{date::year{2000} / 1 / 1} -
date::sys_days{date::year{1970} / 1 / 1};
date::sys_days{date::year{2000} / 1 / 1} - date::sys_days{date::year{1970} / 1 / 1};
static_assert(epoch_offset.count() == 946684800);
@@ -64,8 +60,7 @@ to_string(NetClock::time_point tp)
{
// 2000-01-01 00:00:00 UTC is 946684800s from 1970-01-01 00:00:00 UTC
using namespace std::chrono;
return to_string(
system_clock::time_point{tp.time_since_epoch() + epoch_offset});
return to_string(system_clock::time_point{tp.time_since_epoch() + epoch_offset});
}
template <class Duration>
@@ -82,8 +77,7 @@ to_string_iso(NetClock::time_point tp)
// 2000-01-01 00:00:00 UTC is 946684800s from 1970-01-01 00:00:00 UTC
// Note, NetClock::duration is seconds, as checked by static_assert
static_assert(std::is_same_v<NetClock::duration::period, std::ratio<1>>);
return to_string_iso(date::sys_time<NetClock::duration>{
tp.time_since_epoch() + epoch_offset});
return to_string_iso(date::sys_time<NetClock::duration>{tp.time_since_epoch() + epoch_offset});
}
/** A clock for measuring elapsed time.

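`epoch_offset` above is the distance from the Unix epoch (1970-01-01) to the XRPL epoch (2000-01-01), asserted to equal 946684800 seconds. The same arithmetic can be checked with the C++20 standard calendar types; this sketch uses `std::chrono` directly rather than the `date::` library used in the header:

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    constexpr auto offset = sys_days{year{2000} / 1 / 1} - sys_days{year{1970} / 1 / 1};
    static_assert(duration_cast<seconds>(offset).count() == 946'684'800);
    // 30 years of 365 days plus 7 leap days (1972, 1976, ..., 1996):
    static_assert(duration_cast<days>(offset).count() == 30 * 365 + 7);
    std::cout << duration_cast<seconds>(offset).count() << '\n';  // 946684800
}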

@@ -36,15 +36,10 @@ template <class E, class... Args>
[[noreturn]] inline void
Throw(Args&&... args)
{
static_assert(
std::is_convertible<E*, std::exception*>::value,
"Exception must derive from std::exception.");
static_assert(std::is_convertible<E*, std::exception*>::value, "Exception must derive from std::exception.");
E e(std::forward<Args>(args)...);
LogThrow(
std::string(
"Throwing exception of type " + beast::type_name<E>() + ": ") +
e.what());
LogThrow(std::string("Throwing exception of type " + beast::type_name<E>() + ": ") + e.what());
throw e;
}


@@ -24,8 +24,7 @@ public:
Collection const& collection;
std::string const delimiter;
explicit CollectionAndDelimiter(Collection const& c, std::string delim)
: collection(c), delimiter(std::move(delim))
explicit CollectionAndDelimiter(Collection const& c, std::string delim) : collection(c), delimiter(std::move(delim))
{
}
@@ -33,11 +32,7 @@ public:
friend Stream&
operator<<(Stream& s, CollectionAndDelimiter const& cd)
{
return join(
s,
std::begin(cd.collection),
std::end(cd.collection),
cd.delimiter);
return join(s, std::begin(cd.collection), std::end(cd.collection), cd.delimiter);
}
};
@@ -69,8 +64,7 @@ public:
char const* collection;
std::string const delimiter;
explicit CollectionAndDelimiter(char const c[N], std::string delim)
: collection(c), delimiter(std::move(delim))
explicit CollectionAndDelimiter(char const c[N], std::string delim) : collection(c), delimiter(std::move(delim))
{
}


@@ -51,8 +51,7 @@ public:
using const_reference = value_type const&;
using pointer = value_type*;
using const_pointer = value_type const*;
using map_type = std::
unordered_map<key_type, mapped_type, hasher, key_equal, allocator_type>;
using map_type = std::unordered_map<key_type, mapped_type, hasher, key_equal, allocator_type>;
using partition_map_type = std::vector<map_type>;
struct iterator
@@ -113,8 +112,7 @@ public:
friend bool
operator==(iterator const& lhs, iterator const& rhs)
{
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ &&
lhs.mit_ == rhs.mit_;
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ && lhs.mit_ == rhs.mit_;
}
friend bool
@@ -190,8 +188,7 @@ public:
friend bool
operator==(const_iterator const& lhs, const_iterator const& rhs)
{
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ &&
lhs.mit_ == rhs.mit_;
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ && lhs.mit_ == rhs.mit_;
}
friend bool
@@ -231,14 +228,11 @@ private:
}
public:
partitioned_unordered_map(
std::optional<std::size_t> partitions = std::nullopt)
partitioned_unordered_map(std::optional<std::size_t> partitions = std::nullopt)
{
// Set partitions to the number of hardware threads if the parameter
// is either empty or set to 0.
partitions_ = partitions && *partitions
? *partitions
: std::thread::hardware_concurrency();
partitions_ = partitions && *partitions ? *partitions : std::thread::hardware_concurrency();
map_.resize(partitions_);
XRPL_ASSERT(
partitions_,
@@ -337,10 +331,8 @@ public:
auto const& key = std::get<0>(keyTuple);
iterator it(&map_);
it.ait_ = it.map_->begin() + partitioner(key);
auto [eit, inserted] = it.ait_->emplace(
std::piecewise_construct,
std::forward<T>(keyTuple),
std::forward<U>(valueTuple));
auto [eit, inserted] =
it.ait_->emplace(std::piecewise_construct, std::forward<T>(keyTuple), std::forward<U>(valueTuple));
it.mit_ = eit;
return {it, inserted};
}
@@ -351,8 +343,7 @@ public:
{
iterator it(&map_);
it.ait_ = it.map_->begin() + partitioner(key);
auto [eit, inserted] =
it.ait_->emplace(std::forward<T>(key), std::forward<U>(val));
auto [eit, inserted] = it.ait_->emplace(std::forward<T>(key), std::forward<U>(val));
it.mit_ = eit;
return {it, inserted};
}

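`partitioned_unordered_map` above spreads keys over several inner `std::unordered_map`s, routing each key by its hash, and defaults the partition count to `std::thread::hardware_concurrency()` when none is supplied. A bare-bones sketch of that routing step, without iterators or the full map interface (names invented):

#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

template <class Key, class Value>
class PartitionedMap
{
    std::vector<std::unordered_map<Key, Value>> parts_;

    std::size_t partitioner(Key const& key) const
    {
        return std::hash<Key>{}(key) % parts_.size();
    }

public:
    explicit PartitionedMap(std::size_t partitions = 0)
    {
        std::size_t const n = partitions ? partitions : std::thread::hardware_concurrency();
        parts_.resize(n ? n : 1);  // hardware_concurrency may report 0
    }

    Value& operator[](Key const& key)
    {
        return parts_[partitioner(key)][key];
    }

    std::size_t partitions() const { return parts_.size(); }
};

int main()
{
    PartitionedMap<std::string, int> m;
    m["alpha"] = 1;
    m["beta"] = 2;
    std::cout << m.partitions() << " partitions, alpha=" << m["alpha"] << '\n';
}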

@@ -20,8 +20,7 @@ static_assert(
"The Ripple default PRNG engine must return an unsigned integral type.");
static_assert(
std::numeric_limits<beast::xor_shift_engine::result_type>::max() >=
std::numeric_limits<std::uint64_t>::max(),
std::numeric_limits<beast::xor_shift_engine::result_type>::max() >= std::numeric_limits<std::uint64_t>::max(),
"The Ripple default PRNG engine return must be at least 64 bits wide.");
#endif
@@ -90,9 +89,7 @@ default_prng()
*/
/** @{ */
template <class Engine, class Integral>
std::enable_if_t<
std::is_integral<Integral>::value && detail::is_engine<Engine>::value,
Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine, Integral min, Integral max)
{
XRPL_ASSERT(max > min, "xrpl::rand_int : max over min inputs");
@@ -111,9 +108,7 @@ rand_int(Integral min, Integral max)
}
template <class Engine, class Integral>
std::enable_if_t<
std::is_integral<Integral>::value && detail::is_engine<Engine>::value,
Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine, Integral max)
{
return rand_int(engine, Integral(0), max);
@@ -127,9 +122,7 @@ rand_int(Integral max)
}
template <class Integral, class Engine>
std::enable_if_t<
std::is_integral<Integral>::value && detail::is_engine<Engine>::value,
Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine)
{
return rand_int(engine, std::numeric_limits<Integral>::max());
@@ -147,23 +140,17 @@ rand_int()
/** @{ */
template <class Byte, class Engine>
std::enable_if_t<
(std::is_same<Byte, unsigned char>::value ||
std::is_same<Byte, std::uint8_t>::value) &&
(std::is_same<Byte, unsigned char>::value || std::is_same<Byte, std::uint8_t>::value) &&
detail::is_engine<Engine>::value,
Byte>
rand_byte(Engine& engine)
{
return static_cast<Byte>(rand_int<Engine, std::uint32_t>(
engine,
std::numeric_limits<Byte>::min(),
std::numeric_limits<Byte>::max()));
return static_cast<Byte>(
rand_int<Engine, std::uint32_t>(engine, std::numeric_limits<Byte>::min(), std::numeric_limits<Byte>::max()));
}
template <class Byte = std::uint8_t>
std::enable_if_t<
(std::is_same<Byte, unsigned char>::value ||
std::is_same<Byte, std::uint8_t>::value),
Byte>
std::enable_if_t<(std::is_same<Byte, unsigned char>::value || std::is_same<Byte, std::uint8_t>::value), Byte>
rand_byte()
{
return rand_byte<Byte>(default_prng());

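`rand_int(engine, min, max)` above draws a uniformly distributed integer from a PRNG engine; the constrained overloads only differ in how the bounds default. One common way to write such a helper with the standard facilities is sketched below; it mirrors the interface, not necessarily the header's exact implementation:

#include <cstdint>
#include <iostream>
#include <random>

template <class Engine, class Integral>
Integral
rand_int(Engine& engine, Integral min, Integral max)
{
    // uniform_int_distribution produces values in the closed range [min, max].
    return std::uniform_int_distribution<Integral>{min, max}(engine);
}

int main()
{
    std::mt19937_64 engine{std::random_device{}()};
    auto roll = rand_int(engine, 1, 6);  // one die roll
    auto id = rand_int<std::mt19937_64, std::uint64_t>(engine, 0, UINT64_MAX);
    std::cout << roll << ' ' << id << '\n';
}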

@@ -12,38 +12,29 @@ namespace xrpl {
template <class Src, class Dest>
concept SafeToCast = (std::is_integral_v<Src> && std::is_integral_v<Dest>) &&
(std::is_signed<Src>::value || std::is_unsigned<Dest>::value) &&
(std::is_signed<Src>::value != std::is_signed<Dest>::value
? sizeof(Dest) > sizeof(Src)
: sizeof(Dest) >= sizeof(Src));
(std::is_signed<Src>::value != std::is_signed<Dest>::value ? sizeof(Dest) > sizeof(Src)
: sizeof(Dest) >= sizeof(Src));
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
{
static_assert(
std::is_signed_v<Dest> || std::is_unsigned_v<Src>,
"Cannot cast signed to unsigned");
constexpr unsigned not_same =
std::is_signed_v<Dest> != std::is_signed_v<Src>;
static_assert(
sizeof(Dest) >= sizeof(Src) + not_same,
"Destination is too small to hold all values of source");
static_assert(std::is_signed_v<Dest> || std::is_unsigned_v<Src>, "Cannot cast signed to unsigned");
constexpr unsigned not_same = std::is_signed_v<Dest> != std::is_signed_v<Src>;
static_assert(sizeof(Dest) >= sizeof(Src) + not_same, "Destination is too small to hold all values of source");
return static_cast<Dest>(s);
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
{
return static_cast<Dest>(safe_cast<std::underlying_type_t<Dest>>(s));
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
safe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
safe_cast(Src s) noexcept
{
return safe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
}
@@ -53,9 +44,8 @@ inline constexpr std::
// underlying types become safe, it can be converted to a safe_cast.
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
static_assert(
!SafeToCast<Src, Dest>,
@@ -65,17 +55,15 @@ inline constexpr std::
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
return static_cast<Dest>(unsafe_cast<std::underlying_type_t<Dest>>(s));
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
unsafe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
return unsafe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
}

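The static_asserts inside `safe_cast` above amount to two rules: never cast signed to unsigned, and when the signedness differs the destination must be strictly wider (`sizeof(Dest) >= sizeof(Src) + not_same`). The standalone trait below spells out which conversions those rules admit, using only standard type traits:

#include <cstdint>
#include <type_traits>

template <class Src, class Dest>
constexpr bool fits_safe_cast =
    (std::is_signed_v<Dest> || std::is_unsigned_v<Src>) &&  // forbids signed -> unsigned
    sizeof(Dest) >= sizeof(Src) + unsigned(std::is_signed_v<Dest> != std::is_signed_v<Src>);

static_assert(fits_safe_cast<std::uint32_t, std::uint64_t>);   // widening, same signedness
static_assert(fits_safe_cast<std::int32_t, std::int32_t>);     // same type
static_assert(fits_safe_cast<std::uint32_t, std::int64_t>);    // unsigned -> strictly wider signed
static_assert(!fits_safe_cast<std::uint32_t, std::int32_t>);   // same width, signedness differs
static_assert(!fits_safe_cast<std::int32_t, std::uint64_t>);   // signed -> unsigned is never safe

int main() {}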

@@ -36,10 +36,8 @@ public:
}
scope_exit(scope_exit&& rhs) noexcept(
std::is_nothrow_move_constructible_v<EF> ||
std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}, execute_on_destruction_{rhs.execute_on_destruction_}
{
rhs.release();
}
@@ -50,14 +48,11 @@ public:
template <class EFP>
explicit scope_exit(
EFP&& f,
std::enable_if_t<
!std::is_same_v<std::remove_cv_t<EFP>, scope_exit> &&
std::is_constructible_v<EF, EFP>>* = 0) noexcept
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_exit> && std::is_constructible_v<EF, EFP>>* =
0) noexcept
: exit_function_{std::forward<EFP>(f)}
{
static_assert(
std::
is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
static_assert(std::is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
}
void
@@ -80,14 +75,12 @@ class scope_fail
public:
~scope_fail()
{
if (execute_on_destruction_ &&
std::uncaught_exceptions() > uncaught_on_creation_)
if (execute_on_destruction_ && std::uncaught_exceptions() > uncaught_on_creation_)
exit_function_();
}
scope_fail(scope_fail&& rhs) noexcept(
std::is_nothrow_move_constructible_v<EF> ||
std::is_nothrow_copy_constructible_v<EF>)
std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
, uncaught_on_creation_{rhs.uncaught_on_creation_}
@@ -101,14 +94,11 @@ public:
template <class EFP>
explicit scope_fail(
EFP&& f,
std::enable_if_t<
!std::is_same_v<std::remove_cv_t<EFP>, scope_fail> &&
std::is_constructible_v<EF, EFP>>* = 0) noexcept
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_fail> && std::is_constructible_v<EF, EFP>>* =
0) noexcept
: exit_function_{std::forward<EFP>(f)}
{
static_assert(
std::
is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
static_assert(std::is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
}
void
@@ -131,14 +121,12 @@ class scope_success
public:
~scope_success() noexcept(noexcept(exit_function_()))
{
if (execute_on_destruction_ &&
std::uncaught_exceptions() <= uncaught_on_creation_)
if (execute_on_destruction_ && std::uncaught_exceptions() <= uncaught_on_creation_)
exit_function_();
}
scope_success(scope_success&& rhs) noexcept(
std::is_nothrow_move_constructible_v<EF> ||
std::is_nothrow_copy_constructible_v<EF>)
std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
, uncaught_on_creation_{rhs.uncaught_on_creation_}
@@ -152,9 +140,7 @@ public:
template <class EFP>
explicit scope_success(
EFP&& f,
std::enable_if_t<
!std::is_same_v<std::remove_cv_t<EFP>, scope_success> &&
std::is_constructible_v<EF, EFP>>* =
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_success> && std::is_constructible_v<EF, EFP>>* =
0) noexcept(std::is_nothrow_constructible_v<EF, EFP> || std::is_nothrow_constructible_v<EF, EFP&>)
: exit_function_{std::forward<EFP>(f)}
{
@@ -213,12 +199,9 @@ class scope_unlock
std::unique_lock<Mutex>* plock;
public:
explicit scope_unlock(std::unique_lock<Mutex>& lock) noexcept(true)
: plock(&lock)
explicit scope_unlock(std::unique_lock<Mutex>& lock) noexcept(true) : plock(&lock)
{
XRPL_ASSERT(
plock->owns_lock(),
"xrpl::scope_unlock::scope_unlock : mutex must be locked");
XRPL_ASSERT(plock->owns_lock(), "xrpl::scope_unlock::scope_unlock : mutex must be locked");
plock->unlock();
}

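`scope_fail` above runs its callback only when its destructor executes with more exceptions in flight than existed at construction, i.e. only during stack unwinding. A compact sketch of that mechanism as a stripped-down guard (not the full header, which also supports release and move):

#include <exception>
#include <iostream>
#include <stdexcept>

template <class F>
class ScopeFail
{
    F f_;
    int uncaught_on_creation_ = std::uncaught_exceptions();

public:
    explicit ScopeFail(F f) : f_{std::move(f)} {}
    ~ScopeFail()
    {
        // Fire only if an exception is propagating through this scope.
        if (std::uncaught_exceptions() > uncaught_on_creation_)
            f_();
    }
};

void work(bool fail)
{
    ScopeFail guard{[] { std::cout << "rolled back\n"; }};
    if (fail)
        throw std::runtime_error("boom");
    std::cout << "committed\n";
}

int main()
{
    work(false);          // prints "committed"
    try { work(true); }   // prints "rolled back"
    catch (std::exception const&) {}
}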

@@ -100,12 +100,9 @@ public:
@note For performance reasons, you should strive to have `lock` be
on a cacheline by itself.
*/
packed_spinlock(std::atomic<T>& lock, int index)
: bits_(lock), mask_(static_cast<T>(1) << index)
packed_spinlock(std::atomic<T>& lock, int index) : bits_(lock), mask_(static_cast<T>(1) << index)
{
XRPL_ASSERT(
index >= 0 && (mask_ != 0),
"xrpl::packed_spinlock::packed_spinlock : valid index and mask");
XRPL_ASSERT(index >= 0 && (mask_ != 0), "xrpl::packed_spinlock::packed_spinlock : valid index and mask");
}
[[nodiscard]] bool
@@ -178,10 +175,7 @@ public:
T expected = 0;
return lock_.compare_exchange_weak(
expected,
std::numeric_limits<T>::max(),
std::memory_order_acquire,
std::memory_order_relaxed);
expected, std::numeric_limits<T>::max(), std::memory_order_acquire, std::memory_order_relaxed);
}
void

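`packed_spinlock` above carves an individual lock out of a single bit of a shared atomic word (`mask_ = 1 << index`), so many small locks can deliberately share one cache line. The lock and unlock bodies are not part of this excerpt; a conventional bit lock of this shape is sketched below as an assumption, not as the rippled implementation:

#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>

class PackedSpinlock
{
    std::atomic<std::uint64_t>& bits_;
    std::uint64_t const mask_;

public:
    PackedSpinlock(std::atomic<std::uint64_t>& lock, int index) : bits_(lock), mask_(std::uint64_t{1} << index) {}

    bool try_lock()
    {
        // If the bit was clear before we set it, we own the lock.
        return (bits_.fetch_or(mask_, std::memory_order_acquire) & mask_) == 0;
    }

    void lock()
    {
        while (!try_lock())
            std::this_thread::yield();  // spin politely
    }

    void unlock()
    {
        bits_.fetch_and(~mask_, std::memory_order_release);
    }
};

int main()
{
    std::atomic<std::uint64_t> word{0};
    PackedSpinlock a{word, 0}, b{word, 1};  // two independent locks in one word
    a.lock();
    std::cout << b.try_lock() << '\n';  // 1: bit 1 is still free
    a.unlock();
    b.unlock();
}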

@@ -11,9 +11,7 @@ std::string
strHex(FwdIt begin, FwdIt end)
{
static_assert(
std::is_convertible<
typename std::iterator_traits<FwdIt>::iterator_category,
std::forward_iterator_tag>::value,
std::is_convertible<typename std::iterator_traits<FwdIt>::iterator_category, std::forward_iterator_tag>::value,
"FwdIt must be a forward iterator");
std::string result;
result.reserve(2 * std::distance(begin, end));


@@ -31,9 +31,7 @@ class tagged_integer
tagged_integer<Int, Tag>,
boost::bitwise<
tagged_integer<Int, Tag>,
boost::unit_steppable<
tagged_integer<Int, Tag>,
boost::shiftable<tagged_integer<Int, Tag>>>>>>
boost::unit_steppable<tagged_integer<Int, Tag>, boost::shiftable<tagged_integer<Int, Tag>>>>>>
{
private:
Int m_value;
@@ -46,14 +44,10 @@ public:
template <
class OtherInt,
class = typename std::enable_if<
std::is_integral<OtherInt>::value &&
sizeof(OtherInt) <= sizeof(Int)>::type>
class = typename std::enable_if<std::is_integral<OtherInt>::value && sizeof(OtherInt) <= sizeof(Int)>::type>
explicit constexpr tagged_integer(OtherInt value) noexcept : m_value(value)
{
static_assert(
sizeof(tagged_integer) == sizeof(Int),
"tagged_integer is adding padding");
static_assert(sizeof(tagged_integer) == sizeof(Int), "tagged_integer is adding padding");
}
bool


@@ -32,11 +32,7 @@ private:
public:
io_latency_probe(duration const& period, boost::asio::io_context& ios)
: m_count(1)
, m_period(period)
, m_ios(ios)
, m_timer(m_ios)
, m_cancel(false)
: m_count(1), m_period(period), m_ios(ios), m_timer(m_ios), m_cancel(false)
{
}
@@ -91,10 +87,7 @@ public:
std::lock_guard lock(m_mutex);
if (m_cancel)
throw std::logic_error("io_latency_probe is canceled");
boost::asio::post(
m_ios,
sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), false, this));
boost::asio::post(m_ios, sample_op<Handler>(std::forward<Handler>(handler), Clock::now(), false, this));
}
/** Initiate continuous i/o latency sampling.
@@ -108,10 +101,7 @@ public:
std::lock_guard lock(m_mutex);
if (m_cancel)
throw std::logic_error("io_latency_probe is canceled");
boost::asio::post(
m_ios,
sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), true, this));
boost::asio::post(m_ios, sample_op<Handler>(std::forward<Handler>(handler), Clock::now(), true, this));
}
private:
@@ -151,15 +141,8 @@ private:
bool m_repeat;
io_latency_probe* m_probe;
sample_op(
Handler const& handler,
time_point const& start,
bool repeat,
io_latency_probe* probe)
: m_handler(handler)
, m_start(start)
, m_repeat(repeat)
, m_probe(probe)
sample_op(Handler const& handler, time_point const& start, bool repeat, io_latency_probe* probe)
: m_handler(handler), m_start(start), m_repeat(repeat), m_probe(probe)
{
XRPL_ASSERT(
m_probe,
@@ -214,23 +197,19 @@ private:
// Calculate when we want to sample again, and
// adjust for the expected latency.
//
typename Clock::time_point const when(
now + m_probe->m_period - 2 * elapsed);
typename Clock::time_point const when(now + m_probe->m_period - 2 * elapsed);
if (when <= now)
{
// The latency is too high to maintain the desired
// period so don't bother with a timer.
//
boost::asio::post(
m_probe->m_ios,
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
boost::asio::post(m_probe->m_ios, sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
else
{
m_probe->m_timer.expires_after(when - now);
m_probe->m_timer.async_wait(
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
m_probe->m_timer.async_wait(sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
}
}
@@ -241,9 +220,7 @@ private:
if (!m_probe)
return;
typename Clock::time_point const now(Clock::now());
boost::asio::post(
m_probe->m_ios,
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
boost::asio::post(m_probe->m_ios, sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
};
};

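The scheduling step above keeps a fixed sampling period while compensating for measured handler latency: the next sample is due at `now + period - 2 * elapsed`, and if that is already in the past the probe resamples immediately. A small arithmetic sketch of that decision with plain chrono and no asio:

#include <chrono>
#include <iostream>

using namespace std::chrono;

// Returns the delay before the next sample, or zero if we are already late.
milliseconds next_delay(milliseconds period, milliseconds elapsed)
{
    auto const now = steady_clock::now();
    auto const when = now + period - 2 * elapsed;
    if (when <= now)
        return milliseconds{0};  // latency ate the whole period: sample again at once
    return duration_cast<milliseconds>(when - now);
}

int main()
{
    std::cout << next_delay(1000ms, 100ms).count() << '\n';  // 800: finished early, wait a bit less
    std::cout << next_delay(1000ms, 600ms).count() << '\n';  // 0: too slow, resample immediately
}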

@@ -29,8 +29,7 @@ private:
time_point now_;
public:
explicit manual_clock(time_point const& now = time_point(duration(0)))
: now_(now)
explicit manual_clock(time_point const& now = time_point(duration(0))) : now_(now)
{
}
@@ -44,9 +43,7 @@ public:
void
set(time_point const& when)
{
XRPL_ASSERT(
!Clock::is_steady || when >= now_,
"beast::manual_clock::set(time_point) : forward input");
XRPL_ASSERT(!Clock::is_steady || when >= now_, "beast::manual_clock::set(time_point) : forward input");
now_ = when;
}
@@ -64,8 +61,7 @@ public:
advance(std::chrono::duration<Rep, Period> const& elapsed)
{
XRPL_ASSERT(
!Clock::is_steady || (now_ + elapsed) >= now_,
"beast::manual_clock::advance(duration) : forward input");
!Clock::is_steady || (now_ + elapsed) >= now_, "beast::manual_clock::advance(duration) : forward input");
now_ += elapsed;
}


@@ -10,14 +10,12 @@ namespace beast {
/** Expire aged container items past the specified age. */
template <class AgedContainer, class Rep, class Period>
typename std::enable_if<is_aged_container<AgedContainer>::value, std::size_t>::
type
expire(AgedContainer& c, std::chrono::duration<Rep, Period> const& age)
typename std::enable_if<is_aged_container<AgedContainer>::value, std::size_t>::type
expire(AgedContainer& c, std::chrono::duration<Rep, Period> const& age)
{
std::size_t n(0);
auto const expired(c.clock().now() - age);
for (auto iter(c.chronological.cbegin());
iter != c.chronological.cend() && iter.when() <= expired;)
for (auto iter(c.chronological.cbegin()); iter != c.chronological.cend() && iter.when() <= expired;)
{
iter = c.erase(iter);
++n;


@@ -15,8 +15,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_map = detail::
aged_ordered_container<false, true, Key, T, Clock, Compare, Allocator>;
using aged_map = detail::aged_ordered_container<false, true, Key, T, Clock, Compare, Allocator>;
}


@@ -15,8 +15,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_multimap = detail::
aged_ordered_container<true, true, Key, T, Clock, Compare, Allocator>;
using aged_multimap = detail::aged_ordered_container<true, true, Key, T, Clock, Compare, Allocator>;
}


@@ -14,8 +14,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<Key>>
using aged_multiset = detail::
aged_ordered_container<true, false, Key, void, Clock, Compare, Allocator>;
using aged_multiset = detail::aged_ordered_container<true, false, Key, void, Clock, Compare, Allocator>;
}


@@ -14,8 +14,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<Key>>
using aged_set = detail::
aged_ordered_container<false, false, Key, void, Clock, Compare, Allocator>;
using aged_set = detail::aged_ordered_container<false, false, Key, void, Clock, Compare, Allocator>;
}


@@ -16,15 +16,7 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_unordered_map = detail::aged_unordered_container<
false,
true,
Key,
T,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_map = detail::aged_unordered_container<false, true, Key, T, Clock, Hash, KeyEqual, Allocator>;
}


@@ -16,15 +16,7 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_unordered_multimap = detail::aged_unordered_container<
true,
true,
Key,
T,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_multimap = detail::aged_unordered_container<true, true, Key, T, Clock, Hash, KeyEqual, Allocator>;
}


@@ -15,15 +15,8 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<Key>>
using aged_unordered_multiset = detail::aged_unordered_container<
true,
false,
Key,
void,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_multiset =
detail::aged_unordered_container<true, false, Key, void, Clock, Hash, KeyEqual, Allocator>;
}


@@ -15,15 +15,7 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<Key>>
using aged_unordered_set = detail::aged_unordered_container<
false,
false,
Key,
void,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_set = detail::aged_unordered_container<false, false, Key, void, Clock, Hash, KeyEqual, Allocator>;
}


@@ -16,14 +16,12 @@ template <bool is_const, class Iterator>
class aged_container_iterator
{
public:
using iterator_category =
typename std::iterator_traits<Iterator>::iterator_category;
using iterator_category = typename std::iterator_traits<Iterator>::iterator_category;
using value_type = typename std::conditional<
is_const,
typename Iterator::value_type::stashed::value_type const,
typename Iterator::value_type::stashed::value_type>::type;
using difference_type =
typename std::iterator_traits<Iterator>::difference_type;
using difference_type = typename std::iterator_traits<Iterator>::difference_type;
using pointer = value_type*;
using reference = value_type&;
using time_point = typename Iterator::value_type::stashed::time_point;
@@ -38,31 +36,22 @@ public:
class = typename std::enable_if<
(other_is_const == false || is_const == true) &&
std::is_same<Iterator, OtherIterator>::value == false>::type>
explicit aged_container_iterator(
aged_container_iterator<other_is_const, OtherIterator> const& other)
explicit aged_container_iterator(aged_container_iterator<other_is_const, OtherIterator> const& other)
: m_iter(other.m_iter)
{
}
// Disable constructing a const_iterator from a non-const_iterator.
template <
bool other_is_const,
class = typename std::enable_if<
other_is_const == false || is_const == true>::type>
aged_container_iterator(
aged_container_iterator<other_is_const, Iterator> const& other)
: m_iter(other.m_iter)
template <bool other_is_const, class = typename std::enable_if<other_is_const == false || is_const == true>::type>
aged_container_iterator(aged_container_iterator<other_is_const, Iterator> const& other) : m_iter(other.m_iter)
{
}
// Disable assigning a const_iterator to a non-const iterator
template <bool other_is_const, class OtherIterator>
auto
operator=(
aged_container_iterator<other_is_const, OtherIterator> const& other) ->
typename std::enable_if<
other_is_const == false || is_const == true,
aged_container_iterator&>::type
operator=(aged_container_iterator<other_is_const, OtherIterator> const& other) ->
typename std::enable_if<other_is_const == false || is_const == true, aged_container_iterator&>::type
{
m_iter = other.m_iter;
return *this;
@@ -70,16 +59,14 @@ public:
template <bool other_is_const, class OtherIterator>
bool
operator==(aged_container_iterator<other_is_const, OtherIterator> const&
other) const
operator==(aged_container_iterator<other_is_const, OtherIterator> const& other) const
{
return m_iter == other.m_iter;
}
template <bool other_is_const, class OtherIterator>
bool
operator!=(aged_container_iterator<other_is_const, OtherIterator> const&
other) const
operator!=(aged_container_iterator<other_is_const, OtherIterator> const& other) const
{
return m_iter != other.m_iter;
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -17,16 +17,11 @@ namespace detail {
template <class T>
struct is_empty_base_optimization_derived
: std::integral_constant<
bool,
std::is_empty<T>::value && !boost::is_final<T>::value>
: std::integral_constant<bool, std::is_empty<T>::value && !boost::is_final<T>::value>
{
};
template <
class T,
int UniqueID = 0,
bool isDerived = is_empty_base_optimization_derived<T>::value>
template <class T, int UniqueID = 0, bool isDerived = is_empty_base_optimization_derived<T>::value>
class empty_base_optimization : private T
{
public:
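
The reformatted trait and default template argument implement the classic empty-base-optimization idiom: an empty, non-final policy type is stored as a private base so it contributes no storage. A self-contained sketch of the pattern under our own names (`compressed`, `keyed`), not beast's `empty_base_optimization`:

```cpp
#include <cstdio>
#include <functional>
#include <type_traits>

// When the policy is empty and non-final, inherit from it privately so it adds
// no size; otherwise fall back to an ordinary data member.
template <class T, bool UseBase = std::is_empty_v<T> && !std::is_final_v<T>>
class compressed : private T
{
public:
    explicit compressed(T t = T{}) : T(std::move(t))
    {
    }
    T const&
    member() const
    {
        return *this;
    }
};

template <class T>
class compressed<T, false>
{
    T t_;

public:
    explicit compressed(T t = T{}) : t_(std::move(t))
    {
    }
    T const&
    member() const
    {
        return t_;
    }
};

// A key holder that carries a comparator without paying for it in size.
template <class Key, class Compare = std::less<Key>>
class keyed : private compressed<Compare>
{
    Key key_{};

public:
    bool
    precedes(Key const& other) const
    {
        return this->member()(key_, other);
    }
};

int
main()
{
    std::printf("sizeof(keyed<int>) == %zu\n", sizeof(keyed<int>));  // equals sizeof(int) on common ABIs
    std::printf("%d\n", keyed<int>{}.precedes(1));                   // 1, since 0 < 1
}
```

On common ABIs `sizeof(keyed<int>)` equals `sizeof(int)`, which is the whole point of the idiom.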


@@ -35,9 +35,7 @@ template <std::size_t N>
void
setCurrentThreadName(char const (&newThreadName)[N])
{
static_assert(
N <= maxThreadNameLength + 1,
"Thread name cannot exceed 15 characters");
static_assert(N <= maxThreadNameLength + 1, "Thread name cannot exceed 15 characters");
setCurrentThreadName(std::string_view(newThreadName, N - 1));
}
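
The `static_assert` that moved onto one line is the interesting part: taking the name as `char const (&)[N]` lets the compiler see the literal's length, so over-long thread names are rejected at compile time. A short standalone sketch of the same trick (the 15-character limit and the names below are illustrative):

```cpp
#include <cstddef>
#include <cstdio>
#include <string_view>

constexpr std::size_t maxNameLength = 15;

void
setNameImpl(std::string_view name)
{
    std::printf("thread name: %.*s\n", static_cast<int>(name.size()), name.data());
}

// N includes the trailing '\0', so a literal of up to 15 visible characters passes.
template <std::size_t N>
void
setName(char const (&name)[N])
{
    static_assert(N <= maxNameLength + 1, "name cannot exceed 15 characters");
    setNameImpl(std::string_view(name, N - 1));
}

int
main()
{
    setName("ledgerclose");  // OK: 11 characters
    // setName("a-name-that-is-way-too-long");  // would fail to compile
}
```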


@@ -40,8 +40,7 @@ struct LexicalCast<std::string, In>
std::enable_if_t<std::is_enum_v<Enumeration>, bool>
operator()(std::string& out, Enumeration in)
{
out = std::to_string(
static_cast<std::underlying_type_t<Enumeration>>(in));
out = std::to_string(static_cast<std::underlying_type_t<Enumeration>>(in));
return true;
}
};
@@ -52,14 +51,10 @@ struct LexicalCast<Out, std::string_view>
{
explicit LexicalCast() = default;
static_assert(
std::is_integral_v<Out>,
"beast::LexicalCast can only be used with integral types");
static_assert(std::is_integral_v<Out>, "beast::LexicalCast can only be used with integral types");
template <class Integral = Out>
std::enable_if_t<
std::is_integral_v<Integral> && !std::is_same_v<Integral, bool>,
bool>
std::enable_if_t<std::is_integral_v<Integral> && !std::is_same_v<Integral, bool>, bool>
operator()(Integral& out, std::string_view in) const
{
auto first = in.data();
@@ -79,10 +74,9 @@ struct LexicalCast<Out, std::string_view>
std::string result;
// Convert the input to lowercase
std::transform(
in.begin(), in.end(), std::back_inserter(result), [](auto c) {
return std::tolower(static_cast<unsigned char>(c));
});
std::transform(in.begin(), in.end(), std::back_inserter(result), [](auto c) {
return std::tolower(static_cast<unsigned char>(c));
});
if (result == "1" || result == "true")
{
@@ -140,8 +134,7 @@ struct LexicalCast<Out, char const*>
bool
operator()(Out& out, char const* in) const
{
XRPL_ASSERT(
in, "beast::detail::LexicalCast(char const*) : non-null input");
XRPL_ASSERT(in, "beast::detail::LexicalCast(char const*) : non-null input");
return LexicalCast<Out, std::string_view>()(out, in);
}
};


@@ -55,8 +55,7 @@ class ListIterator
{
public:
using iterator_category = std::bidirectional_iterator_tag;
using value_type =
typename beast::detail::CopyConst<N, typename N::value_type>::type;
using value_type = typename beast::detail::CopyConst<N, typename N::value_type>::type;
using difference_type = std::ptrdiff_t;
using pointer = value_type*;
using reference = value_type&;


@@ -14,21 +14,16 @@ class LockFreeStackIterator
{
protected:
using Node = typename Container::Node;
using NodePtr =
typename std::conditional<IsConst, Node const*, Node*>::type;
using NodePtr = typename std::conditional<IsConst, Node const*, Node*>::type;
public:
using iterator_category = std::forward_iterator_tag;
using value_type = typename Container::value_type;
using difference_type = typename Container::difference_type;
using pointer = typename std::conditional<
IsConst,
typename Container::const_pointer,
typename Container::pointer>::type;
using reference = typename std::conditional<
IsConst,
typename Container::const_reference,
typename Container::reference>::type;
using pointer =
typename std::conditional<IsConst, typename Container::const_pointer, typename Container::pointer>::type;
using reference =
typename std::conditional<IsConst, typename Container::const_reference, typename Container::reference>::type;
LockFreeStackIterator() : m_node()
{
@@ -39,9 +34,7 @@ public:
}
template <bool OtherIsConst>
explicit LockFreeStackIterator(
LockFreeStackIterator<Container, OtherIsConst> const& other)
: m_node(other.m_node)
explicit LockFreeStackIterator(LockFreeStackIterator<Container, OtherIsConst> const& other) : m_node(other.m_node)
{
}
@@ -160,8 +153,7 @@ public:
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using iterator = LockFreeStackIterator<LockFreeStack<Element, Tag>, false>;
using const_iterator =
LockFreeStackIterator<LockFreeStack<Element, Tag>, true>;
using const_iterator = LockFreeStackIterator<LockFreeStack<Element, Tag>, true>;
LockFreeStack() : m_end(nullptr), m_head(&m_end)
{
@@ -199,11 +191,7 @@ public:
{
first = (old_head == &m_end);
node->m_next = old_head;
} while (!m_head.compare_exchange_strong(
old_head,
node,
std::memory_order_release,
std::memory_order_relaxed));
} while (!m_head.compare_exchange_strong(old_head, node, std::memory_order_release, std::memory_order_relaxed));
return first;
}
@@ -226,11 +214,7 @@ public:
if (node == &m_end)
return nullptr;
new_head = node->m_next.load();
} while (!m_head.compare_exchange_strong(
node,
new_head,
std::memory_order_release,
std::memory_order_relaxed));
} while (!m_head.compare_exchange_strong(node, new_head, std::memory_order_release, std::memory_order_relaxed));
return static_cast<Element*>(node);
}


@@ -27,8 +27,7 @@ template <class T>
inline void
reverse_bytes(T& t)
{
unsigned char* bytes = static_cast<unsigned char*>(
std::memmove(std::addressof(t), std::addressof(t), sizeof(T)));
unsigned char* bytes = static_cast<unsigned char*>(std::memmove(std::addressof(t), std::addressof(t), sizeof(T)));
for (unsigned i = 0; i < sizeof(T) / 2; ++i)
std::swap(bytes[i], bytes[sizeof(T) - 1 - i]);
}
@@ -53,11 +52,7 @@ template <class T, class Hasher>
inline void
maybe_reverse_bytes(T& t, Hasher&)
{
maybe_reverse_bytes(
t,
std::integral_constant<
bool,
Hasher::endian != boost::endian::order::native>{});
maybe_reverse_bytes(t, std::integral_constant<bool, Hasher::endian != boost::endian::order::native>{});
}
} // namespace detail
@@ -71,10 +66,8 @@ maybe_reverse_bytes(T& t, Hasher&)
template <class T>
struct is_uniquely_represented
: public std::integral_constant<
bool,
std::is_integral<T>::value || std::is_enum<T>::value ||
std::is_pointer<T>::value>
: public std::
integral_constant<bool, std::is_integral<T>::value || std::is_enum<T>::value || std::is_pointer<T>::value>
{
explicit is_uniquely_represented() = default;
};
@@ -92,8 +85,7 @@ struct is_uniquely_represented<T volatile> : public is_uniquely_represented<T>
};
template <class T>
struct is_uniquely_represented<T const volatile>
: public is_uniquely_represented<T>
struct is_uniquely_represented<T const volatile> : public is_uniquely_represented<T>
{
explicit is_uniquely_represented() = default;
};
@@ -104,8 +96,7 @@ template <class T, class U>
struct is_uniquely_represented<std::pair<T, U>>
: public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
is_uniquely_represented<U>::value &&
is_uniquely_represented<T>::value && is_uniquely_represented<U>::value &&
sizeof(T) + sizeof(U) == sizeof(std::pair<T, U>)>
{
explicit is_uniquely_represented() = default;
@@ -117,8 +108,7 @@ template <class... T>
struct is_uniquely_represented<std::tuple<T...>>
: public std::integral_constant<
bool,
std::conjunction_v<is_uniquely_represented<T>...> &&
sizeof(std::tuple<T...>) == (sizeof(T) + ...)>
std::conjunction_v<is_uniquely_represented<T>...> && sizeof(std::tuple<T...>) == (sizeof(T) + ...)>
{
explicit is_uniquely_represented() = default;
};
@@ -135,10 +125,8 @@ struct is_uniquely_represented<T[N]> : public is_uniquely_represented<T>
template <class T, std::size_t N>
struct is_uniquely_represented<std::array<T, N>>
: public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
sizeof(T) * N == sizeof(std::array<T, N>)>
: public std::
integral_constant<bool, is_uniquely_represented<T>::value && sizeof(T) * N == sizeof(std::array<T, N>)>
{
explicit is_uniquely_represented() = default;
};
@@ -158,12 +146,10 @@ struct is_uniquely_represented<std::array<T, N>>
*/
/** @{ */
template <class T, class HashAlgorithm>
struct is_contiguously_hashable
: public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
(sizeof(T) == 1 ||
HashAlgorithm::endian == boost::endian::order::native)>
struct is_contiguously_hashable : public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
(sizeof(T) == 1 || HashAlgorithm::endian == boost::endian::order::native)>
{
explicit is_contiguously_hashable() = default;
};
@@ -173,8 +159,7 @@ struct is_contiguously_hashable<T[N], HashAlgorithm>
: public std::integral_constant<
bool,
is_uniquely_represented<T[N]>::value &&
(sizeof(T) == 1 ||
HashAlgorithm::endian == boost::endian::order::native)>
(sizeof(T) == 1 || HashAlgorithm::endian == boost::endian::order::native)>
{
explicit is_contiguously_hashable() = default;
};
@@ -219,8 +204,7 @@ hash_append(Hasher& h, T const& t) noexcept
template <class Hasher, class T>
inline std::enable_if_t<
!is_contiguously_hashable<T, Hasher>::value &&
(std::is_integral<T>::value || std::is_pointer<T>::value ||
std::is_enum<T>::value)>
(std::is_integral<T>::value || std::is_pointer<T>::value || std::is_enum<T>::value)>
hash_append(Hasher& h, T t) noexcept
{
detail::reverse_bytes(t);
@@ -254,15 +238,11 @@ hash_append(Hasher& h, T (&a)[N]) noexcept;
template <class Hasher, class CharT, class Traits, class Alloc>
std::enable_if_t<!is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
template <class Hasher, class CharT, class Traits, class Alloc>
std::enable_if_t<is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
template <class Hasher, class T, class U>
std::enable_if_t<!is_contiguously_hashable<std::pair<T, U>, Hasher>::value>
@@ -294,14 +274,10 @@ hash_append(Hasher& h, std::unordered_set<Key, Hash, Pred, Alloc> const& s);
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<!is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
template <class Hasher, class T0, class T1, class... T>
void
hash_append(Hasher& h, T0 const& t0, T1 const& t1, T const&... t) noexcept;
@@ -320,9 +296,7 @@ hash_append(Hasher& h, T (&a)[N]) noexcept
template <class Hasher, class CharT, class Traits, class Alloc>
inline std::enable_if_t<!is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept
{
for (auto c : s)
hash_append(h, c);
@@ -331,9 +305,7 @@ hash_append(
template <class Hasher, class CharT, class Traits, class Alloc>
inline std::enable_if_t<is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept
{
h(s.data(), s.size() * sizeof(CharT));
hash_append(h, s.size());
@@ -342,8 +314,7 @@ hash_append(
// pair
template <class Hasher, class T, class U>
inline std::enable_if_t<
!is_contiguously_hashable<std::pair<T, U>, Hasher>::value>
inline std::enable_if_t<!is_contiguously_hashable<std::pair<T, U>, Hasher>::value>
hash_append(Hasher& h, std::pair<T, U> const& p) noexcept
{
hash_append(h, p.first, p.second);
@@ -380,18 +351,14 @@ hash_append(Hasher& h, std::array<T, N> const& a) noexcept
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<!is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
{
for (auto const& t : v)
hash_append(h, t);
}
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
{
h(&(v.begin()), v.size() * sizeof(Key));
}
@@ -414,10 +381,7 @@ hash_one(Hasher& h, T const& t) noexcept
template <class Hasher, class... T, std::size_t... I>
inline void
tuple_hash(
Hasher& h,
std::tuple<T...> const& t,
std::index_sequence<I...>) noexcept
tuple_hash(Hasher& h, std::tuple<T...> const& t, std::index_sequence<I...>) noexcept
{
for_each_item(hash_one(h, std::get<I>(t))...);
}
@@ -425,8 +389,7 @@ tuple_hash(
} // namespace detail
template <class Hasher, class... T>
inline std::enable_if_t<
!is_contiguously_hashable<std::tuple<T...>, Hasher>::value>
inline std::enable_if_t<!is_contiguously_hashable<std::tuple<T...>, Hasher>::value>
hash_append(Hasher& h, std::tuple<T...> const& t) noexcept
{
detail::tuple_hash(h, t, std::index_sequence_for<T...>{});
@@ -452,9 +415,7 @@ hash_append(Hasher& h, std::chrono::duration<Rep, Period> const& d) noexcept
template <class Hasher, class Clock, class Duration>
inline void
hash_append(
Hasher& h,
std::chrono::time_point<Clock, Duration> const& tp) noexcept
hash_append(Hasher& h, std::chrono::time_point<Clock, Duration> const& tp) noexcept
{
hash_append(h, tp.time_since_epoch());
}
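
These hunks re-wrap the `hash_append` machinery without changing it: a type either feeds its raw bytes to the hasher (when it is contiguously hashable) or recurses over its parts. A miniature standalone version of the pattern, with an FNV-1a stand-in hasher rather than beast's types:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>
#include <type_traits>
#include <utility>

// A "hasher" is anything callable as h(pointer, size).
struct fnv1a
{
    std::uint64_t state = 14695981039346656037ull;

    void
    operator()(void const* data, std::size_t len) noexcept
    {
        auto p = static_cast<unsigned char const*>(data);
        for (std::size_t i = 0; i < len; ++i)
            state = (state ^ p[i]) * 1099511628211ull;
    }
};

// Integers: hash the object representation directly.
template <class Hasher, class T>
std::enable_if_t<std::is_integral_v<T>>
hash_append(Hasher& h, T t) noexcept
{
    h(&t, sizeof(t));
}

// Strings: hash the characters, then the length, mirroring the header above.
template <class Hasher>
void
hash_append(Hasher& h, std::string const& s) noexcept
{
    h(s.data(), s.size());
    hash_append(h, s.size());
}

// Pairs: recurse over both members.
template <class Hasher, class T, class U>
void
hash_append(Hasher& h, std::pair<T, U> const& p) noexcept
{
    hash_append(h, p.first);
    hash_append(h, p.second);
}

int
main()
{
    fnv1a h;
    hash_append(h, std::pair<int, std::string>{42, "ledger"});
    std::printf("%llx\n", static_cast<unsigned long long>(h.state));
}
```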


@@ -49,8 +49,7 @@ private:
{
std::memcpy(writeBuffer_.data(), data, len);
writeBuffer_ = writeBuffer_.subspan(len);
readBuffer_ = std::span{
std::begin(buffer_), buffer_.size() - writeBuffer_.size()};
readBuffer_ = std::span{std::begin(buffer_), buffer_.size() - writeBuffer_.size()};
}
}
@@ -98,8 +97,7 @@ private:
{
if (seed_.has_value())
{
return XXH3_64bits_withSeed(
readBuffer_.data(), readBuffer_.size(), *seed_);
return XXH3_64bits_withSeed(readBuffer_.data(), readBuffer_.size(), *seed_);
}
else
{
@@ -128,17 +126,13 @@ public:
}
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
template <class Seed, std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
explicit xxhasher(Seed seed) : seed_(seed)
{
resetBuffers();
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
template <class Seed, std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
xxhasher(Seed seed, Seed) : seed_(seed)
{
resetBuffers();


@@ -23,9 +23,7 @@ public:
@param journal Destination for logging output.
*/
static std::shared_ptr<StatsDCollector>
New(IP::Endpoint const& address,
std::string const& prefix,
Journal journal);
New(IP::Endpoint const& address, std::string const& prefix, Journal journal);
};
} // namespace insight


@@ -26,8 +26,7 @@ struct ci_equal_pred
operator()(char c1, char c2)
{
// VFALCO TODO Use a table lookup here
return std::tolower(static_cast<unsigned char>(c1)) ==
std::tolower(static_cast<unsigned char>(c2));
return std::tolower(static_cast<unsigned char>(c1)) == std::tolower(static_cast<unsigned char>(c2));
}
};
@@ -97,8 +96,7 @@ trim_right(String const& s)
*/
template <
class FwdIt,
class Result = std::vector<
std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>,
class Result = std::vector<std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>,
class Char>
Result
split(FwdIt first, FwdIt last, Char delim)
@@ -172,10 +170,7 @@ split(FwdIt first, FwdIt last, Char delim)
return result;
}
template <
class FwdIt,
class Result = std::vector<
std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>>
template <class FwdIt, class Result = std::vector<std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>>
Result
split_commas(FwdIt first, FwdIt last)
{
@@ -223,8 +218,7 @@ public:
bool
operator==(list_iterator const& other) const
{
return other.it_ == it_ && other.end_ == end_ &&
other.value_.size() == value_.size();
return other.it_ == it_ && other.end_ == end_ && other.value_.size() == value_.size();
}
bool
@@ -288,14 +282,12 @@ list_iterator::increment()
++it_;
if (it_ == end_)
{
value_ = boost::string_ref(
&*start, std::distance(start, it_));
value_ = boost::string_ref(&*start, std::distance(start, it_));
return;
}
if (*it_ == '"')
{
value_ = boost::string_ref(
&*start, std::distance(start, it_));
value_ = boost::string_ref(&*start, std::distance(start, it_));
++it_;
return;
}
@@ -321,8 +313,7 @@ list_iterator::increment()
++it_;
if (it_ == end_ || *it_ == ',' || is_lws(*it_))
{
value_ =
boost::string_ref(&*start, std::distance(start, it_));
value_ = boost::string_ref(&*start, std::distance(start, it_));
return;
}
}
@@ -344,8 +335,7 @@ inline boost::iterator_range<list_iterator>
make_list(boost::string_ref const& field)
{
return boost::iterator_range<list_iterator>{
list_iterator{field.begin(), field.end()},
list_iterator{field.end(), field.end()}};
list_iterator{field.begin(), field.end()}, list_iterator{field.end(), field.end()}};
}
/** Returns true if the specified token exists in the list.
@@ -367,12 +357,8 @@ bool
is_keep_alive(boost::beast::http::message<isRequest, Body, Fields> const& m)
{
if (m.version() <= 10)
return boost::beast::http::token_list{
m[boost::beast::http::field::connection]}
.exists("keep-alive");
return !boost::beast::http::token_list{
m[boost::beast::http::field::connection]}
.exists("close");
return boost::beast::http::token_list{m[boost::beast::http::field::connection]}.exists("keep-alive");
return !boost::beast::http::token_list{m[boost::beast::http::field::connection]}.exists("close");
}
} // namespace rfc2616
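
The collapsed return statements in `is_keep_alive` encode the usual HTTP rule: HTTP/1.0 defaults to closing the connection unless the peer sends `Connection: keep-alive`, while HTTP/1.1 defaults to keeping it open unless the peer sends `Connection: close`. A simplified sketch (plain substring match instead of a real token-list parse):

```cpp
#include <cstdio>
#include <string_view>

// version is 10 for HTTP/1.0, 11 for HTTP/1.1; connection is the Connection header value.
bool
is_keep_alive(int version, std::string_view connection)
{
    if (version <= 10)
        return connection.find("keep-alive") != std::string_view::npos;
    return connection.find("close") == std::string_view::npos;
}

int
main()
{
    std::printf("%d %d\n", is_keep_alive(10, ""), is_keep_alive(11, ""));  // 0 1
}
```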


@@ -31,9 +31,7 @@ protected:
boost::asio::io_context ios_;
private:
boost::optional<boost::asio::executor_work_guard<
boost::asio::io_context::executor_type>>
work_;
boost::optional<boost::asio::executor_work_guard<boost::asio::io_context::executor_type>> work_;
std::vector<std::thread> threads_;
std::mutex m_;
std::condition_variable cv_;
@@ -43,8 +41,7 @@ public:
/// The type of yield context passed to functions.
using yield_context = boost::asio::yield_context;
explicit enable_yield_to(std::size_t concurrency = 1)
: work_(boost::asio::make_work_guard(ios_))
explicit enable_yield_to(std::size_t concurrency = 1) : work_(boost::asio::make_work_guard(ios_))
{
threads_.reserve(concurrency);
while (concurrency--)


@@ -23,12 +23,7 @@ global_suites()
template <class Suite>
struct insert_suite
{
insert_suite(
char const* name,
char const* module,
char const* library,
bool manual,
int priority)
insert_suite(char const* name, char const* module, char const* library, bool manual, int priority)
{
global_suites().insert<Suite>(name, module, library, manual, priority);
}


@@ -53,8 +53,7 @@ public:
//------------------------------------------------------------------------------
template <class>
selector::selector(mode_t mode, std::string const& pattern)
: mode_(mode), pat_(pattern)
selector::selector(mode_t mode, std::string const& pattern) : mode_(mode), pat_(pattern)
{
if (mode_ == automatch && pattern.empty())
mode_ = all;


@@ -140,10 +140,7 @@ reporter<_>::results::add(suite_results const& r)
if (elapsed >= std::chrono::seconds{1})
{
auto const iter = std::lower_bound(
top.begin(),
top.end(),
elapsed,
[](run_time const& t1, typename clock_type::duration const& t2) {
top.begin(), top.end(), elapsed, [](run_time const& t1, typename clock_type::duration const& t2) {
return t1.second > t2;
});
if (iter != top.end())
@@ -176,10 +173,8 @@ reporter<_>::~reporter()
os_ << std::setw(8) << fmtdur(i.second) << " " << i.first << '\n';
}
auto const elapsed = clock_type::now() - results_.start;
os_ << fmtdur(elapsed) << ", " << amount{results_.suites, "suite"} << ", "
<< amount{results_.cases, "case"} << ", "
<< amount{results_.total, "test"} << " total, "
<< amount{results_.failed, "failure"} << std::endl;
os_ << fmtdur(elapsed) << ", " << amount{results_.suites, "suite"} << ", " << amount{results_.cases, "case"} << ", "
<< amount{results_.total, "test"} << " total, " << amount{results_.failed, "failure"} << std::endl;
}
template <class _>
@@ -214,9 +209,7 @@ void
reporter<_>::on_case_begin(std::string const& name)
{
case_results_ = case_results(name);
os_ << suite_results_.name
<< (case_results_.name.empty() ? "" : (" " + case_results_.name))
<< std::endl;
os_ << suite_results_.name << (case_results_.name.empty() ? "" : (" " + case_results_.name)) << std::endl;
}
template <class _>
@@ -239,8 +232,7 @@ reporter<_>::on_fail(std::string const& reason)
{
++case_results_.failed;
++case_results_.total;
os_ << "#" << case_results_.total << " failed"
<< (reason.empty() ? "" : ": ") << reason << std::endl;
os_ << "#" << case_results_.total << " failed" << (reason.empty() ? "" : ": ") << reason << std::endl;
}
template <class _>


@@ -24,8 +24,7 @@ public:
{
}
test(bool pass_, std::string const& reason_)
: pass(pass_), reason(reason_)
test(bool pass_, std::string const& reason_) : pass(pass_), reason(reason_)
{
}


@@ -92,17 +92,13 @@ private:
}
};
template <
class CharT,
class Traits = std::char_traits<CharT>,
class Allocator = std::allocator<CharT>>
template <class CharT, class Traits = std::char_traits<CharT>, class Allocator = std::allocator<CharT>>
class log_os : public std::basic_ostream<CharT, Traits>
{
log_buf<CharT, Traits, Allocator> buf_;
public:
explicit log_os(suite& self)
: std::basic_ostream<CharT, Traits>(&buf_), buf_(self)
explicit log_os(suite& self) : std::basic_ostream<CharT, Traits>(&buf_), buf_(self)
{
}
};
@@ -241,11 +237,7 @@ public:
template <class Condition, class String>
bool
expect(
Condition const& shouldBeTrue,
String const& reason,
char const* file,
int line);
expect(Condition const& shouldBeTrue, String const& reason, char const* file, int line);
/** @} */
//
@@ -349,8 +341,7 @@ public:
}
template <class T>
scoped_testcase(suite& self, std::stringstream& ss, T const& t)
: suite_(self), ss_(ss)
scoped_testcase(suite& self, std::stringstream& ss, T const& t) : suite_(self), ss_(ss)
{
ss_.clear();
ss_.str({});
@@ -423,11 +414,7 @@ suite::expect(Condition const& shouldBeTrue, String const& reason)
template <class Condition, class String>
bool
suite::expect(
Condition const& shouldBeTrue,
String const& reason,
char const* file,
int line)
suite::expect(Condition const& shouldBeTrue, String const& reason, char const* file, int line)
{
if (shouldBeTrue)
{
@@ -576,8 +563,7 @@ suite::run(runner& r)
If the condition is false, the file and line number are reported.
*/
#define BEAST_EXPECTS(cond, reason) \
((cond) ? (pass(), true) : (fail((reason), __FILE__, __LINE__), false))
#define BEAST_EXPECTS(cond, reason) ((cond) ? (pass(), true) : (fail((reason), __FILE__, __LINE__), false))
#endif
} // namespace unit_test
@@ -587,11 +573,9 @@ suite::run(runner& r)
// detail:
// This inserts the suite with the given manual flag
#define BEAST_DEFINE_TESTSUITE_INSERT( \
Class, Module, Library, manual, priority) \
static beast::unit_test::detail::insert_suite<Class##_test> \
Library##Module##Class##_test_instance( \
#Class, #Module, #Library, manual, priority)
#define BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, manual, priority) \
static beast::unit_test::detail::insert_suite<Class##_test> Library##Module##Class##_test_instance( \
#Class, #Module, #Library, manual, priority)
//------------------------------------------------------------------------------
@@ -646,8 +630,7 @@ suite::run(runner& r)
#else
#include <xrpl/beast/unit_test/global_suites.h>
#define BEAST_DEFINE_TESTSUITE(Class, Module, Library) \
BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, false, 0)
#define BEAST_DEFINE_TESTSUITE(Class, Module, Library) BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, false, 0)
#define BEAST_DEFINE_TESTSUITE_MANUAL(Class, Module, Library) \
BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, true, 0)
#define BEAST_DEFINE_TESTSUITE_PRIO(Class, Module, Library, Priority) \


@@ -28,13 +28,7 @@ class suite_info
run_type run_;
public:
suite_info(
std::string name,
std::string module,
std::string library,
bool manual,
int priority,
run_type run)
suite_info(std::string name, std::string module, std::string library, bool manual, int priority, run_type run)
: name_(std::move(name))
, module_(std::move(module))
, library_(std::move(library))
@@ -88,10 +82,8 @@ public:
{
// we want higher priority suites sorted first, thus the negation
// of priority value here
return std::forward_as_tuple(
-lhs.priority_, lhs.library_, lhs.module_, lhs.name_) <
std::forward_as_tuple(
-rhs.priority_, rhs.library_, rhs.module_, rhs.name_);
return std::forward_as_tuple(-lhs.priority_, lhs.library_, lhs.module_, lhs.name_) <
std::forward_as_tuple(-rhs.priority_, rhs.library_, rhs.module_, rhs.name_);
}
};
@@ -100,20 +92,10 @@ public:
/// Convenience for producing suite_info for a given test type.
template <class Suite>
suite_info
make_suite_info(
std::string name,
std::string module,
std::string library,
bool manual,
int priority)
make_suite_info(std::string name, std::string module, std::string library, bool manual, int priority)
{
return suite_info(
std::move(name),
std::move(module),
std::move(library),
manual,
priority,
[](runner& r) { Suite{}(r); });
std::move(name), std::move(module), std::move(library), manual, priority, [](runner& r) { Suite{}(r); });
}
} // namespace unit_test


@@ -33,24 +33,14 @@ public:
*/
template <class Suite>
void
insert(
char const* name,
char const* module,
char const* library,
bool manual,
int priority);
insert(char const* name, char const* module, char const* library, bool manual, int priority);
};
//------------------------------------------------------------------------------
template <class Suite>
void
suite_list::insert(
char const* name,
char const* module,
char const* library,
bool manual,
int priority)
suite_list::insert(char const* name, char const* module, char const* library, bool manual, int priority)
{
#ifndef NDEBUG
{
@@ -65,8 +55,7 @@ suite_list::insert(
BOOST_ASSERT(result.second); // Duplicate type
}
#endif
cont().emplace(
make_suite_info<Suite>(name, module, library, manual, priority));
cont().emplace(make_suite_info<Suite>(name, module, library, manual, priority));
}
} // namespace unit_test


@@ -45,8 +45,7 @@ public:
template <class F, class... Args>
explicit thread(suite& s, F&& f, Args&&... args) : s_(&s)
{
std::function<void(void)> b =
std::bind(std::forward<F>(f), std::forward<Args>(args)...);
std::function<void(void)> b = std::bind(std::forward<F>(f), std::forward<Args>(args)...);
t_ = std::thread(&thread::run, this, std::move(b));
}

Some files were not shown because too many files have changed in this diff