Compare commits

333 Commits

Author SHA1 Message Date
Mayukha Vadari
ac173b6827 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2026-01-28 17:14:13 -05:00
Mayukha Vadari
69c61b2235 fix merge issues 2026-01-28 16:52:27 -05:00
Mayukha Vadari
6ae0e860ff Merge remote-tracking branch 'upstream/ripple/wasmi-host-functions' into ripple/se/fees 2026-01-28 16:39:44 -05:00
Mayukha Vadari
c077e7f073 Merge commit '4eb34f3' into ripple/se/fees 2026-01-28 16:39:06 -05:00
Mayukha Vadari
803a344c65 fix clang-format 2026-01-28 16:35:02 -05:00
Mayukha Vadari
4eb34f381a Merge branch 'ripple/wasmi' into wasmi-host-functions 2026-01-28 15:56:40 -05:00
Mayukha Vadari
72fffb6e51 Merge branch 'develop' into ripple/wasmi 2026-01-28 15:56:18 -05:00
Mayukha Vadari
f7ee580f01 Merge commit '5f638f55536def0d88b970d1018a465a238e55f4' into ripple/wasmi 2026-01-28 15:56:11 -05:00
Mayukha Vadari
122d405750 Merge commit '92046785d1fea5f9efe5a770d636792ea6cab78b' into ripple/wasmi 2026-01-28 15:56:04 -05:00
Ayaz Salikhov
f3627fb5d5 chore: Remove unnecessary boost::system requirement from conanfile (#6290) 2026-01-28 19:14:31 +00:00
Ayaz Salikhov
5f638f5553 chore: Set ColumnLimit to 120 in clang-format (#6288)
This change updates the ColumnLimit from 80 to 120, and applies clang-format to reformat the code.
2026-01-28 18:09:50 +00:00
Jingchen
92046785d1 test: Fix the xrpl.net unit test using async read (#6241)
This change makes the `read` function call in `handleConnection` async, adds a new class `TestSink` to aid debugging, and adds a new target `xrpl.tests.helpers` to hold the helper class.
2026-01-28 15:14:35 +00:00
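A minimal sketch of the async-read pattern described in the commit above, using plain Boost.Asio; the handler, buffer, and byte count shown here are illustrative assumptions rather than the actual xrpl.net test code:

#include <boost/asio.hpp>
#include <cstddef>
#include <iostream>

// Hypothetical stand-in for the test's connection handler: the blocking
// read() call is replaced with async_read plus a completion handler, so the
// caller no longer stalls while waiting for bytes to arrive.
void handleConnection(
    boost::asio::ip::tcp::socket& socket,
    boost::asio::streambuf& buffer,
    std::size_t expected)
{
    boost::asio::async_read(
        socket,
        buffer,
        boost::asio::transfer_exactly(expected),
        [](boost::system::error_code const& ec, std::size_t bytes) {
            if (ec)
                std::cerr << "read failed: " << ec.message() << '\n';
            else
                std::cout << "read " << bytes << " bytes\n";
        });
}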
Bart
b90a843ddd ci: Upload Conan recipes for develop, release candidates, and releases (#6286)
To allow developers to consume the latest unstable and (near-)stable versions of our `xrpl` Conan recipe, we should export and upload it whenever a push occurs to the corresponding branch or a release tag has been created. This way, developers do not have to figure out for themselves what the most recent shortened commit hash was to determine the latest unstable recipe version (e.g. `3.2.0-b0+a1b2c3d`) or what the most recent release (candidate) was to determine the latest (near-)stable recipe version (e.g. `3.1.0-rc2`).

Now, pushes to the `develop` branch will produce the `develop` recipe version, pushes to the `release` branch will produce the `rc` recipe version, and creation of versioned tags will produce the `release` recipe version.
2026-01-28 10:02:34 +00:00
Olek
c1c1b4ea67 Reject non-canonical binaries (#6277)
* Reject non-canonical binaries

* Review fixes

* Cleanup Number2 class

* Use enum instead of 0
2026-01-27 16:30:51 -05:00
Mayukha Vadari
977caea0a5 Merge branch 'ripple/wasmi' into ripple/wasmi-host-functions 2026-01-27 13:26:55 -05:00
Mayukha Vadari
d7ed6d6512 Merge branch 'develop' into ripple/wasmi 2026-01-27 13:26:39 -05:00
Olek
f1f2e2629f Fix for Big-Endian machines (#6245) 2026-01-27 13:05:54 -05:00
Olek
917c610f96 Ensure request size less than int limit (#6239)
* Ensure request size less than int limit

* Move size check to wasmParams function
2026-01-27 12:37:47 -05:00
Jingchen
bb529d0317 fix: Stop embedded tests from hanging on ARM by using atomic_flag (#6248)
This change replaces the mutex `stoppingMutex_`, the `atomic_bool` variable `isTimeToStop`, and the condition variable `stoppingCondition_` with an `atomic_flag` variable.

When `xrpld` is running the embedded tests as a child process, it has a control thread (the app bundle thread) that starts the application, and an application thread (the thread that executes `app_->run()`). Due to the relaxed memory ordering on ARM, it's not guaranteed that the application thread can see the change of the value resulting from the `isTimeToStop.exchange(true)` call before it is notified by `stoppingCondition_.notify_all()`, even though they do happen in the right order in the app bundle thread in `ApplicationImp::signalStop`. We therefore often get into the situation where `isTimeToStop` is `true`, but the application thread is waiting for `stoppingCondition_` to notify, because the app bundle thread may have already notified before the application thread actually starts waiting.

Switching to a single `atomic_flag` variable ensures that there is only one synchronisation object, and the memory-order guarantee provided by C++ then ensures that `notify_all` is synchronised after `test_and_set`.

Fixing this issue stops the unit tests from hanging forever, so we should see fewer (or hopefully no) timeout errors in daily GitHub Actions runs.
2026-01-26 21:39:28 +00:00
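A minimal, self-contained sketch of the pattern the commit describes (the names here are assumptions, not the actual `ApplicationImp` code): a single C++20 `atomic_flag` is both the stop state and the object being waited on, so the waiter cannot miss a notification that happens before it starts waiting.

#include <atomic>
#include <thread>

std::atomic_flag timeToStop; // initialised to clear (C++20)

void applicationThread()
{
    // Blocks until the flag no longer reads false; if the flag was already
    // set before we got here, wait() returns immediately.
    timeToStop.wait(false);
    // ... shut the application down ...
}

void controlThread()
{
    timeToStop.test_and_set(); // publish the stop request
    timeToStop.notify_all();   // wake the waiting application thread
}

int main()
{
    std::thread app(applicationThread);
    std::thread ctl(controlThread);
    app.join();
    ctl.join();
}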
Mayukha Vadari
317e533d81 clean up Wasm_test.cpp more (#6278) 2026-01-26 15:21:15 -05:00
Ed Hennis
a2f1973574 fix: Remove DEFAULT fields that change to the default in associateAsset (#6259) (#6273)
- Add Vault creation tests for showing valid range for AssetsMaximum
2026-01-26 19:58:12 +00:00
Olek
4160677878 Switch to series expansion method for ln() (#6268)
* Switch to series expansion method for ln()
* Add float lg() tests to Number tests
* Rename lg -> log10
* Add check for 0 to log10()
2026-01-26 14:04:03 -05:00
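For reference, one common series-expansion form of the natural logarithm (an assumption about the general approach; the exact series, range reduction, and precision handling used in the Number class are not shown here): with z = (x - 1)/(x + 1),

\ln(x) = 2\,\operatorname{artanh}(z) = 2 \sum_{k=0}^{\infty} \frac{z^{2k+1}}{2k+1},
\qquad x > 0, \qquad \log_{10}(x) = \frac{\ln(x)}{\ln(10)}.

The series converges quickly when x is close to 1, which is why implementations typically reduce the argument toward 1 before summing.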
Bart
847e875635 refactor: Update Boost to 1.90 (#6280)
Upcoming feature work requires functionality present in a newer Boost version. These newer versions also have improvements for sanitizers.
2026-01-26 18:54:43 +00:00
Mayukha Vadari
430696682d Merge remote-tracking branch 'upstream/ripple/se/fees' into ripple/smart-escrow 2026-01-26 09:43:14 -05:00
Mayukha Vadari
4621e4eda3 Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2026-01-26 09:29:34 -05:00
Olek
df98db1452 Check wasm return type (#6240)
* Check wasm return type

* Add more tests
2026-01-23 16:12:14 -05:00
Mayukha Vadari
981ac7abf4 simplify fee code (#6249)
* simplify lambda

* clean up fee code

* fix tests, better error handling

* simplify source_location
2026-01-23 15:14:24 -05:00
Mayukha Vadari
57d2a91ad5 test large WASM modules (#6206)
* [WIP] first attempt at large wasm test

* finish large WASM modules

* fix windows build (hopefully)

* Apply suggestions from code review

* respond to comments

* add file and line to fail

* clean up test

* add source_location

* simplify

* fix windows
2026-01-23 14:45:09 -05:00
Mayukha Vadari
778da954b4 refactor: clean up uses of std::source_location (#6272)
Since the minimum Clang version we support is 16, the checks for version < 15 are no longer necessary. This change therefore removes the macros checking if the clang version is < 15 and simplifies uses of `std::source_location`.
2026-01-23 14:09:00 -05:00
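A small illustration of the simplified pattern (the helper below is hypothetical, not the actual test code): with Clang 16 as the minimum supported version, `std::source_location::current()` can be used directly as a default argument, with no version-check macros around it.

#include <iostream>
#include <source_location>

// Reports a failure together with the file and line of the caller.
void fail(
    char const* msg,
    std::source_location loc = std::source_location::current())
{
    std::cerr << loc.file_name() << ':' << loc.line() << ": " << msg << '\n';
}

int main()
{
    fail("example failure"); // prints this file and this line number
}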
Mayukha Vadari
673476ef1b Merge branch 'ripple/wasmi' into ripple/wasmi-host-functions 2026-01-23 13:13:26 -05:00
Mayukha Vadari
8bc6f9cd70 Merge branch 'develop' into ripple/wasmi 2026-01-23 13:13:11 -05:00
Bart
0586b5678e ci: Pass missing sanitizers input to actions (#6266)
The `upload-conan-deps` workflow that's triggered on push is supposed to upload the Conan dependencies to our remote, so future PR commits can pull those dependencies from the remote. However, as the `sanitize` argument was missing, it was building different dependencies than the PRs build for the asan/tsan/ubsan jobs, so the latter would not find anything in the remote that they could use. This change sets the missing `sanitizers` input variable when running the `build-deps` action.

Separately, the `setup-conan` action showed the default profile, while we are using the `ci` profile. To ensure the profile is correctly printed when sanitizers are enabled, the environment variable the profile uses is set before calling the action.
2026-01-23 06:40:55 -05:00
Mayukha Vadari
ba5debfecd update return calculation (#6250) 2026-01-22 17:01:56 -05:00
Bart
66158d786f ci: Properly propagate Conan credentials (#6265)
The export and upload steps were initially in a separate action, where GitHub Actions does not support the `secrets` keyword, but only `inputs` for the credentials. After they were moved to a reusable workflow, only part of the references to the credentials were updated. This change correctly references the Conan credentials via `secrets` instead of `inputs`.
2026-01-22 16:05:15 -05:00
Bart
c57ffdbcb8 ci: Explicitly set version when exporting the Conan recipe (#6264)
By default the Conan recipe extracts the version from `BuildInfo.cpp`, but in some cases we want to upload a recipe with a suffix derived from the commit hash. This currently causes the upload to fail, since there is a version mismatch.

Here we explicitly set the version, and then simplify the steps in the upload workflow since we now need the recipe name (embedded within the conanfile.py but also needed when uploading), the recipe version, and the recipe ref (name/version).
2026-01-22 19:05:59 +00:00
Bart
4e3f953fc4 ci: Use plus instead of hyphen for Conan recipe version suffix (#6261)
Conan recipes use semantic versioning, and since our version already contains a hyphen, the second hyphen causes Conan to ignore it. The plus sign is a valid separator we can use instead, so this change uses a `+` to separate a version suffix (commit hash) instead of a `-`.
2026-01-22 16:42:53 +00:00
Pratik Mankawde
a4f8aa623f chore: Detect uninitialized variables in CMake files (#6247)
There were a few uninitialized variables in CMake files. This change makes sure we always check whether a variable has been initialized before using it, or in some cases initialize it by default. It will raise an error on CI if a developer introduces an uninitialized variable in CMake files.
2026-01-22 11:16:18 -05:00
Bart
8695313565 ci: Run on-trigger and on-pr when generate-version is modified (#6257)
This change ensures that the `on-pr` and `on-trigger` workflows run when the generate-version action is modified.
2026-01-22 13:48:50 +00:00
Valentin Balaschenko
68c9d5ca0f refactor: Enforce 15-char limit and simplify labels for thread naming (#6212)
This change continues the thread naming work from #5691 and #5758, which enables more useful lock contention profiling by ensuring threads/jobs have short, stable, human-readable names (rather than being truncated/failing due to OS limits). This changes diagnostic naming only (thread names and job/load-event labels), not behavior.

Specific modifications are:
* Shortens all thread/job names used with `beast::setCurrentThreadName`, so the effective Linux thread name stays within the 15-character limit.
* Removes per-ledger sequence numbers from job/thread names to avoid long labels. This improves aggregation in lock contention profiling for short-lived job executions.
2026-01-22 08:19:29 -05:00
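A rough sketch of the constraint being worked around (the helper name is illustrative, not from the codebase): Linux keeps at most 15 visible characters of a thread name (16 bytes including the terminating NUL), so any label passed to `beast::setCurrentThreadName` has to fit within that budget or it will be truncated or rejected.

#include <cstddef>
#include <string>

// Truncate a proposed thread name to the 15 characters Linux will retain.
std::string shortThreadName(std::string name)
{
    constexpr std::size_t maxLen = 15; // excludes the terminating NUL
    if (name.size() > maxLen)
        name.resize(maxLen);
    return name;
}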
Mayukha Vadari
f4a27c9b6d minor refactor of Wasm_test (#6229) 2026-01-21 18:05:48 -05:00
Olek
fd1cb318e3 Check that max parameters length is multiple of sizeof(int32) (#6253) 2026-01-21 17:22:47 -05:00
Olek
94b35a234e Make hostfunctions object shared (#6252) 2026-01-21 15:27:06 -05:00
Mayukha Vadari
b5d0078927 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2026-01-21 13:26:37 -05:00
Mayukha Vadari
43c80edaf4 Merge branch 'wasmi-host-functions' into ripple/se/fees 2026-01-21 12:58:24 -05:00
Mayukha Vadari
8c3544a58c Merge branch 'ripple/wasmi' into wasmi-host-functions 2026-01-21 12:57:47 -05:00
Mayukha Vadari
ed5139d4e3 Merge branch 'develop' into ripple/wasmi 2026-01-21 12:57:29 -05:00
Olek
42494dd4cf Ensure lifetime of imports (#6230) 2026-01-21 12:43:12 -05:00
Mayukha Vadari
ce84cc8b44 improve trace hf code (#6190)
* adjust trace statements

* add helper function

* use lambda instead

* use same paradigm in TestHostFunctions

* oops
2026-01-15 20:50:55 -05:00
Mayukha Vadari
9a9a7aab01 Add Vector256 support to the locator (#6131)
* add Vector256 nesting/length support

* [WIP] add tests

* fix tests

* simplify with helper function

* oops typo

* remove static variable

* respond to comments

* STBaseOrUInt256->FieldValue

* oops

* add more tests for coverage

* respond to comments
2026-01-15 20:14:42 -05:00
Olek
209a1a6ffa Don't throw from hostfunctions stack (#6221) 2026-01-15 19:52:22 -05:00
Mayukha Vadari
9538e9b34c Merge remote-tracking branch 'upstream/ripple/se/fees' into ripple/smart-escrow 2026-01-15 14:34:54 -05:00
Mayukha Vadari
384b3608d7 Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2026-01-15 14:17:57 -05:00
Oleksandr
fc35a9f9c8 Fix usage of the Number class 2026-01-14 19:36:50 -05:00
Oleksandr
c5e50aa221 Fix merge issues 2026-01-14 14:46:35 -05:00
Mayukha Vadari
074b1f00d5 Merge branch 'ripple/wasmi' into wasmi-host-functions 2026-01-14 13:04:28 -05:00
Mayukha Vadari
7a9d245950 Merge branch 'develop' into ripple/wasmi 2026-01-14 13:01:35 -05:00
Mayukha Vadari
1809fe07f2 remove test file 2026-01-14 12:43:12 -05:00
Mayukha Vadari
409c67494a move helper functions to separate file (#6178)
* move helper functions to separate file

* break it up into sections, split out float helpers

* split impls into multiple cpp files

* namespace detail

* fix build issue

* fix tests

* clean up

* put float helpers into wasm_float namespace
2026-01-13 20:34:57 -05:00
pwang200
fb97f7b596 fix unit tests failed due to fuel changes (#6174)
* fix unit tests failed due to fuel changes

* fix tests

* fix tests

---------

Co-authored-by: Mayukha Vadari <mvadari@ripple.com>
2026-01-13 16:48:25 -05:00
Olek
c626b6403a Fix unaligned access (#6208) 2026-01-13 16:40:42 -05:00
Olek
81cbc91927 Fix traces (#6127)
* Fix traces
* More tests for codecov
* Review fixes
* trace float test
* Fix return value for traces
* Remove SuiteJournalSink2
* Add explicit severity
* Move logs to ApplyView
* Add check for output strings
* Merging fix
2026-01-13 16:38:48 -05:00
Mayukha Vadari
845c503ea6 Merge remote-tracking branch 'upstream/ripple/se/fees' into ripple/smart-escrow 2026-01-13 13:48:15 -05:00
Mayukha Vadari
e1513570df Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2026-01-13 13:46:50 -05:00
pwang200
1c812a6c4d disable Wasm features added in Wasmi 1.0, and fix unit test fuel cost due to Wasmi 1.0 fuel changes (#6173)
* disable 4 more wasm features

* unit tests for disabled Wasmi 1.0 features

* fix unit tests failed due to fuel changes

* rearrange wasm feature unit tests

* fix gas costs

* Update src/test/app/wasm_fixtures/wat/custom_page_sizes.wat

---------

Co-authored-by: Mayukha Vadari <mvadari@ripple.com>
2026-01-12 22:04:33 -05:00
Mayukha Vadari
0724927799 Merge branch 'ripple/wasmi' into ripple/wasmi-host-functions 2026-01-12 15:17:36 -05:00
Mayukha Vadari
ff39fa59d9 Merge remote-tracking branch 'upstream/ripple/se/fees' into ripple/smart-escrow 2026-01-12 14:12:04 -05:00
Olek
d83ec96848 Switch to wasmi v1.0.6 (#6204) 2026-01-12 13:36:02 -05:00
Mayukha Vadari
f0d0739528 Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2026-01-12 13:28:07 -05:00
Mayukha Vadari
375dd50b35 Merge branch 'ripple/wasmi' into ripple/wasmi-host-functions 2026-01-12 13:19:17 -05:00
Mayukha Vadari
419d53ec4c Merge branch 'develop' into ripple/wasmi 2026-01-12 13:10:58 -05:00
Mayukha Vadari
d4d70d5675 Merge branch 'develop' into ripple/wasmi 2026-01-12 12:27:48 -05:00
Olek
6ab15f8377 Add checks to allocate (#6185) 2026-01-09 14:49:09 -05:00
pwang200
91f3d51f3d fix start function loop 2026-01-09 11:38:54 -05:00
pwang200
9ed60b45f8 section corruption unit tests 2026-01-08 16:15:36 -05:00
pwang200
d5c53dcfd2 fix Uninitialized import entries lead to undefined behavior During WASM Instantiation 2026-01-08 16:14:49 -05:00
Mayukha Vadari
8015088340 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2026-01-08 11:46:18 -05:00
Mayukha Vadari
103379836a Merge branch 'wasmi-host-functions' into ripple/se/fees 2026-01-08 11:45:05 -05:00
Mayukha Vadari
e94321fb41 Merge branch 'ripple/wasmi' into wasmi-host-functions 2026-01-08 11:44:15 -05:00
Mayukha Vadari
bbc28b3b1c Merge branch 'develop' into ripple/wasmi 2026-01-08 11:42:28 -05:00
Mayukha Vadari
843e981c8a Merge remote-tracking branch 'upstream/ripple/wasmi' into wasmi-host-functions 2026-01-07 16:52:56 -05:00
Mayukha Vadari
5aab274b7a Merge branch 'develop' into ripple/wasmi 2026-01-07 16:52:10 -05:00
Mayukha Vadari
2c30e41191 use the develop hashes 2026-01-07 16:50:45 -05:00
Mayukha Vadari
8ea5106b0b Merge branch 'develop' into ripple/wasmi 2026-01-07 14:34:49 -05:00
Mayukha Vadari
f57f67a8ae infinite loop test (#6064) 2026-01-07 11:51:58 -05:00
Mayukha Vadari
61b2fe4f64 fix test 2026-01-06 17:58:22 -05:00
Mayukha Vadari
7ee964f514 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2026-01-06 17:22:42 -05:00
Mayukha Vadari
397bc8781e fix more merge issues 2026-01-06 14:27:46 -05:00
pwang200
a98269f049 a batch of memory, table, and trap tests (#6100)
wasm memory, table, and trap unit tests
2026-01-06 14:03:18 -05:00
Mayukha Vadari
8bb8c2e38b Merge remote-tracking branch 'upstream/ripple/wasmi-host-functions' into ripple/se/fees 2026-01-06 13:40:53 -05:00
Mayukha Vadari
b66bc47ca9 fix more merge issues 2026-01-06 13:30:30 -05:00
Mayukha Vadari
0e9c7458bb fix more merge issues 2026-01-05 18:53:14 -05:00
Mayukha Vadari
36ecd3b52b Merge remote-tracking branch 'upstream/ripple/wasmi-host-functions' into ripple/se/fees 2026-01-05 18:51:46 -05:00
Mayukha Vadari
1d89940653 merge fixes 2026-01-05 18:48:09 -05:00
Mayukha Vadari
1a1a6806ec Merge branch 'ripple/wasmi' into ripple/wasmi-host-functions 2026-01-05 18:44:41 -05:00
Mayukha Vadari
1977df9c2e Merge remote-tracking branch 'upstream/develop' into ripple/wasmi 2026-01-05 18:43:49 -05:00
Mayukha Vadari
6ffbef09c2 fix gas in test 2025-12-23 11:00:57 -08:00
Mayukha Vadari
e05f907788 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-12-22 17:02:12 -08:00
Mayukha Vadari
9d1f51b01a Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2025-12-22 17:01:34 -08:00
Mayukha Vadari
6c95548df5 Merge remote-tracking branch 'upstream/develop' into ripple/wasmi 2025-12-22 15:51:19 -08:00
Olek
69ab39d658 Fix potential memory leaks found by srlabs (#6145) 2025-12-18 14:13:48 -05:00
Mayukha Vadari
b9eb66eecc fix parameter index desynchronization (#6148) 2025-12-17 14:19:34 -08:00
Mayukha Vadari
e916416642 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-12-15 10:47:26 -08:00
Mayukha Vadari
827ecc6e3a Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2025-12-15 10:44:17 -08:00
Mayukha Vadari
6a54ed7f14 fix fee overflow issue in EscrowFinish (#6130) 2025-12-12 15:12:43 -08:00
Mayukha Vadari
881087dd3d Merge remote-tracking branch 'upstream/ripple/wasmi' into wasmi-host-functions 2025-12-08 14:29:47 -05:00
Mayukha Vadari
90e0bbd0fc Merge branch 'develop' into ripple/wasmi 2025-12-08 14:28:41 -05:00
Olek
b57df290de Use conan repo for wasmi lib (#6109)
* Use conan repo for wasmi lib
* Generate lockfile
2025-12-08 13:02:01 -05:00
Mayukha Vadari
8a403f1241 Merge branch 'develop' into ripple/wasmi 2025-12-05 14:32:48 -05:00
Olek
1e0741690d Fix sign cost (#6103) 2025-12-03 18:27:06 -05:00
Mayukha Vadari
6d2640871d Merge branch 'develop' into ripple/wasmi 2025-12-02 18:40:54 -05:00
Olek
c5d178f152 HF cost for smart escrow (#6097) 2025-12-02 13:12:15 -05:00
Mayukha Vadari
5a17940e2a Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-12-02 07:48:22 -05:00
Mayukha Vadari
27ac30208d Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2025-12-02 07:44:22 -05:00
pwang200
c145598ff9 add memory limit and disable float and other advanced instructions 2025-12-02 00:09:20 -05:00
Olek
50e5608d86 wasmi HF cost 2025-12-01 20:21:52 -05:00
Mayukha Vadari
abfcc4ef67 fix tests 2025-11-25 04:17:42 +05:30
Mayukha Vadari
e40a4df777 Merge branch 'ripple/wasmi-host-functions' into ripple/se/fees 2025-11-25 03:49:06 +05:30
Mayukha Vadari
dba187f8c5 fix build issues 2025-11-25 03:42:38 +05:30
Mayukha Vadari
7a7b96107c Merge branch 'ripple/wasmi' into ripple/wasmi-host-functions 2025-11-25 03:42:05 +05:30
Mayukha Vadari
8f2f8d53b4 update gas amounts for wasmi 2025-11-25 03:32:01 +05:30
Mayukha Vadari
49acc61961 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-11-25 03:31:10 +05:30
Olek
500bb68831 Fix win build (#6076) 2025-11-24 16:56:23 -05:00
Mayukha Vadari
95d78a8600 Merge branch 'wasmi-host-functions' into ripple/se/fees 2025-11-25 03:26:19 +05:30
Mayukha Vadari
53eb0f60bc fix another build issue 2025-11-25 03:10:58 +05:30
Mayukha Vadari
41205ae928 Merge branch 'ripple/wasmi' into wasmi-host-functions 2025-11-25 03:01:51 +05:30
Mayukha Vadari
c33b0ae463 fix build issue 2025-11-25 02:58:57 +05:30
Mayukha Vadari
16087c9680 fix merge issue 2025-11-25 02:57:47 +05:30
Mayukha Vadari
56bc6d58f6 Merge branch 'ripple/wasmi' into wasmi-host-functions 2025-11-25 02:45:00 +05:30
Mayukha Vadari
ef5d335e09 update 2025-11-25 02:44:18 +05:30
Mayukha Vadari
25c3060fef remove conan.lock (temporary) 2025-11-25 02:40:57 +05:30
Mayukha Vadari
ce9f0b38a4 Merge branch 'develop' into ripple/wasmi 2025-11-25 02:33:47 +05:30
Mayukha Vadari
35f7cbf772 update 2025-11-25 02:31:51 +05:30
Mayukha Vadari
def7758a23 remove copyright stuff 2025-11-04 17:53:13 -05:00
Mayukha Vadari
58e5b4ad25 Merge remote-tracking branch 'upstream/ripple/se/fees' into ripple/smart-escrow 2025-11-04 17:52:17 -05:00
Mayukha Vadari
578413859c Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-11-04 17:52:00 -05:00
Mayukha Vadari
0db564d261 WASMI data 2025-11-04 15:57:07 -05:00
Mayukha Vadari
427b7ea104 run rename script 2025-11-04 15:29:08 -05:00
Mayukha Vadari
fa8aa49376 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-11-04 14:51:11 -05:00
Mayukha Vadari
3195eb16b2 Merge branch 'wamr-host-functions' into ripple/se/fees 2025-11-04 14:50:31 -05:00
Mayukha Vadari
7bf6878b4b fix imports 2025-11-04 14:49:45 -05:00
Mayukha Vadari
a891b49c67 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-11-04 14:24:51 -05:00
Mayukha Vadari
eed280d169 Merge branch 'wamr-host-functions' into ripple/se/fees 2025-11-04 13:37:09 -05:00
Mayukha Vadari
0bc1a115ff Merge branch 'wamr' into wamr-host-functions 2025-11-04 13:36:22 -05:00
Mayukha Vadari
334bcfa5ef Merge branch 'develop' into wamr 2025-11-04 13:36:01 -05:00
Mayukha Vadari
106dea4559 update fixtures to use the latest version of stdlib 2025-11-04 13:35:25 -05:00
Mayukha Vadari
3ffdcf8114 allow 0-value trace amounts 2025-11-04 13:19:40 -05:00
Olek
4021a7eb28 Wamr and HF security review fixes (#5965) 2025-10-31 10:34:31 -04:00
Ayaz Salikhov
0690fda0f1 Merge branch 'develop' into ripple/wamr 2025-10-30 14:12:15 +00:00
Mayukha Vadari
d0cc48c6d3 Update cmake/RippledCore.cmake
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-29 16:41:11 -04:00
Olek
d66e3c949e Chores: Sort package list (#5963) 2025-10-29 12:55:07 -04:00
Mayukha Vadari
7e2e10f02c Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-24 18:01:33 -04:00
Mayukha Vadari
aebec5378c Merge branch 'wamr-host-functions' into ripple/se/fees 2025-10-24 18:01:17 -04:00
Mayukha Vadari
0c65a386b5 fix tests 2025-10-24 18:01:01 -04:00
Mayukha Vadari
22ca691e75 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-24 17:14:54 -04:00
Mayukha Vadari
fc6ff69752 fix tests 2025-10-24 17:14:46 -04:00
Mayukha Vadari
079e251aca fix bug 2025-10-24 16:52:17 -04:00
Mayukha Vadari
67e2c1b563 Merge branch 'wamr-host-functions' into ripple/se/fees 2025-10-24 16:06:02 -04:00
Mayukha Vadari
29f5430881 fix bug 2025-10-24 16:05:38 -04:00
Mayukha Vadari
b6bd268be2 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-24 16:02:23 -04:00
Mayukha Vadari
209ee25c32 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-10-24 16:02:16 -04:00
Mayukha Vadari
101f285bcd return size from updateData 2025-10-24 16:01:45 -04:00
Mayukha Vadari
566b85b3d6 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-23 15:41:53 -04:00
Mayukha Vadari
af6beb1d7c fix ledger used for rules 2025-10-23 15:41:32 -04:00
Mayukha Vadari
7d22fe804d Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-23 15:38:46 -04:00
Mayukha Vadari
ce19c13059 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-10-23 15:38:37 -04:00
Mayukha Vadari
286dc6322b Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-10-23 15:38:28 -04:00
Mayukha Vadari
c9346cd40d Merge branch 'develop' into ripple/wamr 2025-10-23 15:38:04 -04:00
Mayukha Vadari
9cfb7ac340 fix build issue 2025-10-20 17:20:35 -04:00
Mayukha Vadari
3a0e9aab4f Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-20 14:51:15 -04:00
Mayukha Vadari
43caa1ef29 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-10-20 11:53:56 -04:00
Mayukha Vadari
1c5683ec78 Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-10-20 11:53:22 -04:00
Mayukha Vadari
9bee155d59 Merge branch 'develop' into ripple/wamr 2025-10-20 11:53:03 -04:00
Mayukha Vadari
f34b05f4de Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-10-16 12:12:05 -04:00
Mayukha Vadari
97ce25f4ce Merge branch 'develop' into ripple/wamr 2025-10-16 12:11:55 -04:00
Mayukha Vadari
3b6cd22e32 fix 2025-10-14 18:10:11 -04:00
Mayukha Vadari
c9c35780d2 reduce diff more 2025-10-14 18:00:58 -04:00
Mayukha Vadari
17f401f374 reduce diff 2025-10-14 17:58:41 -04:00
Mayukha Vadari
c6c54b3282 resolve todos 2025-10-14 17:56:12 -04:00
Mayukha Vadari
91455b6860 respond to comments 2025-10-13 15:20:03 -04:00
Olek
9e14c14a26 Use xrplf conan repo for wamr (#5862) 2025-10-13 15:11:21 -04:00
Mayukha Vadari
f16f243c22 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-13 14:01:11 -04:00
Mayukha Vadari
fe601308e7 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-10-13 13:58:09 -04:00
Mayukha Vadari
c507880d8f Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-10-13 13:57:22 -04:00
Mayukha Vadari
3f8328bbf8 Merge branch 'develop' into ripple/wamr 2025-10-13 13:55:07 -04:00
Mayukha Vadari
e41f6a71b7 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-09 17:13:13 -04:00
Mayukha Vadari
51f1be7f5b Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-10-09 17:10:52 -04:00
Mayukha Vadari
c10a5f9ef6 Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-10-09 17:10:31 -04:00
Mayukha Vadari
3c141de695 Merge branch 'develop' into ripple/wamr 2025-10-09 16:52:25 -04:00
Mayukha Vadari
db263b696c Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-10-06 17:01:32 -04:00
Mayukha Vadari
f57b855d74 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-10-06 17:01:17 -04:00
Mayukha Vadari
da2b9455f2 fix: remove get_ledger_account_hash and get_ledger_tx_hash host functions (#5850)
* remove `get_ledger_account_hash` and `get_ledger_tx_hash`

* fix build+tests
2025-10-06 16:38:40 -04:00
Mayukha Vadari
86525d8583 test: add tests for fee voting (#5747) 2025-10-06 16:32:04 -04:00
Mayukha Vadari
c41e52f57a Move Smart Escrow tests to separate file (#5849) 2025-10-06 16:27:21 -04:00
Mayukha Vadari
55772a0d07 add sfData preflight checks + tests (#5839) 2025-10-02 17:50:43 -04:00
Mayukha Vadari
965a9e89ac Merge remote-tracking branch 'upstream/ripple/se/fees' into ripple/smart-escrow 2025-10-02 15:51:35 -04:00
Mayukha Vadari
51ee06429b Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-10-02 14:35:36 -04:00
Mayukha Vadari
cb622488c0 Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-10-02 14:35:25 -04:00
Mayukha Vadari
32f971fec6 Merge branch 'develop' into ripple/wamr 2025-10-02 14:35:13 -04:00
Mayukha Vadari
ca85d09f02 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-09-30 14:43:24 -04:00
Mayukha Vadari
8dea76baa4 Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-09-30 14:42:49 -04:00
Mayukha Vadari
299fbe04c4 Merge branch 'develop' into ripple/wamr 2025-09-30 14:42:24 -04:00
Mayukha Vadari
57fc1df7d7 switch from wasm32-unknown-unknown to wasm32v1-none (#5814) 2025-09-29 15:43:22 -04:00
Mayukha Vadari
8d266d3941 remove STInt64 (#5815) 2025-09-29 15:43:10 -04:00
Mayukha Vadari
a865b4da1c Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-09-26 17:10:10 -04:00
Mayukha Vadari
e59f5f3b01 fix tests 2025-09-26 17:09:53 -04:00
Mayukha Vadari
8729688feb Merge remote-tracking branch 'upstream/ripple/se/fees' into ripple/smart-escrow 2025-09-26 16:56:51 -04:00
Mayukha Vadari
c8b06e7de1 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-09-26 16:50:34 -04:00
Mayukha Vadari
eaba76f9e6 Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-09-26 16:37:25 -04:00
Mayukha Vadari
cb702cc238 Merge branch 'develop' into ripple/wamr 2025-09-26 16:37:04 -04:00
Mayukha Vadari
f1f798bb85 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-09-26 15:52:22 -04:00
Mayukha Vadari
c3fd52c177 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-09-26 15:51:56 -04:00
Mayukha Vadari
b69b4a0a4a Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-09-26 15:51:48 -04:00
Mayukha Vadari
50d6072a73 Merge branch 'develop' into ripple/wamr 2025-09-26 15:51:40 -04:00
Olek
d24cd50e61 Switch to own wamr fork (#5808) 2025-09-23 16:39:21 -04:00
Mayukha Vadari
85bff20ae5 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-09-23 10:22:30 -04:00
Mayukha Vadari
737fab5471 fix issues 2025-09-22 23:55:57 -04:00
Mayukha Vadari
e6592e93a9 Merge branch 'ripple/se/fees' into ripple/smart-escrow 2025-09-22 18:32:36 -04:00
Mayukha Vadari
5a6c4e8ae0 Merge branch 'ripple/wamr-host-functions' into ripple/se/fees 2025-09-22 18:23:54 -04:00
Mayukha Vadari
9f5875158c Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-09-22 18:23:45 -04:00
Mayukha Vadari
c3dc33c861 Merge branch 'develop' into ripple/wamr 2025-09-22 18:23:35 -04:00
Mayukha Vadari
7420f47658 SmartEscrow fee voting changes 2025-09-22 18:22:44 -04:00
Olek
6be8f2124c Latest HF perf test (#5789) 2025-09-18 15:51:39 -04:00
Mayukha Vadari
edfed06001 fix merge issues 2025-09-18 15:39:49 -04:00
Mayukha Vadari
1c646dba91 Merge remote-tracking branch 'upstream/ripple/wamr' into wamr-host-functions 2025-09-18 15:29:02 -04:00
Mayukha Vadari
6781068058 Merge branch 'develop' into ripple/wamr 2025-09-18 15:27:54 -04:00
Mayukha Vadari
cfe57c1dfe Merge branch 'ripple/wamr' into ripple/wamr-host-functions 2025-09-18 14:37:58 -04:00
Mayukha Vadari
c34d09a971 Merge branch 'develop' into ripple/wamr 2025-09-18 14:24:34 -04:00
Mayukha Vadari
ebd90c4742 chore: remove unneeded float stuff (#5729) 2025-09-11 18:41:24 -04:00
Mayukha Vadari
ba52d34828 test: improve codecov in HostFuncWrapper.cpp (#5730) 2025-09-11 18:09:08 -04:00
Mayukha Vadari
2f869b3cfc Merge branch 'develop' into ripple/smart-escrow 2025-09-11 16:41:14 -04:00
Mayukha Vadari
ffa21c27a7 fix test 2025-09-11 16:40:59 -04:00
Mayukha Vadari
1b6312afb3 rearrange files 2025-09-11 16:34:03 -04:00
Mayukha Vadari
bf32dc2e72 add fixtures files 2025-09-11 16:28:11 -04:00
Mayukha Vadari
a15d65f7a2 update tests 2025-09-11 16:20:33 -04:00
Mayukha Vadari
2de8488855 add temBAD_WASM 2025-09-11 16:02:17 -04:00
Mayukha Vadari
129aa4bfaa bring out IOUAmount.h 2025-09-11 13:18:42 -04:00
Mayukha Vadari
b1d70db63b limits 2025-09-10 15:05:06 -04:00
Mayukha Vadari
f03c3aafe4 misc host function files 2025-09-10 15:02:48 -04:00
Mayukha Vadari
51a9f106d1 CODEOWNERS 2025-09-10 14:59:09 -04:00
Mayukha Vadari
bfc048e3fe add tests 2025-09-10 14:57:23 -04:00
Mayukha Vadari
83418644f7 add host functions 2025-09-10 14:56:21 -04:00
Mayukha Vadari
dbc9dd5bfc Add WAMR integration code 2025-09-10 14:56:08 -04:00
Mayukha Vadari
5c480cf883 Merge branch 'develop' into ripple/smart-escrow 2025-09-10 14:51:45 -04:00
Mayukha Vadari
45ab15d4b5 add WAMR dependency 2025-09-10 14:40:48 -04:00
Mayukha Vadari
adc64e7866 Merge branch 'develop' into ripple/smart-escrow 2025-09-10 10:46:51 -04:00
Mayukha Vadari
4d4a1cfe82 Merge branch 'develop' into ripple/smart-escrow 2025-09-09 16:19:27 -04:00
Mayukha Vadari
f2c7da3705 Merge branch 'develop' into ripple/smart-escrow 2025-09-09 14:51:25 -04:00
Mayukha Vadari
3ab0a82cd3 Merge branch 'develop' into ripple/smart-escrow 2025-09-08 15:20:01 -04:00
Mayukha Vadari
a46d772147 fix build and tests (#5768)
* fix conan.lock

* add conan.lock to triggers

* update on-trigger.yml too

* fix tests

* roll back unrelated changes
2025-09-04 17:05:39 -04:00
Mayukha Vadari
f3c50318e8 Merge branch 'develop' into ripple/smart-escrow 2025-09-04 13:42:59 -04:00
Mayukha Vadari
e7aa924c0e Merge branch 'develop' into ripple/smart-escrow 2025-09-03 15:54:57 -04:00
Mayukha Vadari
5266f04970 chore: rollback unrelated changes (#5737) 2025-09-02 18:26:01 -04:00
Mayukha Vadari
db957cf191 Merge branch 'develop' into ripple/smart-escrow 2025-08-29 16:54:17 -04:00
Mayukha Vadari
8ac514363d get new fees and reserves working (#5714) 2025-08-29 10:58:52 -04:00
Mayukha Vadari
c2ea68cca4 Merge branch 'develop' into ripple/smart-escrow 2025-08-28 14:02:10 -04:00
Mayukha Vadari
3d86881ce7 Merge branch 'develop' into ripple/smart-escrow 2025-08-27 13:58:38 -04:00
Mayukha Vadari
697d1470f4 change: adjust the function signatures for get_ledger_sqn and get_parent_ledger_time (#5733) 2025-08-27 13:58:27 -04:00
Mayukha Vadari
0b5f8f4051 Merge remote-tracking branch 'upstream/develop' into ripple/smart-escrow 2025-08-26 17:59:43 -04:00
Mayukha Vadari
0fed78fbcc Merge branch 'develop' into ripple/smart-escrow 2025-08-26 15:04:33 -04:00
Mayukha Vadari
8c38ef726b chore: exclude a bunch of code that doesn't need to be tested from codecov (#5721)
* exclude the bulk of HostFunc.h from codecov

* fix codecov (maybe)

* adjust

* more codecov excl
2025-08-26 15:04:27 -04:00
Mayukha Vadari
2399d90334 Merge branch 'develop' into ripple/smart-escrow 2025-08-25 10:42:48 -04:00
Mayukha Vadari
6367d68d1e Merge branch 'develop' into ripple/smart-escrow 2025-08-22 14:02:12 -04:00
Mayukha Vadari
155a84c8a3 Merge branch 'develop' into ripple/smart-escrow 2025-08-22 13:24:43 -04:00
Mayukha Vadari
10558c9eff Merge remote-tracking branch 'upstream/develop' into develop5.5 2025-08-22 10:38:10 -04:00
Mayukha Vadari
dd30d811e6 feat: last set of host functions (#5674) 2025-08-15 16:51:30 -04:00
Mayukha Vadari
293d8e4ddb test: store Rust source code in rippled, instead of just opaque hex strings (#5653) 2025-08-15 15:32:36 -04:00
Mayukha Vadari
77875c9133 fix: actually return int instead of bool (#5651) 2025-08-15 13:58:38 -04:00
Olek
647b47567e Fix float point binary format (#5688) 2025-08-15 09:51:40 -04:00
Olek
b0a1ad3b06 Disable float point instructions (#5679) 2025-08-14 14:59:55 -04:00
Mayukha Vadari
1d141bf2e8 chore: move WASM files to separate folder (#5666) 2025-08-14 11:46:10 -04:00
Olek
0d0e279ae2 Float point Hostfunctions unit tests (#5656)
* Added direct unittests for float hostfunctions
2025-08-14 10:21:21 -04:00
Mayukha Vadari
5dc0cee28a cache data instead of setting it in updateData (#5642)
Co-authored-by: Oleksandr <115580134+oleks-rip@users.noreply.github.com>
2025-08-11 15:52:21 -04:00
Mayukha Vadari
c15947da56 fix CI 2025-08-06 12:41:12 -04:00
Mayukha Vadari
9bc04244e7 Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop5 2025-08-06 12:36:25 -04:00
Mayukha Vadari
38c7a27010 clean up WASM functions a bit (#5628) 2025-08-05 18:06:34 -04:00
Mayukha Vadari
58741d2791 feat: return an int instead of boolean from finish, display in metadata (#5641)
* create STInt32 and STInt64, use it for sfWasmReturnCode in metadata

* get it actually working

* add tests

* update comment

* change type

* respond to comments
2025-08-04 09:25:47 -04:00
Mayukha Vadari
8426470506 feat: add other misc host functions (#5574) 2025-07-31 18:39:21 -04:00
Olek
ccc3280b1a Update wamr to 2.4.1 (#5640) 2025-07-31 13:42:53 -04:00
Mayukha Vadari
2847075705 fix: ensure GasUsed shows up in the metadata even on tecWASM_REJECTED (#5633)
* always set gas used

* fix

* add tests

* clean up
2025-07-31 10:57:56 -04:00
Olek
3108ca0549 Float point HF (#5611)
- added support for 8-byte floating point
2025-07-30 14:38:03 +00:00
Mayukha Vadari
3b849ff497 Add unit tests for host functions (#5578) 2025-07-29 17:54:48 -04:00
Mayukha Vadari
66776b6a85 test: codecov for WasmHostFuncWrapper.cpp (#5601) 2025-07-29 16:06:21 -04:00
Mayukha Vadari
c8c241b50d Merge remote-tracking branch 'upstream/develop' into develop4.6 2025-07-29 11:34:44 -04:00
Bronek Kozicki
44cb588371 Build options cleanup (#5581)
As we no longer support old compiler versions, we are bringing back some warnings by removing no longer relevant `-Wno-...` options.
2025-07-28 13:02:41 -04:00
Oleksandr
6f91b8f8d1 Fix windows 2025-07-28 12:29:39 -04:00
Mayukha Vadari
3d93379132 add header 2025-07-25 15:10:47 -04:00
Mayukha Vadari
84fd7d0126 Merge remote-tracking branch 'upstream/develop' into develop4.6 2025-07-25 14:56:36 -04:00
Mayukha Vadari
7f52287aae rename variables (#5609)
* rename variables

* instanceWrapper -> runtime

* size -> srcSize

* begin -> ptr
2025-07-25 11:54:31 -04:00
Mayukha Vadari
98b8986868 fix merge issues (mostly with Conan2 upgrade) 2025-07-23 15:52:03 -04:00
Mayukha Vadari
250f2842ee Merge remote-tracking branch 'upstream/develop' into develop4.5 2025-07-23 13:43:47 -04:00
Olek
9eca1a3a0c MPT and IOU support for amount and issue (#5573)
* MPT and IOU support for amount and issue

* Fix tests
Update wasm code to the latest version
Remove deprecated tests
Remove deprecated wasm
2025-07-22 17:43:21 +00:00
Mayukha Vadari
24b7a03224 feat: add more keylet host functions (#5522) 2025-07-17 12:37:41 -04:00
Mayukha Vadari
9007097d24 Simplify host function boilerplate (#5534)
* enum for HF errors

* switch getData functions to be templates

* getData<SField> working

* Slice -> Bytes in host functions

* RET -> helper function instead of macro

* get template function working

* more organization/cleanup

* fix failures

* more cleanup

* Bytes -> Slice

* SFieldParam macro -> type alias

* fix return type

* fix bugs

* replace std::make_index_sequence

* remove `failed` from output

* remove complex function

* more uniformity

* respond to comments

* enum class HostFunctionError

* rename variable

* respond to comments

* remove templating

* [WIP] basic getData tests

* weird linker error

* fix issue
2025-07-15 04:28:59 +05:30
Olek
bc445ec6a2 Add hostfunctions schedule table
Remove opcode schedule table from wamr
2025-07-11 18:08:36 -04:00
Mayukha Vadari
4fa0ae521e disallow a computation allowance of 0 (#5541) 2025-07-09 00:34:17 +05:30
Olek
7bdf5fa8b8 Fix build.md wamr version (#5535) 2025-07-04 00:48:03 +05:30
Olek
65b0b976d9 Sync error codes (#5527)
* Sync error codes
2025-07-02 17:33:39 -04:00
Elliot.
a0d275feec chore: Clear CODEOWNERS (#5528) 2025-07-02 10:39:57 -07:00
Mayukha Vadari
ece3a8d7be Merge branch 'develop' into develop4 2025-06-30 21:33:30 +05:30
Olek
463acf51b5 preflight checks for wasm (#5517) 2025-06-30 09:34:38 -04:00
Olek
1cd16fab87 Host-functions perf test fixes (#5514) 2025-06-27 09:59:28 -04:00
Olek
add55c4f33 Host functions gas cost for wasm_runtime interface (#5500) 2025-06-25 14:04:04 +00:00
Olek
51a9c0ff59 Host function gas cost (#5488)
* Update Wamr to 2.3.1
* Add gas cost per host-function
* Fix windows build
* Fix wasm test
* Add no import test
2025-06-12 15:54:49 -04:00
Mayukha Vadari
6e8a5f0f4e fix: make host function traces easier to use, fix get_NFT bug (#5466)
Co-authored-by: Olek <115580134+oleks-rip@users.noreply.github.com>
2025-06-05 14:24:13 -04:00
Mayukha Vadari
8a33702f26 fix merge issues 2025-06-05 12:38:26 -04:00
Mayukha Vadari
a072d49802 Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop3.5 2025-06-05 11:51:53 -04:00
Mayukha Vadari
a0aeeb8e07 Merge remote-tracking branch 'upstream/develop' into develop3.5 2025-06-05 11:50:38 -04:00
Olek
383b225690 Fix processing nonexistent field (#5467) 2025-06-04 17:32:11 -04:00
Mayukha Vadari
ace2247800 Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop3.5 2025-06-04 14:15:17 -04:00
Olek
6a6fed5dce More hostfunctions (#5451)
* Bug fixes:
- Fix bugs found during schedule table tests
- Add more tests
- Add parameters passing for runEscrowWasm function

* Add new host-functions
 fix wamr logging
 add runtime passing through HF
 fix runEscrowWasm interface

* Improve logs

* Fix logging bug

* Set 4k limit for update_data HF

* allHF wasm module fixes
2025-05-30 19:01:27 -04:00
Mayukha Vadari
1f8aece8cd feat: add a GasUsed parameter to the metadata (#5456) 2025-05-29 16:36:55 -04:00
Mayukha Vadari
6c6f8cd4f9 Merge remote-tracking branch 'upstream/develop' into develop3 2025-05-29 13:05:11 -04:00
Mayukha Vadari
fb1311e013 uncomment???? 2025-05-28 14:00:50 -04:00
Mayukha Vadari
ce31acf030 debug comments 2025-05-28 13:48:38 -04:00
Mayukha Vadari
31ad5ac63b Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop3 2025-05-27 18:29:41 -04:00
Mayukha Vadari
1ede0bdec4 fix: fix fixtures (#5445) 2025-05-23 17:37:14 -04:00
Mayukha Vadari
aef32ead2c better WASM logging to match rippled (#5395)
* basic logging

* pass in Journal

* log level based on journal level

* clean up

* attempt at adding WAMR logging properly

* improve logline

* maybe_unused

* fix

* fix

* fix segfault

* add test
2025-05-23 10:31:02 -04:00
Mayukha Vadari
5b43ec7f73 refactor: switch function name from ready to finish (#5430) 2025-05-20 16:12:19 -04:00
Olek
1e9ff88a00 Fix CI build issues
* Mac build fix
* Windows build fix
* Windows instruction counter fix
2025-05-08 12:39:37 -04:00
Mayukha Vadari
bb9bb5f5c5 Merge branch 'ripple/smart-escrow' into develop2 2025-05-01 18:44:06 -04:00
Mayukha Vadari
c533abd8b6 Update size and compute cap defaults (#5417) 2025-05-01 18:41:51 -04:00
Olek
bb9bc764bc Switch to WAMR (#5416)
* Switch to WAMR
2025-05-01 18:02:06 -04:00
Mayukha Vadari
b4b53a6cb7 Merge branch 'ripple/smart-escrow' into develop2 2025-04-29 15:25:54 -04:00
Mayukha Vadari
9c0204906c fix reference fee tests 2025-04-29 15:25:00 -04:00
Mayukha Vadari
4670b373c1 try to fix tests 2025-04-29 14:10:27 -04:00
Mayukha Vadari
f03b5883bd More host functions (#5411)
* getNFT

* escrow keylet

* account keylet

* credential keylet

* oracle keylet

* hook everything in

* fix stuff
2025-04-29 12:39:12 -04:00
Mayukha Vadari
f8b2fe4dd5 fix imports 2025-04-28 17:43:15 -04:00
Mayukha Vadari
be4a0c9c2b Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop2 2025-04-28 17:14:28 -04:00
Mayukha Vadari
f37d52d8e9 Set up fees for WASM processing (#5393)
* set up fields

* throw error if allowance is too high

* votable gas price

* fix comments

* hook everything together

* make test less flaky (hopefully)

* fix other tests

* fix some tests

* fix tests

* clean up

* add more tests

* uncomment other tests

* respond to comments

* fix build

* respond to comments
2025-04-24 08:47:13 -04:00
Mayukha Vadari
177cdaf550 Connect votable gas limit into VM (#5360)
* [WIP] add gas limit

* [WIP] host function escrow tests

* finish test

* uncomment out tests
2025-03-25 10:55:33 -04:00
pwang200
1573a443b7 smart escrow devnet 1 host functions (#5353)
* devnet 1 host functions

* clang-format

* fix build issues
2025-03-24 17:07:17 -04:00
Mayukha Vadari
911c0466c0 Merge develop into ripple/smart-escrow (#5357)
* Set version to 2.4.0

* refactor: Remove unused and add missing includes (#5293)

The codebase is filled with includes that are unused, and which thus can be removed. At the same time, the files often do not include all headers that contain the definitions used in those files. This change uses clang-format and clang-tidy to clean up the includes, with minor manual intervention to ensure the code compiles on all platforms.

* refactor: Calculate numFeatures automatically (#5324)

Requiring manual updates of numFeatures is an annoying manual process that is easily forgotten, and leads to frequent merge conflicts. This change takes advantage of the `XRPL_FEATURE` and `XRPL_FIX` macros, and adds a new `XRPL_RETIRE` macro to automatically set `numFeatures`.

* refactor: Improve ordering of headers with clang-format (#5343)

Removes all manual header groupings from source and header files by leveraging clang-format options.

* Rename "deadlock" to "stall" in `LoadManager` (#5341)

What the LoadManager class does is stall detection, which is not the same as deadlock detection. In the condition of severe CPU starvation, LoadManager will currently intentionally crash rippled reporting `LogicError: Deadlock detected`. This error message is misleading as the condition being detected is not a deadlock. This change fixes and refactors the code in response.

* Adds hub.xrpl-commons.org as a new Bootstrap Cluster (#5263)

* fix: Error message for ledger_entry rpc (#5344)

Changes the error to `malformedAddress` for `permissioned_domain` in the `ledger_entry` rpc, when the account is not a string. This change makes it more clear to a user what is wrong with their request.

* fix: Handle invalid marker parameter in grpc call (#5317)

The `end_marker` is used to limit the range of ledger entries to fetch. If `end_marker` is less than `marker`, a crash can occur. This change adds an additional check.

* fix: trust line RPC no ripple flag (#5345)

The Trustline RPC `no_ripple` flag gets set depending on `lsfDefaultRipple` flag, which is not a flag of a trustline but of the account root. The `lsfDefaultRipple` flag does not provide any insight if this particular trust line has `lsfLowNoRipple` or `lsfHighNoRipple` flag set, so it should not be used here at all. This change simplifies the logic.

* refactor: Updates Conan dependencies: RocksDB (#5335)

Updates RocksDB to version 9.7.3, the latest version supported in Conan 1.x. A patch for 9.7.4 that fixes a memory leak is included.

* fix: Remove null pointer deref, just do abort (#5338)

This change removes the existing undefined behavior from `LogicError`, so we can be certain that there will be always a stacktrace.

De-referencing a null pointer is an old trick to generate `SIGSEGV`, which would typically also create a stacktrace. However, it is also undefined behaviour, and compilers can do something else. A more robust way to create a stacktrace while crashing the program is to use `std::abort`, which we have also used in this location for a long time. If we combine the two, we might not get the expected behaviour: the null pointer dereference followed by `std::abort`, as handled by certain compiler versions, may not immediately cause a crash. We have observed the stacktrace being wiped instead, the thread put in an indeterminate state, and then a stacktrace created without any useful information.

* chore: Add PR number to payload (#5310)

This PR adds one more payload field to the libXRPL compatibility check workflow - the PR number itself.

* chore: Update link to ripple-binary-codec (#5355)

The link to ripple-binary-codec's definitions.json appears to be outdated. The updated link is also documented here: https://xrpl.org/docs/references/protocol/binary-format#definitions-file

* Prevent consensus from getting stuck in the establish phase (#5277)

- Detects if the consensus process is "stalled". If it is, then we can declare a 
  consensus and end successfully even if we do not have 80% agreement on
  our proposal.
  - "Stalled" is defined as:
    - We have a close time consensus
    - Each disputed transaction is individually stalled:
      - It has been in the final "stuck" 95% requirement for at least 2
        (avMIN_ROUNDS) "inner rounds" of phaseEstablish,
      - and either all of the other trusted proposers or this validator, if proposing,
        have had the same vote(s) for at least 4 (avSTALLED_ROUNDS) "inner
        rounds", and at least 80% of the validators (including this one, if
        appropriate) agree about the vote (whether yes or no).
- If we have been in the establish phase for more than 10x the previous
  consensus establish phase's time, then consensus is considered "expired",
  and we will leave the round, which sends a partial validation (indicating
  that the node is moving on without validating). Two restrictions avoid
  prematurely exiting, or having an extended exit in extreme situations.
  - The 10x time is clamped to be within a range of 15s
    (ledgerMAX_CONSENSUS) to 120s (ledgerABANDON_CONSENSUS).
  - If consensus has not had an opportunity to walk through all avalanche
    states (defined as not going through 8 "inner rounds" of phaseEstablish),
    then ConsensusState::Expired is treated as ConsensusState::No.
- When enough nodes leave the round, any remaining nodes will see they've
  fallen behind, and move on, too, generally before hitting the timeout. Any
  validations or partial validations sent during this time will help the
  consensus process bring the nodes back together.

---------

Co-authored-by: Michael Legleux <mlegleux@ripple.com>
Co-authored-by: Bart <bthomee@users.noreply.github.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Darius Tumas <Tokeiito@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
Co-authored-by: cyan317 <120398799+cindyyan317@users.noreply.github.com>
Co-authored-by: Vlad <129996061+vvysokikh1@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
2025-03-20 16:47:14 -04:00
Mayukha Vadari
b6a95f9970 PoC Smart Escrows (#5340)
* wasmedge in unittest

* add WashVM.h and cpp

* accountID comparison (vector<u8>) working

* json decode tx and ledger object with two buffers working

* wasm return a buffer working

* add a failure test case to P2P3

* host function return ledger sqn

* instruction gas and host function gas

* basics

* add scaffold

* add amendment check

* working PoC

* get test working

* fix clang-format

* prototype #2

* p2p3

* [WIP] P4

* P5

* add calculateBaseFee

* add FinishFunction preflight checks (+ tests)

* additional reserve for sfFinishFunction

* higher fees for EscrowFinish

* rename amendment to SmartEscrow

* make fee voting changes, add basic tests

* clean up

* clean up

* clean up

* more cleanup

* add subscribe tests

* add more tests

* undo formatting

* undo formatting

* remove bad comment

* more debugging statements

* fix clang-format

* fix rebase issues

* fix more rebase issues

* more rebase fixes

* add source code for wasm

* respond to comments

* add const

---------

Co-authored-by: Peng Wang <pwang200@gmail.com>
2025-03-20 14:08:06 -04:00
1137 changed files with 47602 additions and 67008 deletions

@@ -37,7 +37,7 @@ BinPackParameters: false
BreakBeforeBinaryOperators: false
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
-ColumnLimit: 80
+ColumnLimit: 120
CommentPragmas: "^ IWYU pragma:"
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4

@@ -268,6 +268,7 @@ words:
- venv
- vfalco
- vinnie
+- wasmi
- wextra
- wptr
- writeme

.github/CODEOWNERS
@@ -1,8 +1,2 @@
# Allow anyone to review any change by default.
*
-# Require the rpc-reviewers team to review changes to the rpc code.
-include/xrpl/protocol/ @xrplf/rpc-reviewers
-src/libxrpl/protocol/ @xrplf/rpc-reviewers
-src/xrpld/rpc/ @xrplf/rpc-reviewers
-src/xrpld/app/misc/ @xrplf/rpc-reviewers

@@ -17,8 +17,10 @@ runs:
VERSION: ${{ github.ref_name }}
run: echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
-# When a tag is not pushed, then the version is extracted from the
-# BuildInfo.cpp file and the shortened commit hash appended to it.
+# When a tag is not pushed, then the version (e.g. 1.2.3-b0) is extracted
+# from the BuildInfo.cpp file and the shortened commit hash appended to it.
+# We use a plus sign instead of a hyphen because Conan recipe versions do
+# not support two hyphens.
- name: Generate version for non-tag event
if: ${{ github.event_name != 'tag' }}
shell: bash
@@ -32,7 +34,7 @@ runs:
echo 'Appending shortened commit hash to version.'
SHA='${{ github.sha }}'
VERSION="${VERSION}-${SHA:0:7}"
VERSION="${VERSION}+${SHA:0:7}"
echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"

@@ -31,7 +31,7 @@ runs:
conan config install conan/profiles/ -tf $(conan config home)/profiles/
echo 'Conan profile:'
-conan profile show
+conan profile show --profile ci
- name: Set up Conan remote
shell: bash

@@ -19,7 +19,6 @@ libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
libxrpl.resource > xrpl.protocol
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json

@@ -44,7 +44,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# We build and test all configurations by default, except for Windows in
# Debug, because it is too slow, as well as when code coverage is
# enabled as that mode already runs the tests.
-build_only = True
+build_only = False
if os["distro_name"] == "windows" and build_type == "Debug":
build_only = True

@@ -59,6 +59,7 @@ jobs:
# Keep the paths below in sync with those in `on-trigger.yml`.
.github/actions/build-deps/**
.github/actions/build-test/**
+.github/actions/generate-version/**
.github/actions/setup-conan/**
.github/scripts/strategy-matrix/**
.github/workflows/reusable-build.yml

@@ -16,6 +16,7 @@ on:
# Keep the paths below in sync with those in `on-pr.yml`.
- ".github/actions/build-deps/**"
- ".github/actions/build-test/**"
- ".github/actions/generate-version/**"
- ".github/actions/setup-conan/**"
- ".github/scripts/strategy-matrix/**"
- ".github/workflows/reusable-build.yml"

@@ -125,6 +125,8 @@ jobs:
subtract: ${{ inputs.nproc_subtract }}
- name: Setup Conan
+env:
+  SANITIZERS: ${{ inputs.sanitizers }}
uses: ./.github/actions/setup-conan
- name: Build dependencies

@@ -49,10 +49,6 @@ jobs:
id: version
uses: ./.github/actions/generate-version
-- name: Determine recipe reference
-  id: ref
-  run: echo "ref=xrpl/${{ steps.version.outputs.version }}" >> "${GITHUB_OUTPUT}"
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
@@ -62,17 +58,40 @@ jobs:
- name: Log into Conan remote
env:
REMOTE_NAME: ${{ inputs.remote_name }}
-REMOTE_USERNAME: ${{ inputs.remote_username }}
-REMOTE_PASSWORD: ${{ inputs.remote_password }}
+REMOTE_USERNAME: ${{ secrets.remote_username }}
+REMOTE_PASSWORD: ${{ secrets.remote_password }}
run: conan remote login "${REMOTE_NAME}" "${REMOTE_USERNAME}" --password "${REMOTE_PASSWORD}"
-- name: Upload Conan recipe
+- name: Upload Conan recipe (version)
env:
-RECIPE_REF: ${{ steps.ref.outputs.ref }}
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
-conan export .
-conan upload --confirm --check --remote="${REMOTE_NAME}" ${RECIPE_REF}
+conan export . --version=${{ steps.version.outputs.version }}
+conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/${{ steps.version.outputs.version }}
+- name: Upload Conan recipe (develop)
+  if: ${{ github.ref == 'refs/heads/develop' }}
+  env:
+    REMOTE_NAME: ${{ inputs.remote_name }}
+  run: |
+    conan export . --version=develop
+    conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/develop
+- name: Upload Conan recipe (rc)
+  if: ${{ startsWith(github.ref, 'refs/heads/release') }}
+  env:
+    REMOTE_NAME: ${{ inputs.remote_name }}
+  run: |
+    conan export . --version=rc
+    conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/rc
+- name: Upload Conan recipe (release)
+  if: ${{ github.event_name == 'tag' }}
+  env:
+    REMOTE_NAME: ${{ inputs.remote_name }}
+  run: |
+    conan export . --version=release
+    conan upload --confirm --check --remote="${REMOTE_NAME}" xrpl/release
outputs:
-ref: ${{ steps.ref.outputs.ref }}
+ref: xrpl/${{ steps.version.outputs.version }}

@@ -84,6 +84,8 @@ jobs:
subtract: ${{ env.NPROC_SUBTRACT }}
- name: Setup Conan
+env:
+  SANITIZERS: ${{ matrix.sanitizers }}
uses: ./.github/actions/setup-conan
with:
remote_name: ${{ env.CONAN_REMOTE_NAME }}
@@ -98,6 +100,7 @@ jobs:
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
+sanitizers: ${{ matrix.sanitizers }}
- name: Log into Conan remote
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}

@@ -148,8 +148,8 @@ function extract_version {
}
# Define which recipes to export.
-recipes=('ed25519' 'grpc' 'openssl' 'secp256k1' 'snappy' 'soci')
-folders=('all' 'all' '3.x.x' 'all' 'all' 'all')
+recipes=('ed25519' 'grpc' 'nudb' 'openssl' 'secp256k1' 'snappy' 'soci')
+folders=('all' 'all' 'all' '3.x.x' 'all' 'all' 'all')
# Selectively check out the recipes from our CCI fork.
cd external

@@ -8,7 +8,9 @@ if(POLICY CMP0077)
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
-file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
+if(DEFINED CMAKE_MODULE_PATH)
+  file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
+endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(xrpl)
@@ -103,6 +105,7 @@ find_package(OpenSSL REQUIRED)
find_package(secp256k1 REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(wasmi REQUIRED)
find_package(xxHash REQUIRED)
target_link_libraries(xrpl_libs INTERFACE

View File

@@ -1290,6 +1290,39 @@
# Example:
# owner_reserve = 2000000 # 2 XRP
#
# extension_compute_limit = <gas>
#
# The extension compute limit is the maximum amount of gas that can be
# consumed by a single transaction. This limit prevents a single
# transaction from consuming excessive compute resources.
#
# If this parameter is unspecified, xrpld will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# extension_compute_limit = 1000000 # 1 million gas
#
# extension_size_limit = <bytes>
#
# The extension size limit is the maximum size of a WASM extension in
# bytes. This limit prevents oversized extensions from consuming
# excessive resources.
#
# If this parameter is unspecified, xrpld will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# extension_size_limit = 100000 # 100 KB
#
# gas_price = <drops>
#
# The gas price is the conversion rate between WASM gas and its cost
# in drops.
#
# If this parameter is unspecified, xrpld will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# gas_price = 1000000 # 1 drop per gas
#-------------------------------------------------------------------------------
#
# 9. Misc Settings

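Taken together, the new extension settings might appear in an xrpld.cfg as in the sketch below; the values are the illustrative ones from the comments above, and the internal defaults apply whenever a setting is omitted:

extension_compute_limit = 1000000    # maximum gas per transaction (1 million gas)
extension_size_limit = 100000        # maximum WASM extension size in bytes (100 KB)
gas_price = 1000000                  # conversion between gas and drops (1 drop per gas)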
View File

@@ -13,6 +13,7 @@ include_guard(GLOBAL)
set(is_clang FALSE)
set(is_gcc FALSE)
set(is_msvc FALSE)
set(is_xcode FALSE)
if(CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
set(is_clang TRUE)
@@ -24,6 +25,11 @@ else()
message(FATAL_ERROR "Unsupported C++ compiler: ${CMAKE_CXX_COMPILER_ID}")
endif()
# Xcode generator detection
if(CMAKE_GENERATOR STREQUAL "Xcode")
set(is_xcode TRUE)
endif()
# --------------------------------------------------------------------
# Operating system detection

View File

@@ -32,14 +32,14 @@ target_protobuf_sources(xrpl.libpb xrpl/proto
target_compile_options(xrpl.libpb
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${XCODE}>:
$<$<BOOL:${is_msvc}>:-wd4996>
$<$<BOOL:${is_xcode}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
$<$<BOOL:${is_msvc}>:-wd4065>
$<$<NOT:$<BOOL:${is_msvc}>>:-Wno-deprecated-declarations>
)
target_link_libraries(xrpl.libpb
@@ -63,6 +63,7 @@ target_link_libraries(xrpl.imports.main
Xrpl::opts
Xrpl::syslibs
secp256k1::secp256k1
wasmi::wasmi
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>

View File

@@ -4,6 +4,12 @@
include(create_symbolic_link)
# If no suffix is defined for executables (e.g. Windows uses .exe but Linux
# and macOS use none), then explicitly set it to the empty string.
if(NOT DEFINED suffix)
set(suffix "")
endif()
install (
TARGETS
common

View File

@@ -4,6 +4,11 @@
include(CompilationEnv)
# Set defaults for optional variables to avoid uninitialized variable warnings
if(NOT DEFINED voidstar)
set(voidstar OFF)
endif()
add_library (opts INTERFACE)
add_library (Xrpl::opts ALIAS opts)
target_compile_definitions (opts
@@ -52,7 +57,7 @@ add_library (xrpl_syslibs INTERFACE)
add_library (Xrpl::syslibs ALIAS xrpl_syslibs)
target_link_libraries (xrpl_syslibs
INTERFACE
$<$<BOOL:${MSVC}>:
$<$<BOOL:${is_msvc}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
@@ -69,10 +74,10 @@ target_link_libraries (xrpl_syslibs
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${MSVC}>>:dl>
$<$<NOT:$<OR:$<BOOL:${MSVC}>,$<BOOL:${APPLE}>>>:rt>)
$<$<NOT:$<BOOL:${is_msvc}>>:dl>
$<$<NOT:$<OR:$<BOOL:${is_msvc}>,$<BOOL:${is_macos}>>>:rt>)
if (NOT MSVC)
if (NOT is_msvc)
set (THREADS_PREFER_PTHREAD_FLAG ON)
find_package (Threads)
target_link_libraries (xrpl_syslibs INTERFACE Threads::Threads)

View File

@@ -43,7 +43,10 @@
include(CompilationEnv)
# Read environment variable
set(SANITIZERS $ENV{SANITIZERS})
set(SANITIZERS "")
if(DEFINED ENV{SANITIZERS})
set(SANITIZERS "$ENV{SANITIZERS}")
endif()
# Set SANITIZERS_ENABLED flag for use in other modules
if(SANITIZERS MATCHES "address|thread|undefinedbehavior")

View File

@@ -4,10 +4,11 @@
include(CompilationEnv)
if("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set(is_ci TRUE)
else()
set(is_ci FALSE)
set(is_ci FALSE)
if(DEFINED ENV{CI})
if("$ENV{CI}" STREQUAL "true")
set(is_ci TRUE)
endif()
endif()
get_directory_property(has_parent PARENT_DIRECTORY)

View File

@@ -30,7 +30,6 @@ target_link_libraries(xrpl_boost
Boost::process
Boost::program_options
Boost::regex
Boost::system
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)

View File

@@ -3,6 +3,7 @@
"requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1765850150.075",
"xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1765850149.987",
"wasmi/1.0.6#407c9db14601a8af1c7dd3b388f3e4cd%1768164779.349",
"sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1765850149.926",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1765850149.46",
"snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1765850147.878",
@@ -11,7 +12,7 @@
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",
"openssl/3.5.4#1b986e61b38fdfda3b40bebc1b234393%1768312656.257",
"nudb/2.0.9#fb8dfd1a5557f5e0528114c2da17721e%1765850143.957",
"nudb/2.0.9#0432758a24204da08fee953ec9ea03cb%1769436073.32",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1765850143.914",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1765842973.492",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
@@ -23,7 +24,7 @@
"date/3.0.4#862e11e80030356b53c2c38599ceb32b%1765850143.772",
"c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1765850144.336",
"bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1765850143.837",
"boost/1.88.0#8852c0b72ce8271fb8ff7c53456d4983%1765850172.862",
"boost/1.90.0#d5e8defe7355494953be18524a7f135b%1765955095.179",
"abseil/20250127.0#99262a368bd01c0ccca8790dfced9719%1766517936.993"
],
"build_requires": [
@@ -42,18 +43,22 @@
],
"python_requires": [],
"overrides": {
"boost/1.90.0#d5e8defe7355494953be18524a7f135b": [
null,
"boost/1.90.0"
],
"protobuf/5.27.0": [
"protobuf/6.32.1"
],
"lz4/1.9.4": [
"lz4/1.10.0"
],
"boost/1.83.0": [
"boost/1.88.0"
],
"sqlite3/3.44.2": [
"sqlite3/3.49.1"
],
"boost/1.83.0": [
"boost/1.90.0"
],
"lz4/[>=1.9.4 <2]": [
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504"
]

View File

@@ -35,6 +35,7 @@ class Xrpl(ConanFile):
"openssl/3.5.4",
"secp256k1/0.7.0",
"soci/4.0.3",
"wasmi/1.0.6",
"zlib/1.3.1",
]
@@ -131,7 +132,7 @@ class Xrpl(ConanFile):
transitive_headers_opt = (
{"transitive_headers": True} if conan_version.split(".")[0] == "2" else {}
)
self.requires("boost/1.88.0", force=True, **transitive_headers_opt)
self.requires("boost/1.90.0", force=True, **transitive_headers_opt)
self.requires("date/3.0.4", **transitive_headers_opt)
self.requires("lz4/1.10.0", force=True)
self.requires("protobuf/6.32.1", force=True)
@@ -203,7 +204,6 @@ class Xrpl(ConanFile):
"boost::program_options",
"boost::process",
"boost::regex",
"boost::system",
"boost::thread",
"date::date",
"ed25519::ed25519",
@@ -216,6 +216,7 @@ class Xrpl(ConanFile):
"soci::soci",
"secp256k1::secp256k1",
"sqlite3::sqlite",
"wasmi::wasmi",
"xxhash::xxhash",
"zlib::zlib",
]

View File

@@ -13,9 +13,7 @@ namespace xrpl {
@throws runtime_error
*/
void
extractTarLz4(
boost::filesystem::path const& src,
boost::filesystem::path const& dst);
extractTarLz4(boost::filesystem::path const& src, boost::filesystem::path const& dst);
} // namespace xrpl

View File

@@ -14,8 +14,7 @@
namespace xrpl {
using IniFileSections =
std::unordered_map<std::string, std::vector<std::string>>;
using IniFileSections = std::unordered_map<std::string, std::vector<std::string>>;
//------------------------------------------------------------------------------
@@ -86,8 +85,7 @@ public:
if (lines_.empty())
return "";
if (lines_.size() > 1)
Throw<std::runtime_error>(
"A legacy value must have exactly one line. Section: " + name_);
Throw<std::runtime_error>("A legacy value must have exactly one line. Section: " + name_);
return lines_[0];
}
@@ -233,10 +231,7 @@ public:
The previous value, if any, is overwritten.
*/
void
overwrite(
std::string const& section,
std::string const& key,
std::string const& value);
overwrite(std::string const& section, std::string const& key, std::string const& value);
/** Remove all the key/value pairs from the section.
*/
@@ -274,9 +269,7 @@ public:
bool
had_trailing_comments() const
{
return std::any_of(map_.cbegin(), map_.cend(), [](auto s) {
return s.second.had_trailing_comments();
});
return std::any_of(map_.cbegin(), map_.cend(), [](auto s) { return s.second.had_trailing_comments(); });
}
protected:
@@ -315,10 +308,7 @@ set(T& target, std::string const& name, Section const& section)
*/
template <class T>
bool
set(T& target,
T const& defaultValue,
std::string const& name,
Section const& section)
set(T& target, T const& defaultValue, std::string const& name, Section const& section)
{
bool found_and_valid = set<T>(target, name, section);
if (!found_and_valid)
@@ -333,9 +323,7 @@ set(T& target,
// NOTE This routine might be more clumsy than the previous two
template <class T = std::string>
T
get(Section const& section,
std::string const& name,
T const& defaultValue = T{})
get(Section const& section, std::string const& name, T const& defaultValue = T{})
{
try
{

View File

@@ -25,8 +25,7 @@ public:
Buffer() = default;
/** Create an uninitialized buffer with the given size. */
explicit Buffer(std::size_t size)
: p_(size ? new std::uint8_t[size] : nullptr), size_(size)
explicit Buffer(std::size_t size) : p_(size ? new std::uint8_t[size] : nullptr), size_(size)
{
}
@@ -62,8 +61,7 @@ public:
/** Move-construct.
The other buffer is reset.
*/
Buffer(Buffer&& other) noexcept
: p_(std::move(other.p_)), size_(other.size_)
Buffer(Buffer&& other) noexcept : p_(std::move(other.p_)), size_(other.size_)
{
other.size_ = 0;
}
@@ -94,8 +92,7 @@ public:
{
// Ensure the slice isn't a subset of the buffer.
XRPL_ASSERT(
s.size() == 0 || size_ == 0 || s.data() < p_.get() ||
s.data() >= p_.get() + size_,
s.size() == 0 || size_ == 0 || s.data() < p_.get() || s.data() >= p_.get() + size_,
"xrpl::Buffer::operator=(Slice) : input not a subset");
if (auto p = alloc(s.size()))

View File

@@ -36,10 +36,7 @@ lz4Compress(void const* in, std::size_t inSize, BufferFactory&& bf)
auto compressed = bf(outCapacity);
auto compressedSize = LZ4_compress_default(
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(compressed),
inSize,
outCapacity);
reinterpret_cast<char const*>(in), reinterpret_cast<char*>(compressed), inSize, outCapacity);
if (compressedSize == 0)
Throw<std::runtime_error>("lz4 compress: failed");
@@ -70,10 +67,8 @@ lz4Decompress(
Throw<std::runtime_error>("lz4Decompress: integer overflow (output)");
if (LZ4_decompress_safe(
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(decompressed),
inSize,
decompressedSize) != decompressedSize)
reinterpret_cast<char const*>(in), reinterpret_cast<char*>(decompressed), inSize, decompressedSize) !=
decompressedSize)
Throw<std::runtime_error>("lz4Decompress: failed");
return decompressedSize;
@@ -89,11 +84,7 @@ lz4Decompress(
*/
template <typename InputStream>
std::size_t
lz4Decompress(
InputStream& in,
std::size_t inSize,
std::uint8_t* decompressed,
std::size_t decompressedSize)
lz4Decompress(InputStream& in, std::size_t inSize, std::uint8_t* decompressed, std::size_t decompressedSize)
{
std::vector<std::uint8_t> compressed;
std::uint8_t const* chunk = nullptr;
@@ -116,9 +107,7 @@ lz4Decompress(
compressed.resize(inSize);
}
chunkSize = chunkSize < (inSize - copiedInSize)
? chunkSize
: (inSize - copiedInSize);
chunkSize = chunkSize < (inSize - copiedInSize) ? chunkSize : (inSize - copiedInSize);
std::copy(chunk, chunk + chunkSize, compressed.data() + copiedInSize);
@@ -135,8 +124,7 @@ lz4Decompress(
if (in.ByteCount() > (currentBytes + copiedInSize))
in.BackUp(in.ByteCount() - currentBytes - copiedInSize);
if ((copiedInSize == 0 && chunkSize < inSize) ||
(copiedInSize > 0 && copiedInSize != inSize))
if ((copiedInSize == 0 && chunkSize < inSize) || (copiedInSize > 0 && copiedInSize != inSize))
Throw<std::runtime_error>("lz4 decompress: insufficient input size");
return lz4Decompress(chunk, inSize, decompressed, decompressedSize);

View File

@@ -56,9 +56,7 @@ private:
if (m_value != value_type())
{
std::size_t elapsed =
std::chrono::duration_cast<std::chrono::seconds>(now - m_when)
.count();
std::size_t elapsed = std::chrono::duration_cast<std::chrono::seconds>(now - m_when).count();
// A span larger than four times the window decays the
// value to an insignificant amount so just reset it.

View File

@@ -108,23 +108,20 @@ Unexpected(E (&)[N]) -> Unexpected<E const*>;
// Definition of Expected. All of the machinery comes from boost::result.
template <class T, class E>
class [[nodiscard]] Expected
: private boost::outcome_v2::result<T, E, detail::throw_policy>
class [[nodiscard]] Expected : private boost::outcome_v2::result<T, E, detail::throw_policy>
{
using Base = boost::outcome_v2::result<T, E, detail::throw_policy>;
public:
template <typename U>
requires std::convertible_to<U, T>
constexpr Expected(U&& r)
: Base(boost::outcome_v2::in_place_type_t<T>{}, std::forward<U>(r))
constexpr Expected(U&& r) : Base(boost::outcome_v2::in_place_type_t<T>{}, std::forward<U>(r))
{
}
template <typename U>
requires std::convertible_to<U, E> && (!std::is_reference_v<U>)
constexpr Expected(Unexpected<U> e)
: Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
constexpr Expected(Unexpected<U> e) : Base(boost::outcome_v2::in_place_type_t<E>{}, std::move(e.value()))
{
}
@@ -195,8 +192,7 @@ public:
// Specialization of Expected<void, E>. Allows returning either success
// (without a value) or the reason for the failure.
template <class E>
class [[nodiscard]] Expected<void, E>
: private boost::outcome_v2::result<void, E, detail::throw_policy>
class [[nodiscard]] Expected<void, E> : private boost::outcome_v2::result<void, E, detail::throw_policy>
{
using Base = boost::outcome_v2::result<void, E, detail::throw_policy>;

View File

@@ -15,10 +15,7 @@ getFileContents(
std::optional<std::size_t> maxSize = std::nullopt);
void
writeFileContents(
boost::system::error_code& ec,
boost::filesystem::path const& destPath,
std::string const& contents);
writeFileContents(boost::system::error_code& ec, boost::filesystem::path const& destPath, std::string const& contents);
} // namespace xrpl

View File

@@ -45,8 +45,8 @@ struct SharedIntrusiveAdoptNoIncrementTag
//
template <class T>
concept CAdoptTag = std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> ||
std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
concept CAdoptTag =
std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> || std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
//------------------------------------------------------------------------------
@@ -122,9 +122,7 @@ public:
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by statically casting the pointer
controlled by the rhs param.
@@ -136,9 +134,7 @@ public:
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by dynamically casting the pointer
controlled by the rhs param.
@@ -304,9 +300,7 @@ class SharedWeakUnion
// Tagged pointer. Low bit determines if this is a strong or a weak
// pointer. The low bit must be masked to zero when converting back to a
// pointer. If the low bit is '1', this is a weak pointer.
static_assert(
alignof(T) >= 2,
"Bad alignment: Combo pointer requires low bit to be zero");
static_assert(alignof(T) >= 2, "Bad alignment: Combo pointer requires low bit to be zero");
public:
SharedWeakUnion() = default;
@@ -450,9 +444,7 @@ make_SharedIntrusive(Args&&... args)
auto p = new TT(std::forward<Args>(args)...);
static_assert(
noexcept(SharedIntrusive<TT>(
std::declval<TT*>(),
std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
noexcept(SharedIntrusive<TT>(std::declval<TT*>(), std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
"SharedIntrusive constructor should not throw or this can leak "
"memory");

View File

@@ -12,9 +12,7 @@ template <class T>
template <CAdoptTag TAdoptTag>
SharedIntrusive<T>::SharedIntrusive(T* p, TAdoptTag) noexcept : ptr_{p}
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
if constexpr (std::is_same_v<TAdoptTag, SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
@@ -46,16 +44,14 @@ SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT> const& rhs)
}
template <class T>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs) : ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs) : ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
@@ -112,9 +108,7 @@ requires std::convertible_to<TT*, T*>
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive<TT>&& rhs)
{
static_assert(
!std::is_same_v<T, TT>,
"This overload should not be instantiated for T == TT");
static_assert(!std::is_same_v<T, TT>, "This overload should not be instantiated for T == TT");
unsafeReleaseAndStore(rhs.unsafeExchange(nullptr));
return *this;
@@ -139,9 +133,7 @@ template <CAdoptTag TAdoptTag>
void
SharedIntrusive<T>::adopt(T* p)
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
if constexpr (std::is_same_v<TAdoptTag, SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
@@ -157,9 +149,7 @@ SharedIntrusive<T>::~SharedIntrusive()
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
SharedIntrusive<T>::SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = static_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
@@ -171,18 +161,14 @@ SharedIntrusive<T>::SharedIntrusive(
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
SharedIntrusive<T>::SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs)
: ptr_{static_cast<T*>(rhs.unsafeExchange(nullptr))}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
SharedIntrusive<T>::SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = dynamic_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
@@ -194,9 +180,7 @@ SharedIntrusive<T>::SharedIntrusive(
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
SharedIntrusive<T>::SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs)
{
// This can be simplified without the `exchange`, but the `exchange` is kept
// in anticipation of supporting atomic operations.
@@ -315,8 +299,7 @@ WeakIntrusive<T>::WeakIntrusive(WeakIntrusive&& rhs) : ptr_{rhs.ptr_}
}
template <class T>
WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs)
: ptr_{rhs.unsafeGetRawPtr()}
WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs) : ptr_{rhs.unsafeGetRawPtr()}
{
if (ptr_)
ptr_->addWeakRef();

View File

@@ -160,22 +160,19 @@ private:
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyStartedMask =
(one << (FieldTypeBits - 1));
static constexpr FieldType partialDestroyStartedMask = (one << (FieldTypeBits - 1));
/** Flag that is set when the partialDestroy function has finished running
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyFinishedMask =
(one << (FieldTypeBits - 2));
static constexpr FieldType partialDestroyFinishedMask = (one << (FieldTypeBits - 2));
/** Mask that will zero out all the `count` bits and leave the tag bits
unchanged.
*/
static constexpr FieldType tagMask =
partialDestroyStartedMask | partialDestroyFinishedMask;
static constexpr FieldType tagMask = partialDestroyStartedMask | partialDestroyFinishedMask;
/** Mask that will zero out the `tag` bits and leave the count bits
unchanged.
@@ -184,13 +181,11 @@ private:
/** Mask that will zero out everything except the strong count.
*/
static constexpr FieldType strongMask =
((one << StrongCountNumBits) - 1) & valueMask;
static constexpr FieldType strongMask = ((one << StrongCountNumBits) - 1) & valueMask;
/** Mask that will zero out everything except the weak count.
*/
static constexpr FieldType weakMask =
(((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
static constexpr FieldType weakMask = (((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
/** Unpack the count and tag fields from the packed atomic integer form. */
struct RefCountPair
@@ -215,10 +210,8 @@ private:
FieldType
combinedValue() const noexcept;
static constexpr CountType maxStrongValue =
static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue =
static_cast<CountType>((one << WeakCountNumBits) - 1);
static constexpr CountType maxStrongValue = static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue = static_cast<CountType>((one << WeakCountNumBits) - 1);
/** Put an extra margin to detect when running up against limits.
This is only used in debug code, and is useful if we reduce the
number of bits in the strong and weak counts (to 16 and 14 bits).
@@ -274,8 +267,7 @@ IntrusiveRefCounts::releaseStrongRef() const
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
if (refCounts.compare_exchange_weak(prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
// Can't be in partial destroy because only decrementing the strong
// count to zero can start a partial destroy, and that can't happen
@@ -331,8 +323,7 @@ IntrusiveRefCounts::addWeakReleaseStrongRef() const
action = partialDestroy;
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
if (refCounts.compare_exchange_weak(prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
XRPL_ASSERT(
(!(prevIntVal & partialDestroyStartedMask)),
@@ -376,8 +367,7 @@ IntrusiveRefCounts::checkoutStrongRefFromWeak() const noexcept
auto curValue = RefCountPair{1, 1}.combinedValue();
auto desiredValue = RefCountPair{2, 1}.combinedValue();
while (!refCounts.compare_exchange_weak(
curValue, desiredValue, std::memory_order_acq_rel))
while (!refCounts.compare_exchange_weak(curValue, desiredValue, std::memory_order_acq_rel))
{
RefCountPair const prev{curValue};
if (!prev.strong)
@@ -406,20 +396,15 @@ inline IntrusiveRefCounts::~IntrusiveRefCounts() noexcept
{
#ifndef NDEBUG
auto v = refCounts.load(std::memory_order_acquire);
XRPL_ASSERT(
(!(v & valueMask)),
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
XRPL_ASSERT((!(v & valueMask)), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
auto t = v & tagMask;
XRPL_ASSERT(
(!t || t == tagMask),
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
XRPL_ASSERT((!t || t == tagMask), "xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
#endif
}
//------------------------------------------------------------------------------
inline IntrusiveRefCounts::RefCountPair::RefCountPair(
IntrusiveRefCounts::FieldType v) noexcept
inline IntrusiveRefCounts::RefCountPair::RefCountPair(IntrusiveRefCounts::FieldType v) noexcept
: strong{static_cast<CountType>(v & strongMask)}
, weak{static_cast<CountType>((v & weakMask) >> StrongCountNumBits)}
, partialDestroyStartedBit{v & partialDestroyStartedMask}
@@ -449,10 +434,8 @@ IntrusiveRefCounts::RefCountPair::combinedValue() const noexcept
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"xrpl::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"inside range");
return (static_cast<IntrusiveRefCounts::FieldType>(weak)
<< IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) |
partialDestroyStartedBit | partialDestroyFinishedBit;
return (static_cast<IntrusiveRefCounts::FieldType>(weak) << IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) | partialDestroyStartedBit | partialDestroyFinishedBit;
}
template <class T>
@@ -460,11 +443,9 @@ inline void
partialDestructorFinished(T** o)
{
T& self = **o;
IntrusiveRefCounts::RefCountPair p =
self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
IntrusiveRefCounts::RefCountPair p = self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
XRPL_ASSERT(
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit &&
!p.strong),
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit && !p.strong),
"xrpl::partialDestructorFinished : not a weak ref");
if (!p.weak)
{

View File

@@ -55,8 +55,7 @@ template <class = void>
boost::thread_specific_ptr<detail::LocalValues>&
getLocalValues()
{
static boost::thread_specific_ptr<detail::LocalValues> tsp(
&detail::LocalValues::cleanup);
static boost::thread_specific_ptr<detail::LocalValues> tsp(&detail::LocalValues::cleanup);
return tsp;
}
@@ -105,9 +104,7 @@ LocalValue<T>::operator*()
}
return *reinterpret_cast<T*>(
lvs->values
.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_))
.first->second->get());
lvs->values.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_)).first->second->get());
}
} // namespace xrpl

View File

@@ -39,22 +39,17 @@ private:
std::string partition_;
public:
Sink(
std::string const& partition,
beast::severities::Severity thresh,
Logs& logs);
Sink(std::string const& partition, beast::severities::Severity thresh, Logs& logs);
Sink(Sink const&) = delete;
Sink&
operator=(Sink const&) = delete;
void
write(beast::severities::Severity level, std::string const& text)
override;
write(beast::severities::Severity level, std::string const& text) override;
void
writeAlways(beast::severities::Severity level, std::string const& text)
override;
writeAlways(beast::severities::Severity level, std::string const& text) override;
};
/** Manages a system file containing logged output.
@@ -140,11 +135,7 @@ private:
};
std::mutex mutable mutex_;
std::map<
std::string,
std::unique_ptr<beast::Journal::Sink>,
boost::beast::iless>
sinks_;
std::map<std::string, std::unique_ptr<beast::Journal::Sink>, boost::beast::iless> sinks_;
beast::severities::Severity thresh_;
File file_;
bool silent_ = false;
@@ -180,11 +171,7 @@ public:
partition_severities() const;
void
write(
beast::severities::Severity level,
std::string const& partition,
std::string const& text,
bool console);
write(beast::severities::Severity level, std::string const& partition, std::string const& text, bool console);
std::string
rotate();
@@ -201,9 +188,7 @@ public:
}
virtual std::unique_ptr<beast::Journal::Sink>
makeSink(
std::string const& partition,
beast::severities::Severity startingLevel);
makeSink(std::string const& partition, beast::severities::Severity startingLevel);
public:
static LogSeverity

View File

@@ -74,10 +74,7 @@ struct MantissaRange
enum mantissa_scale { small, large };
explicit constexpr MantissaRange(mantissa_scale scale_)
: min(getMin(scale_))
, max(min * 10 - 1)
, log(logTen(min).value_or(-1))
, scale(scale_)
: min(getMin(scale_)), max(min * 10 - 1), log(logTen(min).value_or(-1)), scale(scale_)
{
}
@@ -107,8 +104,7 @@ private:
// Like std::integral, but only 64-bit integral types.
template <class T>
concept Integral64 =
std::is_same_v<T, std::int64_t> || std::is_same_v<T, std::uint64_t>;
concept Integral64 = std::is_same_v<T, std::int64_t> || std::is_same_v<T, std::uint64_t>;
/** Number is a floating point type that can represent a wide range of values.
*
@@ -245,22 +241,11 @@ public:
Number(rep mantissa);
explicit Number(rep mantissa, int exponent);
explicit constexpr Number(
bool negative,
internalrep mantissa,
int exponent,
unchecked) noexcept;
explicit constexpr Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept;
// Assume unsigned values are... unsigned. i.e. positive
explicit constexpr Number(
internalrep mantissa,
int exponent,
unchecked) noexcept;
explicit constexpr Number(internalrep mantissa, int exponent, unchecked) noexcept;
// Only unit tests are expected to use this ctor
explicit Number(
bool negative,
internalrep mantissa,
int exponent,
normalized);
explicit Number(bool negative, internalrep mantissa, int exponent, normalized);
// Assume unsigned values are... unsigned. i.e. positive
explicit Number(internalrep mantissa, int exponent, normalized);
@@ -310,8 +295,7 @@ public:
friend constexpr bool
operator==(Number const& x, Number const& y) noexcept
{
return x.negative_ == y.negative_ && x.mantissa_ == y.mantissa_ &&
x.exponent_ == y.exponent_;
return x.negative_ == y.negative_ && x.mantissa_ == y.mantissa_ && x.exponent_ == y.exponent_;
}
friend constexpr bool
@@ -519,37 +503,25 @@ private:
class Guard;
};
inline constexpr Number::Number(
bool negative,
internalrep mantissa,
int exponent,
unchecked) noexcept
inline constexpr Number::Number(bool negative, internalrep mantissa, int exponent, unchecked) noexcept
: negative_(negative), mantissa_{mantissa}, exponent_{exponent}
{
}
inline constexpr Number::Number(
internalrep mantissa,
int exponent,
unchecked) noexcept
inline constexpr Number::Number(internalrep mantissa, int exponent, unchecked) noexcept
: Number(false, mantissa, exponent, unchecked{})
{
}
constexpr static Number numZero{};
inline Number::Number(
bool negative,
internalrep mantissa,
int exponent,
normalized)
inline Number::Number(bool negative, internalrep mantissa, int exponent, normalized)
: Number(negative, mantissa, exponent, unchecked{})
{
normalize();
}
inline Number::Number(internalrep mantissa, int exponent, normalized)
: Number(false, mantissa, exponent, normalized{})
inline Number::Number(internalrep mantissa, int exponent, normalized) : Number(false, mantissa, exponent, normalized{})
{
}
@@ -696,15 +668,13 @@ Number::min() noexcept
inline Number
Number::max() noexcept
{
return Number{
false, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
return Number{false, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
}
inline Number
Number::lowest() noexcept
{
return Number{
true, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
return Number{true, std::min(range_.get().max, maxRep), maxExponent, unchecked{}};
}
inline bool
@@ -713,8 +683,7 @@ Number::isnormal() const noexcept
MantissaRange const& range = range_;
auto const abs_m = mantissa_;
return *this == Number{} ||
(range.min <= abs_m && abs_m <= range.max &&
(abs_m <= maxRep || abs_m % 10 == 0) && minExponent <= exponent_ &&
(range.min <= abs_m && abs_m <= range.max && (abs_m <= maxRep || abs_m % 10 == 0) && minExponent <= exponent_ &&
exponent_ <= maxExponent);
}
@@ -727,10 +696,7 @@ Number::normalizeToRange(T minMantissa, T maxMantissa) const
int exponent = exponent_;
if constexpr (std::is_unsigned_v<T>)
XRPL_ASSERT_PARTS(
!negative,
"xrpl::Number::normalizeToRange",
"Number is non-negative for unsigned range.");
XRPL_ASSERT_PARTS(!negative, "xrpl::Number::normalizeToRange", "Number is non-negative for unsigned range.");
Number::normalize(negative, mantissa, exponent, minMantissa, maxMantissa);
auto const sign = negative ? -1 : 1;
@@ -751,6 +717,10 @@ abs(Number x) noexcept
Number
power(Number const& f, unsigned n);
// logarithm with base 10
Number
log10(Number const& value, int iterations = 50);
// Returns f^(1/d)
// Uses NewtonRaphson iterations until the result stops changing
// to find the root of the polynomial g(x) = x^d - f
@@ -799,8 +769,7 @@ public:
{
Number::setround(mode_);
}
explicit saveNumberRoundMode(Number::rounding_mode mode) noexcept
: mode_{mode}
explicit saveNumberRoundMode(Number::rounding_mode mode) noexcept : mode_{mode}
{
}
saveNumberRoundMode(saveNumberRoundMode const&) = delete;
@@ -817,8 +786,7 @@ class NumberRoundModeGuard
saveNumberRoundMode saved_;
public:
explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept
: saved_{Number::setround(mode)}
explicit NumberRoundModeGuard(Number::rounding_mode mode) noexcept : saved_{Number::setround(mode)}
{
}
@@ -838,9 +806,7 @@ class NumberMantissaScaleGuard
MantissaRange::mantissa_scale const saved_;
public:
explicit NumberMantissaScaleGuard(
MantissaRange::mantissa_scale scale) noexcept
: saved_{Number::getMantissaScale()}
explicit NumberMantissaScaleGuard(MantissaRange::mantissa_scale scale) noexcept : saved_{Number::getMantissaScale()}
{
Number::setMantissaScale(scale);
}

View File

@@ -11,8 +11,7 @@ namespace xrpl {
class Resolver
{
public:
using HandlerType =
std::function<void(std::string, std::vector<beast::IP::Endpoint>)>;
using HandlerType = std::function<void(std::string, std::vector<beast::IP::Endpoint>)>;
virtual ~Resolver() = 0;
@@ -41,9 +40,7 @@ public:
}
virtual void
resolve(
std::vector<std::string> const& names,
HandlerType const& handler) = 0;
resolve(std::vector<std::string> const& names, HandlerType const& handler) = 0;
/** @} */
};

View File

@@ -5,34 +5,28 @@
namespace xrpl {
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer const& rhs) = default;
SharedWeakCachePointer<T>::SharedWeakCachePointer(SharedWeakCachePointer const& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
std::shared_ptr<TT> const& rhs)
: combo_{rhs}
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT> const& rhs) : combo_{rhs}
{
}
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer&& rhs) = default;
SharedWeakCachePointer<T>::SharedWeakCachePointer(SharedWeakCachePointer&& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs)
: combo_{std::move(rhs)}
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs) : combo_{std::move(rhs)}
{
}
template <class T>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) =
default;
SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) = default;
template <class T>
template <class TT>

View File

@@ -51,11 +51,7 @@ class SlabAllocator
// The extent of the underlying memory block:
std::size_t const size_;
SlabBlock(
SlabBlock* next,
std::uint8_t* data,
std::size_t size,
std::size_t item)
SlabBlock(SlabBlock* next, std::uint8_t* data, std::size_t size, std::size_t item)
: next_(next), p_(data), size_(size)
{
// We don't need to grab the mutex here, since we're the only
@@ -126,9 +122,7 @@ class SlabAllocator
void
deallocate(std::uint8_t* ptr) noexcept
{
XRPL_ASSERT(
own(ptr),
"xrpl::SlabAllocator::SlabBlock::deallocate : own input");
XRPL_ASSERT(own(ptr), "xrpl::SlabAllocator::SlabBlock::deallocate : own input");
std::lock_guard l(m_);
@@ -162,18 +156,13 @@ public:
contexts (e.g. when minimal memory usage is needed) and
allows for graceful failure.
*/
constexpr explicit SlabAllocator(
std::size_t extra,
std::size_t alloc = 0,
std::size_t align = 0)
constexpr explicit SlabAllocator(std::size_t extra, std::size_t alloc = 0, std::size_t align = 0)
: itemAlignment_(align ? align : alignof(Type))
, itemSize_(
boost::alignment::align_up(sizeof(Type) + extra, itemAlignment_))
, itemSize_(boost::alignment::align_up(sizeof(Type) + extra, itemAlignment_))
, slabSize_(alloc)
{
XRPL_ASSERT(
(itemAlignment_ & (itemAlignment_ - 1)) == 0,
"xrpl::SlabAllocator::SlabAllocator : valid alignment");
(itemAlignment_ & (itemAlignment_ - 1)) == 0, "xrpl::SlabAllocator::SlabAllocator : valid alignment");
}
SlabAllocator(SlabAllocator const& other) = delete;
@@ -222,8 +211,7 @@ public:
// We want to allocate the memory at a 2 MiB boundary, to make it
// possible to use hugepage mappings on Linux:
auto buf =
boost::alignment::aligned_alloc(megabytes(std::size_t(2)), size);
auto buf = boost::alignment::aligned_alloc(megabytes(std::size_t(2)), size);
// clang-format off
if (!buf) [[unlikely]]
@@ -241,31 +229,21 @@ public:
// We need to carve out a bit of memory for the slab header
// and then align the rest appropriately:
auto slabData = reinterpret_cast<void*>(
reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
auto slabData = reinterpret_cast<void*>(reinterpret_cast<std::uint8_t*>(buf) + sizeof(SlabBlock));
auto slabSize = size - sizeof(SlabBlock);
// This operation is essentially guaranteed not to fail but
// let's be careful anyways.
if (!boost::alignment::align(
itemAlignment_, itemSize_, slabData, slabSize))
if (!boost::alignment::align(itemAlignment_, itemSize_, slabData, slabSize))
{
boost::alignment::aligned_free(buf);
return nullptr;
}
slab = new (buf) SlabBlock(
slabs_.load(),
reinterpret_cast<std::uint8_t*>(slabData),
slabSize,
itemSize_);
slab = new (buf) SlabBlock(slabs_.load(), reinterpret_cast<std::uint8_t*>(slabData), slabSize, itemSize_);
// Link the new slab
while (!slabs_.compare_exchange_weak(
slab->next_,
slab,
std::memory_order_release,
std::memory_order_relaxed))
while (!slabs_.compare_exchange_weak(slab->next_, slab, std::memory_order_release, std::memory_order_relaxed))
{
; // Nothing to do
}
@@ -322,10 +300,7 @@ public:
std::size_t align;
public:
constexpr SlabConfig(
std::size_t extra_,
std::size_t alloc_ = 0,
std::size_t align_ = alignof(Type))
constexpr SlabConfig(std::size_t extra_, std::size_t alloc_ = 0, std::size_t align_ = alignof(Type))
: extra(extra_), alloc(alloc_), align(align_)
{
}
@@ -336,23 +311,14 @@ public:
// Ensure that the specified allocators are sorted from smallest to
// largest by size:
std::sort(
std::begin(cfg),
std::end(cfg),
[](SlabConfig const& a, SlabConfig const& b) {
return a.extra < b.extra;
});
std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) { return a.extra < b.extra; });
// We should never have two slabs of the same size
if (std::adjacent_find(
std::begin(cfg),
std::end(cfg),
[](SlabConfig const& a, SlabConfig const& b) {
return a.extra == b.extra;
}) != cfg.end())
if (std::adjacent_find(std::begin(cfg), std::end(cfg), [](SlabConfig const& a, SlabConfig const& b) {
return a.extra == b.extra;
}) != cfg.end())
{
throw std::runtime_error(
"SlabAllocatorSet<" + beast::type_name<Type>() +
">: duplicate slab size");
throw std::runtime_error("SlabAllocatorSet<" + beast::type_name<Type>() + ">: duplicate slab size");
}
for (auto const& c : cfg)

View File

@@ -41,8 +41,7 @@ public:
operator=(Slice const&) noexcept = default;
/** Create a slice pointing to existing memory. */
Slice(void const* data, std::size_t size) noexcept
: data_(reinterpret_cast<std::uint8_t const*>(data)), size_(size)
Slice(void const* data, std::size_t size) noexcept : data_(reinterpret_cast<std::uint8_t const*>(data)), size_(size)
{
}
@@ -85,9 +84,7 @@ public:
std::uint8_t
operator[](std::size_t i) const noexcept
{
XRPL_ASSERT(
i < size_,
"xrpl::Slice::operator[](std::size_t) const : valid input");
XRPL_ASSERT(i < size_, "xrpl::Slice::operator[](std::size_t) const : valid input");
return data_[i];
}
@@ -162,9 +159,7 @@ public:
@throws std::out_of_range if pos > size()
*/
Slice
substr(
std::size_t pos,
std::size_t count = std::numeric_limits<std::size_t>::max()) const
substr(std::size_t pos, std::size_t count = std::numeric_limits<std::size_t>::max()) const
{
if (pos > size())
throw std::out_of_range("Requested sub-slice is out of bounds");
@@ -203,11 +198,7 @@ operator!=(Slice const& lhs, Slice const& rhs) noexcept
inline bool
operator<(Slice const& lhs, Slice const& rhs) noexcept
{
return std::lexicographical_compare(
lhs.data(),
lhs.data() + lhs.size(),
rhs.data(),
rhs.data() + rhs.size());
return std::lexicographical_compare(lhs.data(), lhs.data() + lhs.size(), rhs.data(), rhs.data() + rhs.size());
}
template <class Stream>
@@ -219,18 +210,14 @@ operator<<(Stream& s, Slice const& v)
}
template <class T, std::size_t N>
std::enable_if_t<
std::is_same<T, char>::value || std::is_same<T, unsigned char>::value,
Slice>
std::enable_if_t<std::is_same<T, char>::value || std::is_same<T, unsigned char>::value, Slice>
makeSlice(std::array<T, N> const& a)
{
return Slice(a.data(), a.size());
}
template <class T, class Alloc>
std::enable_if_t<
std::is_same<T, char>::value || std::is_same<T, unsigned char>::value,
Slice>
std::enable_if_t<std::is_same<T, char>::value || std::is_same<T, unsigned char>::value, Slice>
makeSlice(std::vector<T, Alloc> const& v)
{
return Slice(v.data(), v.size());

View File

@@ -109,8 +109,7 @@ struct parsedURL
bool
operator==(parsedURL const& other) const
{
return scheme == other.scheme && domain == other.domain &&
port == other.port && path == other.path;
return scheme == other.scheme && domain == other.domain && port == other.port && path == other.path;
}
};

View File

@@ -56,8 +56,7 @@ public:
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector =
beast::insight::NullCollector::New());
beast::insight::Collector::ptr const& collector = beast::insight::NullCollector::New());
public:
/** Return the clock associated with the cache. */
@@ -114,15 +113,10 @@ public:
*/
template <class R>
bool
canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback);
canonicalize(key_type const& key, SharedPointerType& data, R&& replaceCallback);
bool
canonicalize_replace_cache(
key_type const& key,
SharedPointerType const& data);
canonicalize_replace_cache(key_type const& key, SharedPointerType const& data);
bool
canonicalize_replace_client(key_type const& key, SharedPointerType& data);
@@ -136,8 +130,7 @@ public:
*/
template <class ReturnType = bool>
auto
insert(key_type const& key, T const& value)
-> std::enable_if_t<!IsKeyCache, ReturnType>;
insert(key_type const& key, T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>;
template <class ReturnType = bool>
auto
@@ -183,10 +176,7 @@ private:
struct Stats
{
template <class Handler>
Stats(
std::string const& prefix,
Handler const& handler,
beast::insight::Collector::ptr const& collector)
Stats(std::string const& prefix, Handler const& handler, beast::insight::Collector::ptr const& collector)
: hook(collector->make_hook(handler))
, size(collector->make_gauge(prefix, "size"))
, hit_rate(collector->make_gauge(prefix, "hit_rate"))
@@ -208,8 +198,7 @@ private:
public:
clock_type::time_point last_access;
explicit KeyOnlyEntry(clock_type::time_point const& last_access_)
: last_access(last_access_)
explicit KeyOnlyEntry(clock_type::time_point const& last_access_) : last_access(last_access_)
{
}
@@ -226,9 +215,7 @@ private:
shared_weak_combo_pointer_type ptr;
clock_type::time_point last_access;
ValueEntry(
clock_type::time_point const& last_access_,
shared_pointer_type const& ptr_)
ValueEntry(clock_type::time_point const& last_access_, shared_pointer_type const& ptr_)
: ptr(ptr_), last_access(last_access_)
{
}
@@ -262,18 +249,13 @@ private:
}
};
typedef
typename std::conditional<IsKeyCache, KeyOnlyEntry, ValueEntry>::type
Entry;
typedef typename std::conditional<IsKeyCache, KeyOnlyEntry, ValueEntry>::type Entry;
using KeyOnlyCacheType =
hardened_partitioned_hash_map<key_type, KeyOnlyEntry, Hash, KeyEqual>;
using KeyOnlyCacheType = hardened_partitioned_hash_map<key_type, KeyOnlyEntry, Hash, KeyEqual>;
using KeyValueCacheType =
hardened_partitioned_hash_map<key_type, ValueEntry, Hash, KeyEqual>;
using KeyValueCacheType = hardened_partitioned_hash_map<key_type, ValueEntry, Hash, KeyEqual>;
using cache_type =
hardened_partitioned_hash_map<key_type, Entry, Hash, KeyEqual>;
using cache_type = hardened_partitioned_hash_map<key_type, Entry, Hash, KeyEqual>;
[[nodiscard]] std::thread
sweepHelper(

View File

@@ -15,22 +15,13 @@ template <
class Hash,
class KeyEqual,
class Mutex>
inline TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
TaggedCache(
std::string const& name,
int size,
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector)
inline TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::TaggedCache(
std::string const& name,
int size,
clock_type::duration expiration,
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector)
: m_journal(journal)
, m_clock(clock)
, m_stats(name, std::bind(&TaggedCache::collect_metrics, this), collector)
@@ -53,15 +44,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::clock() -> clock_type&
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clock()
-> clock_type&
{
return m_clock;
}
@@ -76,15 +60,7 @@ template <
class KeyEqual,
class Mutex>
inline std::size_t
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::size() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -100,15 +76,7 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getCacheSize() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
@@ -124,15 +92,7 @@ template <
class KeyEqual,
class Mutex>
inline int
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getTrackSize() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
@@ -148,15 +108,7 @@ template <
class KeyEqual,
class Mutex>
inline float
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getHitRate()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
@@ -173,15 +125,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::clear()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::clear()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -198,15 +142,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::reset()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
@@ -226,15 +162,8 @@ template <
class Mutex>
template <class KeyComparable>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::touch_if_exists(KeyComparable const& key)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::touch_if_exists(
KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
auto const iter(m_cache.find(key));
@@ -258,15 +187,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::sweep()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweep()
{
// Keep references to all the stuff we sweep
// For performance, each worker thread should exit before the swept data
@@ -280,8 +201,7 @@ TaggedCache<
{
std::lock_guard lock(m_mutex);
if (m_target_size == 0 ||
(static_cast<int>(m_cache.size()) <= m_target_size))
if (m_target_size == 0 || (static_cast<int>(m_cache.size()) <= m_target_size))
{
when_expire = now - m_target_age;
}
@@ -293,10 +213,8 @@ TaggedCache<
if (when_expire > (now - minimumAge))
when_expire = now - minimumAge;
JLOG(m_journal.trace())
<< m_name << " is growing fast " << m_cache.size() << " of "
<< m_target_size << " aging at " << (now - when_expire).count()
<< " of " << m_target_age.count();
JLOG(m_journal.trace()) << m_name << " is growing fast " << m_cache.size() << " of " << m_target_size
<< " aging at " << (now - when_expire).count() << " of " << m_target_age.count();
}
std::vector<std::thread> workers;
@@ -305,13 +223,7 @@ TaggedCache<
for (std::size_t p = 0; p < m_cache.partitions(); ++p)
{
workers.push_back(sweepHelper(
when_expire,
now,
m_cache.map()[p],
allStuffToSweep[p],
allRemovals,
lock));
workers.push_back(sweepHelper(when_expire, now, m_cache.map()[p], allStuffToSweep[p], allRemovals, lock));
}
for (std::thread& worker : workers)
worker.join();
@@ -322,9 +234,7 @@ TaggedCache<
// and decrement the reference count on each strong pointer.
JLOG(m_journal.debug())
<< m_name << " TaggedCache sweep lock duration "
<< std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start)
.count()
<< std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - start).count()
<< "ms";
}
@@ -338,15 +248,9 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::del(key_type const& key, bool valid)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::del(
key_type const& key,
bool valid)
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
@@ -385,19 +289,10 @@ template <
class Mutex>
template <class R>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback)
{
// Return canonical value, store if needed, refresh in cache
// Return values: true=we had the data already
@@ -408,9 +303,7 @@ TaggedCache<
if (cit == m_cache.end())
{
m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(m_clock.now(), data));
std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
return false;
}
@@ -480,21 +373,10 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
canonicalize_replace_cache(
key_type const& key,
SharedPointerType const& data)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
canonicalize_replace_cache(key_type const& key, SharedPointerType const& data)
{
return canonicalize(
key, const_cast<SharedPointerType&>(data), []() { return true; });
return canonicalize(key, const_cast<SharedPointerType&>(data), []() { return true; });
}
template <
@@ -507,15 +389,7 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::
canonicalize_replace_client(key_type const& key, SharedPointerType& data)
{
return canonicalize(key, data, []() { return false; });
@@ -531,15 +405,8 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::fetch(key_type const& key)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
key_type const& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
@@ -559,16 +426,9 @@ template <
class Mutex>
template <class ReturnType>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::insert(key_type const& key, T const& value)
-> std::enable_if_t<!IsKeyCache, ReturnType>
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
key_type const& key,
T const& value) -> std::enable_if_t<!IsKeyCache, ReturnType>
{
static_assert(
std::is_same_v<std::shared_ptr<T>, SharedPointerType> ||
@@ -597,23 +457,13 @@ template <
class Mutex>
template <class ReturnType>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::insert(key_type const& key)
-> std::enable_if_t<IsKeyCache, ReturnType>
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::insert(
key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
auto [it, inserted] = m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(now));
auto [it, inserted] =
m_cache.emplace(std::piecewise_construct, std::forward_as_tuple(key), std::forward_as_tuple(now));
if (!inserted)
it->second.last_access = now;
return inserted;
@@ -629,15 +479,9 @@ template <
class KeyEqual,
class Mutex>
inline bool
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::retrieve(key_type const& key, T& data)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::retrieve(
key_type const& key,
T& data)
{
// retrieve the value of the stored data
auto entry = fetch(key);
@@ -659,15 +503,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::peekMutex() -> mutex_type&
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::peekMutex()
-> mutex_type&
{
return m_mutex;
}
@@ -682,15 +519,8 @@ template <
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getKeys() const -> std::vector<key_type>
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::getKeys() const
-> std::vector<key_type>
{
std::vector<key_type> v;
@@ -714,15 +544,7 @@ template <
class KeyEqual,
class Mutex>
inline double
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::rate() const
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
@@ -742,15 +564,9 @@ template <
class Mutex>
template <class Handler>
inline SharedPointerType
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::fetch(key_type const& digest, Handler const& h)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::fetch(
key_type const& digest,
Handler const& h)
{
{
std::lock_guard l(m_mutex);
@@ -764,8 +580,7 @@ TaggedCache<
std::lock_guard l(m_mutex);
++m_misses;
auto const [it, inserted] =
m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
auto const [it, inserted] = m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
if (!inserted)
it->second.touch(m_clock.now());
return it->second.ptr.getStrong();
@@ -782,16 +597,9 @@ template <
class KeyEqual,
class Mutex>
inline SharedPointerType
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::initialFetch(
key_type const& key,
std::lock_guard<mutex_type> const& l)
{
auto cit = m_cache.find(key);
if (cit == m_cache.end())
@@ -827,15 +635,7 @@ template <
class KeyEqual,
class Mutex>
inline void
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::collect_metrics()
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::collect_metrics()
{
m_stats.size.set(getCacheSize());
@@ -861,22 +661,13 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
sweepHelper(
clock_type::time_point const& when_expire,
[[maybe_unused]] clock_type::time_point const& now,
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
clock_type::time_point const& when_expire,
[[maybe_unused]] clock_type::time_point const& now,
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
@@ -930,10 +721,8 @@ TaggedCache<
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
@@ -950,22 +739,13 @@ template <
class KeyEqual,
class Mutex>
inline std::thread
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
sweepHelper(
clock_type::time_point const& when_expire,
clock_type::time_point const& now,
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
TaggedCache<Key, T, IsKeyCache, SharedWeakUnionPointer, SharedPointerType, Hash, KeyEqual, Mutex>::sweepHelper(
clock_type::time_point const& when_expire,
clock_type::time_point const& now,
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
@@ -995,10 +775,8 @@ TaggedCache<
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
JLOG(m_journal.debug()) << "TaggedCache partition sweep " << m_name << ": cache = " << partition.size()
<< "-" << cacheRemovals << ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
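As a reading aid for the reformatted sweepHelper above: the pattern is a background thread that walks one map partition, erases entries whose last access predates the expiration point, and accumulates the removal count. A minimal, hypothetical sketch of that shape (sweepPartition and Entry are placeholder names, not project code; the caller is assumed to join the returned thread before touching the partition again):

#include <atomic>
#include <chrono>
#include <map>
#include <thread>

using clock_type = std::chrono::steady_clock;

struct Entry { clock_type::time_point last_access; };

std::thread
sweepPartition(
    std::map<int, Entry>& partition,
    clock_type::time_point when_expire,
    std::atomic<int>& allRemovals)
{
    // Capture when_expire by value: the thread may outlive this stack frame.
    return std::thread([&partition, &allRemovals, when_expire] {
        int removals = 0;
        for (auto it = partition.begin(); it != partition.end();)
        {
            if (it->second.last_access <= when_expire)
            {
                it = partition.erase(it);  // expired: drop the entry
                ++removals;
            }
            else
                ++it;
        }
        allRemovals += removals;  // report how much this partition shed
    });
}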

View File

@@ -40,8 +40,7 @@ template <
class Hash = beast::uhash<>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hash_multimap =
std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
using hash_multimap = std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
template <
class Value,
@@ -75,8 +74,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hardened_partitioned_hash_map =
partitioned_unordered_map<Key, Value, Hash, Pred, Allocator>;
using hardened_partitioned_hash_map = partitioned_unordered_map<Key, Value, Hash, Pred, Allocator>;
template <
class Key,
@@ -84,8 +82,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, Value>>>
using hardened_hash_multimap =
std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
using hardened_hash_multimap = std::unordered_multimap<Key, Value, Hash, Pred, Allocator>;
template <
class Value,
@@ -99,8 +96,7 @@ template <
class Hash = hardened_hash<strong_hash>,
class Pred = std::equal_to<Value>,
class Allocator = std::allocator<Value>>
using hardened_hash_multiset =
std::unordered_multiset<Value, Hash, Pred, Allocator>;
using hardened_hash_multiset = std::unordered_multiset<Value, Hash, Pred, Allocator>;
} // namespace xrpl

View File

@@ -52,13 +52,7 @@ generalized_set_intersection(
// std::set_intersection.
template <class FwdIter1, class InputIter2, class Pred, class Comp>
FwdIter1
remove_if_intersect_or_match(
FwdIter1 first1,
FwdIter1 last1,
InputIter2 first2,
InputIter2 last2,
Pred pred,
Comp comp)
remove_if_intersect_or_match(FwdIter1 first1, FwdIter1 last1, InputIter2 first2, InputIter2 last2, Pred pred, Comp comp)
{
// [original-first1, current-first1) is the set of elements to be preserved.
// [current-first1, i) is the set of elements that have been removed.

View File

@@ -46,8 +46,7 @@ base64_encode(std::uint8_t const* data, std::size_t len);
inline std::string
base64_encode(std::string const& s)
{
return base64_encode(
reinterpret_cast<std::uint8_t const*>(s.data()), s.size());
return base64_encode(reinterpret_cast<std::uint8_t const*>(s.data()), s.size());
}
std::string

View File

@@ -65,13 +65,9 @@ struct is_contiguous_container<Slice> : std::true_type
template <std::size_t Bits, class Tag = void>
class base_uint
{
static_assert(
(Bits % 32) == 0,
"The length of a base_uint in bits must be a multiple of 32.");
static_assert((Bits % 32) == 0, "The length of a base_uint in bits must be a multiple of 32.");
static_assert(
Bits >= 64,
"The length of a base_uint in bits must be at least 64.");
static_assert(Bits >= 64, "The length of a base_uint in bits must be at least 64.");
static constexpr std::size_t WIDTH = Bits / 32;
@@ -182,9 +178,7 @@ private:
{
// Local lambda that converts a single hex char to four bits and
// ORs those bits into a uint32_t.
auto hexCharToUInt = [](char c,
std::uint32_t shift,
std::uint32_t& accum) -> ParseResult {
auto hexCharToUInt = [](char c, std::uint32_t shift, std::uint32_t& accum) -> ParseResult {
std::uint32_t nibble = 0xFFu;
if (c < '0' || c > 'f')
return ParseResult::badChar;
@@ -221,8 +215,7 @@ private:
std::uint32_t accum = {};
for (std::uint32_t shift : {4u, 0u, 12u, 8u, 20u, 16u, 28u, 24u})
{
if (auto const result = hexCharToUInt(*in++, shift, accum);
result != ParseResult::okay)
if (auto const result = hexCharToUInt(*in++, shift, accum); result != ParseResult::okay)
return Unexpected(result);
}
ret[i++] = accum;
@@ -261,8 +254,7 @@ public:
// This constructor is intended to be used at compile time since it might
// throw at runtime. Consider declaring this constructor consteval once
// we get to C++23.
explicit constexpr base_uint(std::string_view sv) noexcept(false)
: data_(parseFromStringViewThrows(sv))
explicit constexpr base_uint(std::string_view sv) noexcept(false) : data_(parseFromStringViewThrows(sv))
{
}
@@ -387,8 +379,7 @@ public:
// prefix operator
for (int i = WIDTH - 1; i >= 0; --i)
{
data_[i] = boost::endian::native_to_big(
boost::endian::big_to_native(data_[i]) + 1);
data_[i] = boost::endian::native_to_big(boost::endian::big_to_native(data_[i]) + 1);
if (data_[i] != 0)
break;
}
@@ -412,8 +403,7 @@ public:
for (int i = WIDTH - 1; i >= 0; --i)
{
auto prev = data_[i];
data_[i] = boost::endian::native_to_big(
boost::endian::big_to_native(data_[i]) - 1);
data_[i] = boost::endian::native_to_big(boost::endian::big_to_native(data_[i]) - 1);
if (prev != 0)
break;
@@ -453,11 +443,9 @@ public:
for (int i = WIDTH; i--;)
{
std::uint64_t n = carry + boost::endian::big_to_native(data_[i]) +
boost::endian::big_to_native(b.data_[i]);
std::uint64_t n = carry + boost::endian::big_to_native(data_[i]) + boost::endian::big_to_native(b.data_[i]);
data_[i] =
boost::endian::native_to_big(static_cast<std::uint32_t>(n));
data_[i] = boost::endian::native_to_big(static_cast<std::uint32_t>(n));
carry = n >> 32;
}
@@ -557,8 +545,7 @@ operator<=>(base_uint<Bits, Tag> const& lhs, base_uint<Bits, Tag> const& rhs)
if (ret.first == lhs.cend())
return std::strong_ordering::equivalent;
return (*ret.first > *ret.second) ? std::strong_ordering::greater
: std::strong_ordering::less;
return (*ret.first > *ret.second) ? std::strong_ordering::greater : std::strong_ordering::less;
}
template <std::size_t Bits, typename Tag>
@@ -617,9 +604,7 @@ template <std::size_t Bits, class Tag>
inline std::string
to_short_string(base_uint<Bits, Tag> const& a)
{
static_assert(
base_uint<Bits, Tag>::bytes > 4,
"For 4 bytes or less, use a native type");
static_assert(base_uint<Bits, Tag>::bytes > 4, "For 4 bytes or less, use a native type");
return strHex(a.cbegin(), a.cbegin() + 4) + "...";
}
@@ -653,8 +638,7 @@ static_assert(sizeof(uint256) == 256 / 8, "There should be no padding bytes");
namespace beast {
template <std::size_t Bits, class Tag>
struct is_uniquely_represented<xrpl::base_uint<Bits, Tag>>
: public std::true_type
struct is_uniquely_represented<xrpl::base_uint<Bits, Tag>> : public std::true_type
{
explicit is_uniquely_represented() = default;
};
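To make the hexCharToUInt pattern above easier to follow, here is an illustrative stand-alone version (hexNibble and accumulate are placeholder names): each hex character contributes four bits that are OR-ed into a 32-bit accumulator at a caller-chosen shift.

#include <cstdint>
#include <optional>

// Map one hex character to its four-bit value, or nullopt for a bad character.
std::optional<std::uint32_t>
hexNibble(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return std::nullopt;
}

// OR the nibble into accum at the given bit position; false signals a parse error.
bool
accumulate(char c, std::uint32_t shift, std::uint32_t& accum)
{
    auto const nibble = hexNibble(c);
    if (!nibble)
        return false;
    accum |= (*nibble << shift);
    return true;
}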

View File

@@ -16,12 +16,9 @@ namespace xrpl {
// A few handy aliases
using days = std::chrono::duration<
int,
std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
using days = std::chrono::duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>>;
using weeks = std::chrono::
duration<int, std::ratio_multiply<days::period, std::ratio<7>>>;
using weeks = std::chrono::duration<int, std::ratio_multiply<days::period, std::ratio<7>>>;
/** Clock for measuring the network time.
@@ -34,8 +31,7 @@ using weeks = std::chrono::
*/
constexpr static std::chrono::seconds epoch_offset =
date::sys_days{date::year{2000} / 1 / 1} -
date::sys_days{date::year{1970} / 1 / 1};
date::sys_days{date::year{2000} / 1 / 1} - date::sys_days{date::year{1970} / 1 / 1};
static_assert(epoch_offset.count() == 946684800);
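The constant asserted here can be double-checked independently; a small sketch, assuming a C++20 &lt;chrono&gt; toolchain (not part of the diff), reproduces the 946684800-second offset between the Unix epoch and 2000-01-01:

#include <chrono>

int main()
{
    using namespace std::chrono;
    constexpr auto offset = sys_days{year{2000} / 1 / 1} - sys_days{year{1970} / 1 / 1};
    static_assert(duration_cast<seconds>(offset).count() == 946684800);
}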
@@ -64,8 +60,7 @@ to_string(NetClock::time_point tp)
{
// 2000-01-01 00:00:00 UTC is 946684800s from 1970-01-01 00:00:00 UTC
using namespace std::chrono;
return to_string(
system_clock::time_point{tp.time_since_epoch() + epoch_offset});
return to_string(system_clock::time_point{tp.time_since_epoch() + epoch_offset});
}
template <class Duration>
@@ -82,8 +77,7 @@ to_string_iso(NetClock::time_point tp)
// 2000-01-01 00:00:00 UTC is 946684800s from 1970-01-01 00:00:00 UTC
// Note, NetClock::duration is seconds, as checked by static_assert
static_assert(std::is_same_v<NetClock::duration::period, std::ratio<1>>);
return to_string_iso(date::sys_time<NetClock::duration>{
tp.time_since_epoch() + epoch_offset});
return to_string_iso(date::sys_time<NetClock::duration>{tp.time_since_epoch() + epoch_offset});
}
/** A clock for measuring elapsed time.

View File

@@ -36,15 +36,10 @@ template <class E, class... Args>
[[noreturn]] inline void
Throw(Args&&... args)
{
static_assert(
std::is_convertible<E*, std::exception*>::value,
"Exception must derive from std::exception.");
static_assert(std::is_convertible<E*, std::exception*>::value, "Exception must derive from std::exception.");
E e(std::forward<Args>(args)...);
LogThrow(
std::string(
"Throwing exception of type " + beast::type_name<E>() + ": ") +
e.what());
LogThrow(std::string("Throwing exception of type " + beast::type_name<E>() + ": ") + e.what());
throw e;
}
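The reformatted Throw<E>() above keeps two steps on one line each: a compile-time check that E derives from std::exception, and a log of the message before the throw. A hedged, self-contained sketch of the same shape (throwLogged is a placeholder; std::cerr stands in for LogThrow):

#include <exception>
#include <iostream>
#include <stdexcept>
#include <type_traits>
#include <utility>

template <class E, class... Args>
[[noreturn]] void
throwLogged(Args&&... args)
{
    static_assert(
        std::is_convertible_v<E*, std::exception*>,
        "Exception must derive from std::exception.");
    E e(std::forward<Args>(args)...);
    std::cerr << "Throwing exception: " << e.what() << '\n';  // stand-in for LogThrow
    throw e;
}

int main()
{
    try { throwLogged<std::runtime_error>("boom"); }
    catch (std::exception const&) { /* expected */ }
}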

View File

@@ -24,8 +24,7 @@ public:
Collection const& collection;
std::string const delimiter;
explicit CollectionAndDelimiter(Collection const& c, std::string delim)
: collection(c), delimiter(std::move(delim))
explicit CollectionAndDelimiter(Collection const& c, std::string delim) : collection(c), delimiter(std::move(delim))
{
}
@@ -33,11 +32,7 @@ public:
friend Stream&
operator<<(Stream& s, CollectionAndDelimiter const& cd)
{
return join(
s,
std::begin(cd.collection),
std::end(cd.collection),
cd.delimiter);
return join(s, std::begin(cd.collection), std::end(cd.collection), cd.delimiter);
}
};
@@ -69,8 +64,7 @@ public:
char const* collection;
std::string const delimiter;
explicit CollectionAndDelimiter(char const c[N], std::string delim)
: collection(c), delimiter(std::move(delim))
explicit CollectionAndDelimiter(char const c[N], std::string delim) : collection(c), delimiter(std::move(delim))
{
}
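CollectionAndDelimiter ultimately forwards to a join-style helper; the delimiter-joining idea looks like the sketch below (this join is a local placeholder, not the project's overload set):

#include <iostream>
#include <string>
#include <vector>

template <class Stream, class FwdIt>
Stream&
join(Stream& s, FwdIt first, FwdIt last, std::string const& delim)
{
    if (first == last)
        return s;
    s << *first++;               // first element has no leading delimiter
    while (first != last)
        s << delim << *first++;  // every later element is preceded by one
    return s;
}

int main()
{
    std::vector<int> const v{1, 2, 3};
    join(std::cout, v.begin(), v.end(), ", ") << '\n';  // prints "1, 2, 3"
}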

View File

@@ -51,8 +51,7 @@ public:
using const_reference = value_type const&;
using pointer = value_type*;
using const_pointer = value_type const*;
using map_type = std::
unordered_map<key_type, mapped_type, hasher, key_equal, allocator_type>;
using map_type = std::unordered_map<key_type, mapped_type, hasher, key_equal, allocator_type>;
using partition_map_type = std::vector<map_type>;
struct iterator
@@ -113,8 +112,7 @@ public:
friend bool
operator==(iterator const& lhs, iterator const& rhs)
{
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ &&
lhs.mit_ == rhs.mit_;
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ && lhs.mit_ == rhs.mit_;
}
friend bool
@@ -190,8 +188,7 @@ public:
friend bool
operator==(const_iterator const& lhs, const_iterator const& rhs)
{
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ &&
lhs.mit_ == rhs.mit_;
return lhs.map_ == rhs.map_ && lhs.ait_ == rhs.ait_ && lhs.mit_ == rhs.mit_;
}
friend bool
@@ -231,14 +228,11 @@ private:
}
public:
partitioned_unordered_map(
std::optional<std::size_t> partitions = std::nullopt)
partitioned_unordered_map(std::optional<std::size_t> partitions = std::nullopt)
{
// Set partitions to the number of hardware threads if the parameter
// is either empty or set to 0.
partitions_ = partitions && *partitions
? *partitions
: std::thread::hardware_concurrency();
partitions_ = partitions && *partitions ? *partitions : std::thread::hardware_concurrency();
map_.resize(partitions_);
XRPL_ASSERT(
partitions_,
@@ -337,10 +331,8 @@ public:
auto const& key = std::get<0>(keyTuple);
iterator it(&map_);
it.ait_ = it.map_->begin() + partitioner(key);
auto [eit, inserted] = it.ait_->emplace(
std::piecewise_construct,
std::forward<T>(keyTuple),
std::forward<U>(valueTuple));
auto [eit, inserted] =
it.ait_->emplace(std::piecewise_construct, std::forward<T>(keyTuple), std::forward<U>(valueTuple));
it.mit_ = eit;
return {it, inserted};
}
@@ -351,8 +343,7 @@ public:
{
iterator it(&map_);
it.ait_ = it.map_->begin() + partitioner(key);
auto [eit, inserted] =
it.ait_->emplace(std::forward<T>(key), std::forward<U>(val));
auto [eit, inserted] = it.ait_->emplace(std::forward<T>(key), std::forward<U>(val));
it.mit_ = eit;
return {it, inserted};
}
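The emplace overloads above always start by selecting a partition from the key. A simplified, hypothetical sketch of that idea (partitioned_map_sketch uses a plain modulo partitioner, which is an assumption; the real container derives the partition from the key's hash differently):

#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

template <class Key, class T, class Hash = std::hash<Key>>
class partitioned_map_sketch
{
    std::vector<std::unordered_map<Key, T, Hash>> map_;

public:
    explicit partitioned_map_sketch(std::size_t partitions) : map_(partitions) {}

    // Choose which inner map owns this key.
    std::size_t
    partitioner(Key const& key) const
    {
        return Hash{}(key) % map_.size();
    }

    // Insert into the owning partition; true if the key was new.
    bool
    emplace(Key const& key, T value)
    {
        return map_[partitioner(key)].emplace(key, std::move(value)).second;
    }
};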

View File

@@ -20,8 +20,7 @@ static_assert(
"The Ripple default PRNG engine must return an unsigned integral type.");
static_assert(
std::numeric_limits<beast::xor_shift_engine::result_type>::max() >=
std::numeric_limits<std::uint64_t>::max(),
std::numeric_limits<beast::xor_shift_engine::result_type>::max() >= std::numeric_limits<std::uint64_t>::max(),
"The Ripple default PRNG engine return must be at least 64 bits wide.");
#endif
@@ -90,9 +89,7 @@ default_prng()
*/
/** @{ */
template <class Engine, class Integral>
std::enable_if_t<
std::is_integral<Integral>::value && detail::is_engine<Engine>::value,
Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine, Integral min, Integral max)
{
XRPL_ASSERT(max > min, "xrpl::rand_int : max over min inputs");
@@ -111,9 +108,7 @@ rand_int(Integral min, Integral max)
}
template <class Engine, class Integral>
std::enable_if_t<
std::is_integral<Integral>::value && detail::is_engine<Engine>::value,
Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine, Integral max)
{
return rand_int(engine, Integral(0), max);
@@ -127,9 +122,7 @@ rand_int(Integral max)
}
template <class Integral, class Engine>
std::enable_if_t<
std::is_integral<Integral>::value && detail::is_engine<Engine>::value,
Integral>
std::enable_if_t<std::is_integral<Integral>::value && detail::is_engine<Engine>::value, Integral>
rand_int(Engine& engine)
{
return rand_int(engine, std::numeric_limits<Integral>::max());
@@ -147,23 +140,17 @@ rand_int()
/** @{ */
template <class Byte, class Engine>
std::enable_if_t<
(std::is_same<Byte, unsigned char>::value ||
std::is_same<Byte, std::uint8_t>::value) &&
(std::is_same<Byte, unsigned char>::value || std::is_same<Byte, std::uint8_t>::value) &&
detail::is_engine<Engine>::value,
Byte>
rand_byte(Engine& engine)
{
return static_cast<Byte>(rand_int<Engine, std::uint32_t>(
engine,
std::numeric_limits<Byte>::min(),
std::numeric_limits<Byte>::max()));
return static_cast<Byte>(
rand_int<Engine, std::uint32_t>(engine, std::numeric_limits<Byte>::min(), std::numeric_limits<Byte>::max()));
}
template <class Byte = std::uint8_t>
std::enable_if_t<
(std::is_same<Byte, unsigned char>::value ||
std::is_same<Byte, std::uint8_t>::value),
Byte>
std::enable_if_t<(std::is_same<Byte, unsigned char>::value || std::is_same<Byte, std::uint8_t>::value), Byte>
rand_byte()
{
return rand_byte<Byte>(default_prng());
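The rand_int family above is SFINAE-constrained to integral types and engines and draws from a closed range. A minimal sketch under those assumptions (rand_int_sketch is a placeholder and skips the engine check):

#include <random>
#include <type_traits>

template <class Engine, class Integral>
std::enable_if_t<std::is_integral_v<Integral>, Integral>
rand_int_sketch(Engine& engine, Integral min, Integral max)
{
    // Closed range [min, max], matching the documented contract of rand_int.
    return std::uniform_int_distribution<Integral>(min, max)(engine);
}

int main()
{
    std::mt19937_64 engine{42};
    auto const roll = rand_int_sketch(engine, 1, 6);  // a die roll in [1, 6]
    (void)roll;
}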

View File

@@ -12,38 +12,29 @@ namespace xrpl {
template <class Src, class Dest>
concept SafeToCast = (std::is_integral_v<Src> && std::is_integral_v<Dest>) &&
(std::is_signed<Src>::value || std::is_unsigned<Dest>::value) &&
(std::is_signed<Src>::value != std::is_signed<Dest>::value
? sizeof(Dest) > sizeof(Src)
: sizeof(Dest) >= sizeof(Src));
(std::is_signed<Src>::value != std::is_signed<Dest>::value ? sizeof(Dest) > sizeof(Src)
: sizeof(Dest) >= sizeof(Src));
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
{
static_assert(
std::is_signed_v<Dest> || std::is_unsigned_v<Src>,
"Cannot cast signed to unsigned");
constexpr unsigned not_same =
std::is_signed_v<Dest> != std::is_signed_v<Src>;
static_assert(
sizeof(Dest) >= sizeof(Src) + not_same,
"Destination is too small to hold all values of source");
static_assert(std::is_signed_v<Dest> || std::is_unsigned_v<Src>, "Cannot cast signed to unsigned");
constexpr unsigned not_same = std::is_signed_v<Dest> != std::is_signed_v<Src>;
static_assert(sizeof(Dest) >= sizeof(Src) + not_same, "Destination is too small to hold all values of source");
return static_cast<Dest>(s);
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast(Src s) noexcept
{
return static_cast<Dest>(safe_cast<std::underlying_type_t<Dest>>(s));
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
safe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
safe_cast(Src s) noexcept
{
return safe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
}
@@ -53,9 +44,8 @@ inline constexpr std::
// underlying types become safe, it can be converted to a safe_cast.
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
static_assert(
!SafeToCast<Src, Dest>,
@@ -65,17 +55,15 @@ inline constexpr std::
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_enum_v<Dest> && std::is_integral_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
return static_cast<Dest>(unsafe_cast<std::underlying_type_t<Dest>>(s));
}
template <class Dest, class Src>
inline constexpr std::
enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
unsafe_cast(Src s) noexcept
inline constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_enum_v<Src>, Dest>
unsafe_cast(Src s) noexcept
{
return unsafe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
}
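The static_asserts in safe_cast compress to the two rules visible above: never cast signed to unsigned, and require the destination to be wide enough (one extra byte when signedness differs). A hedged sketch that compiles the allowed direction and rejects the narrowing one (safe_cast_sketch is a placeholder name):

#include <cstdint>
#include <type_traits>

template <class Dest, class Src>
constexpr std::enable_if_t<std::is_integral_v<Dest> && std::is_integral_v<Src>, Dest>
safe_cast_sketch(Src s) noexcept
{
    static_assert(std::is_signed_v<Dest> || std::is_unsigned_v<Src>, "Cannot cast signed to unsigned");
    constexpr unsigned not_same = std::is_signed_v<Dest> != std::is_signed_v<Src>;
    static_assert(sizeof(Dest) >= sizeof(Src) + not_same, "Destination is too small to hold all values of source");
    return static_cast<Dest>(s);
}

static_assert(safe_cast_sketch<std::int64_t>(std::uint32_t{7}) == 7);  // widening: compiles
// safe_cast_sketch<std::uint16_t>(std::int32_t{7});  // would trip both static_asserts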

View File

@@ -36,10 +36,8 @@ public:
}
scope_exit(scope_exit&& rhs) noexcept(
std::is_nothrow_move_constructible_v<EF> ||
std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}, execute_on_destruction_{rhs.execute_on_destruction_}
{
rhs.release();
}
@@ -50,14 +48,11 @@ public:
template <class EFP>
explicit scope_exit(
EFP&& f,
std::enable_if_t<
!std::is_same_v<std::remove_cv_t<EFP>, scope_exit> &&
std::is_constructible_v<EF, EFP>>* = 0) noexcept
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_exit> && std::is_constructible_v<EF, EFP>>* =
0) noexcept
: exit_function_{std::forward<EFP>(f)}
{
static_assert(
std::
is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
static_assert(std::is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
}
void
@@ -80,14 +75,12 @@ class scope_fail
public:
~scope_fail()
{
if (execute_on_destruction_ &&
std::uncaught_exceptions() > uncaught_on_creation_)
if (execute_on_destruction_ && std::uncaught_exceptions() > uncaught_on_creation_)
exit_function_();
}
scope_fail(scope_fail&& rhs) noexcept(
std::is_nothrow_move_constructible_v<EF> ||
std::is_nothrow_copy_constructible_v<EF>)
std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
, uncaught_on_creation_{rhs.uncaught_on_creation_}
@@ -101,14 +94,11 @@ public:
template <class EFP>
explicit scope_fail(
EFP&& f,
std::enable_if_t<
!std::is_same_v<std::remove_cv_t<EFP>, scope_fail> &&
std::is_constructible_v<EF, EFP>>* = 0) noexcept
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_fail> && std::is_constructible_v<EF, EFP>>* =
0) noexcept
: exit_function_{std::forward<EFP>(f)}
{
static_assert(
std::
is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
static_assert(std::is_nothrow_constructible_v<EF, decltype(std::forward<EFP>(f))>);
}
void
@@ -131,14 +121,12 @@ class scope_success
public:
~scope_success() noexcept(noexcept(exit_function_()))
{
if (execute_on_destruction_ &&
std::uncaught_exceptions() <= uncaught_on_creation_)
if (execute_on_destruction_ && std::uncaught_exceptions() <= uncaught_on_creation_)
exit_function_();
}
scope_success(scope_success&& rhs) noexcept(
std::is_nothrow_move_constructible_v<EF> ||
std::is_nothrow_copy_constructible_v<EF>)
std::is_nothrow_move_constructible_v<EF> || std::is_nothrow_copy_constructible_v<EF>)
: exit_function_{std::forward<EF>(rhs.exit_function_)}
, execute_on_destruction_{rhs.execute_on_destruction_}
, uncaught_on_creation_{rhs.uncaught_on_creation_}
@@ -152,9 +140,7 @@ public:
template <class EFP>
explicit scope_success(
EFP&& f,
std::enable_if_t<
!std::is_same_v<std::remove_cv_t<EFP>, scope_success> &&
std::is_constructible_v<EF, EFP>>* =
std::enable_if_t<!std::is_same_v<std::remove_cv_t<EFP>, scope_success> && std::is_constructible_v<EF, EFP>>* =
0) noexcept(std::is_nothrow_constructible_v<EF, EFP> || std::is_nothrow_constructible_v<EF, EFP&>)
: exit_function_{std::forward<EFP>(f)}
{
@@ -213,12 +199,9 @@ class scope_unlock
std::unique_lock<Mutex>* plock;
public:
explicit scope_unlock(std::unique_lock<Mutex>& lock) noexcept(true)
: plock(&lock)
explicit scope_unlock(std::unique_lock<Mutex>& lock) noexcept(true) : plock(&lock)
{
XRPL_ASSERT(
plock->owns_lock(),
"xrpl::scope_unlock::scope_unlock : mutex must be locked");
XRPL_ASSERT(plock->owns_lock(), "xrpl::scope_unlock::scope_unlock : mutex must be locked");
plock->unlock();
}
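For orientation, the scope_exit family above boils down to "run a callable on destruction unless released". A deliberately minimal sketch of that contract (scope_exit_sketch is a placeholder and omits the move constructor and SFINAE shown in the diff):

#include <cstdio>
#include <utility>

template <class EF>
class scope_exit_sketch
{
    EF exit_function_;
    bool execute_on_destruction_ = true;

public:
    explicit scope_exit_sketch(EF f) : exit_function_(std::move(f)) {}
    ~scope_exit_sketch() { if (execute_on_destruction_) exit_function_(); }

    void release() noexcept { execute_on_destruction_ = false; }  // disarm the guard

    scope_exit_sketch(scope_exit_sketch const&) = delete;
    scope_exit_sketch& operator=(scope_exit_sketch const&) = delete;
};

int main()
{
    std::FILE* f = std::fopen("example.tmp", "w");
    scope_exit_sketch guard([&] { if (f) std::fclose(f); });  // closes on every exit path
    // ... work with f ...
}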

View File

@@ -100,12 +100,9 @@ public:
@note For performance reasons, you should strive to have `lock` be
on a cacheline by itself.
*/
packed_spinlock(std::atomic<T>& lock, int index)
: bits_(lock), mask_(static_cast<T>(1) << index)
packed_spinlock(std::atomic<T>& lock, int index) : bits_(lock), mask_(static_cast<T>(1) << index)
{
XRPL_ASSERT(
index >= 0 && (mask_ != 0),
"xrpl::packed_spinlock::packed_spinlock : valid index and mask");
XRPL_ASSERT(index >= 0 && (mask_ != 0), "xrpl::packed_spinlock::packed_spinlock : valid index and mask");
}
[[nodiscard]] bool
@@ -178,10 +175,7 @@ public:
T expected = 0;
return lock_.compare_exchange_weak(
expected,
std::numeric_limits<T>::max(),
std::memory_order_acquire,
std::memory_order_relaxed);
expected, std::numeric_limits<T>::max(), std::memory_order_acquire, std::memory_order_relaxed);
}
void

View File

@@ -11,9 +11,7 @@ std::string
strHex(FwdIt begin, FwdIt end)
{
static_assert(
std::is_convertible<
typename std::iterator_traits<FwdIt>::iterator_category,
std::forward_iterator_tag>::value,
std::is_convertible<typename std::iterator_traits<FwdIt>::iterator_category, std::forward_iterator_tag>::value,
"FwdIt must be a forward iterator");
std::string result;
result.reserve(2 * std::distance(begin, end));
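The strHex template above only shows its static_assert and the reserve call in this hunk; for context, a complete stand-alone version of the same idea (strHexSketch is a placeholder name) is:

#include <iterator>
#include <string>

template <class FwdIt>
std::string
strHexSketch(FwdIt begin, FwdIt end)
{
    static constexpr char digits[] = "0123456789ABCDEF";
    std::string result;
    result.reserve(2 * std::distance(begin, end));
    for (auto it = begin; it != end; ++it)
    {
        auto const byte = static_cast<unsigned char>(*it);
        result.push_back(digits[byte >> 4]);    // high nibble
        result.push_back(digits[byte & 0x0F]);  // low nibble
    }
    return result;
}

// e.g. strHexSketch on the bytes {0xDE, 0xAD} yields "DEAD".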

View File

@@ -31,9 +31,7 @@ class tagged_integer
tagged_integer<Int, Tag>,
boost::bitwise<
tagged_integer<Int, Tag>,
boost::unit_steppable<
tagged_integer<Int, Tag>,
boost::shiftable<tagged_integer<Int, Tag>>>>>>
boost::unit_steppable<tagged_integer<Int, Tag>, boost::shiftable<tagged_integer<Int, Tag>>>>>>
{
private:
Int m_value;
@@ -46,14 +44,10 @@ public:
template <
class OtherInt,
class = typename std::enable_if<
std::is_integral<OtherInt>::value &&
sizeof(OtherInt) <= sizeof(Int)>::type>
class = typename std::enable_if<std::is_integral<OtherInt>::value && sizeof(OtherInt) <= sizeof(Int)>::type>
explicit constexpr tagged_integer(OtherInt value) noexcept : m_value(value)
{
static_assert(
sizeof(tagged_integer) == sizeof(Int),
"tagged_integer is adding padding");
static_assert(sizeof(tagged_integer) == sizeof(Int), "tagged_integer is adding padding");
}
bool

View File

@@ -32,11 +32,7 @@ private:
public:
io_latency_probe(duration const& period, boost::asio::io_context& ios)
: m_count(1)
, m_period(period)
, m_ios(ios)
, m_timer(m_ios)
, m_cancel(false)
: m_count(1), m_period(period), m_ios(ios), m_timer(m_ios), m_cancel(false)
{
}
@@ -91,10 +87,7 @@ public:
std::lock_guard lock(m_mutex);
if (m_cancel)
throw std::logic_error("io_latency_probe is canceled");
boost::asio::post(
m_ios,
sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), false, this));
boost::asio::post(m_ios, sample_op<Handler>(std::forward<Handler>(handler), Clock::now(), false, this));
}
/** Initiate continuous i/o latency sampling.
@@ -108,10 +101,7 @@ public:
std::lock_guard lock(m_mutex);
if (m_cancel)
throw std::logic_error("io_latency_probe is canceled");
boost::asio::post(
m_ios,
sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), true, this));
boost::asio::post(m_ios, sample_op<Handler>(std::forward<Handler>(handler), Clock::now(), true, this));
}
private:
@@ -151,15 +141,8 @@ private:
bool m_repeat;
io_latency_probe* m_probe;
sample_op(
Handler const& handler,
time_point const& start,
bool repeat,
io_latency_probe* probe)
: m_handler(handler)
, m_start(start)
, m_repeat(repeat)
, m_probe(probe)
sample_op(Handler const& handler, time_point const& start, bool repeat, io_latency_probe* probe)
: m_handler(handler), m_start(start), m_repeat(repeat), m_probe(probe)
{
XRPL_ASSERT(
m_probe,
@@ -214,23 +197,19 @@ private:
// Calculate when we want to sample again, and
// adjust for the expected latency.
//
typename Clock::time_point const when(
now + m_probe->m_period - 2 * elapsed);
typename Clock::time_point const when(now + m_probe->m_period - 2 * elapsed);
if (when <= now)
{
// The latency is too high to maintain the desired
// period so don't bother with a timer.
//
boost::asio::post(
m_probe->m_ios,
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
boost::asio::post(m_probe->m_ios, sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
else
{
m_probe->m_timer.expires_after(when - now);
m_probe->m_timer.async_wait(
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
m_probe->m_timer.async_wait(sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
}
}
@@ -241,9 +220,7 @@ private:
if (!m_probe)
return;
typename Clock::time_point const now(Clock::now());
boost::asio::post(
m_probe->m_ios,
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
boost::asio::post(m_probe->m_ios, sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
};
};

View File

@@ -29,8 +29,7 @@ private:
time_point now_;
public:
explicit manual_clock(time_point const& now = time_point(duration(0)))
: now_(now)
explicit manual_clock(time_point const& now = time_point(duration(0))) : now_(now)
{
}
@@ -44,9 +43,7 @@ public:
void
set(time_point const& when)
{
XRPL_ASSERT(
!Clock::is_steady || when >= now_,
"beast::manual_clock::set(time_point) : forward input");
XRPL_ASSERT(!Clock::is_steady || when >= now_, "beast::manual_clock::set(time_point) : forward input");
now_ = when;
}
@@ -64,8 +61,7 @@ public:
advance(std::chrono::duration<Rep, Period> const& elapsed)
{
XRPL_ASSERT(
!Clock::is_steady || (now_ + elapsed) >= now_,
"beast::manual_clock::advance(duration) : forward input");
!Clock::is_steady || (now_ + elapsed) >= now_, "beast::manual_clock::advance(duration) : forward input");
now_ += elapsed;
}
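manual_clock exists so tests can move time forward explicitly; the assertions above just forbid moving a steady clock backwards. A toy equivalent, assuming second resolution (manual_clock_sketch is a placeholder):

#include <cassert>
#include <chrono>

class manual_clock_sketch
{
public:
    using duration = std::chrono::seconds;
    using time_point = std::chrono::time_point<manual_clock_sketch, duration>;

    time_point now() const { return now_; }
    void advance(duration d) { now_ += d; }  // time only moves when the test says so

private:
    time_point now_{duration{0}};
};

int main()
{
    manual_clock_sketch clock;
    auto const start = clock.now();
    clock.advance(std::chrono::seconds{5});
    assert(clock.now() - start == std::chrono::seconds{5});
}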

View File

@@ -10,14 +10,12 @@ namespace beast {
/** Expire aged container items past the specified age. */
template <class AgedContainer, class Rep, class Period>
typename std::enable_if<is_aged_container<AgedContainer>::value, std::size_t>::
type
expire(AgedContainer& c, std::chrono::duration<Rep, Period> const& age)
typename std::enable_if<is_aged_container<AgedContainer>::value, std::size_t>::type
expire(AgedContainer& c, std::chrono::duration<Rep, Period> const& age)
{
std::size_t n(0);
auto const expired(c.clock().now() - age);
for (auto iter(c.chronological.cbegin());
iter != c.chronological.cend() && iter.when() <= expired;)
for (auto iter(c.chronological.cbegin()); iter != c.chronological.cend() && iter.when() <= expired;)
{
iter = c.erase(iter);
++n;

View File

@@ -15,8 +15,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_map = detail::
aged_ordered_container<false, true, Key, T, Clock, Compare, Allocator>;
using aged_map = detail::aged_ordered_container<false, true, Key, T, Clock, Compare, Allocator>;
}

View File

@@ -15,8 +15,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_multimap = detail::
aged_ordered_container<true, true, Key, T, Clock, Compare, Allocator>;
using aged_multimap = detail::aged_ordered_container<true, true, Key, T, Clock, Compare, Allocator>;
}

View File

@@ -14,8 +14,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<Key>>
using aged_multiset = detail::
aged_ordered_container<true, false, Key, void, Clock, Compare, Allocator>;
using aged_multiset = detail::aged_ordered_container<true, false, Key, void, Clock, Compare, Allocator>;
}

View File

@@ -14,8 +14,7 @@ template <
class Clock = std::chrono::steady_clock,
class Compare = std::less<Key>,
class Allocator = std::allocator<Key>>
using aged_set = detail::
aged_ordered_container<false, false, Key, void, Clock, Compare, Allocator>;
using aged_set = detail::aged_ordered_container<false, false, Key, void, Clock, Compare, Allocator>;
}

View File

@@ -16,15 +16,7 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_unordered_map = detail::aged_unordered_container<
false,
true,
Key,
T,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_map = detail::aged_unordered_container<false, true, Key, T, Clock, Hash, KeyEqual, Allocator>;
}

View File

@@ -16,15 +16,7 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<std::pair<Key const, T>>>
using aged_unordered_multimap = detail::aged_unordered_container<
true,
true,
Key,
T,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_multimap = detail::aged_unordered_container<true, true, Key, T, Clock, Hash, KeyEqual, Allocator>;
}

View File

@@ -15,15 +15,8 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<Key>>
using aged_unordered_multiset = detail::aged_unordered_container<
true,
false,
Key,
void,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_multiset =
detail::aged_unordered_container<true, false, Key, void, Clock, Hash, KeyEqual, Allocator>;
}

View File

@@ -15,15 +15,7 @@ template <
class Hash = std::hash<Key>,
class KeyEqual = std::equal_to<Key>,
class Allocator = std::allocator<Key>>
using aged_unordered_set = detail::aged_unordered_container<
false,
false,
Key,
void,
Clock,
Hash,
KeyEqual,
Allocator>;
using aged_unordered_set = detail::aged_unordered_container<false, false, Key, void, Clock, Hash, KeyEqual, Allocator>;
}

View File

@@ -16,14 +16,12 @@ template <bool is_const, class Iterator>
class aged_container_iterator
{
public:
using iterator_category =
typename std::iterator_traits<Iterator>::iterator_category;
using iterator_category = typename std::iterator_traits<Iterator>::iterator_category;
using value_type = typename std::conditional<
is_const,
typename Iterator::value_type::stashed::value_type const,
typename Iterator::value_type::stashed::value_type>::type;
using difference_type =
typename std::iterator_traits<Iterator>::difference_type;
using difference_type = typename std::iterator_traits<Iterator>::difference_type;
using pointer = value_type*;
using reference = value_type&;
using time_point = typename Iterator::value_type::stashed::time_point;
@@ -38,31 +36,22 @@ public:
class = typename std::enable_if<
(other_is_const == false || is_const == true) &&
std::is_same<Iterator, OtherIterator>::value == false>::type>
explicit aged_container_iterator(
aged_container_iterator<other_is_const, OtherIterator> const& other)
explicit aged_container_iterator(aged_container_iterator<other_is_const, OtherIterator> const& other)
: m_iter(other.m_iter)
{
}
// Disable constructing a const_iterator from a non-const_iterator.
template <
bool other_is_const,
class = typename std::enable_if<
other_is_const == false || is_const == true>::type>
aged_container_iterator(
aged_container_iterator<other_is_const, Iterator> const& other)
: m_iter(other.m_iter)
template <bool other_is_const, class = typename std::enable_if<other_is_const == false || is_const == true>::type>
aged_container_iterator(aged_container_iterator<other_is_const, Iterator> const& other) : m_iter(other.m_iter)
{
}
// Disable assigning a const_iterator to a non-const iterator
template <bool other_is_const, class OtherIterator>
auto
operator=(
aged_container_iterator<other_is_const, OtherIterator> const& other) ->
typename std::enable_if<
other_is_const == false || is_const == true,
aged_container_iterator&>::type
operator=(aged_container_iterator<other_is_const, OtherIterator> const& other) ->
typename std::enable_if<other_is_const == false || is_const == true, aged_container_iterator&>::type
{
m_iter = other.m_iter;
return *this;
@@ -70,16 +59,14 @@ public:
template <bool other_is_const, class OtherIterator>
bool
operator==(aged_container_iterator<other_is_const, OtherIterator> const&
other) const
operator==(aged_container_iterator<other_is_const, OtherIterator> const& other) const
{
return m_iter == other.m_iter;
}
template <bool other_is_const, class OtherIterator>
bool
operator!=(aged_container_iterator<other_is_const, OtherIterator> const&
other) const
operator!=(aged_container_iterator<other_is_const, OtherIterator> const& other) const
{
return m_iter != other.m_iter;
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -17,16 +17,11 @@ namespace detail {
template <class T>
struct is_empty_base_optimization_derived
: std::integral_constant<
bool,
std::is_empty<T>::value && !boost::is_final<T>::value>
: std::integral_constant<bool, std::is_empty<T>::value && !boost::is_final<T>::value>
{
};
template <
class T,
int UniqueID = 0,
bool isDerived = is_empty_base_optimization_derived<T>::value>
template <class T, int UniqueID = 0, bool isDerived = is_empty_base_optimization_derived<T>::value>
class empty_base_optimization : private T
{
public:

View File

@@ -35,9 +35,7 @@ template <std::size_t N>
void
setCurrentThreadName(char const (&newThreadName)[N])
{
static_assert(
N <= maxThreadNameLength + 1,
"Thread name cannot exceed 15 characters");
static_assert(N <= maxThreadNameLength + 1, "Thread name cannot exceed 15 characters");
setCurrentThreadName(std::string_view(newThreadName, N - 1));
}

View File

@@ -40,8 +40,7 @@ struct LexicalCast<std::string, In>
std::enable_if_t<std::is_enum_v<Enumeration>, bool>
operator()(std::string& out, Enumeration in)
{
out = std::to_string(
static_cast<std::underlying_type_t<Enumeration>>(in));
out = std::to_string(static_cast<std::underlying_type_t<Enumeration>>(in));
return true;
}
};
@@ -52,14 +51,10 @@ struct LexicalCast<Out, std::string_view>
{
explicit LexicalCast() = default;
static_assert(
std::is_integral_v<Out>,
"beast::LexicalCast can only be used with integral types");
static_assert(std::is_integral_v<Out>, "beast::LexicalCast can only be used with integral types");
template <class Integral = Out>
std::enable_if_t<
std::is_integral_v<Integral> && !std::is_same_v<Integral, bool>,
bool>
std::enable_if_t<std::is_integral_v<Integral> && !std::is_same_v<Integral, bool>, bool>
operator()(Integral& out, std::string_view in) const
{
auto first = in.data();
@@ -79,10 +74,9 @@ struct LexicalCast<Out, std::string_view>
std::string result;
// Convert the input to lowercase
std::transform(
in.begin(), in.end(), std::back_inserter(result), [](auto c) {
return std::tolower(static_cast<unsigned char>(c));
});
std::transform(in.begin(), in.end(), std::back_inserter(result), [](auto c) {
return std::tolower(static_cast<unsigned char>(c));
});
if (result == "1" || result == "true")
{
@@ -140,8 +134,7 @@ struct LexicalCast<Out, char const*>
bool
operator()(Out& out, char const* in) const
{
XRPL_ASSERT(
in, "beast::detail::LexicalCast(char const*) : non-null input");
XRPL_ASSERT(in, "beast::detail::LexicalCast(char const*) : non-null input");
return LexicalCast<Out, std::string_view>()(out, in);
}
};
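The integral branch of LexicalCast above parses the whole input and rejects leftovers; a condensed sketch of that behaviour using std::from_chars (lexicalCastSketch is a placeholder and covers only integral outputs):

#include <charconv>
#include <string_view>
#include <system_error>

template <class Out>
bool
lexicalCastSketch(Out& out, std::string_view in)
{
    auto const first = in.data();
    auto const last = in.data() + in.size();
    auto const [ptr, ec] = std::from_chars(first, last, out);
    // Succeed only if parsing consumed every character without overflow.
    return ec == std::errc{} && ptr == last;
}

// int v; lexicalCastSketch(v, "42") -> true with v == 42; "42x" and "" -> false.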

View File

@@ -55,8 +55,7 @@ class ListIterator
{
public:
using iterator_category = std::bidirectional_iterator_tag;
using value_type =
typename beast::detail::CopyConst<N, typename N::value_type>::type;
using value_type = typename beast::detail::CopyConst<N, typename N::value_type>::type;
using difference_type = std::ptrdiff_t;
using pointer = value_type*;
using reference = value_type&;

View File

@@ -14,21 +14,16 @@ class LockFreeStackIterator
{
protected:
using Node = typename Container::Node;
using NodePtr =
typename std::conditional<IsConst, Node const*, Node*>::type;
using NodePtr = typename std::conditional<IsConst, Node const*, Node*>::type;
public:
using iterator_category = std::forward_iterator_tag;
using value_type = typename Container::value_type;
using difference_type = typename Container::difference_type;
using pointer = typename std::conditional<
IsConst,
typename Container::const_pointer,
typename Container::pointer>::type;
using reference = typename std::conditional<
IsConst,
typename Container::const_reference,
typename Container::reference>::type;
using pointer =
typename std::conditional<IsConst, typename Container::const_pointer, typename Container::pointer>::type;
using reference =
typename std::conditional<IsConst, typename Container::const_reference, typename Container::reference>::type;
LockFreeStackIterator() : m_node()
{
@@ -39,9 +34,7 @@ public:
}
template <bool OtherIsConst>
explicit LockFreeStackIterator(
LockFreeStackIterator<Container, OtherIsConst> const& other)
: m_node(other.m_node)
explicit LockFreeStackIterator(LockFreeStackIterator<Container, OtherIsConst> const& other) : m_node(other.m_node)
{
}
@@ -160,8 +153,7 @@ public:
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using iterator = LockFreeStackIterator<LockFreeStack<Element, Tag>, false>;
using const_iterator =
LockFreeStackIterator<LockFreeStack<Element, Tag>, true>;
using const_iterator = LockFreeStackIterator<LockFreeStack<Element, Tag>, true>;
LockFreeStack() : m_end(nullptr), m_head(&m_end)
{
@@ -199,11 +191,7 @@ public:
{
first = (old_head == &m_end);
node->m_next = old_head;
} while (!m_head.compare_exchange_strong(
old_head,
node,
std::memory_order_release,
std::memory_order_relaxed));
} while (!m_head.compare_exchange_strong(old_head, node, std::memory_order_release, std::memory_order_relaxed));
return first;
}
@@ -226,11 +214,7 @@ public:
if (node == &m_end)
return nullptr;
new_head = node->m_next.load();
} while (!m_head.compare_exchange_strong(
node,
new_head,
std::memory_order_release,
std::memory_order_relaxed));
} while (!m_head.compare_exchange_strong(node, new_head, std::memory_order_release, std::memory_order_relaxed));
return static_cast<Element*>(node);
}
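Both reformatted compare_exchange_strong loops above follow the classic Treiber-stack pattern; a stripped-down push, under the assumption of externally owned nodes (lock_free_stack_sketch and Node are placeholders):

#include <atomic>

struct Node
{
    int value;
    Node* next = nullptr;
};

class lock_free_stack_sketch
{
    std::atomic<Node*> head_{nullptr};

public:
    void push(Node* node)
    {
        Node* old_head = head_.load(std::memory_order_relaxed);
        do
        {
            node->next = old_head;  // link to the head we last observed
        } while (!head_.compare_exchange_strong(
            old_head, node, std::memory_order_release, std::memory_order_relaxed));
        // On CAS failure, old_head is refreshed and the loop retries.
    }
};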

View File

@@ -27,8 +27,7 @@ template <class T>
inline void
reverse_bytes(T& t)
{
unsigned char* bytes = static_cast<unsigned char*>(
std::memmove(std::addressof(t), std::addressof(t), sizeof(T)));
unsigned char* bytes = static_cast<unsigned char*>(std::memmove(std::addressof(t), std::addressof(t), sizeof(T)));
for (unsigned i = 0; i < sizeof(T) / 2; ++i)
std::swap(bytes[i], bytes[sizeof(T) - 1 - i]);
}
@@ -53,11 +52,7 @@ template <class T, class Hasher>
inline void
maybe_reverse_bytes(T& t, Hasher&)
{
maybe_reverse_bytes(
t,
std::integral_constant<
bool,
Hasher::endian != boost::endian::order::native>{});
maybe_reverse_bytes(t, std::integral_constant<bool, Hasher::endian != boost::endian::order::native>{});
}
} // namespace detail
@@ -71,10 +66,8 @@ maybe_reverse_bytes(T& t, Hasher&)
template <class T>
struct is_uniquely_represented
: public std::integral_constant<
bool,
std::is_integral<T>::value || std::is_enum<T>::value ||
std::is_pointer<T>::value>
: public std::
integral_constant<bool, std::is_integral<T>::value || std::is_enum<T>::value || std::is_pointer<T>::value>
{
explicit is_uniquely_represented() = default;
};
@@ -92,8 +85,7 @@ struct is_uniquely_represented<T volatile> : public is_uniquely_represented<T>
};
template <class T>
struct is_uniquely_represented<T const volatile>
: public is_uniquely_represented<T>
struct is_uniquely_represented<T const volatile> : public is_uniquely_represented<T>
{
explicit is_uniquely_represented() = default;
};
@@ -104,8 +96,7 @@ template <class T, class U>
struct is_uniquely_represented<std::pair<T, U>>
: public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
is_uniquely_represented<U>::value &&
is_uniquely_represented<T>::value && is_uniquely_represented<U>::value &&
sizeof(T) + sizeof(U) == sizeof(std::pair<T, U>)>
{
explicit is_uniquely_represented() = default;
@@ -117,8 +108,7 @@ template <class... T>
struct is_uniquely_represented<std::tuple<T...>>
: public std::integral_constant<
bool,
std::conjunction_v<is_uniquely_represented<T>...> &&
sizeof(std::tuple<T...>) == (sizeof(T) + ...)>
std::conjunction_v<is_uniquely_represented<T>...> && sizeof(std::tuple<T...>) == (sizeof(T) + ...)>
{
explicit is_uniquely_represented() = default;
};
@@ -135,10 +125,8 @@ struct is_uniquely_represented<T[N]> : public is_uniquely_represented<T>
template <class T, std::size_t N>
struct is_uniquely_represented<std::array<T, N>>
: public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
sizeof(T) * N == sizeof(std::array<T, N>)>
: public std::
integral_constant<bool, is_uniquely_represented<T>::value && sizeof(T) * N == sizeof(std::array<T, N>)>
{
explicit is_uniquely_represented() = default;
};
@@ -158,12 +146,10 @@ struct is_uniquely_represented<std::array<T, N>>
*/
/** @{ */
template <class T, class HashAlgorithm>
struct is_contiguously_hashable
: public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
(sizeof(T) == 1 ||
HashAlgorithm::endian == boost::endian::order::native)>
struct is_contiguously_hashable : public std::integral_constant<
bool,
is_uniquely_represented<T>::value &&
(sizeof(T) == 1 || HashAlgorithm::endian == boost::endian::order::native)>
{
explicit is_contiguously_hashable() = default;
};
@@ -173,8 +159,7 @@ struct is_contiguously_hashable<T[N], HashAlgorithm>
: public std::integral_constant<
bool,
is_uniquely_represented<T[N]>::value &&
(sizeof(T) == 1 ||
HashAlgorithm::endian == boost::endian::order::native)>
(sizeof(T) == 1 || HashAlgorithm::endian == boost::endian::order::native)>
{
explicit is_contiguously_hashable() = default;
};
@@ -219,8 +204,7 @@ hash_append(Hasher& h, T const& t) noexcept
template <class Hasher, class T>
inline std::enable_if_t<
!is_contiguously_hashable<T, Hasher>::value &&
(std::is_integral<T>::value || std::is_pointer<T>::value ||
std::is_enum<T>::value)>
(std::is_integral<T>::value || std::is_pointer<T>::value || std::is_enum<T>::value)>
hash_append(Hasher& h, T t) noexcept
{
detail::reverse_bytes(t);
@@ -254,15 +238,11 @@ hash_append(Hasher& h, T (&a)[N]) noexcept;
template <class Hasher, class CharT, class Traits, class Alloc>
std::enable_if_t<!is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
template <class Hasher, class CharT, class Traits, class Alloc>
std::enable_if_t<is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept;
template <class Hasher, class T, class U>
std::enable_if_t<!is_contiguously_hashable<std::pair<T, U>, Hasher>::value>
@@ -294,14 +274,10 @@ hash_append(Hasher& h, std::unordered_set<Key, Hash, Pred, Alloc> const& s);
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<!is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept;
template <class Hasher, class T0, class T1, class... T>
void
hash_append(Hasher& h, T0 const& t0, T1 const& t1, T const&... t) noexcept;
@@ -320,9 +296,7 @@ hash_append(Hasher& h, T (&a)[N]) noexcept
template <class Hasher, class CharT, class Traits, class Alloc>
inline std::enable_if_t<!is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept
{
for (auto c : s)
hash_append(h, c);
@@ -331,9 +305,7 @@ hash_append(
template <class Hasher, class CharT, class Traits, class Alloc>
inline std::enable_if_t<is_contiguously_hashable<CharT, Hasher>::value>
hash_append(
Hasher& h,
std::basic_string<CharT, Traits, Alloc> const& s) noexcept
hash_append(Hasher& h, std::basic_string<CharT, Traits, Alloc> const& s) noexcept
{
h(s.data(), s.size() * sizeof(CharT));
hash_append(h, s.size());
@@ -342,8 +314,7 @@ hash_append(
// pair
template <class Hasher, class T, class U>
inline std::enable_if_t<
!is_contiguously_hashable<std::pair<T, U>, Hasher>::value>
inline std::enable_if_t<!is_contiguously_hashable<std::pair<T, U>, Hasher>::value>
hash_append(Hasher& h, std::pair<T, U> const& p) noexcept
{
hash_append(h, p.first, p.second);
@@ -380,18 +351,14 @@ hash_append(Hasher& h, std::array<T, N> const& a) noexcept
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<!is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
{
for (auto const& t : v)
hash_append(h, t);
}
template <class Hasher, class Key, class Compare, class Alloc>
std::enable_if_t<is_contiguously_hashable<Key, Hasher>::value>
hash_append(
Hasher& h,
boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
hash_append(Hasher& h, boost::container::flat_set<Key, Compare, Alloc> const& v) noexcept
{
h(&(v.begin()), v.size() * sizeof(Key));
}
@@ -414,10 +381,7 @@ hash_one(Hasher& h, T const& t) noexcept
template <class Hasher, class... T, std::size_t... I>
inline void
tuple_hash(
Hasher& h,
std::tuple<T...> const& t,
std::index_sequence<I...>) noexcept
tuple_hash(Hasher& h, std::tuple<T...> const& t, std::index_sequence<I...>) noexcept
{
for_each_item(hash_one(h, std::get<I>(t))...);
}
@@ -425,8 +389,7 @@ tuple_hash(
} // namespace detail
template <class Hasher, class... T>
inline std::enable_if_t<
!is_contiguously_hashable<std::tuple<T...>, Hasher>::value>
inline std::enable_if_t<!is_contiguously_hashable<std::tuple<T...>, Hasher>::value>
hash_append(Hasher& h, std::tuple<T...> const& t) noexcept
{
detail::tuple_hash(h, t, std::index_sequence_for<T...>{});
@@ -452,9 +415,7 @@ hash_append(Hasher& h, std::chrono::duration<Rep, Period> const& d) noexcept
template <class Hasher, class Clock, class Duration>
inline void
hash_append(
Hasher& h,
std::chrono::time_point<Clock, Duration> const& tp) noexcept
hash_append(Hasher& h, std::chrono::time_point<Clock, Duration> const& tp) noexcept
{
hash_append(h, tp.time_since_epoch());
}
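The hash_append machinery above dispatches on whether a type can be fed to the hasher as raw bytes. A compact sketch of the protocol, assuming an FNV-1a stand-in for the real hasher (Fnv1aHasher and hash_append_sketch are placeholder names):

#include <cstddef>
#include <cstdint>
#include <string>
#include <type_traits>

struct Fnv1aHasher  // any hasher just needs operator()(void const*, std::size_t)
{
    std::uint64_t state = 14695981039346656037ull;
    void operator()(void const* data, std::size_t len) noexcept
    {
        auto const* p = static_cast<unsigned char const*>(data);
        for (std::size_t i = 0; i < len; ++i)
            state = (state ^ p[i]) * 1099511628211ull;
    }
};

// Contiguously hashable case: hand the object representation straight to the hasher.
template <class Hasher, class T>
std::enable_if_t<std::is_integral_v<T>>
hash_append_sketch(Hasher& h, T t) noexcept
{
    h(&t, sizeof(t));
}

// Non-contiguous case: hash the characters, then the length, as the string overload above does.
template <class Hasher>
void
hash_append_sketch(Hasher& h, std::string const& s) noexcept
{
    h(s.data(), s.size());
    hash_append_sketch(h, s.size());
}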

View File

@@ -49,8 +49,7 @@ private:
{
std::memcpy(writeBuffer_.data(), data, len);
writeBuffer_ = writeBuffer_.subspan(len);
readBuffer_ = std::span{
std::begin(buffer_), buffer_.size() - writeBuffer_.size()};
readBuffer_ = std::span{std::begin(buffer_), buffer_.size() - writeBuffer_.size()};
}
}
@@ -98,8 +97,7 @@ private:
{
if (seed_.has_value())
{
return XXH3_64bits_withSeed(
readBuffer_.data(), readBuffer_.size(), *seed_);
return XXH3_64bits_withSeed(readBuffer_.data(), readBuffer_.size(), *seed_);
}
else
{
@@ -128,17 +126,13 @@ public:
}
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
template <class Seed, std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
explicit xxhasher(Seed seed) : seed_(seed)
{
resetBuffers();
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
template <class Seed, std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
xxhasher(Seed seed, Seed) : seed_(seed)
{
resetBuffers();

View File

@@ -23,9 +23,7 @@ public:
@param journal Destination for logging output.
*/
static std::shared_ptr<StatsDCollector>
New(IP::Endpoint const& address,
std::string const& prefix,
Journal journal);
New(IP::Endpoint const& address, std::string const& prefix, Journal journal);
};
} // namespace insight

View File

@@ -26,8 +26,7 @@ struct ci_equal_pred
operator()(char c1, char c2)
{
// VFALCO TODO Use a table lookup here
return std::tolower(static_cast<unsigned char>(c1)) ==
std::tolower(static_cast<unsigned char>(c2));
return std::tolower(static_cast<unsigned char>(c1)) == std::tolower(static_cast<unsigned char>(c2));
}
};
@@ -97,8 +96,7 @@ trim_right(String const& s)
*/
template <
class FwdIt,
class Result = std::vector<
std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>,
class Result = std::vector<std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>,
class Char>
Result
split(FwdIt first, FwdIt last, Char delim)
@@ -172,10 +170,7 @@ split(FwdIt first, FwdIt last, Char delim)
return result;
}
template <
class FwdIt,
class Result = std::vector<
std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>>
template <class FwdIt, class Result = std::vector<std::basic_string<typename std::iterator_traits<FwdIt>::value_type>>>
Result
split_commas(FwdIt first, FwdIt last)
{
@@ -223,8 +218,7 @@ public:
bool
operator==(list_iterator const& other) const
{
return other.it_ == it_ && other.end_ == end_ &&
other.value_.size() == value_.size();
return other.it_ == it_ && other.end_ == end_ && other.value_.size() == value_.size();
}
bool
@@ -288,14 +282,12 @@ list_iterator::increment()
++it_;
if (it_ == end_)
{
value_ = boost::string_ref(
&*start, std::distance(start, it_));
value_ = boost::string_ref(&*start, std::distance(start, it_));
return;
}
if (*it_ == '"')
{
value_ = boost::string_ref(
&*start, std::distance(start, it_));
value_ = boost::string_ref(&*start, std::distance(start, it_));
++it_;
return;
}
@@ -321,8 +313,7 @@ list_iterator::increment()
++it_;
if (it_ == end_ || *it_ == ',' || is_lws(*it_))
{
value_ =
boost::string_ref(&*start, std::distance(start, it_));
value_ = boost::string_ref(&*start, std::distance(start, it_));
return;
}
}
@@ -344,8 +335,7 @@ inline boost::iterator_range<list_iterator>
make_list(boost::string_ref const& field)
{
return boost::iterator_range<list_iterator>{
list_iterator{field.begin(), field.end()},
list_iterator{field.end(), field.end()}};
list_iterator{field.begin(), field.end()}, list_iterator{field.end(), field.end()}};
}
/** Returns true if the specified token exists in the list.
@@ -367,12 +357,8 @@ bool
is_keep_alive(boost::beast::http::message<isRequest, Body, Fields> const& m)
{
if (m.version() <= 10)
return boost::beast::http::token_list{
m[boost::beast::http::field::connection]}
.exists("keep-alive");
return !boost::beast::http::token_list{
m[boost::beast::http::field::connection]}
.exists("close");
return boost::beast::http::token_list{m[boost::beast::http::field::connection]}.exists("keep-alive");
return !boost::beast::http::token_list{m[boost::beast::http::field::connection]}.exists("close");
}
} // namespace rfc2616
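The split/split_commas helpers above cut a character range at a delimiter; a plain std::string sketch of the core loop (splitSketch is a placeholder and does not do the quoted-string and whitespace handling visible in list_iterator above):

#include <string>
#include <vector>

std::vector<std::string>
splitSketch(std::string const& s, char delim)
{
    std::vector<std::string> result;
    std::string current;
    for (char c : s)
    {
        if (c == delim)
        {
            result.push_back(current);  // close the current piece
            current.clear();
        }
        else
            current.push_back(c);
    }
    result.push_back(current);  // the final piece, possibly empty
    return result;
}

// splitSketch("keep-alive, close", ',') -> {"keep-alive", " close"}.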

View File

@@ -31,9 +31,7 @@ protected:
boost::asio::io_context ios_;
private:
boost::optional<boost::asio::executor_work_guard<
boost::asio::io_context::executor_type>>
work_;
boost::optional<boost::asio::executor_work_guard<boost::asio::io_context::executor_type>> work_;
std::vector<std::thread> threads_;
std::mutex m_;
std::condition_variable cv_;
@@ -43,8 +41,7 @@ public:
/// The type of yield context passed to functions.
using yield_context = boost::asio::yield_context;
explicit enable_yield_to(std::size_t concurrency = 1)
: work_(boost::asio::make_work_guard(ios_))
explicit enable_yield_to(std::size_t concurrency = 1) : work_(boost::asio::make_work_guard(ios_))
{
threads_.reserve(concurrency);
while (concurrency--)

View File

@@ -23,12 +23,7 @@ global_suites()
template <class Suite>
struct insert_suite
{
insert_suite(
char const* name,
char const* module,
char const* library,
bool manual,
int priority)
insert_suite(char const* name, char const* module, char const* library, bool manual, int priority)
{
global_suites().insert<Suite>(name, module, library, manual, priority);
}

View File

@@ -53,8 +53,7 @@ public:
//------------------------------------------------------------------------------
template <class>
selector::selector(mode_t mode, std::string const& pattern)
: mode_(mode), pat_(pattern)
selector::selector(mode_t mode, std::string const& pattern) : mode_(mode), pat_(pattern)
{
if (mode_ == automatch && pattern.empty())
mode_ = all;

View File

@@ -140,10 +140,7 @@ reporter<_>::results::add(suite_results const& r)
if (elapsed >= std::chrono::seconds{1})
{
auto const iter = std::lower_bound(
top.begin(),
top.end(),
elapsed,
[](run_time const& t1, typename clock_type::duration const& t2) {
top.begin(), top.end(), elapsed, [](run_time const& t1, typename clock_type::duration const& t2) {
return t1.second > t2;
});
if (iter != top.end())
@@ -176,10 +173,8 @@ reporter<_>::~reporter()
os_ << std::setw(8) << fmtdur(i.second) << " " << i.first << '\n';
}
auto const elapsed = clock_type::now() - results_.start;
os_ << fmtdur(elapsed) << ", " << amount{results_.suites, "suite"} << ", "
<< amount{results_.cases, "case"} << ", "
<< amount{results_.total, "test"} << " total, "
<< amount{results_.failed, "failure"} << std::endl;
os_ << fmtdur(elapsed) << ", " << amount{results_.suites, "suite"} << ", " << amount{results_.cases, "case"} << ", "
<< amount{results_.total, "test"} << " total, " << amount{results_.failed, "failure"} << std::endl;
}
template <class _>
@@ -214,9 +209,7 @@ void
reporter<_>::on_case_begin(std::string const& name)
{
case_results_ = case_results(name);
os_ << suite_results_.name
<< (case_results_.name.empty() ? "" : (" " + case_results_.name))
<< std::endl;
os_ << suite_results_.name << (case_results_.name.empty() ? "" : (" " + case_results_.name)) << std::endl;
}
template <class _>
@@ -239,8 +232,7 @@ reporter<_>::on_fail(std::string const& reason)
{
++case_results_.failed;
++case_results_.total;
os_ << "#" << case_results_.total << " failed"
<< (reason.empty() ? "" : ": ") << reason << std::endl;
os_ << "#" << case_results_.total << " failed" << (reason.empty() ? "" : ": ") << reason << std::endl;
}
template <class _>

View File

@@ -24,8 +24,7 @@ public:
{
}
test(bool pass_, std::string const& reason_)
: pass(pass_), reason(reason_)
test(bool pass_, std::string const& reason_) : pass(pass_), reason(reason_)
{
}

View File

@@ -92,17 +92,13 @@ private:
}
};
template <
class CharT,
class Traits = std::char_traits<CharT>,
class Allocator = std::allocator<CharT>>
template <class CharT, class Traits = std::char_traits<CharT>, class Allocator = std::allocator<CharT>>
class log_os : public std::basic_ostream<CharT, Traits>
{
log_buf<CharT, Traits, Allocator> buf_;
public:
explicit log_os(suite& self)
: std::basic_ostream<CharT, Traits>(&buf_), buf_(self)
explicit log_os(suite& self) : std::basic_ostream<CharT, Traits>(&buf_), buf_(self)
{
}
};
@@ -241,11 +237,7 @@ public:
template <class Condition, class String>
bool
expect(
Condition const& shouldBeTrue,
String const& reason,
char const* file,
int line);
expect(Condition const& shouldBeTrue, String const& reason, char const* file, int line);
/** @} */
//
@@ -349,8 +341,7 @@ public:
}
template <class T>
scoped_testcase(suite& self, std::stringstream& ss, T const& t)
: suite_(self), ss_(ss)
scoped_testcase(suite& self, std::stringstream& ss, T const& t) : suite_(self), ss_(ss)
{
ss_.clear();
ss_.str({});
@@ -423,11 +414,7 @@ suite::expect(Condition const& shouldBeTrue, String const& reason)
template <class Condition, class String>
bool
suite::expect(
Condition const& shouldBeTrue,
String const& reason,
char const* file,
int line)
suite::expect(Condition const& shouldBeTrue, String const& reason, char const* file, int line)
{
if (shouldBeTrue)
{
@@ -576,8 +563,7 @@ suite::run(runner& r)
If the condition is false, the file and line number are reported.
*/
#define BEAST_EXPECTS(cond, reason) \
((cond) ? (pass(), true) : (fail((reason), __FILE__, __LINE__), false))
#define BEAST_EXPECTS(cond, reason) ((cond) ? (pass(), true) : (fail((reason), __FILE__, __LINE__), false))
#endif
} // namespace unit_test
@@ -587,11 +573,9 @@ suite::run(runner& r)
// detail:
// This inserts the suite with the given manual flag
#define BEAST_DEFINE_TESTSUITE_INSERT( \
Class, Module, Library, manual, priority) \
static beast::unit_test::detail::insert_suite<Class##_test> \
Library##Module##Class##_test_instance( \
#Class, #Module, #Library, manual, priority)
#define BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, manual, priority) \
static beast::unit_test::detail::insert_suite<Class##_test> Library##Module##Class##_test_instance( \
#Class, #Module, #Library, manual, priority)
//------------------------------------------------------------------------------
@@ -646,8 +630,7 @@ suite::run(runner& r)
#else
#include <xrpl/beast/unit_test/global_suites.h>
#define BEAST_DEFINE_TESTSUITE(Class, Module, Library) \
BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, false, 0)
#define BEAST_DEFINE_TESTSUITE(Class, Module, Library) BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, false, 0)
#define BEAST_DEFINE_TESTSUITE_MANUAL(Class, Module, Library) \
BEAST_DEFINE_TESTSUITE_INSERT(Class, Module, Library, true, 0)
#define BEAST_DEFINE_TESTSUITE_PRIO(Class, Module, Library, Priority) \

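For reference, the registration macros above are used by declaring a suite class named Class##_test (the suffix is required by the insert_suite expansion shown here) and then registering it. A hedged example; the suite, module, and library names are illustrative, and testcase() is the usual suite helper, assumed unchanged in this branch:

class Arithmetic_test : public beast::unit_test::suite
{
public:
    void
    run() override
    {
        testcase("basics");
        BEAST_EXPECTS(2 + 2 == 4, "2 + 2 should equal 4");
    }
};

BEAST_DEFINE_TESTSUITE(Arithmetic, math, example);

The macro expands to a static insert_suite<Arithmetic_test> instance, which registers the suite with global_suites() at start-up, matching the insert_suite constructor in the earlier hunk.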
View File

@@ -28,13 +28,7 @@ class suite_info
run_type run_;
public:
suite_info(
std::string name,
std::string module,
std::string library,
bool manual,
int priority,
run_type run)
suite_info(std::string name, std::string module, std::string library, bool manual, int priority, run_type run)
: name_(std::move(name))
, module_(std::move(module))
, library_(std::move(library))
@@ -88,10 +82,8 @@ public:
{
// we want higher priority suites sorted first, thus the negation
// of priority value here
return std::forward_as_tuple(
-lhs.priority_, lhs.library_, lhs.module_, lhs.name_) <
std::forward_as_tuple(
-rhs.priority_, rhs.library_, rhs.module_, rhs.name_);
return std::forward_as_tuple(-lhs.priority_, lhs.library_, lhs.module_, lhs.name_) <
std::forward_as_tuple(-rhs.priority_, rhs.library_, rhs.module_, rhs.name_);
}
};
@@ -100,20 +92,10 @@ public:
/// Convenience for producing suite_info for a given test type.
template <class Suite>
suite_info
make_suite_info(
std::string name,
std::string module,
std::string library,
bool manual,
int priority)
make_suite_info(std::string name, std::string module, std::string library, bool manual, int priority)
{
return suite_info(
std::move(name),
std::move(module),
std::move(library),
manual,
priority,
[](runner& r) { Suite{}(r); });
std::move(name), std::move(module), std::move(library), manual, priority, [](runner& r) { Suite{}(r); });
}
} // namespace unit_test

View File

@@ -33,24 +33,14 @@ public:
*/
template <class Suite>
void
insert(
char const* name,
char const* module,
char const* library,
bool manual,
int priority);
insert(char const* name, char const* module, char const* library, bool manual, int priority);
};
//------------------------------------------------------------------------------
template <class Suite>
void
suite_list::insert(
char const* name,
char const* module,
char const* library,
bool manual,
int priority)
suite_list::insert(char const* name, char const* module, char const* library, bool manual, int priority)
{
#ifndef NDEBUG
{
@@ -65,8 +55,7 @@ suite_list::insert(
BOOST_ASSERT(result.second); // Duplicate type
}
#endif
cont().emplace(
make_suite_info<Suite>(name, module, library, manual, priority));
cont().emplace(make_suite_info<Suite>(name, module, library, manual, priority));
}
} // namespace unit_test

View File

@@ -45,8 +45,7 @@ public:
template <class F, class... Args>
explicit thread(suite& s, F&& f, Args&&... args) : s_(&s)
{
std::function<void(void)> b =
std::bind(std::forward<F>(f), std::forward<Args>(args)...);
std::function<void(void)> b = std::bind(std::forward<F>(f), std::forward<Args>(args)...);
t_ = std::thread(&thread::run, this, std::move(b));
}

View File

@@ -130,8 +130,7 @@ public:
class ScopedStream
{
public:
ScopedStream(ScopedStream const& other)
: ScopedStream(other.m_sink, other.m_level)
ScopedStream(ScopedStream const& other) : ScopedStream(other.m_sink, other.m_level)
{
}
@@ -167,16 +166,12 @@ public:
};
#ifndef __INTELLISENSE__
static_assert(
std::is_default_constructible<ScopedStream>::value == false,
"");
static_assert(std::is_default_constructible<ScopedStream>::value == false, "");
static_assert(std::is_copy_constructible<ScopedStream>::value == true, "");
static_assert(std::is_move_constructible<ScopedStream>::value == true, "");
static_assert(std::is_copy_assignable<ScopedStream>::value == false, "");
static_assert(std::is_move_assignable<ScopedStream>::value == false, "");
static_assert(
std::is_nothrow_destructible<ScopedStream>::value == true,
"");
static_assert(std::is_nothrow_destructible<ScopedStream>::value == true, "");
#endif
//--------------------------------------------------------------------------
@@ -186,8 +181,7 @@ public:
{
public:
/** Create a stream which produces no output. */
explicit Stream()
: m_sink(getNullSink()), m_level(severities::kDisabled)
explicit Stream() : m_sink(getNullSink()), m_level(severities::kDisabled)
{
}
@@ -197,9 +191,7 @@ public:
*/
Stream(Sink& sink, Severity level) : m_sink(sink), m_level(level)
{
XRPL_ASSERT(
m_level < severities::kDisabled,
"beast::Journal::Stream::Stream : maximum level");
XRPL_ASSERT(m_level < severities::kDisabled, "beast::Journal::Stream::Stream : maximum level");
}
/** Construct or copy another Stream. */
@@ -430,8 +422,7 @@ class basic_logstream : public std::basic_ostream<CharT, Traits>
detail::logstream_buf<CharT, Traits> buf_;
public:
explicit basic_logstream(beast::Journal::Stream const& strm)
: std::basic_ostream<CharT, Traits>(&buf_), buf_(strm)
explicit basic_logstream(beast::Journal::Stream const& strm) : std::basic_ostream<CharT, Traits>(&buf_), buf_(strm)
{
}
};
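
In practice the Stream objects above are obtained from a beast::Journal and written to either directly or through the JLOG convenience macro used throughout the codebase (assumed unchanged in this branch). A hedged sketch:

void
report(beast::Journal const& j)
{
    JLOG(j.warn()) << "fee escalation active, open ledger fee = " << 256;
    if (auto stream = j.debug())    // Stream converts to bool: active only at or above the sink threshold
        stream << "extra detail emitted only when debug logging is enabled";
}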

View File

@@ -18,16 +18,12 @@ private:
std::string prefix_;
public:
explicit WrappedSink(
beast::Journal::Sink& sink,
std::string const& prefix = "")
explicit WrappedSink(beast::Journal::Sink& sink, std::string const& prefix = "")
: Sink(sink), sink_(sink), prefix_(prefix)
{
}
explicit WrappedSink(
beast::Journal const& journal,
std::string const& prefix = "")
explicit WrappedSink(beast::Journal const& journal, std::string const& prefix = "")
: WrappedSink(journal.sink(), prefix)
{
}

View File

@@ -20,8 +20,7 @@
#endif
#define XRPL_ASSERT ALWAYS_OR_UNREACHABLE
#define XRPL_ASSERT_PARTS(cond, function, description, ...) \
XRPL_ASSERT(cond, function " : " description)
#define XRPL_ASSERT_PARTS(cond, function, description, ...) XRPL_ASSERT(cond, function " : " description)
// How to use the instrumentation macros:
//
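Concretely, the instrumentation macros are invoked like the calls visible elsewhere in these hunks, for example:

XRPL_ASSERT(m_level < severities::kDisabled, "beast::Journal::Stream::Stream : maximum level");

and, splitting the function name and description through the convenience wrapper (the condition and names here are illustrative):

XRPL_ASSERT_PARTS(buffer != nullptr, "example::fill", "non-null buffer");
// expands to XRPL_ASSERT(buffer != nullptr, "example::fill" " : " "non-null buffer")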

View File

@@ -10,10 +10,8 @@ template <bool IsConst, class T>
struct maybe_const
{
explicit maybe_const() = default;
using type = typename std::conditional<
IsConst,
typename std::remove_const<T>::type const,
typename std::remove_const<T>::type>::type;
using type = typename std::
conditional<IsConst, typename std::remove_const<T>::type const, typename std::remove_const<T>::type>::type;
};
/** Alias for omitting `typename`. */

View File

@@ -36,10 +36,7 @@ rngfill(void* const buffer, std::size_t const bytes, Generator& g)
}
}
template <
class Generator,
std::size_t N,
class = std::enable_if_t<N % sizeof(typename Generator::result_type) == 0>>
template <class Generator, std::size_t N, class = std::enable_if_t<N % sizeof(typename Generator::result_type) == 0>>
void
rngfill(std::array<std::uint8_t, N>& a, Generator& g)
{

View File

@@ -76,21 +76,18 @@ private:
std::remove_reference_t<Closure> closure_;
static_assert(
std::is_same<decltype(closure_(std::declval<Args_t>()...)), Ret_t>::
value,
std::is_same<decltype(closure_(std::declval<Args_t>()...)), Ret_t>::value,
"Closure arguments don't match ClosureCounter Ret_t or Args_t");
public:
Substitute() = delete;
Substitute(Substitute const& rhs)
: counter_(rhs.counter_), closure_(rhs.closure_)
Substitute(Substitute const& rhs) : counter_(rhs.counter_), closure_(rhs.closure_)
{
++counter_;
}
Substitute(Substitute&& rhs) noexcept(
std::is_nothrow_move_constructible<Closure>::value)
Substitute(Substitute&& rhs) noexcept(std::is_nothrow_move_constructible<Closure>::value)
: counter_(rhs.counter_), closure_(std::move(rhs.closure_))
{
++counter_;
@@ -150,13 +147,11 @@ public:
waitForClosures_ = true;
if (closureCount_ > 0)
{
if (!allClosuresDoneCond_.wait_for(
lock, wait, [this] { return closureCount_ == 0; }))
if (!allClosuresDoneCond_.wait_for(lock, wait, [this] { return closureCount_ == 0; }))
{
if (auto stream = j.error())
stream << name << " waiting for ClosureCounter::join().";
allClosuresDoneCond_.wait(
lock, [this] { return closureCount_ == 0; });
allClosuresDoneCond_.wait(lock, [this] { return closureCount_ == 0; });
}
}
}
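
The wrap()/join() pair shown here is the pattern JobQueue relies on further down. A minimal hedged sketch; the template argument, the timeout, the join parameter order (name, wait, journal), and the beast::Journal named journal are assumptions, not spelled out in this hunk:

xrpl::ClosureCounter<void> counter;
if (auto wrapped = counter.wrap([] { /* tracked work */ }))
    (*wrapped)();                                             // the count stays above zero while the wrapper is alive
counter.join("example", std::chrono::seconds{1}, journal);    // blocks until the count drops back to zero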

View File

@@ -6,20 +6,13 @@
namespace xrpl {
template <class F>
JobQueue::Coro::Coro(
Coro_create_t,
JobQueue& jq,
JobType type,
std::string const& name,
F&& f)
JobQueue::Coro::Coro(Coro_create_t, JobQueue& jq, JobType type, std::string const& name, F&& f)
: jq_(jq)
, type_(type)
, name_(name)
, running_(false)
, coro_(
[this, fn = std::forward<F>(f)](
boost::coroutines::asymmetric_coroutine<void>::push_type&
do_yield) {
[this, fn = std::forward<F>(f)](boost::coroutines::asymmetric_coroutine<void>::push_type& do_yield) {
yield_ = &do_yield;
yield();
fn(shared_from_this());
@@ -57,8 +50,7 @@ JobQueue::Coro::post()
}
// sp keeps 'this' alive
if (jq_.addJob(
type_, name_, [this, sp = shared_from_this()]() { resume(); }))
if (jq_.addJob(type_, name_, [this, sp = shared_from_this()]() { resume(); }))
{
return true;
}
@@ -84,8 +76,7 @@ JobQueue::Coro::resume()
auto saved = detail::getLocalValues().release();
detail::getLocalValues().reset(&lvs_);
std::lock_guard lock(mutex_);
XRPL_ASSERT(
static_cast<bool>(coro_), "xrpl::JobQueue::Coro::resume : is runnable");
XRPL_ASSERT(static_cast<bool>(coro_), "xrpl::JobQueue::Coro::resume : is runnable");
coro_();
detail::getLocalValues().release();
detail::getLocalValues().reset(saved);

View File

@@ -3,19 +3,16 @@
#include <xrpl/basics/CountedObject.h>
#include <xrpl/core/ClosureCounter.h>
#include <xrpl/core/LoadMonitor.h>
#include <functional>
#include <memory>
namespace xrpl {
class LoadEvent;
class LoadMonitor;
// Note that this queue should only be used for CPU-bound jobs
// It is primarily intended for signature checking
enum JobType : int {
enum JobType {
// Special type indicating an invalid job - will go away soon.
jtINVALID = -1,
@@ -96,11 +93,7 @@ public:
Job(JobType type, std::uint64_t index);
// VFALCO TODO try to remove the dependency on LoadMonitor.
Job(JobType type,
std::string const& name,
std::uint64_t index,
LoadMonitor& lm,
std::function<void()> const& job);
Job(JobType type, std::string const& name, std::uint64_t index, LoadMonitor& lm, std::function<void()> const& job);
JobType
getType() const;

View File

@@ -14,8 +14,6 @@
namespace xrpl {
class LoadEvent;
namespace perf {
class PerfLog;
}
@@ -143,14 +141,11 @@ public:
*/
template <
typename JobHandler,
typename = std::enable_if_t<std::is_same<
decltype(std::declval<JobHandler&&>()()),
void>::value>>
typename = std::enable_if_t<std::is_same<decltype(std::declval<JobHandler&&>()()), void>::value>>
bool
addJob(JobType type, std::string const& name, JobHandler&& jobHandler)
{
if (auto optionalCountedJob =
jobCounter_.wrap(std::forward<JobHandler>(jobHandler)))
if (auto optionalCountedJob = jobCounter_.wrap(std::forward<JobHandler>(jobHandler)))
{
return addRefCountedJob(type, name, std::move(*optionalCountedJob));
}
@@ -266,10 +261,7 @@ private:
//
// return true if func added to queue.
bool
addRefCountedJob(
JobType type,
std::string const& name,
JobFunction const& func);
addRefCountedJob(JobType type, std::string const& name, JobFunction const& func);
// Returns the next Job we should run now.
//
@@ -398,8 +390,7 @@ JobQueue::postCoro(JobType t, std::string const& name, F&& f)
Last param is the function the coroutine runs. Signature of
void(std::shared_ptr<Coro>).
*/
auto coro = std::make_shared<Coro>(
Coro_create_t{}, *this, t, name, std::forward<F>(f));
auto coro = std::make_shared<Coro>(Coro_create_t{}, *this, t, name, std::forward<F>(f));
if (!coro->post())
{
// The Coro was not successfully posted. Disable it so it's destructor

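For orientation, call sites of the two entry points above typically look like this (hedged sketch; jtCLIENT and the work inside the lambdas are illustrative, and Coro::yield()/post() are assumed to keep their usual semantics):

jq.addJob(jtCLIENT, "example::work", [] {
    // runs later on a JobQueue worker thread; the handler must return void
});

jq.postCoro(jtCLIENT, "example::coro", [](std::shared_ptr<JobQueue::Coro> const& coro) {
    // start an async operation whose completion handler captures coro and calls coro->post(),
    // then suspend here until that happens:
    coro->yield();
});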
View File

@@ -4,12 +4,9 @@
#include <xrpl/basics/Log.h>
#include <xrpl/beast/insight/Collector.h>
#include <xrpl/core/JobTypeInfo.h>
#include <xrpl/core/LoadMonitor.h>
namespace xrpl {
enum JobType : int;
struct JobTypeData
{
private:
@@ -35,19 +32,10 @@ public:
beast::insight::Event dequeue;
beast::insight::Event execute;
JobTypeData(
JobTypeInfo const& info_,
beast::insight::Collector::ptr const& collector,
Logs& logs) noexcept
: m_load(logs.journal("LoadMonitor"))
, m_collector(collector)
, info(info_)
, waiting(0)
, running(0)
, deferred(0)
JobTypeData(JobTypeInfo const& info_, beast::insight::Collector::ptr const& collector, Logs& logs) noexcept
: m_load(logs.journal("LoadMonitor")), m_collector(collector), info(info_), waiting(0), running(0), deferred(0)
{
m_load.setTargetLatency(
info.getAverageLatency(), info.getPeakLatency());
m_load.setTargetLatency(info.getAverageLatency(), info.getPeakLatency());
if (!info.special())
{

Some files were not shown because too many files have changed in this diff.