Compare commits

69 Commits

Author SHA1 Message Date
Bart
8990c45c40 Set version to 3.1.0-b1 (#6152) 2025-12-17 18:38:32 -05:00
Ed Hennis
7527e35379 Set version to 3.0.0 2025-12-09 12:11:16 -05:00
Vladislav Vysokikh
e19b2c55c2 Version 3.0.0-rc2 2025-12-02 15:11:44 -05:00
Ed Hennis
138d6e751b Implement Lending Protocol (unsupported) (#5270)
- Spec: XLS-66
- Introduces amendment "LendingProtocol", but leaves it UNSUPPORTED to
  allow for standalone testing, future development work, and potential
  bug fixes.
- AccountInfo RPC will indicate the type of pseudo-account when
  appropriate.
- Refactors and improves several existing classes and functional areas,
  including Number, STAmount, STObject, json_value, Asset, directory
  handling, View helper functions, and unit test helpers.
2025-12-02 15:11:43 -05:00
Bronek Kozicki
b195011eff fix: Apply object reserve for Vault pseudo-account (#5954) 2025-11-28 11:58:16 +00:00
Bart
cd00aa591f ci: Clean workspace on Windows self-hosted runners (#6024)
This change updates the `cleanup-workspace` action to its latest version, which added support for Windows.
2025-11-28 11:58:15 +00:00
hustrust
d3466de16c docs: fix spelling in comments (#6002) 2025-11-28 11:58:15 +00:00
Ayaz Salikhov
90894ec6c1 ci: Update Conan to 2.22.2 (#6019)
This updates the CI image hashes after the following change: https://github.com/XRPLF/ci/pull/81. And, since we use the latest Conan, we can have `conan.lock` end with a newline and no longer need to exclude it from the `pre-commit` hooks.
2025-11-28 11:20:31 +00:00
Bronek Kozicki
6b55db490e fix: JSON parsing of negative STNumber and STAmount (#5990)
This change fixes JSON parsing of negative `int` input in `STNumber` and `STAmount`. The conversion of JSON to `STNumber` or `STAmount` may trigger a condition where we negate the smallest possible `int` value, which is undefined behaviour. We use temporary storage as an `int64_t` to avoid this bug. Note that this only affects RPC, because we do not parse JSON in the protocol layer, and hence no amendment is needed.
2025-11-28 11:20:31 +00:00
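As an aside, the bug class fixed here is easy to see in isolation: the most negative `int` has no positive counterpart, so negating it is undefined behaviour. Below is a minimal self-contained sketch of the widening fix, with plain integers standing in for the actual `STNumber`/`STAmount` parsing code; the function name is illustrative.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical stand-in for the sign-application step of JSON parsing: the
// magnitude is widened to int64_t *before* negation, so -2147483648 never
// has to exist as a positive 32-bit int (negating INT32_MIN is UB).
std::int64_t
applySign(bool negative, std::uint32_t magnitude)
{
    auto const wide = static_cast<std::int64_t>(magnitude);
    return negative ? -wide : wide;
}

int
main()
{
    // INT32_MIN parses safely: its magnitude 2147483648 fits in uint32_t.
    std::cout << applySign(true, 2147483648u) << '\n';  // -2147483648
}
```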
Bart
2237644ec5 ci: Trigger clio pipeline on PRs targeting release branches (#6080)
This change triggers the Clio pipeline on PRs that target any of the `release*` branches (in addition to the `master` branch), as opposed to only the `release` branch.
2025-11-26 11:01:22 +00:00
Ed Hennis
587d4ac5cc refactor: Add support for extra transaction signature validation (#5851)
- Restructures `STTx` signature checking code to be able to handle
  a `sigObject`, which may be the full transaction, or may be an object
  field containing a separate signature. Either way, the `sigObject` can
  be a single- or multi-sign signature.
- This is distinct from 550f90a75e (#5594), which changed the check in
  Transactor that validates whether a given account is allowed to sign
  for the given transaction. This change cryptographically checks the
  signature validity.
2025-11-25 18:30:44 +00:00
Bart
13b169ffcb ci: Remove missing commits check (#6077)
This change removes the CI check for missing commits, as well as a stray path to the publish-docs workflow that isn't used in the on-trigger workflow.
2025-11-25 17:29:07 +00:00
Jingchen
d42f8e0bda chore: Clean up comment in NetworkOps_test.cpp (#6066)
This change removes a copyright notice that was accidentally copied over from another file.
2025-11-25 17:29:07 +00:00
Vito Tumas
0276a6b6bd docs: Improve VaultWithdraw documentation (#6068) 2025-11-25 17:29:07 +00:00
Ayaz Salikhov
234dc6bdca ci: Only upload artifacts in XRPLF repo owner (#6060)
This change prevents uploading too many artifacts in non-public repositories.
2025-11-25 17:29:06 +00:00
sunnyraindy
82c2bf7144 chore: Fix some typos in comments (#6040) 2025-11-25 17:29:06 +00:00
Bart
210d49df44 ci: Fix filtering out of Clang 20+ on ARM (#6046)
This change fixes the strategy matrix check to filter out Clang 20+ on ARM, which still fails due to problems with Boost.
2025-11-25 17:29:06 +00:00
Bart
3ffa30bf24 ci: Use new Debian Trixie images (#6034)
This change uses the new Debian Trixie CI images added by XRPLF/ci#83.
2025-11-25 17:29:05 +00:00
Bronek Kozicki
e7e4d52e38 Version 3.0.0-rc1 2025-11-12 09:30:43 +00:00
Bronek Kozicki
4135d56aa0 fix: floating point representation errors in vault (#5997)
This change fixes floating point errors in the conversion of shares to assets, and the other way around, as used in `VaultDeposit`, `VaultWithdraw` and `VaultClawback`. In floating point calculations, division introduces a larger error than multiplication. If we divide first, the error introduced by the division is then amplified by the multiplication that follows, so that is the wrong order in which to perform these two operations. This change flips the order of the arithmetic operations, which minimizes the error.
2025-11-12 09:30:42 +00:00
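For intuition on the ordering argument above, here is a minimal self-contained sketch; plain `double`s stand in for the number type used by the vault code, and the values are arbitrary.

```cpp
#include <cstdio>

int
main()
{
    // Converting shares to assets is conceptually
    //   assets = shares * totalAssets / totalShares.
    double const shares = 7.0, totalAssets = 1.0, totalShares = 3.0;

    // Divide first: the rounding error in totalAssets / totalShares is
    // amplified by the multiplication that follows.
    double const divideFirst = (totalAssets / totalShares) * shares;

    // Multiply first: the product here is exact, and the single division
    // introduces at most one rounding step.
    double const multiplyFirst = (shares * totalAssets) / totalShares;

    // The exact answer is 7/3 = 2.333...; multiplyFirst is the correctly
    // rounded result, while divideFirst lands one ulp below it.
    std::printf("%.17g\n%.17g\n", divideFirst, multiplyFirst);
}
```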
Ayaz Salikhov
865557024e ci: Specify bash as the default shell in workflows (#6021) 2025-11-12 09:30:36 +00:00
Bronek Kozicki
91b96d6386 chore: Move running of unit tests out of coverage target (#6018)
This change makes the progress of unit tests visible and also gives more flexibility when running them.
2025-11-11 15:42:14 +00:00
Bart
f1dbb20d7b chore: Make CMake improvements (#6010)
This change removes unused definitions from the CMake files, moves variable definitions from `XrplSanity` to `XrplSettings` where they better belong, and updates the minimum GCC and Clang versions to match what we actually minimally support.
2025-11-11 12:13:12 +00:00
Bronek Kozicki
2a2881ee53 chore: Unify build & test, add ctest to coverage (#6013)
This change unifies the build and test jobs into a single job, and adds `ctest` to coverage reporting.

The mechanics of coverage reporting are slightly complex, and most of the process is encapsulated in the `coverage` target. The status quo way of preparing coverage reports involves running a single target, `cmake --build . --target coverage`, which does three things:
* Build the `rippled` binary (via target dependency)
* Prepare coverage reports:
  * Run `./rippled -u` unit tests.
  * Gather test output and build reports.

This makes it awkward to add an additional `ctest` step between the build and coverage reporting steps. The better solution is to split the `coverage` target into a separate build, followed by `ctest`, followed by report generation. Luckily, the `coverage` target has been designed specifically to support such a case: it does not need to build `rippled`, which is simply a dependency. Similarly, it allows additional tests to be run before gathering test outputs; in principle we could even stop it from running tests and run them separately instead. This means we can keep the build, `ctest`, and the generation of coverage reports as separate steps, as long as the state of the build directory (including file timestamps, additional coverage files, etc.) is fully preserved between the steps. Consequently, in order to run `ctest` for coverage reporting we need to integrate build and test into a single job, which this change does.
2025-11-11 12:13:11 +00:00
Shawn Xie
ee2dff337d fix: domain order book insertion #5998 2025-11-11 12:13:11 +00:00
Ed Hennis
102a89f351 test: Count crashed test suites (#5924)
When outputting the unit test summary, this change counts crashed tests as failures.
2025-11-11 12:13:09 +00:00
Bart
9add957962 ci: Update CI image hashes to use netstat (#5987)
To debug test failures we would like to use `netstat`, but that package wasn't yet installed in the CI images. This change uses the new CI images created by https://github.com/XRPLF/ci/pull/79.
2025-11-11 12:06:55 +00:00
Vlad
0f1b607bb4 refactor: Improve txset handling (#5951) 2025-11-03 11:36:41 +00:00
Bronek Kozicki
4425f84c1f Remove directory size limit (#5935)
This change introduces the `fixDirectoryLimit` amendment to remove the directory pages limit. We found that the directory size limit is easier to hit than originally assumed, and there is no good reason to keep this limit, since the object reserve provides the necessary incentive to avoid creating unnecessary objects on the ledger.
2025-10-31 14:26:44 +00:00
Bronek Kozicki
8a6cc3ded8 fix: Change Credential sfSubjectNode to optional (#5936)
Field `sfSubjectNode` is not populated by `CredentialCreate` for self-issued credentials. Rather than fix up the Credentials already on the ledger, we can in this case safely change the object template for this field from `soeREQUIRED` to `soeOPTIONAL`.
2025-10-31 14:26:44 +00:00
Bart
3eec6ffcd7 ci: Check whether test failures are caused by port exhaustion (#5938)
This change adds an extra step to the CI test job that outputs network info, which may allow us to confirm whether random test failures are caused by port exhaustion.
2025-10-31 14:26:43 +00:00
Ayaz Salikhov
6b56c805dd chore: Use new prepare-runner (#5970)
See: XRPLF/actions#19.
2025-10-31 14:26:43 +00:00
Bart
da0eff9c1b ci: Use nproc-2 to set parallelism for builds and tests (#5939)
This change reduces the number of cores used to build and test, as using all cores may be contributing to occasional build and test failures.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:42 +00:00
Bart
6e326e6c11 ci: Use commit hash so workflows are not canceled when merging multiple PRs (#5950)
This change updates the CI concurrency group for pushes to the `develop` branch to use the commit hash instead of the target branch.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:41 +00:00
Bart
405575fd53 ci: Only upload codecov reports in the original repo, not in forks (#5953)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:41 +00:00
Bart
f38f299a86 ci: Only log into Conan when uploading packages (#5952)
There are separate steps for logging into Conan and uploading packages. However, at the moment the login step is sometimes executed even though no packages will be uploaded. The condition for performing both steps should be the same.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:40 +00:00
Bronek Kozicki
8951419dbe fix: invariant error in fee-sized VaultWithdraw (#5876)
This change fixes an invariant error where the amount withdrawn is equal to the transaction fee.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:39 +00:00
Ayaz Salikhov
a8b1a01d9e ci: Only run .exe files during test phase on Windows (#5947) 2025-10-31 14:26:39 +00:00
Jingchen
994b490db5 refactor: Migrate json unit tests to use doctest (#5533)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:38 +00:00
Shawn Xie
7c8b16797f Change fixMPTDeliveredAmount to Supported::yes (#5833)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:37 +00:00
Ayaz Salikhov
9507d9c276 fix: Upload all test binaries (#5932) 2025-10-31 14:26:37 +00:00
Ayaz Salikhov
8ca21406e6 chore: Better pre-commit failure message (#5940) 2025-10-31 14:26:36 +00:00
Ayaz Salikhov
178f4248e4 fix: Clean up build profile options (#5934)
The `-Wno-missing-template-arg-list-after-template-kw` flag is only needed for the grpc library. Use `+=` for the default build flags to make them easier to extend in the future.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:36 +00:00
Ayaz Salikhov
e8069a40f2 Use "${ENVVAR}" instead of ${{ env.ENVVAR }} syntax in GitHub Actions (#5923) 2025-10-31 14:26:35 +00:00
Bronek Kozicki
5ebc29c481 fix: Enforce reserve when creating trust line or MPToken in VaultWithdraw (#5857)
Similarly to other transaction types that can create a trust line or MPToken for the transaction submitter (e.g. CashCheck #5285, EscrowFinish #5185), VaultWithdraw should enforce the reserve before creating a new object. Additionally, the lsfRequireDestTag account flag should be enforced for the transaction submitter.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:35 +00:00
Mayukha Vadari
224b055124 chore: remove unnecessary LCOV_EXCL_LINE (#5913) 2025-10-31 14:26:34 +00:00
Bart
f99c1158d5 chore: Set explicit timeouts for build and test jobs (#5912)
The default job timeout is 5 hours, while build times are anywhere between 4 and 20 minutes and test times between 2 and 10 minutes. As a runner occasionally gets stuck, we should fail much more quickly.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:33 +00:00
Bart
a2594b6fe0 chore: Set fail fast to false, except for when the merge group is used (#5897)
This PR sets the fail-fast strategy option to false (it defaults to true), unless it is run by a merge group.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:33 +00:00
Bart
12fb54c66e chore: Clean up Conan variables in CI (#5903)
This change sanitizes inputs by setting them as environment variables, and adjusts the number of CPUs used for building. Namely, GitHub inputs should be sanitized, per the recommendation by Semgrep, as using them directly poses a security risk. A recent change further overrode the global configuration by having builds use all cores, but as we have noticed an increased number of job cancelations, this change updates the builds to use all cores less one.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:32 +00:00
Bart
51917be96d chore: Add support for RHEL 8 (#5880)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:32 +00:00
Ayaz Salikhov
97b8f5c4b3 refactor: Update pre-commit workflow to latest version (#5902)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:31 +00:00
Mayukha Vadari
a127314a89 refactor: replace string JSONs with Json::Value (#5886)
Some tests write out JSON as strings instead of using the Json::Value library; this change cleans them up.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:31 +00:00
Mayukha Vadari
0754cca98c refactor: replace boost::lexical_cast<std::string> with to_string (#5883)
This change replaces boost::lexical_cast<std::string> with to_string in some of the tests to make them more readable.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:30 +00:00
Mayukha Vadari
eb66ae1bd4 refactor: replace JSON LastLedgerSequence with last_ledger_seq (#5884)
This change replaces instances of JSON LastLedgerSequence with last_ledger_seq, which makes the tests a bit simpler and easier to read.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:29 +00:00
Jingchen
5c3b44d1af chore: Reduce build log verbosity on Windows (#5865)
Windows is extremely chatty and generates tons of logs when building, making it practically impossible to use the build logs to debug issues. This change sets the verbosity to 'quiet' on Windows.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:29 +00:00
Bart
e4b334faba fix: Update tools image shas (#5896)
This change updates the Docker image hashes of the tools-rippled images to fix a missing dependency.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:28 +00:00
zingero
adad20b862 docs: Fix typo in JSON writer documentation (#5881)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:28 +00:00
tequ
9907fa07a9 refactor: Add paychan namespace and update related tests (#5840)
This change adds a paychan namespace to the TestHelpers and implementation files, improving organization and clarity. Additionally, it updates the AMM test to use the new `paychan::create` function for payment channel creation.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:27 +00:00
Ayaz Salikhov
d032bd681a chore: Support CMake 4 without workarounds (#5866) 2025-10-31 14:26:26 +00:00
Mayukha Vadari
b444457c19 chore: Exclude code/unreachable transaction code from Codecov (#5847)
This change excludes from Codecov unreachable or difficult-to-test transaction code (such as `tecINTERNAL`) and old code from amendments that have been enabled for a long time and is only kept around for ledger replay reasons. This removes about 200 lines of misses and increases the Codecov coverage by 0.3% (from 79.2% to 79.5%).
2025-10-31 14:26:26 +00:00
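For reference, these exclusions rely on the standard `lcov`/`gcovr` comment markers, which both tools honour. A minimal sketch (the function and errors are illustrative, not taken from the codebase):

```cpp
#include <climits>
#include <stdexcept>

// A defensive branch that tests cannot reasonably reach can be excluded
// line by line, while a legacy block kept only for replaying old ledgers
// can be excluded wholesale.
int
safeDivide(int num, int denom)
{
    if (denom == 0)
        throw std::logic_error("internal error");  // LCOV_EXCL_LINE

    // LCOV_EXCL_START: pre-amendment behaviour kept for ledger replay.
    if (num == INT_MIN && denom == -1)
        throw std::overflow_error("overflow");
    // LCOV_EXCL_STOP

    return num / denom;
}
```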
Bart
4f076cb955 chore: Add wildcard to support triggering for release pipelines (#5879)
This change adds a wildcard to the release branch in the CI pipeline spec. Namely, after adopting an improved release process, with release branches that now look like `release-X.Y`, the trigger pipeline was no longer running, as it only searched for an exact match to `release`.
2025-10-31 14:26:25 +00:00
Bronek Kozicki
220ab26225 Add vault invariants (#5518)
This change adds invariants for SingleAssetVault #5224 (XLS-065), which had been intentionally skipped earlier to keep the SAV PR size manageable.
2025-10-31 14:26:25 +00:00
tequ
89d81655c6 test: Add more tests for Simulate RPC metadata (#5827) 2025-10-31 14:26:24 +00:00
Bronek Kozicki
7bc2d5cba4 chore: Fix release build error (#5864)
This change fixes a release build error with GCC 15.2.

The `fields` variable is only used in `XRPL_ASSERT`, which evaluates to nothing in a Release build, leaving the variable unused. This change silences the build warning.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:23 +00:00
Bart
a34b36e021 refactor: Update CI strategy matrix to use new RHEL 9 and RHEL 10 images (#5856)
This change uses the new RHEL 9 and 10 images to build and test the binary, and adds support for having different Docker image SHAs per distro-compiler combination.

Instead of supporting each RHEL minor version, we are simplifying our pipelines by only supporting RHEL major versions. Our CI Docker images have already been updated accordingly, and we recently added support for RHEL 10 as well. Up until now the CI Docker images had all been rebuilt at the same time, but as the most recent push to the CI repo has shown, that is no longer guaranteed: the RHEL images now have a different SHA than the Debian and Ubuntu ones.

Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
2025-10-31 14:26:22 +00:00
Mayukha Vadari
5b2ab905c0 chore: exclude all UNREACHABLE blocks from codecov (#5846) 2025-10-31 14:26:16 +00:00
Bart
5e43e91d4a Revert "refactor: Update to Boost 1.88 and 1.86" (#5872)
This change reverts #5570, and also makes the same changes as were made in #5759 to revert to Boost 1.83. We would like to expose Boost 1.88 to more in-depth testing, but with release 3.0.0 coming out soon there is insufficient time to do so, hence the reversion to Boost 1.83 (skipping Boost 1.86, as it has a bug in its executors).
2025-10-10 11:32:13 -04:00
Bart
5bac21c05b Revert "fix: FD/handle guarding + exponential backoff (#5823)" (#5869)
This reverts commit 330a3215bc.
2025-10-08 16:01:29 -04:00
Vito Tumas
1643d22103 Revert "Bugfix: Adds graceful peer disconnection (#5669)" (#5855)
This reverts commit 17a2606591.
2025-10-08 19:16:47 +01:00
265 changed files with 23924 additions and 5492 deletions

View File

@@ -4,20 +4,23 @@ description: "Install Conan dependencies, optionally forcing a rebuild of all de
# Note that actions do not support 'type' and all inputs are strings, see
# https://docs.github.com/en/actions/reference/workflows-and-actions/metadata-syntax#inputs.
inputs:
verbosity:
description: "The build verbosity."
required: false
default: "verbose"
build_dir:
description: "The directory where to build."
required: true
build_type:
description: 'The build type to use ("Debug", "Release").'
required: true
build_nproc:
description: "The number of processors to use for building."
required: true
force_build:
description: 'Force building of all dependencies ("true", "false").'
required: false
default: "false"
log_verbosity:
description: "The logging verbosity."
required: false
default: "verbose"
runs:
using: composite
@@ -26,19 +29,21 @@ runs:
shell: bash
env:
BUILD_DIR: ${{ inputs.build_dir }}
BUILD_NPROC: ${{ inputs.build_nproc }}
BUILD_OPTION: ${{ inputs.force_build == 'true' && '*' || 'missing' }}
BUILD_TYPE: ${{ inputs.build_type }}
VERBOSITY: ${{ inputs.verbosity }}
LOG_VERBOSITY: ${{ inputs.log_verbosity }}
run: |
echo 'Installing dependencies.'
mkdir -p '${{ env.BUILD_DIR }}'
cd '${{ env.BUILD_DIR }}'
mkdir -p "${BUILD_DIR}"
cd "${BUILD_DIR}"
conan install \
--output-folder . \
--build=${{ env.BUILD_OPTION }} \
--build="${BUILD_OPTION}" \
--options:host='&:tests=True' \
--options:host='&:xrpld=True' \
--settings:all build_type='${{ env.BUILD_TYPE }}' \
--conf:all tools.build:verbosity='${{ env.VERBOSITY }}' \
--conf:all tools.compilation:verbosity='${{ env.VERBOSITY }}' \
--settings:all build_type="${BUILD_TYPE}" \
--conf:all tools.build:jobs=${BUILD_NPROC} \
--conf:all tools.build:verbosity="${LOG_VERBOSITY}" \
--conf:all tools.compilation:verbosity="${LOG_VERBOSITY}" \
..

View File

@@ -39,8 +39,8 @@ runs:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
CONAN_REMOTE_URL: ${{ inputs.conan_remote_url }}
run: |
echo "Adding Conan remote '${{ env.CONAN_REMOTE_NAME }}' at '${{ env.CONAN_REMOTE_URL }}'."
conan remote add --index 0 --force '${{ env.CONAN_REMOTE_NAME }}' '${{ env.CONAN_REMOTE_URL }}'
echo "Adding Conan remote '${CONAN_REMOTE_NAME}' at '${CONAN_REMOTE_URL}'."
conan remote add --index 0 --force "${CONAN_REMOTE_NAME}" "${CONAN_REMOTE_URL}"
echo 'Listing Conan remotes.'
conan remote list

View File

@@ -138,6 +138,7 @@ test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net
xrpl.json > xrpl.basics
xrpl.ledger > xrpl.basics

View File

@@ -130,16 +130,14 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
if os['distro_name'] == 'rhel' and architecture['platform'] == 'linux/arm64':
continue
# We skip all clang-20 on arm64 due to boost 1.86 build error
if f'{os['compiler_name']}-{os['compiler_version']}' == 'clang-20' and architecture['platform'] == 'linux/arm64':
# We skip all clang 20+ on arm64 due to Boost build error.
if f'{os['compiler_name']}-{os['compiler_version']}' in ['clang-20', 'clang-21'] and architecture['platform'] == 'linux/arm64':
continue
# Enable code coverage for Debian Bookworm using GCC 15 in Debug and no
# Unity on linux/amd64
if f'{os['compiler_name']}-{os['compiler_version']}' == 'gcc-15' and build_type == 'Debug' and '-Dunity=OFF' in cmake_args and architecture['platform'] == 'linux/amd64':
cmake_args = f'-Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0 {cmake_args}'
cmake_target = 'coverage'
build_only = True
# Generate a unique name for the configuration, e.g. macos-arm64-debug
# or debian-bookworm-gcc-12-amd64-release-unity.

View File

@@ -15,168 +15,196 @@
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "6948666"
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "21",
"image_sha": "0525eae"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "10e69b4"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "jammy",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "6948666"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "6948666"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "6948666"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "6948666"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "6948666"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "6948666"
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "6948666"
"image_sha": "e1782cd"
}
],
"build_type": ["Debug", "Release"],

View File

@@ -1,8 +1,13 @@
# This workflow runs all workflows to check, build and test the project on
# various Linux flavors, as well as on MacOS and Windows, for various PR events.
# various Linux flavors, as well as on MacOS and Windows, on every push to a
# user branch. However, it will not run if the pull request is a draft unless it
# has the 'DraftRunCI' label.
name: PR
on:
merge_group:
types:
- checks_requested
pull_request:
types:
- opened
@@ -20,18 +25,10 @@ defaults:
jobs:
# This job determines whether the rest of the workflow should run. It runs
# when the PR is not a draft or has the 'DraftRunCI' label and does NOT have
# the 'SkipRunCI' and 'MergeQueueCI' labels. The former two labels will result
# in the PR not being allowed to be merged, whereas the latter label will
# allow it to be merged without running the full CI suite, because the merge
# queue will verify that the code builds and passes tests before the PR is
# actually merged.
# when the PR is not a draft (which should also cover merge-group) or
# has the 'DraftRunCI' label.
should-run:
if: ${{
(!github.event.pull_request.draft || contains(github.event.pull_request.labels.*.name, 'DraftRunCI')) &&
!contains(github.event.pull_request.labels.*.name, 'SkipRunCI') &&
!contains(github.event.pull_request.labels.*.name, 'MergeQueueCI')
}}
if: ${{ !github.event.pull_request.draft || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
runs-on: ubuntu-latest
steps:
- name: Checkout repository
@@ -78,20 +75,20 @@ jobs:
conanfile.py
conan.lock
- name: Check whether to run
# This step determines whether the rest of the workflow should run. If
# it runs and all jobs complete successfully, then the 'passed' job is
# marked as 'skipped' and, because its direct dependencies ran, the PR
# is allowed to be merged. This workflow will run when at least one of
# the following is true:
# * Any of the files checked in the `changes` step were modified.
# * The PR is NOT a draft and is labeled 'Ready to merge'.
# This step determines whether the rest of the workflow should
# run. The rest of the workflow will run if this job runs AND at
# least one of:
# * Any of the files checked in the `changes` step were modified
# * The PR is NOT a draft and is labeled "Ready to merge"
# * The workflow is running from the merge queue
id: go
env:
FILES: ${{ steps.changes.outputs.any_changed }}
DRAFT: ${{ github.event.pull_request.draft }}
READY: ${{ contains(github.event.pull_request.labels.*.name, 'Ready to merge') }}
MERGE: ${{ github.event_name == 'merge_group' }}
run: |
echo "go=${{ (env.DRAFT != 'true' && env.READY == 'true') || env.FILES == 'true' }}" >> "${GITHUB_OUTPUT}"
echo "go=${{ (env.DRAFT != 'true' && env.READY == 'true') || env.FILES == 'true' || env.MERGE == 'true' }}" >> "${GITHUB_OUTPUT}"
cat "${GITHUB_OUTPUT}"
outputs:
go: ${{ steps.go.outputs.go == 'true' }}
@@ -118,7 +115,7 @@ jobs:
needs:
- should-run
- build-test
if: ${{ needs.should-run.outputs.go == 'true' && contains(fromJSON('["release", "master"]'), github.ref_name) }}
if: ${{ needs.should-run.outputs.go == 'true' && (startsWith(github.base_ref, 'release') || github.base_ref == 'master') }}
uses: ./.github/workflows/reusable-notify-clio.yml
secrets:
clio_notify_token: ${{ secrets.CLIO_NOTIFY_TOKEN }}
@@ -126,7 +123,7 @@ jobs:
conan_remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
passed:
if: ${{ failure() || cancelled() }}
if: failure() || cancelled()
needs:
- build-test
- check-levelization

View File

@@ -1,35 +0,0 @@
# This workflow runs all workflows to check, build and test the project on
# various Linux flavors, as well as on MacOS and Windows, when a PR is added to
# the merge queue.
name: Merge Queue
on:
merge_group:
types:
- checks_requested
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
check-commit-message:
uses: ./.github/workflows/reusable-check-commit-message.yml
check-levelization:
uses: ./.github/workflows/reusable-check-levelization.yml
build-test:
uses: ./.github/workflows/reusable-build-test.yml
strategy:
fail-fast: false
matrix:
os: [linux, macos, windows]
with:
os: ${{ matrix.os }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -14,9 +14,7 @@ on:
- "master"
paths:
# These paths are unique to `on-trigger.yml`.
- ".github/workflows/reusable-check-missing-commits.yml"
- ".github/workflows/on-trigger.yml"
- ".github/workflows/publish-docs.yml"
# Keep the paths below in sync with those in `on-pr.yml`.
- ".github/actions/build-deps/**"
@@ -50,7 +48,12 @@ on:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
# When a PR is merged into the develop branch it will be assigned a unique
# group identifier, so execution will continue even if another PR is merged
# while it is still running. In all other cases the group identifier is shared
# per branch, so that any in-progress runs are cancelled when a new commit is
# pushed.
group: ${{ github.workflow }}-${{ github.event_name == 'push' && github.ref == 'refs/heads/develop' && github.sha || github.ref }}
cancel-in-progress: true
defaults:
@@ -58,10 +61,6 @@ defaults:
shell: bash
jobs:
check-missing-commits:
if: ${{ github.event_name == 'push' && github.ref_type == 'branch' && contains(fromJSON('["develop", "release"]'), github.ref_name) }}
uses: ./.github/workflows/reusable-check-missing-commits.yml
build-test:
uses: ./.github/workflows/reusable-build-test.yml
strategy:

View File

@@ -9,7 +9,7 @@ on:
jobs:
# Call the workflow in the XRPLF/actions repo that runs the pre-commit hooks.
run-hooks:
uses: XRPLF/actions/.github/workflows/pre-commit.yml@a8d7472b450eb53a1e5228f64552e5974457a21a
uses: XRPLF/actions/.github/workflows/pre-commit.yml@34790936fae4c6c751f62ec8c06696f9c1a5753a
with:
runs_on: ubuntu-latest
container: '{ "image": "ghcr.io/xrplf/ci/tools-rippled-pre-commit:sha-a8c7be1" }'

View File

@@ -23,6 +23,7 @@ defaults:
env:
BUILD_DIR: .build
NPROC_SUBTRACT: 2
jobs:
publish:
@@ -33,6 +34,13 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
with:
subtract: ${{ env.NPROC_SUBTRACT }}
- name: Check configuration
run: |
echo 'Checking path.'
@@ -46,12 +54,16 @@ jobs:
echo 'Checking Doxygen version.'
doxygen --version
- name: Build documentation
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
run: |
mkdir -p ${{ env.BUILD_DIR }}
cd ${{ env.BUILD_DIR }}
mkdir -p "${BUILD_DIR}"
cd "${BUILD_DIR}"
cmake -Donly_docs=ON ..
cmake --build . --target docs --parallel $(nproc)
cmake --build . --target docs --parallel ${BUILD_NPROC}
- name: Publish documentation
if: ${{ github.ref_type == 'branch' && github.ref_name == github.event.repository.default_branch }}
uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0

View File

@@ -7,19 +7,23 @@ on:
description: "The directory where to build."
required: true
type: string
build_only:
description: 'Whether to only build or to build and test the code ("true", "false").'
required: true
type: boolean
build_type:
description: 'The build type to use ("Debug", "Release").'
type: string
required: true
cmake_args:
description: "Additional arguments to pass to CMake."
required: false
type: string
default: ""
cmake_target:
description: "The CMake target to build."
type: string
@@ -29,6 +33,7 @@ on:
description: Runner to run the job on as a JSON string
required: true
type: string
image:
description: "The image to run in (leave empty to run natively)"
required: true
@@ -39,31 +44,170 @@ on:
required: true
type: string
nproc_subtract:
description: "The number of processors to subtract when calculating parallelism."
required: false
type: number
default: 2
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
required: true
jobs:
build:
uses: ./.github/workflows/reusable-build.yml
with:
build_dir: ${{ inputs.build_dir }}
build_type: ${{ inputs.build_type }}
cmake_args: ${{ inputs.cmake_args }}
cmake_target: ${{ inputs.cmake_target }}
runs_on: ${{ inputs.runs_on }}
image: ${{ inputs.image }}
config_name: ${{ inputs.config_name }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
defaults:
run:
shell: bash
test:
needs: build
uses: ./.github/workflows/reusable-test.yml
with:
run_tests: ${{ !inputs.build_only }}
verify_voidstar: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }}
runs_on: ${{ inputs.runs_on }}
image: ${{ inputs.image }}
config_name: ${{ inputs.config_name }}
jobs:
build-and-test:
name: ${{ inputs.config_name }}
runs-on: ${{ fromJSON(inputs.runs_on) }}
container: ${{ inputs.image != '' && inputs.image || null }}
timeout-minutes: 60
env:
ENABLED_VOIDSTAR: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }}
ENABLED_COVERAGE: ${{ contains(inputs.cmake_args, '-Dcoverage=ON') }}
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@01b244d2718865d427b499822fbd3f15e7197fcc
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@99685816bb60a95a66852f212f382580e180df3a
with:
disable_ccache: false
- name: Print build environment
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
with:
subtract: ${{ inputs.nproc_subtract }}
- name: Setup Conan
uses: ./.github/actions/setup-conan
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_dir: ${{ inputs.build_dir }}
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ inputs.build_type }}
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
- name: Configure CMake
working-directory: ${{ inputs.build_dir }}
env:
BUILD_TYPE: ${{ inputs.build_type }}
CMAKE_ARGS: ${{ inputs.cmake_args }}
run: |
cmake \
-G '${{ runner.os == 'Windows' && 'Visual Studio 17 2022' || 'Ninja' }}' \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \
${CMAKE_ARGS} \
..
- name: Build the binary
working-directory: ${{ inputs.build_dir }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
BUILD_TYPE: ${{ inputs.build_type }}
CMAKE_TARGET: ${{ inputs.cmake_target }}
run: |
cmake \
--build . \
--config "${BUILD_TYPE}" \
--parallel "${BUILD_NPROC}" \
--target "${CMAKE_TARGET}"
- name: Upload rippled artifact (Linux)
if: ${{ github.repository_owner == 'XRPLF' && runner.os == 'Linux' }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
env:
BUILD_DIR: ${{ inputs.build_dir }}
with:
name: rippled-${{ inputs.config_name }}
path: ${{ env.BUILD_DIR }}/rippled
retention-days: 3
if-no-files-found: error
- name: Check linking (Linux)
if: ${{ runner.os == 'Linux' }}
working-directory: ${{ inputs.build_dir }}
run: |
ldd ./rippled
if [ "$(ldd ./rippled | grep -E '(libstdc\+\+|libgcc)' | wc -l)" -eq 0 ]; then
echo 'The binary is statically linked.'
else
echo 'The binary is dynamically linked.'
exit 1
fi
- name: Verify presence of instrumentation (Linux)
if: ${{ runner.os == 'Linux' && env.ENABLED_VOIDSTAR == 'true' }}
working-directory: ${{ inputs.build_dir }}
run: |
./rippled --version | grep libvoidstar
- name: Run the separate tests
if: ${{ !inputs.build_only }}
working-directory: ${{ inputs.build_dir }}
# Windows locks some of the build files while running tests, and parallel jobs can collide
env:
BUILD_TYPE: ${{ inputs.build_type }}
PARALLELISM: ${{ runner.os == 'Windows' && '1' || steps.nproc.outputs.nproc }}
run: |
ctest \
--output-on-failure \
-C "${BUILD_TYPE}" \
-j "${PARALLELISM}"
- name: Run the embedded tests
if: ${{ !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', inputs.build_dir, inputs.build_type) || inputs.build_dir }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
run: |
./rippled --unittest --unittest-jobs "${BUILD_NPROC}"
- name: Debug failure (Linux)
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
run: |
echo "IPv4 local port range:"
cat /proc/sys/net/ipv4/ip_local_port_range
echo "Netstat:"
netstat -an
- name: Prepare coverage report
if: ${{ !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
working-directory: ${{ inputs.build_dir }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
BUILD_TYPE: ${{ inputs.build_type }}
run: |
cmake \
--build . \
--config "${BUILD_TYPE}" \
--parallel "${BUILD_NPROC}" \
--target coverage
- name: Upload coverage report
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
disable_search: true
disable_telem: true
fail_ci_if_error: true
files: ${{ inputs.build_dir }}/coverage.xml
plugins: noop
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@@ -1,121 +0,0 @@
name: Build rippled
on:
workflow_call:
inputs:
build_dir:
description: "The directory where to build."
required: true
type: string
build_type:
description: 'The build type to use ("Debug", "Release").'
required: true
type: string
cmake_args:
description: "Additional arguments to pass to CMake."
required: true
type: string
cmake_target:
description: "The CMake target to build."
required: true
type: string
runs_on:
description: Runner to run the job on as a JSON string
required: true
type: string
image:
description: "The image to run in (leave empty to run natively)"
required: true
type: string
config_name:
description: "The name of the configuration."
required: true
type: string
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
required: true
defaults:
run:
shell: bash
jobs:
build:
name: Build ${{ inputs.config_name }}
runs-on: ${{ fromJSON(inputs.runs_on) }}
container: ${{ inputs.image != '' && inputs.image || null }}
steps:
- name: Cleanup workspace
if: ${{ runner.os == 'macOS' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@3f044c7478548e3c32ff68980eeb36ece02b364e
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@638e0dc11ea230f91bd26622fb542116bb5254d5
with:
disable_ccache: false
- name: Print build environment
uses: ./.github/actions/print-env
- name: Setup Conan
uses: ./.github/actions/setup-conan
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_dir: ${{ inputs.build_dir }}
build_type: ${{ inputs.build_type }}
- name: Configure CMake
shell: bash
working-directory: ${{ inputs.build_dir }}
env:
BUILD_TYPE: ${{ inputs.build_type }}
CMAKE_ARGS: ${{ inputs.cmake_args }}
run: |
cmake \
-G '${{ runner.os == 'Windows' && 'Visual Studio 17 2022' || 'Ninja' }}' \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} \
${{ env.CMAKE_ARGS }} \
..
- name: Build the binary
shell: bash
working-directory: ${{ inputs.build_dir }}
env:
BUILD_TYPE: ${{ inputs.build_type }}
CMAKE_TARGET: ${{ inputs.cmake_target }}
run: |
cmake \
--build . \
--config ${{ env.BUILD_TYPE }} \
--parallel $(nproc) \
--target ${{ env.CMAKE_TARGET }}
- name: Upload rippled artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: rippled-${{ inputs.config_name }}
path: ${{ inputs.build_dir }}/${{ runner.os == 'Windows' && inputs.build_type || '' }}/rippled${{ runner.os == 'Windows' && '.exe' || '' }}
retention-days: 3
if-no-files-found: error
- name: Upload coverage report
if: ${{ inputs.cmake_target == 'coverage' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
disable_search: true
disable_telem: true
fail_ci_if_error: true
files: ${{ inputs.build_dir }}/coverage.xml
plugins: noop
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@@ -1,72 +0,0 @@
# This workflow checks that the commit message follows our expected format.
name: Check commit message
# This workflow can only be triggered by other workflows.
on: workflow_call
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-single-commit
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
check:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
with:
fetch-depth: 0
- name: Check commit message
env:
MESSAGE: |
If you are reading this, then the commit message does not match our
expected format. See CONTRIBUTING.md for instructions.
It is further possible that you did not run the script in
`bin/git/squash-commits.sh` before clicking the 'Merge when ready'
button in GitHub's UI.
run: |
set -o pipefail
COMMIT_MSG='${{ github.event.head_commit.message }}'
echo "Commit message: ${COMMIT_MSG}"
# If the commit title contains a prefix then it must be one we permit.
COMMIT_TITLE=$(echo "${COMMIT_MSG}" | head -n1)
echo "Commit title: '${COMMIT_TITLE}'"
if [[ "${COMMIT_TITLE}" =~ ^[^:]+:[^:]+ ]]; then
if ! [[ "${COMMIT_TITLE}" =~ ^(build|chore|docs|fix|perf|refactor|test):[^:]+ ]]; then
echo "Commit title prefix is not permitted."
echo "${MESSAGE}"
exit 1
fi
fi
# The commit description may not only contain a list of commits, which
# is what happens if a PR containing multiple commits is squashed and
# merged by GitHub.
COMMIT_DESC=$(echo "${COMMIT_MSG}" | tail -n+3)
echo "Commit description: '${COMMIT_DESC}'"
while IFS=$'\n' read -r LINE; do
echo "Processing: ${LINE}"
# Check for lines starting with '-' or '*' followed by a space.
if ! [[ "${LINE}" =~ ^[[:space:]]*[-*][[:space:]]+.* ]]; then
exit 0
fi
# Check if the next line is empty.
IFS=$'\n' read -r LINE
if [ -n "${LINE}" ]; then
exit 0
fi
done <<< "${COMMIT_DESC}"
echo "Commit description is not permitted."
echo "${MESSAGE}"
exit 1

View File

@@ -1,62 +0,0 @@
# This workflow checks that all commits in the "master" branch are also in the
# "release" and "develop" branches, and that all commits in the "release" branch
# are also in the "develop" branch.
name: Check for missing commits
# This workflow can only be triggered by other workflows.
on: workflow_call
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-missing-commits
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
check:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
with:
fetch-depth: 0
- name: Check for missing commits
env:
MESSAGE: |
If you are reading this, then the commits indicated above are missing
from the "develop" and/or "release" branch. Do a reverse-merge as soon
as possible. See CONTRIBUTING.md for instructions.
run: |
set -o pipefail
# Branches are ordered by how "canonical" they are. Every commit in one
# branch should be in all the branches behind it.
order=(master release develop)
branches=()
for branch in "${order[@]}"; do
# Check that the branches exist so that this job will work on forked
# repos, which don't necessarily have master and release branches.
echo "Checking if ${branch} exists."
if git ls-remote --exit-code --heads origin \
refs/heads/${branch} > /dev/null; then
branches+=(origin/${branch})
fi
done
prior=()
for branch in "${branches[@]}"; do
if [[ ${#prior[@]} -ne 0 ]]; then
echo "Checking ${prior[@]} for commits missing from ${branch}."
git log --oneline --no-merges "${prior[@]}" \
^$branch | tee -a "missing-commits.txt"
echo
fi
prior+=("${branch}")
done
if [[ $(cat missing-commits.txt | wc -l) -ne 0 ]]; then
echo "${MESSAGE}"
exit 1
fi

View File

@@ -51,7 +51,7 @@ jobs:
run: |
echo 'Generating user and channel.'
echo "user=clio" >> "${GITHUB_OUTPUT}"
echo "channel=pr_${{ env.PR_NUMBER }}" >> "${GITHUB_OUTPUT}"
echo "channel=pr_${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
echo 'Extracting version.'
echo "version=$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')" >> "${GITHUB_OUTPUT}"
- name: Calculate conan reference
@@ -66,13 +66,13 @@ jobs:
- name: Log into Conan remote
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: conan remote login ${{ env.CONAN_REMOTE_NAME }} "${{ secrets.conan_remote_username }}" --password "${{ secrets.conan_remote_password }}"
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.conan_remote_username }}" --password "${{ secrets.conan_remote_password }}"
- name: Upload package
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: |
conan export --user=${{ steps.generate.outputs.user }} --channel=${{ steps.generate.outputs.channel }} .
conan upload --confirm --check --remote=${{ env.CONAN_REMOTE_NAME }} xrpl/${{ steps.conan_ref.outputs.conan_ref }}
conan upload --confirm --check --remote="${CONAN_REMOTE_NAME}" xrpl/${{ steps.conan_ref.outputs.conan_ref }}
outputs:
conan_ref: ${{ steps.conan_ref.outputs.conan_ref }}
@@ -88,4 +88,4 @@ jobs:
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[conan_ref]=${{ needs.upload.outputs.conan_ref }}" \
-F "client_payload[pr_url]=${{ env.PR_URL }}"
-F "client_payload[pr_url]=${PR_URL}"

View File

@@ -18,6 +18,10 @@ on:
description: "The generated strategy matrix."
value: ${{ jobs.generate-matrix.outputs.matrix }}
defaults:
run:
shell: bash
jobs:
generate-matrix:
runs-on: ubuntu-latest
@@ -38,4 +42,4 @@ jobs:
env:
GENERATE_CONFIG: ${{ inputs.os != '' && format('--config={0}.json', inputs.os) || '' }}
GENERATE_OPTION: ${{ inputs.strategy_matrix == 'all' && '--all' || '' }}
run: ./generate.py ${{ env.GENERATE_OPTION }} ${{ env.GENERATE_CONFIG }} >> "${GITHUB_OUTPUT}"
run: ./generate.py ${GENERATE_OPTION} ${GENERATE_CONFIG} >> "${GITHUB_OUTPUT}"

View File

@@ -1,69 +0,0 @@
name: Test rippled
on:
workflow_call:
inputs:
verify_voidstar:
description: "Whether to verify the presence of voidstar instrumentation."
required: true
type: boolean
run_tests:
description: "Whether to run unit tests"
required: true
type: boolean
runs_on:
description: Runner to run the job on as a JSON string
required: true
type: string
image:
description: "The image to run in (leave empty to run natively)"
required: true
type: string
config_name:
description: "The name of the configuration."
required: true
type: string
jobs:
test:
name: Test ${{ inputs.config_name }}
runs-on: ${{ fromJSON(inputs.runs_on) }}
container: ${{ inputs.image != '' && inputs.image || null }}
steps:
- name: Download rippled artifact
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: rippled-${{ inputs.config_name }}
- name: Make binary executable (Linux and macOS)
shell: bash
if: ${{ runner.os == 'Linux' || runner.os == 'macOS' }}
run: |
chmod +x ./rippled
- name: Check linking (Linux)
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
ldd ./rippled
if [ "$(ldd ./rippled | grep -E '(libstdc\+\+|libgcc)' | wc -l)" -eq 0 ]; then
echo 'The binary is statically linked.'
else
echo 'The binary is dynamically linked.'
exit 1
fi
- name: Verifying presence of instrumentation
if: ${{ inputs.verify_voidstar }}
shell: bash
run: |
./rippled --version | grep libvoidstar
- name: Test the binary
if: ${{ inputs.run_tests }}
shell: bash
run: |
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure

View File

@@ -34,11 +34,16 @@ on:
env:
CONAN_REMOTE_NAME: xrplf
CONAN_REMOTE_URL: https://conan.ripplex.io
NPROC_SUBTRACT: 2
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
# Generate the strategy matrix to be used by the following job.
generate-matrix:
@@ -57,16 +62,27 @@ jobs:
runs-on: ${{ matrix.architecture.runner }}
container: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || null }}
steps:
- name: Cleanup workspace
if: ${{ runner.os == 'macOS' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@3f044c7478548e3c32ff68980eeb36ece02b364e
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@01b244d2718865d427b499822fbd3f15e7197fcc
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@638e0dc11ea230f91bd26622fb542116bb5254d5
uses: XRPLF/actions/.github/actions/prepare-runner@99685816bb60a95a66852f212f382580e180df3a
with:
disable_ccache: false
- name: Print build environment
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
with:
subtract: ${{ env.NPROC_SUBTRACT }}
- name: Setup Conan
uses: ./.github/actions/setup-conan
with:
@@ -77,18 +93,19 @@ jobs:
uses: ./.github/actions/build-deps
with:
build_dir: .build
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ matrix.build_type }}
force_build: ${{ github.event_name == 'schedule' || github.event.inputs.force_source_build == 'true' }}
# The verbosity is set to "quiet" for Windows to avoid an excessive amount of logs, while it
# is set to "verbose" otherwise to provide more information during the build process.
verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
- name: Log into Conan remote
if: ${{ github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' }}
run: conan remote login ${{ env.CONAN_REMOTE_NAME }} "${{ secrets.CONAN_REMOTE_USERNAME }}" --password "${{ secrets.CONAN_REMOTE_PASSWORD }}"
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.CONAN_REMOTE_USERNAME }}" --password "${{ secrets.CONAN_REMOTE_PASSWORD }}"
- name: Upload Conan packages
if: ${{ github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' && github.event_name != 'schedule' }}
if: ${{ github.repository_owner == 'XRPLF' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') }}
env:
FORCE_OPTION: ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
run: conan upload "*" --remote='${{ env.CONAN_REMOTE_NAME }}' --confirm ${{ env.FORCE_OPTION }}
run: conan upload "*" --remote="${CONAN_REMOTE_NAME}" --confirm ${FORCE_OPTION}

View File

@@ -34,6 +34,5 @@ repos:
exclude: |
(?x)^(
external/.*|
.github/scripts/levelization/results/.*\.txt|
conan\.lock
.github/scripts/levelization/results/.*\.txt
)$

View File

@@ -495,18 +495,18 @@ A coverage report is created when the following steps are completed, in order:
1. `rippled` binary built with instrumentation data, enabled by the `coverage`
option mentioned above
2. completed run of unit tests, which populates coverage capture data
2. completed one or more run of the unit tests, which populates coverage capture data
3. completed run of the `gcovr` tool (which internally invokes either `gcov` or `llvm-cov`)
to assemble both instrumentation data and the coverage capture data into a coverage report
The above steps are automated into a single target `coverage`. The instrumented
The last step of the above is automated into a single target `coverage`. The instrumented
`rippled` binary can also be used for regular development or testing work, at
the cost of extra disk space utilization and a small performance hit
(to store coverage capture). In case of a spurious failure of unit tests, it is
possible to re-run the `coverage` target without rebuilding the `rippled` binary
(since it is simply a dependency of the coverage report target). It is also possible
to select only specific tests for the purpose of the coverage report, by setting
the `coverage_test` variable in `cmake`
(to store coverage capture data). Since `rippled` binary is simply a dependency of the
coverage report target, it is possible to re-run the `coverage` target without
rebuilding the `rippled` binary. Note, running of the unit tests before the `coverage`
target is left to the developer. Each such run will append to the coverage data
collected in the build directory.
The default coverage report format is `html-details`, but the user
can override it to any of the formats listed in `Builds/CMake/CodeCoverage.cmake`
@@ -515,11 +515,6 @@ to generate more than one format at a time by setting the `coverage_extra_args`
variable in `cmake`. The specific command line used to run the `gcovr` tool will be
displayed if the `CODE_COVERAGE_VERBOSE` variable is set.
By default, the code coverage tool runs parallel unit tests with `--unittest-jobs`
set to the number of available CPU cores. This may cause spurious test
errors on Apple. Developers can override the number of unit test jobs with
the `coverage_test_parallelism` variable in `cmake`.
Example use with some cmake variables set:
```
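# Illustrative sketch; the exact snippet is elided from this diff, so the
# paths and variable values below are assumptions, not the original example.
cmake -B build -DCMAKE_BUILD_TYPE=Debug -Dcoverage=ON \
      -Dcoverage_format=html-details \
      -Dcoverage_extra_args="--json coverage.json"
cmake --build build
# Run the instrumented unit tests first (left to the developer, per the note
# above), then assemble the report:
./build/rippled --unittest
cmake --build build --target coverage
```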

View File

@@ -1,130 +0,0 @@
#!/bin/bash
if [[ $# -ne 3 || "$1" == "--help" || "$1" = "-h" ]]
then
name=$( basename $0 )
cat <<- USAGE
Usage: $name pr "title" "description"
* All commits in the specified PR will be squashed and a new commit prepared
with the provided title and description as commit message.
* This script will not push the new commit. You will need to do so yourself
by force-pushing, since you will be rewriting history. You must be the
author of the PR or a maintainer of the repository in order to perform this
operation.
* The 'gh' CLI tool must be installed and authenticated.
* To write a multiline description, you can use "\$(cat <<EOF
line 1
line 2
EOF
)" to pass it as a single argument.
* If you get a '[rejected]' error when updating the target branch and then
locally merging the changes into the source branch, it is likely because the
source branch already exists on your machine (e.g. you ran this script
multiple times). In that case, you can delete the local source branch
(e.g. 'git branch -D [source]') and try again.
USAGE
exit 0
fi
pr="$1"
shift
title=$1
shift
description=$1
shift
set -e
echo "Checking workspace."
diff=$(git status --porcelain)
if [ -n "${diff}" ]; then
echo "Error: Workspace is not clean. Please commit or stash your changes."
exit 1
fi
echo "Checking out PR ${pr}."
gh pr checkout "${pr}"
echo "Getting the target branch of the PR."
target=$(gh pr view --json "baseRefName" --jq '.baseRefName')
if [ -z "${target}" ]; then
echo "Error: Could not determine target branch of PR ${pr}."
exit 1
fi
echo "Getting the source branch of the PR."
source=$(git branch --show-current)
echo "Ensuring the PR source branch '${source}' is up to date with the target branch '${target}'."
git checkout ${target}
git pull --rebase
gh pr checkout "${pr}"
git merge ${target} --no-edit
# TODO: check for conflicts and abort if there are any.
echo "Squashing commits in the PR."
git reset --soft $(git merge-base ${target} HEAD)
git commit -S -m "${title}" -m "${description}"
# We assume that external contributors will create a fork in their personal
# repository, i.e. the repo owner matches the currently logged in user. In that
# case they can push directly to their branch. If the owner is 'XRPLF', we also
# push directly to the branch, as we assume that the user running this script
# will be a maintainer.
echo "Gathering user details."
owner=$(gh pr view --json "headRepositoryOwner" --jq '.headRepositoryOwner.login')
user=$(gh api user --jq '.login')
echo "The PR is owned by '${owner}'. The current user is '${user}'."
if [ "${owner}" = 'XRPLF' ] || [ "${owner}" = "${user}" ]; then
remote="$(git remote -v | grep ":${owner}/rippled.git (push)" | head -1 | cut -f1)"
else
remote="${owner}"
fi
if [ "${remote}" = "origin" ]; then
cat << EOF
----------------------------------------------------------------------
This script will not push. Verify everything is correct, then force
push to the source branch using the following commands:
gh pr edit ${pr} --add-label 'MergeQueueCI'
git push --force-with-lease origin ${source}
The first command adds a label to the PR to skip running CI on the new
commit. As we are using a merge queue, CI will be run when the PR is added to
the queue, and must pass before the changes are merged.
Remember to navigate back to your previous branch after pushing. You
may also want to delete the branch after the commit has been pushed.
git branch -D ${source}
----------------------------------------------------------------------
EOF
else
cat << EOF
----------------------------------------------------------------------
This script will not push. Verify everything is correct, then force
push to the fork using the following commands:
gh pr edit ${pr} --add-label 'MergeQueueCI'
git remote add ${remote} git@github.com:${remote}/rippled.git
git fetch ${remote}
git push --force-with-lease ${remote} ${source}
git remote remove ${remote}
The first command adds a label to the PR to skip running CI on the new
commit. As we are using a merge queue, CI will be run when the PR is added to
the queue, and must pass before the changes are merged.
Remember to navigate back to your previous branch after pushing. You
may also want to delete the branch after the commit has been pushed.
git branch -D ${source}
----------------------------------------------------------------------
EOF
fi
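For reference, a hypothetical invocation of this helper before its removal (the script's filename is not shown in this diff, so `squash-pr.sh` and the PR number are assumptions):

```
# Hypothetical example; the script name and PR number are illustrative only.
./squash-pr.sh 1234 "fix: Correct fee rounding" "$(cat <<EOF
Squashes the review fixups into a single commit,
as described in the usage text above.
EOF
)"
```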

View File

@@ -1,21 +1,3 @@
macro(group_sources_in source_dir curdir)
file(GLOB children RELATIVE ${source_dir}/${curdir}
${source_dir}/${curdir}/*)
foreach (child ${children})
if (IS_DIRECTORY ${source_dir}/${curdir}/${child})
group_sources_in(${source_dir} ${curdir}/${child})
else()
string(REPLACE "/" "\\" groupname ${curdir})
source_group(${groupname} FILES
${source_dir}/${curdir}/${child})
endif()
endforeach()
endmacro()
macro(group_sources curdir)
group_sources_in(${PROJECT_SOURCE_DIR} ${curdir})
endmacro()
macro (exclude_from_default target_)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)

View File

@@ -109,6 +109,9 @@
# - add a new function add_code_coverage_to_target
# - remove some unused code
#
# 2025-11-11, Bronek Kozicki
# - make EXECUTABLE and EXECUTABLE_ARGS optional
#
# USAGE:
#
# 1. Copy this file into your cmake modules path.
@@ -317,6 +320,10 @@ function(setup_target_for_coverage_gcovr)
set(Coverage_FORMAT xml)
endif()
if(NOT DEFINED Coverage_EXECUTABLE AND DEFINED Coverage_EXECUTABLE_ARGS)
message(FATAL_ERROR "EXECUTABLE_ARGS must not be set if EXECUTABLE is not set")
endif()
if("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...")
else()
@@ -398,17 +405,18 @@ function(setup_target_for_coverage_gcovr)
endforeach()
# Set up commands which will be run to generate coverage data
# Run tests
set(GCOVR_EXEC_TESTS_CMD
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
)
# If EXECUTABLE is not set, the user is expected to run the tests manually
# before running the coverage target NAME
if(DEFINED Coverage_EXECUTABLE)
set(GCOVR_EXEC_TESTS_CMD
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
)
endif()
# Create folder
if(DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD
${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
else()
set(GCOVR_FOLDER_CMD echo) # dummy
endif()
# Running gcovr
@@ -425,11 +433,13 @@ function(setup_target_for_coverage_gcovr)
if(CODE_COVERAGE_VERBOSE)
message(STATUS "Executed command report")
message(STATUS "Command to run tests: ")
string(REPLACE ";" " " GCOVR_EXEC_TESTS_CMD_SPACED "${GCOVR_EXEC_TESTS_CMD}")
message(STATUS "${GCOVR_EXEC_TESTS_CMD_SPACED}")
if(NOT "${GCOVR_EXEC_TESTS_CMD}" STREQUAL "")
message(STATUS "Command to run tests: ")
string(REPLACE ";" " " GCOVR_EXEC_TESTS_CMD_SPACED "${GCOVR_EXEC_TESTS_CMD}")
message(STATUS "${GCOVR_EXEC_TESTS_CMD_SPACED}")
endif()
if(NOT GCOVR_FOLDER_CMD STREQUAL "echo")
if(NOT "${GCOVR_FOLDER_CMD}" STREQUAL "")
message(STATUS "Command to create a folder: ")
string(REPLACE ";" " " GCOVR_FOLDER_CMD_SPACED "${GCOVR_FOLDER_CMD}")
message(STATUS "${GCOVR_FOLDER_CMD_SPACED}")

View File

@@ -12,7 +12,7 @@ if (static OR MSVC)
else ()
set (Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency (Boost 1.70
find_dependency (Boost
COMPONENTS
chrono
container
@@ -52,5 +52,3 @@ if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif ()
include ("${CMAKE_CURRENT_LIST_DIR}/RippleTargets.cmake")

View File

@@ -16,16 +16,13 @@ set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions (common
INTERFACE
$<$<CONFIG:Debug>:DEBUG _DEBUG>
#[===[
NOTE: CMAKE release builds already have NDEBUG defined, so no need to add it
explicitly except for the special case of (profile ON) and (assert OFF).
Presumably this is because we don't want profile builds asserting unless
asserts were specifically requested.
]===]
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED
)
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>)
# ^^^^ NOTE: CMAKE release builds already have NDEBUG
# defined, so no need to add it explicitly except for
# this special case of (profile ON) and (assert OFF)
# -- presumably this is because we don't want profile
# builds asserting unless asserts were specifically
# requested
if (MSVC)
# remove existing exception flag since we set it to -EHa

View File

@@ -72,10 +72,7 @@ include(target_link_modules)
# Level 01
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC
xrpl.imports.main
xrpl.libpb
)
target_link_libraries(xrpl.libxrpl.beast PUBLIC xrpl.imports.main)
# Level 02
add_module(xrpl basics)

View File

@@ -11,6 +11,9 @@ if(CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
return()
endif()
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
include(CodeCoverage)
# The instructions for these commands come from the `CodeCoverage` module,
@@ -26,15 +29,13 @@ list(APPEND GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches -s
-j ${coverage_test_parallelism})
-j ${PROCESSOR_COUNT})
setup_target_for_coverage_gcovr(
NAME coverage
FORMAT ${coverage_format}
EXECUTABLE rippled
EXECUTABLE_ARGS --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --unittest-jobs ${coverage_test_parallelism} --quiet --unittest-log
EXCLUDE "src/test" "src/tests" "include/xrpl/beast/test" "include/xrpl/beast/unit_test" "${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES rippled
DEPENDENCIES rippled xrpl.tests
)
add_code_coverage_to_target(opts INTERFACE)

View File

@@ -38,7 +38,7 @@ install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpl \
\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}/ripple)
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}/ripple)
")
install (EXPORT RippleExports
@@ -72,7 +72,7 @@ if (is_root_project AND TARGET rippled)
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(rippled${suffix} \
\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/xrpld${suffix})
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/xrpld${suffix})
")
endif ()

View File

@@ -1,5 +1,5 @@
#[===================================================================[
convenience variables and sanity checks
sanity checks
#]===================================================================]
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
@@ -16,39 +16,19 @@ if (NOT is_multiconfig)
endif ()
endif ()
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
set (is_root_project OFF)
else ()
set (is_root_project ON)
endif ()
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
set (is_clang TRUE)
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8.0)
message (FATAL_ERROR "This project requires clang 8 or later")
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
message (FATAL_ERROR "This project requires clang 16 or later")
endif ()
# TODO min AppleClang version check ?
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set (is_gcc TRUE)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8.0)
message (FATAL_ERROR "This project requires GCC 8 or later")
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message (FATAL_ERROR "This project requires GCC 12 or later")
endif ()
endif ()
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set (is_linux TRUE)
else ()
set (is_linux FALSE)
endif ()
if ("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set (is_ci TRUE)
else ()
set (is_ci FALSE)
endif ()
# check for in-source build and fail
if ("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message (FATAL_ERROR "Builds (in-source) are not allowed in "

View File

@@ -1,10 +1,25 @@
#[===================================================================[
declare user options/settings
declare options and variables
#]===================================================================]
include(ProcessorCount)
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
set (is_linux TRUE)
else()
set(is_linux FALSE)
endif()
ProcessorCount(PROCESSOR_COUNT)
if("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set(is_ci TRUE)
else()
set(is_ci FALSE)
endif()
get_directory_property(has_parent PARENT_DIRECTORY)
if(has_parent)
set(is_root_project OFF)
else()
set(is_root_project ON)
endif()
option(assert "Enables asserts, even in release builds" OFF)
@@ -25,29 +40,28 @@ if(unity)
endif()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif()
if(is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif()
if(is_gcc OR is_clang)
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
option(coverage "Generates coverage info." OFF)
option(profile "Add profiling flags" OFF)
set(coverage_test_parallelism "${PROCESSOR_COUNT}" CACHE STRING
"Unit tests parallelism for the purpose of coverage report.")
set(coverage_format "html-details" CACHE STRING
"Output format of the coverage report.")
set(coverage_extra_args "" CACHE STRING
"Additional arguments to pass to gcovr.")
set(coverage_test "" CACHE STRING
"On gcc & clang, the specific unit test(s) to run for coverage. Default is all tests.")
if(coverage_test AND NOT coverage)
set(coverage ON CACHE BOOL "gcc/clang only" FORCE)
endif()
option(wextra "compile with extra gcc/clang warnings enabled" ON)
else()
set(profile OFF CACHE BOOL "gcc/clang only" FORCE)
set(coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif()
if(is_linux)
option(BUILD_SHARED_LIBS "build shared ripple libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
@@ -64,11 +78,13 @@ else()
set(use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
set(use_mold OFF CACHE BOOL "mold linker, linux only" FORCE)
endif()
if(is_clang)
option(use_lld "enables detection of lld linker" ON)
else()
set(use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif()
option(jemalloc "Enables jemalloc for heap profiling" OFF)
option(werr "treat warnings as errors" OFF)
option(local_protobuf
@@ -102,38 +118,26 @@ if(san)
message(FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif()
endif()
set(container_label "" CACHE STRING "tag to use for package building containers")
option(packages_only
"ONLY generate package building targets. This is special use-case and almost \
certainly not what you want. Use with caution as you won't be able to build \
any compiled targets locally." OFF)
option(have_package_container
"Sometimes you already have the tagged container you want to use for package \
building and you don't want docker to rebuild it. This flag will detach the \
dependency of the package build from the container build. It's an advanced \
use case and most likely you should not be touching this flag." OFF)
# the remaining options are obscure and rarely used
option(beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF)
option(single_io_service_thread
"Restricts the number of threads calling io_context::run to one. \
"Restricts the number of threads calling io_service::run to one. \
This can be useful when debugging."
OFF)
option(boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF)
option(beast_hashers
"Use local implementations for sha/ripemd hashes (experimental, not recommended)"
OFF)
if(WIN32)
option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else()
set(beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif()
if(coverage)
message(STATUS "coverage build requested - forcing Debug build")
set(CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)

View File

@@ -1,4 +1,4 @@
option (validator_keys "Enables building of validator-keys tool as a separate target (imported via FetchContent)" OFF)
option (validator_keys "Enables building of validator-keys-tool as a separate target (imported via FetchContent)" OFF)
if (validator_keys)
git_branch (current_branch)
@@ -6,15 +6,17 @@ if (validator_keys)
if (NOT (current_branch STREQUAL "release"))
set (current_branch "master")
endif ()
message (STATUS "Tracking ValidatorKeys branch: ${current_branch}")
message (STATUS "tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare (
validator_keys
validator_keys_src
GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}"
)
FetchContent_MakeAvailable(validator_keys)
set_target_properties(validator-keys PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}")
install(TARGETS validator-keys RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
FetchContent_GetProperties (validator_keys_src)
if (NOT validator_keys_src_POPULATED)
message (STATUS "Pausing to download ValidatorKeys...")
FetchContent_Populate (validator_keys_src)
endif ()
add_subdirectory (${validator_keys_src_SOURCE_DIR} ${CMAKE_BINARY_DIR}/validator-keys)
endif ()

View File

@@ -24,7 +24,6 @@ target_link_libraries(ripple_boost
Boost::date_time
Boost::filesystem
Boost::json
Boost::process
Boost::program_options
Boost::regex
Boost::system

View File

@@ -7,7 +7,7 @@ function(xrpl_add_test name)
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
add_executable(${target} EXCLUDE_FROM_ALL ${ARGN} ${sources})
add_executable(${target} ${ARGN} ${sources})
isolate_headers(
${target}
@@ -22,20 +22,4 @@ function(xrpl_add_test name)
UNITY_BUILD_BATCH_SIZE 0) # Adjust as needed
add_test(NAME ${target} COMMAND ${target})
set_tests_properties(
${target} PROPERTIES
FIXTURES_REQUIRED ${target}_fixture
)
add_test(
NAME ${target}.build
COMMAND
${CMAKE_COMMAND}
--build ${CMAKE_BINARY_DIR}
--config $<CONFIG>
--target ${target}
)
set_tests_properties(${target}.build PROPERTIES
FIXTURES_SETUP ${target}_fixture
)
endfunction()

View File

@@ -9,7 +9,7 @@
"rocksdb/10.0.1#85537f46e538974d67da0c3977de48ac%1756234304.347",
"re2/20230301#dfd6e2bf050eb90ddd8729cfb4c844a4%1756234257.976",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1756234251.614",
"openssl/3.5.4#a1d5835cc6ed5c5b8f3cd5b9b5d24205%1759746684.671",
"openssl/1.1.1w#a8f0792d7c5121b954578a7149d23e03%1756223730.729",
"nudb/2.0.9#c62cfd501e57055a7e0d8ee3d5e5427d%1756234237.107",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1756234228.999",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1756223727.64",
@@ -21,7 +21,7 @@
"date/3.0.4#f74bbba5a08fa388256688743136cb6f%1756234217.493",
"c-ares/1.34.5#b78b91e7cfb1f11ce777a285bbf169c6%1756234217.915",
"bzip2/1.0.8#00b4a4658791c1f06914e087f0e792f5%1756234261.716",
"boost/1.88.0#8852c0b72ce8271fb8ff7c53456d4983%1756223752.326",
"boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368%1754325043.336",
"abseil/20230802.1#f0f91485b111dc9837a68972cb19ca7b%1756234220.907"
],
"build_requires": [
@@ -46,11 +46,11 @@
"lz4/1.10.0"
],
"boost/1.83.0": [
"boost/1.88.0"
"boost/1.83.0"
],
"sqlite3/3.44.2": [
"sqlite3/3.49.1"
]
},
"config_requires": []
}
}

View File

@@ -1,6 +1,5 @@
# Global configuration for Conan. This is used to set the number of parallel
# downloads, uploads, and build jobs.
# downloads and uploads.
core:non_interactive=True
core.download:parallel={{ os.cpu_count() }}
core.upload:parallel={{ os.cpu_count() }}
tools.build:jobs={{ os.cpu_count() - 1 }}

View File

@@ -21,11 +21,14 @@ compiler.libcxx={{detect_api.detect_libcxx(compiler, version, compiler_exe)}}
[conf]
{% if compiler == "clang" and compiler_version >= 19 %}
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
grpc/1.50.1:tools.build:cxxflags+=['-Wno-missing-template-arg-list-after-template-kw']
{% endif %}
{% if compiler == "apple-clang" and compiler_version >= 17 %}
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
grpc/1.50.1:tools.build:cxxflags+=['-Wno-missing-template-arg-list-after-template-kw']
{% endif %}
{% if compiler == "clang" and compiler_version == 16 %}
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
{% endif %}
{% if compiler == "gcc" and compiler_version < 13 %}
tools.build:cxxflags=['-Wno-restrict']
tools.build:cxxflags+=['-Wno-restrict']
{% endif %}

View File

@@ -27,7 +27,7 @@ class Xrpl(ConanFile):
'grpc/1.50.1',
'libarchive/3.8.1',
'nudb/2.0.9',
'openssl/3.5.4',
'openssl/1.1.1w',
'soci/4.0.3',
'zlib/1.3.1',
]
@@ -100,13 +100,11 @@ class Xrpl(ConanFile):
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
if self.settings.compiler in ['clang', 'gcc']:
self.options['boost'].without_cobalt = True
def requirements(self):
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = {'transitive_headers': True} if conan_version.split('.')[0] == '2' else {}
self.requires('boost/1.88.0', force=True, **transitive_headers_opt)
self.requires('boost/1.83.0', force=True, **transitive_headers_opt)
self.requires('date/3.0.4', **transitive_headers_opt)
self.requires('lz4/1.10.0', force=True)
self.requires('protobuf/3.21.12', force=True)
@@ -177,7 +175,6 @@ class Xrpl(ConanFile):
'boost::filesystem',
'boost::json',
'boost::program_options',
'boost::process',
'boost::regex',
'boost::system',
'boost::thread',

View File

@@ -541,7 +541,7 @@ SECP256K1_API int secp256k1_ecdsa_signature_serialize_compact(
/** Verify an ECDSA signature.
*
* Returns: 1: correct signature
* 0: incorrect or unparseable signature
* 0: incorrect or unparsable signature
* Args: ctx: pointer to a context object
* In: sig: the signature being verified.
* msghash32: the 32-byte message hash being verified.

View File

@@ -32,6 +32,15 @@ class Number;
std::string
to_string(Number const& amount);
template <typename T>
constexpr bool
isPowerOfTen(T value)
{
while (value >= 10 && value % 10 == 0)
value /= 10;
return value == 1;
}
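// Illustrative compile-time checks (not part of the original header): the
// helper above accepts exact powers of ten only.
static_assert(isPowerOfTen(1) && isPowerOfTen(10) && isPowerOfTen(1'000'000));
static_assert(!isPowerOfTen(0) && !isPowerOfTen(7) && !isPowerOfTen(120));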
class Number
{
using rep = std::int64_t;
@@ -41,7 +50,9 @@ class Number
public:
// The range for the mantissa when normalized
constexpr static std::int64_t minMantissa = 1'000'000'000'000'000LL;
constexpr static std::int64_t maxMantissa = 9'999'999'999'999'999LL;
static_assert(isPowerOfTen(minMantissa));
constexpr static std::int64_t maxMantissa = minMantissa * 10 - 1;
static_assert(maxMantissa == 9'999'999'999'999'999LL);
// The range for the exponent when normalized
constexpr static int minExponent = -32768;
@@ -151,22 +162,7 @@ public:
}
Number
truncate() const noexcept
{
if (exponent_ >= 0 || mantissa_ == 0)
return *this;
Number ret = *this;
while (ret.exponent_ < 0 && ret.mantissa_ != 0)
{
ret.exponent_ += 1;
ret.mantissa_ /= rep(10);
}
// We are guaranteed that normalize() will never throw an exception
// because exponent is either negative or zero at this point.
ret.normalize();
return ret;
}
truncate() const noexcept;
friend constexpr bool
operator>(Number const& x, Number const& y) noexcept
@@ -211,6 +207,8 @@ private:
class Guard;
};
constexpr static Number numZero{};
inline constexpr Number::Number(rep mantissa, int exponent, unchecked) noexcept
: mantissa_{mantissa}, exponent_{exponent}
{

View File

@@ -23,7 +23,7 @@
#include <xrpl/basics/Resolver.h>
#include <xrpl/beast/utility/Journal.h>
#include <boost/asio/io_context.hpp>
#include <boost/asio/io_service.hpp>
namespace ripple {
@@ -33,7 +33,7 @@ public:
explicit ResolverAsio() = default;
static std::unique_ptr<ResolverAsio>
New(boost::asio::io_context&, beast::Journal);
New(boost::asio::io_service&, beast::Journal);
};
} // namespace ripple

View File

@@ -176,7 +176,7 @@ public:
@param count the number of items the slab allocator can allocate; note
that a count of 0 is valid and means that the allocator
is, effectively, disabled. This can be very useful in some
contexts (e.g. when mimimal memory usage is needed) and
contexts (e.g. when minimal memory usage is needed) and
allows for graceful failure.
*/
constexpr explicit SlabAllocator(

View File

@@ -565,7 +565,7 @@ operator<=>(base_uint<Bits, Tag> const& lhs, base_uint<Bits, Tag> const& rhs)
// This comparison might seem wrong on a casual inspection because it
// compares data internally stored as std::uint32_t byte-by-byte. But
// note that the underlying data is stored in big endian, even if the
// plaform is little endian. This makes the comparison correct.
// platform is little endian. This makes the comparison correct.
//
// FIXME: use std::lexicographical_compare_three_way once support is
// added to MacOS.

View File

@@ -28,7 +28,7 @@ namespace ripple {
/*
* MSVC 2019 version 16.9.0 added [[nodiscard]] to the std comparison
* operator() functions. boost::bimap checks that the comparitor is a
* operator() functions. boost::bimap checks that the comparator is a
* BinaryFunction, in part by calling the function and ignoring the value.
* These two things don't play well together. These wrapper classes simply
* strip [[nodiscard]] from operator() for use in boost::bimap.

View File

@@ -23,8 +23,7 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <boost/asio/basic_waitable_timer.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/io_service.hpp>
#include <chrono>
#include <condition_variable>
@@ -33,7 +32,7 @@
namespace beast {
/** Measures handler latency on an io_context queue. */
/** Measures handler latency on an io_service queue. */
template <class Clock>
class io_latency_probe
{
@@ -45,12 +44,12 @@ private:
std::condition_variable_any m_cond;
std::size_t m_count;
duration const m_period;
boost::asio::io_context& m_ios;
boost::asio::io_service& m_ios;
boost::asio::basic_waitable_timer<std::chrono::steady_clock> m_timer;
bool m_cancel;
public:
io_latency_probe(duration const& period, boost::asio::io_context& ios)
io_latency_probe(duration const& period, boost::asio::io_service& ios)
: m_count(1)
, m_period(period)
, m_ios(ios)
@@ -65,16 +64,16 @@ public:
cancel(lock, true);
}
/** Return the io_context associated with the latency probe. */
/** Return the io_service associated with the latency probe. */
/** @{ */
boost::asio::io_context&
get_io_context()
boost::asio::io_service&
get_io_service()
{
return m_ios;
}
boost::asio::io_context const&
get_io_context() const
boost::asio::io_service const&
get_io_service() const
{
return m_ios;
}
@@ -110,10 +109,8 @@ public:
std::lock_guard lock(m_mutex);
if (m_cancel)
throw std::logic_error("io_latency_probe is canceled");
boost::asio::post(
m_ios,
sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), false, this));
m_ios.post(sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), false, this));
}
/** Initiate continuous i/o latency sampling.
@@ -127,10 +124,8 @@ public:
std::lock_guard lock(m_mutex);
if (m_cancel)
throw std::logic_error("io_latency_probe is canceled");
boost::asio::post(
m_ios,
sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), true, this));
m_ios.post(sample_op<Handler>(
std::forward<Handler>(handler), Clock::now(), true, this));
}
private:
@@ -241,13 +236,12 @@ private:
// The latency is too high to maintain the desired
// period so don't bother with a timer.
//
boost::asio::post(
m_probe->m_ios,
m_probe->m_ios.post(
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
else
{
m_probe->m_timer.expires_after(when - now);
m_probe->m_timer.expires_from_now(when - now);
m_probe->m_timer.async_wait(
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
@@ -260,8 +254,7 @@ private:
if (!m_probe)
return;
typename Clock::time_point const now(Clock::now());
boost::asio::post(
m_probe->m_ios,
m_probe->m_ios.post(
sample_op<Handler>(m_handler, now, m_repeat, m_probe));
}
};

View File

@@ -8,11 +8,9 @@
#ifndef BEAST_TEST_YIELD_TO_HPP
#define BEAST_TEST_YIELD_TO_HPP
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/io_service.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/optional.hpp>
#include <boost/thread/csbl/memory/allocator_arg.hpp>
#include <condition_variable>
#include <mutex>
@@ -31,12 +29,10 @@ namespace test {
class enable_yield_to
{
protected:
boost::asio::io_context ios_;
boost::asio::io_service ios_;
private:
boost::optional<boost::asio::executor_work_guard<
boost::asio::io_context::executor_type>>
work_;
boost::optional<boost::asio::io_service::work> work_;
std::vector<std::thread> threads_;
std::mutex m_;
std::condition_variable cv_;
@@ -46,8 +42,7 @@ public:
/// The type of yield context passed to functions.
using yield_context = boost::asio::yield_context;
explicit enable_yield_to(std::size_t concurrency = 1)
: work_(boost::asio::make_work_guard(ios_))
explicit enable_yield_to(std::size_t concurrency = 1) : work_(ios_)
{
threads_.reserve(concurrency);
while (concurrency--)
@@ -61,9 +56,9 @@ public:
t.join();
}
/// Return the `io_context` associated with the object
boost::asio::io_context&
get_io_context()
/// Return the `io_service` associated with the object
boost::asio::io_service&
get_io_service()
{
return ios_;
}
@@ -116,18 +111,13 @@ enable_yield_to::spawn(F0&& f, FN&&... fn)
{
boost::asio::spawn(
ios_,
boost::allocator_arg,
boost::context::fixedsize_stack(2 * 1024 * 1024),
[&](yield_context yield) {
f(yield);
std::lock_guard lock{m_};
if (--running_ == 0)
cv_.notify_all();
},
[](std::exception_ptr e) {
if (e)
std::rethrow_exception(e);
});
boost::coroutines::attributes(2 * 1024 * 1024));
spawn(fn...);
}

View File

@@ -42,7 +42,7 @@ public:
The argument string is available to suites and
allows for customization of the test. Each suite
defines its own syntax for the argumnet string.
defines its own syntax for the argument string.
The same argument is passed to all suites.
*/
void

View File

@@ -32,7 +32,7 @@ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// The duplication is because Visual Studio 2019 cannot compile that header
// even with the option -Zc:__cplusplus added.
#define ALWAYS(cond, message, ...) assert((message) && (cond))
#define ALWAYS_OR_UNREACHABLE(cond, message, ...) assert((message) && (cond))
#define ALWAYS_OR_UNREACHABLE(cond, message) assert((message) && (cond))
#define SOMETIMES(cond, message, ...)
#define REACHABLE(message, ...)
#define UNREACHABLE(message, ...) assert((message) && false)

View File

@@ -217,7 +217,7 @@ Reader::parse(Value& root, BufferSequence const& bs)
std::string s;
s.reserve(buffer_size(bs));
for (auto const& b : bs)
s.append(static_cast<char const*>(b.data()), buffer_size(b));
s.append(buffer_cast<char const*>(b), buffer_size(b));
return parse(s, root);
}

View File

@@ -24,6 +24,7 @@
#include <xrpl/json/json_forwards.h>
#include <cstring>
#include <limits>
#include <map>
#include <string>
#include <vector>
@@ -158,9 +159,9 @@ public:
using ArrayIndex = UInt;
static Value const null;
static Int const minInt;
static Int const maxInt;
static UInt const maxUInt;
static constexpr Int minInt = std::numeric_limits<Int>::min();
static constexpr Int maxInt = std::numeric_limits<Int>::max();
static constexpr UInt maxUInt = std::numeric_limits<UInt>::max();
private:
class CZString
@@ -263,6 +264,10 @@ public:
bool
asBool() const;
/** Return the correct absolute value of an int or unsigned int value. */
UInt
asAbsUInt() const;
// TODO: What is the "empty()" method this docstring mentions?
/** isNull() tests to see if this field is null. Don't use this method to
test for emptiness: use empty(). */
@@ -395,6 +400,9 @@ public:
/// Return true if the object has a member named key.
bool
isMember(std::string const& key) const;
/// Return true if the object has a member named key.
bool
isMember(StaticString const& key) const;
/// \brief Return a list of the member names.
///

View File

@@ -387,6 +387,45 @@ public:
emptyDirDelete(Keylet const& directory);
};
namespace directory {
/** Helper functions for managing low-level directory operations.
These are not part of the ApplyView interface.
Don't use them unless you really, really know what you're doing.
Instead use dirAdd, dirInsert, etc.
*/
std::uint64_t
createRoot(
ApplyView& view,
Keylet const& directory,
uint256 const& key,
std::function<void(std::shared_ptr<SLE> const&)> const& describe);
auto
findPreviousPage(ApplyView& view, Keylet const& directory, SLE::ref start);
std::uint64_t
insertKey(
ApplyView& view,
SLE::ref node,
std::uint64_t page,
bool preserveOrder,
STVector256& indexes,
uint256 const& key);
std::optional<std::uint64_t>
insertPage(
ApplyView& view,
std::uint64_t page,
SLE::pointer node,
std::uint64_t nextPage,
SLE::ref next,
uint256 const& key,
Keylet const& directory,
std::function<void(std::shared_ptr<SLE> const&)> const& describe);
} // namespace directory
} // namespace ripple
#endif

View File

@@ -24,6 +24,7 @@
#include <xrpl/ledger/ApplyView.h>
#include <xrpl/ledger/OpenView.h>
#include <xrpl/ledger/ReadView.h>
#include <xrpl/protocol/Asset.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/MPTIssue.h>
#include <xrpl/protocol/Protocol.h>
@@ -242,6 +243,80 @@ isDeepFrozen(
Currency const& currency,
AccountID const& issuer);
[[nodiscard]] inline bool
isDeepFrozen(
ReadView const& view,
AccountID const& account,
Issue const& issue,
int = 0 /*ignored*/)
{
return isDeepFrozen(view, account, issue.currency, issue.account);
}
[[nodiscard]] inline bool
isDeepFrozen(
ReadView const& view,
AccountID const& account,
MPTIssue const& mptIssue,
int depth = 0)
{
// Unlike IOUs, frozen / locked MPTs are not allowed to send or receive
// funds, so checking "deep frozen" is the same as checking "frozen".
return isFrozen(view, account, mptIssue, depth);
}
/**
* isFrozen check is recursive for MPT shares in a vault, descending to
* assets in the vault, up to maxAssetCheckDepth recursion depth. This is
* purely defensive, as we currently do not allow such vaults to be created.
*/
[[nodiscard]] inline bool
isDeepFrozen(
ReadView const& view,
AccountID const& account,
Asset const& asset,
int depth = 0)
{
return std::visit(
[&](auto const& issue) {
return isDeepFrozen(view, account, issue, depth);
},
asset.value());
}
[[nodiscard]] inline TER
checkDeepFrozen(
ReadView const& view,
AccountID const& account,
Issue const& issue)
{
return isDeepFrozen(view, account, issue) ? (TER)tecFROZEN
: (TER)tesSUCCESS;
}
[[nodiscard]] inline TER
checkDeepFrozen(
ReadView const& view,
AccountID const& account,
MPTIssue const& mptIssue)
{
return isDeepFrozen(view, account, mptIssue) ? (TER)tecLOCKED
: (TER)tesSUCCESS;
}
[[nodiscard]] inline TER
checkDeepFrozen(
ReadView const& view,
AccountID const& account,
Asset const& asset)
{
return std::visit(
[&](auto const& issue) {
return checkDeepFrozen(view, account, issue);
},
asset.value());
}
[[nodiscard]] bool
isLPTokenFrozen(
ReadView const& view,
@@ -287,6 +362,49 @@ accountHolds(
AuthHandling zeroIfUnauthorized,
beast::Journal j);
// Returns the total amount an account can spend.
//
// These functions use accountHolds, but unlike accountHolds:
// * The account can go into debt.
// * If the account is the asset issuer the only limit is defined by the asset /
// issuance.
//
// <-- saAmount: amount of currency held by account. May be negative.
[[nodiscard]] STAmount
accountSpendable(
ReadView const& view,
AccountID const& account,
Currency const& currency,
AccountID const& issuer,
FreezeHandling zeroIfFrozen,
beast::Journal j);
[[nodiscard]] STAmount
accountSpendable(
ReadView const& view,
AccountID const& account,
Issue const& issue,
FreezeHandling zeroIfFrozen,
beast::Journal j);
[[nodiscard]] STAmount
accountSpendable(
ReadView const& view,
AccountID const& account,
MPTIssue const& mptIssue,
FreezeHandling zeroIfFrozen,
AuthHandling zeroIfUnauthorized,
beast::Journal j);
[[nodiscard]] STAmount
accountSpendable(
ReadView const& view,
AccountID const& account,
Asset const& asset,
FreezeHandling zeroIfFrozen,
AuthHandling zeroIfUnauthorized,
beast::Journal j);
// Returns the amount an account can spend of the currency type saDefault, or
// returns saDefault if this account is the issuer of the currency in
// question. Should be used in favor of accountHolds when questioning how much
@@ -533,7 +651,11 @@ dirNext(
describeOwnerDir(AccountID const& account);
[[nodiscard]] TER
dirLink(ApplyView& view, AccountID const& owner, std::shared_ptr<SLE>& object);
dirLink(
ApplyView& view,
AccountID const& owner,
std::shared_ptr<SLE>& object,
SF_UINT64 const& node = sfOwnerNode);
AccountID
pseudoAccountAddress(ReadView const& view, uint256 const& pseudoOwnerKey);
@@ -552,14 +674,17 @@ createPseudoAccount(
uint256 const& pseudoOwnerKey,
SField const& ownerField);
// Returns true iff sleAcct is a pseudo-account.
// Returns true iff sleAcct is a pseudo-account, or one of the specific
// pseudo-account types identified in pseudoFieldFilter.
//
// Returns false if sleAcct is
// * NOT a pseudo-account OR
// * NOT a ltACCOUNT_ROOT OR
// * null pointer
[[nodiscard]] bool
isPseudoAccount(std::shared_ptr<SLE const> sleAcct);
isPseudoAccount(
std::shared_ptr<SLE const> sleAcct,
std::set<SField const*> const& pseudoFieldFilter = {});
// Returns the list of fields that define an ACCOUNT_ROOT as a pseudo-account if
// set
@@ -573,14 +698,91 @@ isPseudoAccount(std::shared_ptr<SLE const> sleAcct);
getPseudoAccountFields();
[[nodiscard]] inline bool
isPseudoAccount(ReadView const& view, AccountID accountId)
isPseudoAccount(
ReadView const& view,
AccountID const& accountId,
std::set<SField const*> const& pseudoFieldFilter = {})
{
return isPseudoAccount(view.read(keylet::account(accountId)));
return isPseudoAccount(
view.read(keylet::account(accountId)), pseudoFieldFilter);
}
[[nodiscard]] TER
canAddHolding(ReadView const& view, Asset const& asset);
/** Validates the destination SLE and tag.
- Checks that the SLE is not null.
- If the SLE requires a destination tag, checks that there is a tag.
*/
[[nodiscard]] TER
checkDestinationAndTag(SLE::const_ref toSle, bool hasDestinationTag);
/** Checks whether funds can be withdrawn from an object to the submitter or a destination.
*
* The receiver may be either the submitting account (sfAccount) or a different
* destination account (sfDestination).
*
* - Checks that the receiver account exists.
* - If the receiver requires a destination tag, check that one exists, even
* if withdrawing to self.
* - If withdrawing to self, succeed.
* - If not, checks if the receiver requires deposit authorization, and if
* the sender has it.
*/
[[nodiscard]] TER
canWithdraw(
AccountID const& from,
ReadView const& view,
AccountID const& to,
SLE::const_ref toSle,
bool hasDestinationTag);
/** Checks whether funds can be withdrawn from an object to the submitter or a destination.
*
* The receiver may be either the submitting account (sfAccount) or a different
* destination account (sfDestination).
*
* - Checks that the receiver account exists.
* - If the receiver requires a destination tag, check that one exists, even
* if withdrawing to self.
* - If withdrawing to self, succeed.
* - If not, checks if the receiver requires deposit authorization, and if
* the sender has it.
*/
[[nodiscard]] TER
canWithdraw(
AccountID const& from,
ReadView const& view,
AccountID const& to,
bool hasDestinationTag);
/** Checks whether funds can be withdrawn from an object to the submitter or a destination.
*
* The receiver may be either the submitting account (sfAccount) or a different
* destination account (sfDestination).
*
* - Checks that the receiver account exists.
* - If the receiver requires a destination tag, check that one exists, even
* if withdrawing to self.
* - If withdrawing to self, succeed.
* - If not, checks if the receiver requires deposit authorization, and if
* the sender has it.
*/
[[nodiscard]] TER
canWithdraw(ReadView const& view, STTx const& tx);
[[nodiscard]] TER
doWithdraw(
ApplyView& view,
STTx const& tx,
AccountID const& senderAcct,
AccountID const& dstAcct,
AccountID const& sourceAcct,
XRPAmount priorBalance,
STAmount const& amount,
beast::Journal j);
/// Any transactors that call addEmptyHolding() in doApply must call
/// canAddHolding() in preflight with the same View and Asset
[[nodiscard]] TER
@@ -750,6 +952,22 @@ accountSend(
beast::Journal j,
WaiveTransferFee waiveFee = WaiveTransferFee::No);
using MultiplePaymentDestinations = std::vector<std::pair<AccountID, Number>>;
/** Like accountSend, except one account is sending multiple payments (with the
* same asset!) simultaneously
*
* Calls static accountSendMultiIOU if saAmount represents Issue.
* Calls static accountSendMultiMPT if saAmount represents MPTIssue.
*/
[[nodiscard]] TER
accountSendMulti(
ApplyView& view,
AccountID const& senderID,
Asset const& asset,
MultiplePaymentDestinations const& receivers,
beast::Journal j,
WaiveTransferFee waiveFee = WaiveTransferFee::No);
[[nodiscard]] TER
issueIOU(
ApplyView& view,
@@ -821,7 +1039,8 @@ requireAuth(
* purely defensive, as we currently do not allow such vaults to be created.
*
* If StrongAuth then return tecNO_AUTH if MPToken doesn't exist or
* lsfMPTRequireAuth is set and MPToken is not authorized.
* lsfMPTRequireAuth is set and MPToken is not authorized. Vault and LoanBroker
* pseudo-accounts are implicitly authorized.
*
* If WeakAuth then return tecNO_AUTH if lsfMPTRequireAuth is set and MPToken
* doesn't exist or is not authorized (explicitly or via credentials, if
@@ -894,6 +1113,26 @@ canTransfer(
AccountID const& from,
AccountID const& to);
[[nodiscard]] TER
canTransfer(
ReadView const& view,
Issue const& issue,
AccountID const& from,
AccountID const& to);
[[nodiscard]] TER inline canTransfer(
ReadView const& view,
Asset const& asset,
AccountID const& from,
AccountID const& to)
{
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) -> TER {
return canTransfer(view, issue, from, to);
},
asset.value());
}
/** Deleter function prototype. Returns the status of the entry deletion
* (if should not be skipped) and if the entry should be skipped. The status
* is always tesSUCCESS if the entry should be skipped.

View File

@@ -47,7 +47,7 @@ public:
public:
AutoSocket(
boost::asio::io_context& s,
boost::asio::io_service& s,
boost::asio::ssl::context& c,
bool secureOnly,
bool plainOnly)
@@ -58,7 +58,7 @@ public:
mSocket = std::make_unique<ssl_socket>(s, c);
}
AutoSocket(boost::asio::io_context& s, boost::asio::ssl::context& c)
AutoSocket(boost::asio::io_service& s, boost::asio::ssl::context& c)
: AutoSocket(s, c, false, false)
{
}

View File

@@ -23,7 +23,7 @@
#include <xrpl/basics/ByteUtilities.h>
#include <xrpl/beast/utility/Journal.h>
#include <boost/asio/io_context.hpp>
#include <boost/asio/io_service.hpp>
#include <boost/asio/streambuf.hpp>
#include <chrono>
@@ -51,7 +51,7 @@ public:
static void
get(bool bSSL,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
std::deque<std::string> deqSites,
unsigned short const port,
std::string const& strPath,
@@ -65,7 +65,7 @@ public:
static void
get(bool bSSL,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
std::string strSite,
unsigned short const port,
std::string const& strPath,
@@ -80,7 +80,7 @@ public:
static void
request(
bool bSSL,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
std::string strSite,
unsigned short const port,
std::function<

View File

@@ -153,7 +153,7 @@ public:
{
strm.set_verify_callback(
std::bind(
&rfc6125_verify,
&rfc2818_verify,
host,
std::placeholders::_1,
std::placeholders::_2,
@@ -167,7 +167,7 @@ public:
/**
* @brief callback invoked for name verification - just passes through
* to the asio `host_name_verification` (rfc6125) implementation.
* to the asio rfc2818 implementation.
*
* @param domain hostname expected
* @param preverified passed by implementation
@@ -175,13 +175,13 @@ public:
* @param j journal for logging
*/
static bool
rfc6125_verify(
rfc2818_verify(
std::string const& domain,
bool preverified,
boost::asio::ssl::verify_context& ctx,
beast::Journal j)
{
if (boost::asio::ssl::host_name_verification(domain)(preverified, ctx))
if (boost::asio::ssl::rfc2818_verification(domain)(preverified, ctx))
return true;
JLOG(j.warn()) << "Outbound SSL connection to " << domain

View File

@@ -100,7 +100,27 @@ public:
bool
native() const
{
return holds<Issue>() && get<Issue>().native();
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
return issue.native();
if constexpr (std::is_same_v<TIss, MPTIssue>)
return false;
},
issue_);
}
bool
integral() const
{
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
return issue.native();
if constexpr (std::is_same_v<TIss, MPTIssue>)
return true;
},
issue_);
}
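// Note (illustrative, not part of the original header): with the definitions
// above, an Asset is integral iff it holds XRP or any MPTIssue, while only
// XRP is native.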
friend constexpr bool

View File

@@ -346,6 +346,24 @@ vault(uint256 const& vaultKey)
return {ltVAULT, vaultKey};
}
Keylet
loanbroker(AccountID const& owner, std::uint32_t seq) noexcept;
inline Keylet
loanbroker(uint256 const& key)
{
return {ltLOAN_BROKER, key};
}
Keylet
loan(uint256 const& loanBrokerID, std::uint32_t loanSeq) noexcept;
inline Keylet
loan(uint256 const& key)
{
return {ltLOAN, key};
}
Keylet
permissionedDomain(AccountID const& account, std::uint32_t seq) noexcept;

View File

@@ -205,6 +205,11 @@ enum LedgerSpecificFlags {
// ltVAULT
lsfVaultPrivate = 0x00010000,
// ltLOAN
lsfLoanDefault = 0x00010000,
lsfLoanImpaired = 0x00020000,
lsfLoanOverpayment = 0x00040000, // If set, the loan allows overpayments
};
//------------------------------------------------------------------------------

View File

@@ -22,6 +22,7 @@
#include <xrpl/basics/ByteUtilities.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/Units.h>
#include <cstdint>
@@ -55,7 +56,10 @@ std::size_t constexpr oversizeMetaDataCap = 5200;
/** The maximum number of entries per directory page */
std::size_t constexpr dirNodeMaxEntries = 32;
/** The maximum number of pages allowed in a directory */
/** The maximum number of pages allowed in a directory
Made obsolete by the fixDirectoryLimit amendment.
*/
std::uint64_t constexpr dirNodeMaxPages = 262144;
/** The maximum number of items in an NFT page */
@@ -81,6 +85,140 @@ std::size_t constexpr maxDeletableTokenOfferEntries = 500;
*/
std::uint16_t constexpr maxTransferFee = 50000;
/** There are 10,000 basis points (bips) in 100%.
*
* Basis points represent 0.01%.
*
* Given a value X, to find the amount for B bps,
* use X * B / bipsPerUnity
*
* Example: If a loan broker has 999 XRP of debt, and must maintain 1,000 bps of
* that debt as cover (10%), then the minimum cover amount is 999,000,000 drops
* * 1000 / bipsPerUnity = 99,900,000 drops, or 99.9 XRP.
*
* Given a percentage P, to find the number of bps that percentage represents,
* use P * bipsPerUnity.
*
* Example: 50% is 0.50 * bipsPerUnity = 5,000 bps.
*/
Bips32 constexpr bipsPerUnity(100 * 100);
static_assert(bipsPerUnity == Bips32{10'000});
TenthBips32 constexpr tenthBipsPerUnity(bipsPerUnity.value() * 10);
static_assert(tenthBipsPerUnity == TenthBips32(100'000));
constexpr Bips32
percentageToBips(std::uint32_t percentage)
{
return Bips32(percentage * bipsPerUnity.value() / 100);
}
constexpr TenthBips32
percentageToTenthBips(std::uint32_t percentage)
{
return TenthBips32(percentage * tenthBipsPerUnity.value() / 100);
}
template <typename T, class TBips>
constexpr T
bipsOfValue(T value, Bips<TBips> bips)
{
return value * bips.value() / bipsPerUnity.value();
}
template <typename T, class TBips>
constexpr T
tenthBipsOfValue(T value, TenthBips<TBips> bips)
{
return value * bips.value() / tenthBipsPerUnity.value();
}
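// Worked example (illustrative, not part of the original header), assuming
// Bips32 is constructible from its raw value as bipsPerUnity above suggests:
// 1,000 bps (10%) of 999 XRP, i.e. 999,000,000 drops, is 99,900,000 drops.
static_assert(bipsOfValue(999'000'000LL, Bips32{1'000}) == 99'900'000);
// 50% expressed in basis points: 0.50 * 10,000 = 5,000 bps.
static_assert(percentageToBips(50) == Bips32{5'000});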
namespace Lending {
/** The maximum management fee rate allowed by a loan broker in 1/10 bips.
Valid values are between 0 and 10% inclusive.
*/
TenthBips16 constexpr maxManagementFeeRate(
unsafe_cast<std::uint16_t>(percentageToTenthBips(10).value()));
static_assert(maxManagementFeeRate == TenthBips16(std::uint16_t(10'000u)));
/** The maximum coverage rate required of a loan broker in 1/10 bips.
Valid values are between 0 and 100% inclusive.
*/
TenthBips32 constexpr maxCoverRate = percentageToTenthBips(100);
static_assert(maxCoverRate == TenthBips32(100'000u));
/** The maximum overpayment fee on a loan in 1/10 bips.
*
Valid values are between 0 and 100% inclusive.
*/
TenthBips32 constexpr maxOverpaymentFee = percentageToTenthBips(100);
static_assert(maxOverpaymentFee == TenthBips32(100'000u));
/** The maximum annualized interest rate of a Loan in 1/10 bips.
*
* Valid values are between 0 and 100% inclusive.
*/
TenthBips32 constexpr maxInterestRate = percentageToTenthBips(100);
static_assert(maxInterestRate == TenthBips32(100'000u));
/** The maximum premium added to the interest rate for late payments on a loan
* in 1/10 bips.
*
* Valid values are between 0 and 100% inclusive.
*/
TenthBips32 constexpr maxLateInterestRate = percentageToTenthBips(100);
static_assert(maxLateInterestRate == TenthBips32(100'000u));
/** The maximum close interest rate charged for repaying a loan early in 1/10
* bips.
*
* Valid values are between 0 and 100% inclusive.
*/
TenthBips32 constexpr maxCloseInterestRate = percentageToTenthBips(100);
static_assert(maxCloseInterestRate == TenthBips32(100'000u));
/** The maximum overpayment interest rate charged on loan overpayments in 1/10
* bips.
*
* Valid values are between 0 and 100% inclusive.
*/
TenthBips32 constexpr maxOverpaymentInterestRate = percentageToTenthBips(100);
static_assert(maxOverpaymentInterestRate == TenthBips32(100'000u));
/** LoanPay transaction cost will be one base fee per X combined payments
*
* The number of payments is estimated based on the Amount paid and the Loan's
* Fixed Payment size. Overpayments (indicated with the tfLoanOverpayment flag)
* count as one more payment.
*
* This number was chosen arbitrarily, but should not be changed once released
* without an amendment
*/
static constexpr int loanPaymentsPerFeeIncrement = 5;
/** Maximum number of combined payments that a LoanPay transaction will process
*
* This limit is enforced during the loan payment process, and thus is not
* estimated. If the limit is hit, no further payments or overpayments will be
* processed, no matter how much of the transaction Amount is left, but the
* transaction will succeed with the payments that have been processed up to
* that point.
*
* This limit is independent of loanPaymentsPerFeeIncrement, so a transaction
* could potentially be charged for many more payments than actually get
* processed. Users should take care not to submit a transaction paying more
* than loanMaximumPaymentsPerTransaction * Loan.PeriodicPayment. Because
* overpayments are charged as a payment, if submitting
* loanMaximumPaymentsPerTransaction * Loan.PeriodicPayment, users should not
* set the tfLoanOverpayment flag.
*
* Even though they're independent, loanMaximumPaymentsPerTransaction should be
* a multiple of loanPaymentsPerFeeIncrement.
*
* This number was chosen arbitrarily, but should not be changed once released
* without an amendment
*/
static constexpr int loanMaximumPaymentsPerTransaction = 100;
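// Illustrative checks (not in the original header): the two limits satisfy the
// multiple-of relationship recommended above, so the maximum of 100 combined
// payments is charged 100 / 5 = 20 base-fee increments.
static_assert(
    loanMaximumPaymentsPerTransaction % loanPaymentsPerFeeIncrement == 0);
static_assert(
    loanMaximumPaymentsPerTransaction / loanPaymentsPerFeeIncrement == 20);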
} // namespace Lending
/** The maximum length of a URI inside an NFT */
std::size_t constexpr maxTokenURILength = 256;

View File

@@ -139,8 +139,8 @@ field_code(int id, int index)
SFields are created at compile time.
Each SField, once constructed, lives until program termination, and there
is only one instance per fieldType/fieldValue pair which serves the entire
application.
is only one instance per fieldType/fieldValue pair which serves the
entire application.
*/
class SField
{

View File

@@ -66,16 +66,18 @@ public:
static int const cMaxOffset = 80;
// Maximum native value supported by the code
static std::uint64_t const cMinValue = 1000000000000000ull;
static std::uint64_t const cMaxValue = 9999999999999999ull;
static std::uint64_t const cMaxNative = 9000000000000000000ull;
constexpr static std::uint64_t cMinValue = 1'000'000'000'000'000ull;
static_assert(isPowerOfTen(cMinValue));
constexpr static std::uint64_t cMaxValue = cMinValue * 10 - 1;
static_assert(cMaxValue == 9'999'999'999'999'999ull);
constexpr static std::uint64_t cMaxNative = 9'000'000'000'000'000'000ull;
// Max native value on network.
static std::uint64_t const cMaxNativeN = 100000000000000000ull;
static std::uint64_t const cIssuedCurrency = 0x8000000000000000ull;
static std::uint64_t const cPositive = 0x4000000000000000ull;
static std::uint64_t const cMPToken = 0x2000000000000000ull;
static std::uint64_t const cValueMask = ~(cPositive | cMPToken);
constexpr static std::uint64_t cMaxNativeN = 100'000'000'000'000'000ull;
constexpr static std::uint64_t cIssuedCurrency = 0x8'000'000'000'000'000ull;
constexpr static std::uint64_t cPositive = 0x4'000'000'000'000'000ull;
constexpr static std::uint64_t cMPToken = 0x2'000'000'000'000'000ull;
constexpr static std::uint64_t cValueMask = ~(cPositive | cMPToken);
static std::uint64_t const uRateOne;
@@ -174,6 +176,9 @@ public:
int
exponent() const noexcept;
bool
integral() const noexcept;
bool
native() const noexcept;
@@ -454,6 +459,12 @@ STAmount::exponent() const noexcept
return mOffset;
}
inline bool
STAmount::integral() const noexcept
{
return mAsset.integral();
}
inline bool
STAmount::native() const noexcept
{
@@ -572,7 +583,7 @@ STAmount::clear()
{
// The -100 is used to allow 0 to sort less than a small positive values
// which have a negative exponent.
mOffset = native() ? 0 : -100;
mOffset = integral() ? 0 : -100;
mValue = 0;
mIsNegative = false;
}
@@ -695,6 +706,53 @@ divRoundStrict(
std::uint64_t
getRate(STAmount const& offerOut, STAmount const& offerIn);
/** Round an arbitrary precision Amount to the precision of an STAmount that has
* a given exponent.
*
* This is used to ensure that calculations involving IOU amounts do not collect
* dust beyond the precision of the reference value.
*
* @param value The value to be rounded
* @param scale An exponent value to establish the precision limit of
* `value`. Should be larger than `value.exponent()`.
* @param rounding Optional Number rounding mode
*
*/
STAmount
roundToScale(
STAmount const& value,
std::int32_t scale,
Number::rounding_mode rounding = Number::getround());
/** Round an arbitrary precision Number to the precision of a given Asset.
*
* This is used to ensure that calculations do not collect dust beyond the
* precision of the reference value for IOUs, or fractional amounts for the
* integral types XRP and MPT.
*
* @param asset The relevant asset
* @param value The value to be rounded
* @param scale Only relevant to IOU assets. An exponent value to establish the
* precision limit of `value`. Should be larger than `value.exponent()`.
* @param rounding Optional Number rounding mode
*/
template <AssetType A>
Number
roundToAsset(
A const& asset,
Number const& value,
std::int32_t scale,
Number::rounding_mode rounding = Number::getround())
{
NumberRoundModeGuard mg(rounding);
STAmount const ret{asset, value};
if (ret.integral())
return ret;
// Note that the ctor will round integral types (XRP, MPT) via canonicalize,
// so no extra work is needed for those.
return roundToScale(ret, scale);
}
//------------------------------------------------------------------------------
inline bool

View File

@@ -244,6 +244,9 @@ public:
getFieldPathSet(SField const& field) const;
STVector256 const&
getFieldV256(SField const& field) const;
// If not found, returns an object constructed with the given field
STObject
getFieldObject(SField const& field) const;
STArray const&
getFieldArray(SField const& field) const;
STCurrency const&
@@ -390,6 +393,8 @@ public:
setFieldV256(SField const& field, STVector256 const& v);
void
setFieldArray(SField const& field, STArray const& v);
void
setFieldObject(SField const& field, STObject const& v);
template <class Tag>
void
@@ -496,6 +501,8 @@ public:
value_type
operator*() const;
/// Do not use operator->() unless the field is required, or you've checked
/// that it's set.
T const*
operator->() const;
@@ -519,7 +526,26 @@ protected:
// Constraint += and -= ValueProxy operators
// to value types that support arithmetic operations
template <typename U>
concept IsArithmetic = std::is_arithmetic_v<U> || std::is_same_v<U, STAmount>;
concept IsArithmeticNumber = std::is_arithmetic_v<U> ||
std::is_same_v<U, Number> || std::is_same_v<U, STAmount>;
template <
typename U,
typename Value = typename U::value_type,
typename Unit = typename U::unit_type>
concept IsArithmeticValueUnit =
std::is_same_v<U, unit::ValueUnit<Unit, Value>> &&
IsArithmeticNumber<Value> && std::is_class_v<Unit>;
template <typename U, typename Value = typename U::value_type>
concept IsArithmeticST = !IsArithmeticValueUnit<U> && IsArithmeticNumber<Value>;
template <typename U>
concept IsArithmetic =
IsArithmeticNumber<U> || IsArithmeticST<U> || IsArithmeticValueUnit<U>;
template <class T, class U>
concept Addable = requires(T t, U u) { t = t + u; };
template <typename T, typename U>
concept IsArithmeticCompatible =
IsArithmetic<typename T::value_type> && Addable<typename T::value_type, U>;
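A compile-time sketch of what the widened concepts admit, assuming the declarations above are in scope:

static_assert(IsArithmeticNumber<int>);       // built-in arithmetic
static_assert(IsArithmeticNumber<Number>);    // newly admitted here
static_assert(IsArithmeticNumber<STAmount>);  // as before
// ST types route through IsArithmeticST via their value_type, and
// unit::ValueUnit wrappers through IsArithmeticValueUnit.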
template <class T>
class STObject::ValueProxy : public Proxy<T>
@@ -539,10 +565,12 @@ public:
// Convenience operators for value types supporting
// arithmetic operations
template <IsArithmetic U>
requires IsArithmeticCompatible<T, U>
ValueProxy&
operator+=(U const& u);
template <IsArithmetic U>
requires IsArithmeticCompatible<T, U>
ValueProxy&
operator-=(U const& u);
@@ -732,6 +760,8 @@ STObject::Proxy<T>::operator*() const -> value_type
return this->value();
}
/// Do not use operator->() unless the field is required, or you've checked that
/// it's set.
template <class T>
T const*
STObject::Proxy<T>::operator->() const
@@ -778,6 +808,7 @@ STObject::ValueProxy<T>::operator=(U&& u)
template <typename T>
template <IsArithmetic U>
requires IsArithmeticCompatible<T, U>
STObject::ValueProxy<T>&
STObject::ValueProxy<T>::operator+=(U const& u)
{
@@ -787,6 +818,7 @@ STObject::ValueProxy<T>::operator+=(U const& u)
template <class T>
template <IsArithmetic U>
requires IsArithmeticCompatible<T, U>
STObject::ValueProxy<T>&
STObject::ValueProxy<T>::operator-=(U const& u)
{

View File

@@ -87,8 +87,14 @@ public:
getFullText() const override;
// Outer transaction functions / signature functions.
static Blob
getSignature(STObject const& sigObject);
Blob
getSignature() const;
getSignature() const
{
return getSignature(*this);
}
uint256
getSigningHash() const;
@@ -119,13 +125,20 @@ public:
getJson(JsonOptions options, bool binary) const;
void
sign(PublicKey const& publicKey, SecretKey const& secretKey);
sign(
PublicKey const& publicKey,
SecretKey const& secretKey,
std::optional<std::reference_wrapper<SField const>> signatureTarget =
{});
/** Check the signature.
@return `true` if valid signature. If invalid, the error message string.
*/
enum class RequireFullyCanonicalSig : bool { no, yes };
/** Check the signature.
@param requireCanonicalSig If `true`, check that the signature is fully
canonical. If `false`, only check that the signature is valid.
@param rules The current ledger rules.
@return `true` if valid signature. If invalid, the error message string.
*/
Expected<void, std::string>
checkSign(RequireFullyCanonicalSig requireCanonicalSig, Rules const& rules)
const;
@@ -150,17 +163,34 @@ public:
char status,
std::string const& escapedMetaData) const;
std::vector<uint256>
std::vector<uint256> const&
getBatchTransactionIDs() const;
private:
/** Check the signature.
@param requireCanonicalSig If `true`, check that the signature is fully
canonical. If `false`, only check that the signature is valid.
@param rules The current ledger rules.
@param sigObject Reference to object that contains the signature fields.
Will be *this more often than not.
@return `true` if valid signature. If invalid, the error message string.
*/
Expected<void, std::string>
checkSingleSign(RequireFullyCanonicalSig requireCanonicalSig) const;
checkSign(
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules,
STObject const& sigObject) const;
Expected<void, std::string>
checkSingleSign(
RequireFullyCanonicalSig requireCanonicalSig,
STObject const& sigObject) const;
Expected<void, std::string>
checkMultiSign(
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules) const;
Rules const& rules,
STObject const& sigObject) const;
Expected<void, std::string>
checkBatchSingleSign(
@@ -179,7 +209,7 @@ private:
move(std::size_t n, void* buf) override;
friend class detail::STVar;
mutable std::vector<uint256> batch_txn_ids_;
mutable std::vector<uint256> batchTxnIds_;
};
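A hedged sketch of the reworked signing entry points; `pk`, `sk`, `rules`, and `j` are assumed to be in scope, and directing a signature into sfCounterpartySignature is illustrative:

tx.sign(pk, sk);                           // sign the outer transaction
tx.sign(pk, sk, sfCounterpartySignature);  // sign into a nested object
if (auto const res =
        tx.checkSign(STTx::RequireFullyCanonicalSig::yes, rules);
    !res)
    JLOG(j.warn()) << "signature check failed: " << res.error();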
bool

View File

@@ -285,6 +285,32 @@ constexpr std::uint32_t tfIndependent = 0x00080000;
constexpr std::uint32_t const tfBatchMask =
~(tfUniversal | tfAllOrNothing | tfOnlyOne | tfUntilFailure | tfIndependent) | tfInnerBatchTxn;
// LoanSet and LoanPay flags:
// LoanSet: True, indicates the loan supports overpayments
// LoanPay: True, indicates any excess in this payment can be used
// as an overpayment. False: no overpayments will be taken.
constexpr std::uint32_t const tfLoanOverpayment = 0x00010000;
// LoanPay exclusive flags:
// tfLoanFullPayment: True, indicates that the payment is an early
// full payment. It must pay the entire loan including close
// interest and fees, or it will fail. False: not a full payment.
constexpr std::uint32_t const tfLoanFullPayment = 0x00020000;
// tfLoanLatePayment: True, indicates that the payment is late,
// and includes late interest and fees. If the loan is not late,
// the transaction will fail. False: not a late payment; if the
// current payment is overdue, the transaction will fail.
constexpr std::uint32_t const tfLoanLatePayment = 0x00040000;
constexpr std::uint32_t const tfLoanSetMask = ~(tfUniversal |
tfLoanOverpayment);
constexpr std::uint32_t const tfLoanPayMask = ~(tfUniversal |
tfLoanOverpayment | tfLoanFullPayment | tfLoanLatePayment);
// LoanManage flags:
constexpr std::uint32_t const tfLoanDefault = 0x00010000;
constexpr std::uint32_t const tfLoanImpair = 0x00020000;
constexpr std::uint32_t const tfLoanUnimpair = 0x00040000;
constexpr std::uint32_t const tfLoanManageMask = ~(tfUniversal | tfLoanDefault | tfLoanImpair | tfLoanUnimpair);
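An illustrative preflight-style mask check in the usual rippled idiom (a sketch, not code from this diff):

std::uint32_t const txFlags = ctx.tx.getFlags();
if (txFlags & tfLoanPayMask)
    return temINVALID_FLAG;  // a bit outside the allowed set was given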
// clang-format on
} // namespace ripple

View File

@@ -27,17 +27,19 @@
#error "undefined macro: XRPL_RETIRE"
#endif
// clang-format off
// Add new amendments to the top of this list.
// Keep it sorted in reverse chronological order.
// If you add an amendment here, then do not forget to increment `numFeatures`
// in include/xrpl/protocol/Feature.h.
XRPL_FEATURE(LendingProtocol, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (DirectoryLimit, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (IncludeKeyletFields, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(DynamicMPT, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (TokenEscrowV1, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (DelegateV1_1, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (PriceOracleOrder, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (MPTDeliveredAmount, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (MPTDeliveredAmount, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (AMMClawbackRounding, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(TokenEscrow, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (EnforceNFTokenTrustlineV2, Supported::yes, VoteBehavior::DefaultNo)
@@ -156,3 +158,5 @@ XRPL_RETIRE(fix1512)
XRPL_RETIRE(fix1523)
XRPL_RETIRE(fix1528)
XRPL_RETIRE(FlowCross)
// clang-format on

View File

@@ -168,6 +168,7 @@ LEDGER_ENTRY(ltACCOUNT_ROOT, 0x0061, AccountRoot, account, ({
{sfFirstNFTokenSequence, soeOPTIONAL},
{sfAMMID, soeOPTIONAL}, // pseudo-account designator
{sfVaultID, soeOPTIONAL}, // pseudo-account designator
{sfLoanBrokerID, soeOPTIONAL}, // pseudo-account designator
}))
/** A ledger object which contains a list of object identifiers.
@@ -457,7 +458,7 @@ LEDGER_ENTRY(ltCREDENTIAL, 0x0081, Credential, credential, ({
{sfExpiration, soeOPTIONAL},
{sfURI, soeOPTIONAL},
{sfIssuerNode, soeREQUIRED},
{sfSubjectNode, soeREQUIRED},
{sfSubjectNode, soeOPTIONAL},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
@@ -498,10 +499,10 @@ LEDGER_ENTRY(ltVAULT, 0x0084, Vault, vault, ({
{sfAccount, soeREQUIRED},
{sfData, soeOPTIONAL},
{sfAsset, soeREQUIRED},
{sfAssetsTotal, soeREQUIRED},
{sfAssetsAvailable, soeREQUIRED},
{sfAssetsTotal, soeDEFAULT},
{sfAssetsAvailable, soeDEFAULT},
{sfAssetsMaximum, soeDEFAULT},
{sfLossUnrealized, soeREQUIRED},
{sfLossUnrealized, soeDEFAULT},
{sfShareMPTID, soeREQUIRED},
{sfWithdrawalPolicy, soeREQUIRED},
{sfScale, soeDEFAULT},
@@ -509,5 +510,117 @@ LEDGER_ENTRY(ltVAULT, 0x0084, Vault, vault, ({
// no PermissionedDomainID ever (use MPTIssuance.sfDomainID)
}))
/** Reserve 0x0084-0x0087 for future Vault-related objects. */
/** A ledger object representing a loan broker
\sa keylet::loanbroker
*/
LEDGER_ENTRY(ltLOAN_BROKER, 0x0088, LoanBroker, loan_broker, ({
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
{sfSequence, soeREQUIRED},
{sfOwnerNode, soeREQUIRED},
{sfVaultNode, soeREQUIRED},
{sfVaultID, soeREQUIRED},
{sfAccount, soeREQUIRED},
{sfOwner, soeREQUIRED},
{sfLoanSequence, soeREQUIRED},
{sfData, soeDEFAULT},
{sfManagementFeeRate, soeDEFAULT},
{sfOwnerCount, soeDEFAULT},
{sfDebtTotal, soeDEFAULT},
{sfDebtMaximum, soeDEFAULT},
{sfCoverAvailable, soeDEFAULT},
{sfCoverRateMinimum, soeDEFAULT},
{sfCoverRateLiquidation, soeDEFAULT},
}))
/** A ledger object representing a loan between a Borrower and a Loan Broker
\sa keylet::loan
*/
LEDGER_ENTRY(ltLOAN, 0x0089, Loan, loan, ({
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
{sfOwnerNode, soeREQUIRED},
{sfLoanBrokerNode, soeREQUIRED},
{sfLoanBrokerID, soeREQUIRED},
{sfLoanSequence, soeREQUIRED},
{sfBorrower, soeREQUIRED},
{sfLoanOriginationFee, soeDEFAULT},
{sfLoanServiceFee, soeDEFAULT},
{sfLatePaymentFee, soeDEFAULT},
{sfClosePaymentFee, soeDEFAULT},
{sfOverpaymentFee, soeDEFAULT},
{sfInterestRate, soeDEFAULT},
{sfLateInterestRate, soeDEFAULT},
{sfCloseInterestRate, soeDEFAULT},
{sfOverpaymentInterestRate, soeDEFAULT},
{sfStartDate, soeREQUIRED},
{sfPaymentInterval, soeREQUIRED},
{sfGracePeriod, soeDEFAULT},
{sfPreviousPaymentDate, soeDEFAULT},
{sfNextPaymentDueDate, soeDEFAULT},
// The loan object tracks these values:
//
// - PaymentRemaining: The number of payments left in the loan. When it
// reaches 0, the loan is paid off, and all other relevant values
// must also be 0.
//
// - PeriodicPayment: The fixed, unrounded amount to be paid each
// interval. Stored with as much precision as possible.
// Payment transactions must round this value *UP*.
//
// - TotalValueOutstanding: The rounded total amount owed by the
// borrower to the lender / vault.
//
// - PrincipalOutstanding: The rounded portion of the
// TotalValueOutstanding that is from the principal borrowed.
//
// - ManagementFeeOutstanding: The rounded portion of the
// TotalValueOutstanding that represents management fees
// specifically owed to the broker based on the initial
// loan parameters.
//
// There are additional values that can be computed from these:
//
// - InterestOutstanding = TotalValueOutstanding - PrincipalOutstanding
// The total amount of interest still pending on the loan,
// independent of management fees.
//
// - InterestOwedToVault = InterestOutstanding - ManagementFeeOutstanding
// The amount of the total interest that is owed to the vault, and
// will be sent to it as part of a payment.
//
// - TrueTotalLoanValue = PaymentRemaining * PeriodicPayment
// The unrounded true total value of the loan.
//
// - TrueTotalPrincipalOutstanding can be computed using the algorithm
// in the ripple::detail::loanPrincipalFromPeriodicPayment function.
//
// - TrueTotalInterestOutstanding = TrueTotalLoanValue -
// TrueTotalPrincipalOutstanding
// The unrounded true total interest remaining.
//
// - TrueTotalManagementFeeOutstanding = TrueTotalInterestOutstanding *
// LoanBroker.ManagementFeeRate
// The unrounded true total fee still owed to the broker.
//
// Note that the "True" values may differ significantly from the tracked
// rounded values.
{sfPaymentRemaining, soeDEFAULT},
{sfPeriodicPayment, soeREQUIRED},
{sfPrincipalOutstanding, soeDEFAULT},
{sfTotalValueOutstanding, soeDEFAULT},
{sfManagementFeeOutstanding, soeDEFAULT},
// Based on the computed total value at creation, used for
// rounding calculated values so they are all on a
// consistent scale - that is, they all have the same
// number of digits after the decimal point (excluding
// trailing zeros).
{sfLoanScale, soeDEFAULT},
}))
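A sketch of the derived quantities described in the comment block above; field access through at() is the common rippled pattern, and `loan` is assumed to be an SLE for an ltLOAN entry:

Number const total = loan->at(sfTotalValueOutstanding);
Number const principal = loan->at(sfPrincipalOutstanding);
Number const mgmtFee = loan->at(sfManagementFeeOutstanding);
Number const interestOutstanding = total - principal;
Number const interestOwedToVault = interestOutstanding - mgmtFee;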
#undef EXPAND
#undef LEDGER_ENTRY_DUPLICATE

View File

@@ -24,6 +24,8 @@
#error "undefined macro: TYPED_SFIELD"
#endif
// clang-format off
// untyped
UNTYPED_SFIELD(sfLedgerEntry, LEDGERENTRY, 257)
UNTYPED_SFIELD(sfTransaction, TRANSACTION, 257)
@@ -59,6 +61,7 @@ TYPED_SFIELD(sfHookEmitCount, UINT16, 18)
TYPED_SFIELD(sfHookExecutionIndex, UINT16, 19)
TYPED_SFIELD(sfHookApiVersion, UINT16, 20)
TYPED_SFIELD(sfLedgerFixType, UINT16, 21)
TYPED_SFIELD(sfManagementFeeRate, UINT16, 22) // 1/10 basis points (bips)
// 32-bit integers (common)
TYPED_SFIELD(sfNetworkID, UINT32, 1)
@@ -115,6 +118,21 @@ TYPED_SFIELD(sfFirstNFTokenSequence, UINT32, 50)
TYPED_SFIELD(sfOracleDocumentID, UINT32, 51)
TYPED_SFIELD(sfPermissionValue, UINT32, 52)
TYPED_SFIELD(sfMutableFlags, UINT32, 53)
TYPED_SFIELD(sfStartDate, UINT32, 54)
TYPED_SFIELD(sfPaymentInterval, UINT32, 55)
TYPED_SFIELD(sfGracePeriod, UINT32, 56)
TYPED_SFIELD(sfPreviousPaymentDate, UINT32, 57)
TYPED_SFIELD(sfNextPaymentDueDate, UINT32, 58)
TYPED_SFIELD(sfPaymentRemaining, UINT32, 59)
TYPED_SFIELD(sfPaymentTotal, UINT32, 60)
TYPED_SFIELD(sfLoanSequence, UINT32, 61)
TYPED_SFIELD(sfCoverRateMinimum, UINT32, 62) // 1/10 basis points (bips)
TYPED_SFIELD(sfCoverRateLiquidation, UINT32, 63) // 1/10 basis points (bips)
TYPED_SFIELD(sfOverpaymentFee, UINT32, 64) // 1/10 basis points (bips)
TYPED_SFIELD(sfInterestRate, UINT32, 65) // 1/10 basis points (bips)
TYPED_SFIELD(sfLateInterestRate, UINT32, 66) // 1/10 basis points (bips)
TYPED_SFIELD(sfCloseInterestRate, UINT32, 67) // 1/10 basis points (bips)
TYPED_SFIELD(sfOverpaymentInterestRate, UINT32, 68) // 1/10 basis points (bips)
// 64-bit integers (common)
TYPED_SFIELD(sfIndexNext, UINT64, 1)
@@ -146,6 +164,8 @@ TYPED_SFIELD(sfMPTAmount, UINT64, 26, SField::sMD_BaseTen|SFie
TYPED_SFIELD(sfIssuerNode, UINT64, 27)
TYPED_SFIELD(sfSubjectNode, UINT64, 28)
TYPED_SFIELD(sfLockedAmount, UINT64, 29, SField::sMD_BaseTen|SField::sMD_Default)
TYPED_SFIELD(sfVaultNode, UINT64, 30)
TYPED_SFIELD(sfLoanBrokerNode, UINT64, 31)
// 128-bit
TYPED_SFIELD(sfEmailHash, UINT128, 1)
@@ -200,6 +220,9 @@ TYPED_SFIELD(sfDomainID, UINT256, 34)
TYPED_SFIELD(sfVaultID, UINT256, 35,
SField::sMD_PseudoAccount | SField::sMD_Default)
TYPED_SFIELD(sfParentBatchID, UINT256, 36)
TYPED_SFIELD(sfLoanBrokerID, UINT256, 37,
SField::sMD_PseudoAccount | SField::sMD_Default)
TYPED_SFIELD(sfLoanID, UINT256, 38)
// number (common)
TYPED_SFIELD(sfNumber, NUMBER, 1)
@@ -207,12 +230,21 @@ TYPED_SFIELD(sfAssetsAvailable, NUMBER, 2)
TYPED_SFIELD(sfAssetsMaximum, NUMBER, 3)
TYPED_SFIELD(sfAssetsTotal, NUMBER, 4)
TYPED_SFIELD(sfLossUnrealized, NUMBER, 5)
TYPED_SFIELD(sfDebtTotal, NUMBER, 6)
TYPED_SFIELD(sfDebtMaximum, NUMBER, 7)
TYPED_SFIELD(sfCoverAvailable, NUMBER, 8)
TYPED_SFIELD(sfLoanOriginationFee, NUMBER, 9)
TYPED_SFIELD(sfLoanServiceFee, NUMBER, 10)
TYPED_SFIELD(sfLatePaymentFee, NUMBER, 11)
TYPED_SFIELD(sfClosePaymentFee, NUMBER, 12)
TYPED_SFIELD(sfPrincipalOutstanding, NUMBER, 13)
TYPED_SFIELD(sfPrincipalRequested, NUMBER, 14)
TYPED_SFIELD(sfTotalValueOutstanding, NUMBER, 15)
TYPED_SFIELD(sfPeriodicPayment, NUMBER, 16)
TYPED_SFIELD(sfManagementFeeOutstanding, NUMBER, 17)
// int32
// NOTE: Do not use `sfDummyInt32`. It's so far the only use of INT32
// in this file and has been defined here for test only.
// TODO: Replace `sfDummyInt32` with actually useful field.
TYPED_SFIELD(sfDummyInt32, INT32, 1) // for tests only
TYPED_SFIELD(sfLoanScale, INT32, 1)
// currency amount (common)
TYPED_SFIELD(sfAmount, AMOUNT, 1)
@@ -308,6 +340,8 @@ TYPED_SFIELD(sfAttestationRewardAccount, ACCOUNT, 21)
TYPED_SFIELD(sfLockingChainDoor, ACCOUNT, 22)
TYPED_SFIELD(sfIssuingChainDoor, ACCOUNT, 23)
TYPED_SFIELD(sfSubject, ACCOUNT, 24)
TYPED_SFIELD(sfBorrower, ACCOUNT, 25)
TYPED_SFIELD(sfCounterparty, ACCOUNT, 26)
// vector of 256-bit
TYPED_SFIELD(sfIndexes, VECTOR256, 1, SField::sMD_Never)
@@ -371,6 +405,7 @@ UNTYPED_SFIELD(sfCredential, OBJECT, 33)
UNTYPED_SFIELD(sfRawTransaction, OBJECT, 34)
UNTYPED_SFIELD(sfBatchSigner, OBJECT, 35)
UNTYPED_SFIELD(sfBook, OBJECT, 36)
UNTYPED_SFIELD(sfCounterpartySignature, OBJECT, 37, SField::sMD_Default, SField::notSigning)
// array of objects (common)
// ARRAY/1 is reserved for end of array
@@ -405,3 +440,5 @@ UNTYPED_SFIELD(sfAcceptedCredentials, ARRAY, 28)
UNTYPED_SFIELD(sfPermissions, ARRAY, 29)
UNTYPED_SFIELD(sfRawTransactions, ARRAY, 30)
UNTYPED_SFIELD(sfBatchSigners, ARRAY, 31, SField::sMD_Default, SField::notSigning)
// clang-format on

View File

@@ -909,7 +909,7 @@ TRANSACTION(ttVAULT_DEPOSIT, 68, VaultDeposit,
TRANSACTION(ttVAULT_WITHDRAW, 69, VaultWithdraw,
Delegation::delegatable,
featureSingleAssetVault,
mayDeleteMPT | mustModifyVault,
mayDeleteMPT | mayAuthorizeMPT | mustModifyVault,
({
{sfVaultID, soeREQUIRED},
{sfAmount, soeREQUIRED, soeMPTSupported},
@@ -944,6 +944,139 @@ TRANSACTION(ttBATCH, 71, Batch,
{sfBatchSigners, soeOPTIONAL},
}))
/** Reserve 72-73 for future Vault-related transactions */
/** This transaction creates and updates a Loan Broker */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanBrokerSet.h>
#endif
TRANSACTION(ttLOAN_BROKER_SET, 74, LoanBrokerSet,
Delegation::delegatable,
featureLendingProtocol,
createPseudoAcct | mayAuthorizeMPT, ({
{sfVaultID, soeREQUIRED},
{sfLoanBrokerID, soeOPTIONAL},
{sfData, soeOPTIONAL},
{sfManagementFeeRate, soeOPTIONAL},
{sfDebtMaximum, soeOPTIONAL},
{sfCoverRateMinimum, soeOPTIONAL},
{sfCoverRateLiquidation, soeOPTIONAL},
}))
/** This transaction deletes a Loan Broker */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanBrokerDelete.h>
#endif
TRANSACTION(ttLOAN_BROKER_DELETE, 75, LoanBrokerDelete,
Delegation::delegatable,
featureLendingProtocol,
mustDeleteAcct | mayAuthorizeMPT, ({
{sfLoanBrokerID, soeREQUIRED},
}))
/** This transaction deposits First Loss Capital into a Loan Broker */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanBrokerCoverDeposit.h>
#endif
TRANSACTION(ttLOAN_BROKER_COVER_DEPOSIT, 76, LoanBrokerCoverDeposit,
Delegation::delegatable,
featureLendingProtocol,
noPriv, ({
{sfLoanBrokerID, soeREQUIRED},
{sfAmount, soeREQUIRED, soeMPTSupported},
}))
/** This transaction withdraws First Loss Capital from a Loan Broker */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanBrokerCoverWithdraw.h>
#endif
TRANSACTION(ttLOAN_BROKER_COVER_WITHDRAW, 77, LoanBrokerCoverWithdraw,
Delegation::delegatable,
featureLendingProtocol,
mayAuthorizeMPT, ({
{sfLoanBrokerID, soeREQUIRED},
{sfAmount, soeREQUIRED, soeMPTSupported},
{sfDestination, soeOPTIONAL},
{sfDestinationTag, soeOPTIONAL},
}))
/** This transaction claws back First Loss Capital from a Loan Broker to
the issuer of the capital */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanBrokerCoverClawback.h>
#endif
TRANSACTION(ttLOAN_BROKER_COVER_CLAWBACK, 78, LoanBrokerCoverClawback,
Delegation::delegatable,
featureLendingProtocol,
noPriv, ({
{sfLoanBrokerID, soeOPTIONAL},
{sfAmount, soeOPTIONAL, soeMPTSupported},
}))
/** This transaction creates a Loan */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanSet.h>
#endif
TRANSACTION(ttLOAN_SET, 80, LoanSet,
Delegation::delegatable,
featureLendingProtocol,
mayAuthorizeMPT | mustModifyVault, ({
{sfLoanBrokerID, soeREQUIRED},
{sfData, soeOPTIONAL},
{sfCounterparty, soeOPTIONAL},
{sfCounterpartySignature, soeOPTIONAL},
{sfLoanOriginationFee, soeOPTIONAL},
{sfLoanServiceFee, soeOPTIONAL},
{sfLatePaymentFee, soeOPTIONAL},
{sfClosePaymentFee, soeOPTIONAL},
{sfOverpaymentFee, soeOPTIONAL},
{sfInterestRate, soeOPTIONAL},
{sfLateInterestRate, soeOPTIONAL},
{sfCloseInterestRate, soeOPTIONAL},
{sfOverpaymentInterestRate, soeOPTIONAL},
{sfPrincipalRequested, soeREQUIRED},
{sfPaymentTotal, soeOPTIONAL},
{sfPaymentInterval, soeOPTIONAL},
{sfGracePeriod, soeOPTIONAL},
}))
/** This transaction deletes an existing Loan */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanDelete.h>
#endif
TRANSACTION(ttLOAN_DELETE, 81, LoanDelete,
Delegation::delegatable,
featureLendingProtocol,
noPriv, ({
{sfLoanID, soeREQUIRED},
}))
/** This transaction is used to change the delinquency status of an existing Loan */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanManage.h>
#endif
TRANSACTION(ttLOAN_MANAGE, 82, LoanManage,
Delegation::delegatable,
featureLendingProtocol,
// All of the LoanManage options will modify the vault, but the
// transaction can succeed without options, essentially making it
// a noop.
mayModifyVault, ({
{sfLoanID, soeREQUIRED},
}))
/** The Borrower uses this transaction to make a Payment on the Loan. */
#if TRANSACTION_INCLUDE
# include <xrpld/app/tx/detail/LoanPay.h>
#endif
TRANSACTION(ttLOAN_PAY, 84, LoanPay,
Delegation::delegatable,
featureLendingProtocol,
mayAuthorizeMPT | mustModifyVault, ({
{sfLoanID, soeREQUIRED},
{sfAmount, soeREQUIRED, soeMPTSupported},
}))
/** This system-generated transaction type is used to update the status of the various amendments.
For details, see: https://xrpl.org/amendments.html

View File

@@ -59,6 +59,8 @@ JSS(BaseAsset); // in: Oracle
JSS(BidMax); // in: AMM Bid
JSS(BidMin); // in: AMM Bid
JSS(ClearFlag); // field.
JSS(Counterparty); // field.
JSS(CounterpartySignature);// field.
JSS(DeliverMax); // out: alias to Amount
JSS(DeliverMin); // in: TransactionSign
JSS(Destination); // in: TransactionSign; field.
@@ -392,6 +394,8 @@ JSS(load_factor_local); // out: NetworkOPs
JSS(load_factor_net); // out: NetworkOPs
JSS(load_factor_server); // out: NetworkOPs
JSS(load_fee); // out: LoadFeeTrackImp, NetworkOPs
JSS(loan_broker_id); // in: LedgerEntry
JSS(loan_seq); // in: LedgerEntry
JSS(local); // out: resource/Logic.h
JSS(local_txs); // out: GetCounts
JSS(local_static_keys); // out: ValidatorList
@@ -504,6 +508,7 @@ JSS(propose_seq); // out: LedgerPropose
JSS(proposers); // out: NetworkOPs, LedgerConsensus
JSS(protocol); // out: NetworkOPs, PeerImp
JSS(proxied); // out: RPC ping
JSS(pseudo_account); // out: AccountInfo
JSS(pubkey_node); // out: NetworkOPs
JSS(pubkey_publisher); // out: ValidatorList
JSS(pubkey_validator); // out: NetworkOPs, ValidatorList
@@ -569,6 +574,7 @@ JSS(settle_delay); // out: AccountChannels
JSS(severity); // in: LogLevel
JSS(shares); // out: VaultInfo
JSS(signature); // out: NetworkOPs, ChannelAuthorize
JSS(signature_target); // in: TransactionSign
JSS(signature_verified); // out: ChannelVerify
JSS(signing_key); // out: NetworkOPs
JSS(signing_keys); // out: ValidatorList

View File

@@ -25,7 +25,7 @@
#include <xrpl/server/Port.h>
#include <xrpl/server/detail/ServerImpl.h>
#include <boost/asio/io_context.hpp>
#include <boost/asio/io_service.hpp>
namespace ripple {
@@ -34,10 +34,10 @@ template <class Handler>
std::unique_ptr<Server>
make_Server(
Handler& handler,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
beast::Journal journal)
{
return std::make_unique<ServerImpl<Handler>>(handler, io_context, journal);
return std::make_unique<ServerImpl<Handler>>(handler, io_service, journal);
}
} // namespace ripple

View File

@@ -88,7 +88,9 @@ public:
++iter)
{
typename BufferSequence::value_type const& buffer(*iter);
write(buffer.data(), boost::asio::buffer_size(buffer));
write(
boost::asio::buffer_cast<void const*>(buffer),
boost::asio::buffer_size(buffer));
}
}
@@ -102,7 +104,7 @@ public:
/** Detach the session.
This holds the session open so that the response can be sent
asynchronously. Calls to io_context::run made by the server
asynchronously. Calls to io_service::run made by the server
will not return until all detached sessions are closed.
*/
virtual std::shared_ptr<Session>

View File

@@ -24,13 +24,11 @@
#include <xrpl/beast/net/IPAddressConversion.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/server/Session.h>
#include <xrpl/server/detail/Spawn.h>
#include <xrpl/server/detail/io_list.h>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/ssl/stream.hpp>
#include <boost/asio/strand.hpp>
#include <boost/asio/streambuf.hpp>
#include <boost/beast/core/stream_traits.hpp>
#include <boost/beast/http/dynamic_body.hpp>
@@ -217,8 +215,8 @@ BaseHTTPPeer<Handler, Impl>::BaseHTTPPeer(
ConstBufferSequence const& buffers)
: port_(port)
, handler_(handler)
, work_(boost::asio::make_work_guard(executor))
, strand_(boost::asio::make_strand(executor))
, work_(executor)
, strand_(executor)
, remote_address_(remote_address)
, journal_(journal)
{
@@ -358,7 +356,7 @@ BaseHTTPPeer<Handler, Impl>::on_write(
return;
if (graceful_)
return do_close();
util::spawn(
boost::asio::spawn(
strand_,
std::bind(
&BaseHTTPPeer<Handler, Impl>::do_read,
@@ -377,7 +375,7 @@ BaseHTTPPeer<Handler, Impl>::do_writer(
{
auto const p = impl().shared_from_this();
resume = std::function<void(void)>([this, p, writer, keep_alive]() {
util::spawn(
boost::asio::spawn(
strand_,
std::bind(
&BaseHTTPPeer<Handler, Impl>::do_writer,
@@ -408,7 +406,7 @@ BaseHTTPPeer<Handler, Impl>::do_writer(
if (!keep_alive)
return do_close();
util::spawn(
boost::asio::spawn(
strand_,
std::bind(
&BaseHTTPPeer<Handler, Impl>::do_read,
@@ -450,14 +448,14 @@ BaseHTTPPeer<Handler, Impl>::write(
std::shared_ptr<Writer> const& writer,
bool keep_alive)
{
util::spawn(
boost::asio::spawn(bind_executor(
strand_,
std::bind(
&BaseHTTPPeer<Handler, Impl>::do_writer,
impl().shared_from_this(),
writer,
keep_alive,
std::placeholders::_1));
std::placeholders::_1)));
}
// DEPRECATED
@@ -492,12 +490,12 @@ BaseHTTPPeer<Handler, Impl>::complete()
}
// keep-alive
util::spawn(
boost::asio::spawn(bind_executor(
strand_,
std::bind(
&BaseHTTPPeer<Handler, Impl>::do_read,
impl().shared_from_this(),
std::placeholders::_1));
std::placeholders::_1)));
}
// DEPRECATED

View File

@@ -91,8 +91,8 @@ BasePeer<Handler, Impl>::BasePeer(
return "##" + std::to_string(++id) + " ";
}())
, j_(sink_)
, work_(boost::asio::make_work_guard(executor))
, strand_(boost::asio::make_strand(executor))
, work_(executor)
, strand_(executor)
{
}

View File

@@ -29,7 +29,6 @@
#include <xrpl/server/detail/BasePeer.h>
#include <xrpl/server/detail/LowestLayer.h>
#include <boost/asio/error.hpp>
#include <boost/beast/core/multi_buffer.hpp>
#include <boost/beast/http/message.hpp>
#include <boost/beast/websocket.hpp>
@@ -421,17 +420,11 @@ BaseWSPeer<Handler, Impl>::start_timer()
// Max seconds without completing a message
static constexpr std::chrono::seconds timeout{30};
static constexpr std::chrono::seconds timeoutLocal{3};
try
{
timer_.expires_after(
remote_endpoint().address().is_loopback() ? timeoutLocal : timeout);
}
catch (boost::system::system_error const& e)
{
return fail(e.code(), "start_timer");
}
error_code ec;
timer_.expires_from_now(
remote_endpoint().address().is_loopback() ? timeoutLocal : timeout, ec);
if (ec)
return fail(ec, "start_timer");
timer_.async_wait(bind_executor(
strand_,
std::bind(
@@ -445,14 +438,8 @@ template <class Handler, class Impl>
void
BaseWSPeer<Handler, Impl>::cancel_timer()
{
try
{
timer_.cancel();
}
catch (boost::system::system_error const&)
{
// ignored
}
error_code ec;
timer_.cancel(ec);
}
template <class Handler, class Impl>

View File

@@ -30,29 +30,15 @@
#include <boost/asio/buffer.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/beast/core/detect_ssl.hpp>
#include <boost/beast/core/multi_buffer.hpp>
#include <boost/beast/core/tcp_stream.hpp>
#include <boost/container/flat_map.hpp>
#include <boost/predef.h>
#if !BOOST_OS_WINDOWS
#include <sys/resource.h>
#include <dirent.h>
#include <unistd.h>
#endif
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <functional>
#include <memory>
#include <optional>
#include <sstream>
namespace ripple {
@@ -83,7 +69,7 @@ private:
stream_type stream_;
socket_type& socket_;
endpoint_type remote_address_;
boost::asio::strand<boost::asio::io_context::executor_type> strand_;
boost::asio::io_context::strand strand_;
beast::Journal const j_;
public:
@@ -109,30 +95,13 @@ private:
Handler& handler_;
boost::asio::io_context& ioc_;
acceptor_type acceptor_;
boost::asio::strand<boost::asio::io_context::executor_type> strand_;
boost::asio::io_context::strand strand_;
bool ssl_;
bool plain_;
static constexpr std::chrono::milliseconds INITIAL_ACCEPT_DELAY{50};
static constexpr std::chrono::milliseconds MAX_ACCEPT_DELAY{2000};
std::chrono::milliseconds accept_delay_{INITIAL_ACCEPT_DELAY};
boost::asio::steady_timer backoff_timer_;
static constexpr double FREE_FD_THRESHOLD = 0.70;
struct FDStats
{
std::uint64_t used{0};
std::uint64_t limit{0};
};
void
reOpen();
std::optional<FDStats>
query_fd_stats() const;
bool
should_throttle_for_fds();
public:
Door(
Handler& handler,
@@ -186,7 +155,7 @@ Door<Handler>::Detector::Detector(
, stream_(std::move(stream))
, socket_(stream_.socket())
, remote_address_(remote_address)
, strand_(boost::asio::make_strand(ioc_))
, strand_(ioc_)
, j_(j)
{
}
@@ -195,7 +164,7 @@ template <class Handler>
void
Door<Handler>::Detector::run()
{
util::spawn(
boost::asio::spawn(
strand_,
std::bind(
&Detector::do_detect,
@@ -300,7 +269,7 @@ Door<Handler>::reOpen()
Throw<std::exception>();
}
acceptor_.listen(boost::asio::socket_base::max_listen_connections, ec);
acceptor_.listen(boost::asio::socket_base::max_connections, ec);
if (ec)
{
JLOG(j_.error()) << "Listen on port '" << port_.name
@@ -322,7 +291,7 @@ Door<Handler>::Door(
, handler_(handler)
, ioc_(io_context)
, acceptor_(io_context)
, strand_(boost::asio::make_strand(io_context))
, strand_(io_context)
, ssl_(
port_.protocol.count("https") > 0 ||
port_.protocol.count("wss") > 0 || port_.protocol.count("wss2") > 0 ||
@@ -330,7 +299,6 @@ Door<Handler>::Door(
, plain_(
port_.protocol.count("http") > 0 || port_.protocol.count("ws") > 0 ||
port_.protocol.count("ws2"))
, backoff_timer_(io_context)
{
reOpen();
}
@@ -339,7 +307,7 @@ template <class Handler>
void
Door<Handler>::run()
{
util::spawn(
boost::asio::spawn(
strand_,
std::bind(
&Door<Handler>::do_accept,
@@ -352,10 +320,8 @@ void
Door<Handler>::close()
{
if (!strand_.running_in_this_thread())
return boost::asio::post(
strand_,
return strand_.post(
std::bind(&Door<Handler>::close, this->shared_from_this()));
backoff_timer_.cancel();
error_code ec;
acceptor_.close(ec);
}
@@ -401,17 +367,6 @@ Door<Handler>::do_accept(boost::asio::yield_context do_yield)
{
while (acceptor_.is_open())
{
if (should_throttle_for_fds())
{
backoff_timer_.expires_after(accept_delay_);
boost::system::error_code tec;
backoff_timer_.async_wait(do_yield[tec]);
accept_delay_ = std::min(accept_delay_ * 2, MAX_ACCEPT_DELAY);
JLOG(j_.warn()) << "Throttling do_accept for "
<< accept_delay_.count() << "ms.";
continue;
}
error_code ec;
endpoint_type remote_address;
stream_type stream(ioc_);
@@ -421,28 +376,15 @@ Door<Handler>::do_accept(boost::asio::yield_context do_yield)
{
if (ec == boost::asio::error::operation_aborted)
break;
if (ec == boost::asio::error::no_descriptors ||
ec == boost::asio::error::no_buffer_space)
JLOG(j_.error()) << "accept: " << ec.message();
if (ec == boost::asio::error::no_descriptors)
{
JLOG(j_.warn()) << "accept: Too many open files. Pausing for "
<< accept_delay_.count() << "ms.";
backoff_timer_.expires_after(accept_delay_);
boost::system::error_code tec;
backoff_timer_.async_wait(do_yield[tec]);
accept_delay_ = std::min(accept_delay_ * 2, MAX_ACCEPT_DELAY);
}
else
{
JLOG(j_.error()) << "accept error: " << ec.message();
JLOG(j_.info()) << "re-opening acceptor";
reOpen();
}
continue;
}
accept_delay_ = INITIAL_ACCEPT_DELAY;
if (ssl_ && plain_)
{
if (auto sp = ios().template emplace<Detector>(
@@ -465,60 +407,6 @@ Door<Handler>::do_accept(boost::asio::yield_context do_yield)
}
}
template <class Handler>
std::optional<typename Door<Handler>::FDStats>
Door<Handler>::query_fd_stats() const
{
#if BOOST_OS_WINDOWS
return std::nullopt;
#else
FDStats s;
struct rlimit rl;
if (getrlimit(RLIMIT_NOFILE, &rl) != 0 || rl.rlim_cur == RLIM_INFINITY)
return std::nullopt;
s.limit = static_cast<std::uint64_t>(rl.rlim_cur);
#if BOOST_OS_LINUX
constexpr char const* kFdDir = "/proc/self/fd";
#else
constexpr char const* kFdDir = "/dev/fd";
#endif
if (DIR* d = ::opendir(kFdDir))
{
std::uint64_t cnt = 0;
while (::readdir(d) != nullptr)
++cnt;
::closedir(d);
// readdir counts '.', '..', and the DIR* itself shows in the list
s.used = (cnt >= 3) ? (cnt - 3) : 0;
return s;
}
return std::nullopt;
#endif
}
template <class Handler>
bool
Door<Handler>::should_throttle_for_fds()
{
#if BOOST_OS_WINDOWS
return false;
#else
auto const stats = query_fd_stats();
if (!stats || stats->limit == 0)
return false;
auto const& s = *stats;
auto const free = (s.limit > s.used) ? (s.limit - s.used) : 0ull;
double const free_ratio =
static_cast<double>(free) / static_cast<double>(s.limit);
if (free_ratio < FREE_FD_THRESHOLD)
{
return true;
}
return false;
#endif
}
} // namespace ripple
#endif

View File

@@ -105,7 +105,7 @@ PlainHTTPPeer<Handler>::run()
{
if (!this->handler_.onAccept(this->session(), this->remote_address_))
{
util::spawn(
boost::asio::spawn(
this->strand_,
std::bind(&PlainHTTPPeer::do_close, this->shared_from_this()));
return;
@@ -114,7 +114,7 @@ PlainHTTPPeer<Handler>::run()
if (!socket_.is_open())
return;
util::spawn(
boost::asio::spawn(
this->strand_,
std::bind(
&PlainHTTPPeer::do_read,

View File

@@ -115,14 +115,14 @@ SSLHTTPPeer<Handler>::run()
{
if (!this->handler_.onAccept(this->session(), this->remote_address_))
{
util::spawn(
boost::asio::spawn(
this->strand_,
std::bind(&SSLHTTPPeer::do_close, this->shared_from_this()));
return;
}
if (!socket_.is_open())
return;
util::spawn(
boost::asio::spawn(
this->strand_,
std::bind(
&SSLHTTPPeer::do_handshake,
@@ -164,7 +164,7 @@ SSLHTTPPeer<Handler>::do_handshake(yield_context do_yield)
this->port().protocol.count("https") > 0;
if (http)
{
util::spawn(
boost::asio::spawn(
this->strand_,
std::bind(
&SSLHTTPPeer::do_read,

View File

@@ -26,8 +26,6 @@
#include <xrpl/server/detail/io_list.h>
#include <boost/asio.hpp>
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <array>
#include <chrono>
@@ -87,11 +85,9 @@ private:
Handler& handler_;
beast::Journal const j_;
boost::asio::io_context& io_context_;
boost::asio::strand<boost::asio::io_context::executor_type> strand_;
std::optional<boost::asio::executor_work_guard<
boost::asio::io_context::executor_type>>
work_;
boost::asio::io_service& io_service_;
boost::asio::io_service::strand strand_;
std::optional<boost::asio::io_service::work> work_;
std::mutex m_;
std::vector<Port> ports_;
@@ -104,7 +100,7 @@ private:
public:
ServerImpl(
Handler& handler,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
beast::Journal journal);
~ServerImpl();
@@ -127,10 +123,10 @@ public:
return ios_;
}
boost::asio::io_context&
get_io_context()
boost::asio::io_service&
get_io_service()
{
return io_context_;
return io_service_;
}
bool
@@ -144,13 +140,13 @@ private:
template <class Handler>
ServerImpl<Handler>::ServerImpl(
Handler& handler,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
beast::Journal journal)
: handler_(handler)
, j_(journal)
, io_context_(io_context)
, strand_(boost::asio::make_strand(io_context_))
, work_(std::in_place, boost::asio::make_work_guard(io_context_))
, io_service_(io_service)
, strand_(io_service_)
, work_(io_service_)
{
}
@@ -177,7 +173,7 @@ ServerImpl<Handler>::ports(std::vector<Port> const& ports)
ports_.push_back(port);
auto& internalPort = ports_.back();
if (auto sp = ios_.emplace<Door<Handler>>(
handler_, io_context_, internalPort, j_))
handler_, io_service_, internalPort, j_))
{
list_.push_back(sp);

View File

@@ -1,108 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright(c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SERVER_SPAWN_H_INCLUDED
#define RIPPLE_SERVER_SPAWN_H_INCLUDED
#include <xrpl/basics/Log.h>
#include <boost/asio/spawn.hpp>
#include <boost/asio/strand.hpp>
#include <concepts>
#include <type_traits>
namespace ripple::util {
namespace impl {
template <typename T>
concept IsStrand = std::same_as<
std::decay_t<T>,
boost::asio::strand<typename std::decay_t<T>::inner_executor_type>>;
/**
* @brief A completion handler that restores `boost::asio::spawn`'s behaviour
* from Boost 1.83
*
* This is intended to be passed as the third argument to `boost::asio::spawn`
* so that exceptions are not ignored but propagated to `io_context.run()` call
* site.
*
* @param ePtr The exception that was caught on the coroutine
*/
inline constexpr auto kPROPAGATE_EXCEPTIONS = [](std::exception_ptr ePtr) {
if (ePtr)
{
try
{
std::rethrow_exception(ePtr);
}
catch (std::exception const& e)
{
JLOG(debugLog().warn()) << "Spawn exception: " << e.what();
throw;
}
catch (...)
{
JLOG(debugLog().warn()) << "Spawn exception: Unknown";
throw;
}
}
};
} // namespace impl
/**
* @brief Spawns a coroutine using `boost::asio::spawn`
*
* @note This uses kPROPAGATE_EXCEPTIONS to force asio to propagate exceptions
* through `io_context`
* @note Since implicit strand was removed from boost::asio::spawn this helper
* function adds the strand back
*
* @tparam Ctx The type of the context/strand
* @tparam F The type of the function to execute
* @param ctx The execution context
* @param func The function to execute. Must return `void`
*/
template <typename Ctx, typename F>
requires std::is_invocable_r_v<void, F, boost::asio::yield_context>
void
spawn(Ctx&& ctx, F&& func)
{
if constexpr (impl::IsStrand<Ctx>)
{
boost::asio::spawn(
std::forward<Ctx>(ctx),
std::forward<F>(func),
impl::kPROPAGATE_EXCEPTIONS);
}
else
{
boost::asio::spawn(
boost::asio::make_strand(
boost::asio::get_associated_executor(std::forward<Ctx>(ctx))),
std::forward<F>(func),
impl::kPROPAGATE_EXCEPTIONS);
}
}
} // namespace ripple::util
#endif

View File

@@ -166,7 +166,7 @@ public:
May be called concurrently.
Preconditions:
No call to io_context::run on any io_context
No call to io_service::run on any io_service
used by work objects associated with this io_list
exists in the caller's call stack.
*/

View File

@@ -49,7 +49,7 @@ Section::append(std::vector<std::string> const& lines)
// <key> '=' <value>
static boost::regex const re1(
"^" // start of line
"(?:\\s*)" // whitespace (optonal)
"(?:\\s*)" // whitespace (optional)
"([a-zA-Z][_a-zA-Z0-9]*)" // <key>
"(?:\\s*)" // whitespace (optional)
"(?:=)" // '='

View File

@@ -93,6 +93,18 @@ public:
// tie, round towards even.
int
round() noexcept;
// Modify the result to the correctly rounded value
void
doRoundUp(rep& mantissa, int& exponent, std::string location);
// Modify the result to the correctly rounded value
void
doRoundDown(rep& mantissa, int& exponent);
// Modify the result to the correctly rounded value
void
doRound(rep& drops);
};
inline void
@@ -170,6 +182,61 @@ Number::Guard::round() noexcept
return 0;
}
void
Number::Guard::doRoundUp(rep& mantissa, int& exponent, std::string location)
{
auto r = round();
if (r == 1 || (r == 0 && (mantissa & 1) == 1))
{
++mantissa;
if (mantissa > maxMantissa)
{
mantissa /= 10;
++exponent;
}
}
if (exponent < minExponent)
{
mantissa = 0;
exponent = Number{}.exponent_;
}
if (exponent > maxExponent)
throw std::overflow_error(location);
}
void
Number::Guard::doRoundDown(rep& mantissa, int& exponent)
{
auto r = round();
if (r == 1 || (r == 0 && (mantissa & 1) == 1))
{
--mantissa;
if (mantissa < minMantissa)
{
mantissa *= 10;
--exponent;
}
}
if (exponent < minExponent)
{
mantissa = 0;
exponent = Number{}.exponent_;
}
}
// Modify the result to the correctly rounded value
void
Number::Guard::doRound(rep& drops)
{
auto r = round();
if (r == 1 || (r == 0 && (drops & 1) == 1))
{
++drops;
}
if (is_negative())
drops = -drops;
}
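The condition shared by these helpers, `r == 1 || (r == 0 && (m & 1) == 1)`, is round-half-to-even: round up when the discarded digits exceed half a unit, and break exact ties toward the even mantissa. A small sketch, assuming the default to_nearest rounding mode:

auto const a = static_cast<Number::rep>(Number{25, -1});  // 2.5 -> 2
auto const b = static_cast<Number::rep>(Number{35, -1});  // 3.5 -> 4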
// Number
constexpr Number one{1000000000000000, -15, Number::unchecked{}};
@@ -209,18 +276,7 @@ Number::normalize()
return;
}
auto r = g.round();
if (r == 1 || (r == 0 && (mantissa_ & 1) == 1))
{
++mantissa_;
if (mantissa_ > maxMantissa)
{
mantissa_ /= 10;
++exponent_;
}
}
if (exponent_ > maxExponent)
throw std::overflow_error("Number::normalize 2");
g.doRoundUp(mantissa_, exponent_, "Number::normalize 2");
if (negative)
mantissa_ = -mantissa_;
@@ -292,18 +348,7 @@ Number::operator+=(Number const& y)
xm /= 10;
++xe;
}
auto r = g.round();
if (r == 1 || (r == 0 && (xm & 1) == 1))
{
++xm;
if (xm > maxMantissa)
{
xm /= 10;
++xe;
}
}
if (xe > maxExponent)
throw std::overflow_error("Number::addition overflow");
g.doRoundUp(xm, xe, "Number::addition overflow");
}
else
{
@@ -323,21 +368,7 @@ Number::operator+=(Number const& y)
xm -= g.pop();
--xe;
}
auto r = g.round();
if (r == 1 || (r == 0 && (xm & 1) == 1))
{
--xm;
if (xm < minMantissa)
{
xm *= 10;
--xe;
}
}
if (xe < minExponent)
{
xm = 0;
xe = Number{}.exponent_;
}
g.doRoundDown(xm, xe);
}
mantissa_ = xm * xn;
exponent_ = xe;
@@ -417,25 +448,10 @@ Number::operator*=(Number const& y)
}
xm = static_cast<rep>(zm);
xe = ze;
auto r = g.round();
if (r == 1 || (r == 0 && (xm & 1) == 1))
{
++xm;
if (xm > maxMantissa)
{
xm /= 10;
++xe;
}
}
if (xe < minExponent)
{
xm = 0;
xe = Number{}.exponent_;
}
if (xe > maxExponent)
throw std::overflow_error(
"Number::multiplication overflow : exponent is " +
std::to_string(xe));
g.doRoundUp(
xm,
xe,
"Number::multiplication overflow : exponent is " + std::to_string(xe));
mantissa_ = xm * zn;
exponent_ = xe;
XRPL_ASSERT(
@@ -500,17 +516,29 @@ Number::operator rep() const
throw std::overflow_error("Number::operator rep() overflow");
drops *= 10;
}
auto r = g.round();
if (r == 1 || (r == 0 && (drops & 1) == 1))
{
++drops;
}
if (g.is_negative())
drops = -drops;
g.doRound(drops);
}
return drops;
}
Number
Number::truncate() const noexcept
{
if (exponent_ >= 0 || mantissa_ == 0)
return *this;
Number ret = *this;
while (ret.exponent_ < 0 && ret.mantissa_ != 0)
{
ret.exponent_ += 1;
ret.mantissa_ /= rep(10);
}
// We are guaranteed that normalize() will never throw an exception
// because exponent is either negative or zero at this point.
ret.normalize();
return ret;
}
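A hedged usage sketch of the new truncate(): it drops fractional digits toward zero regardless of the current rounding mode (values illustrative):

Number const x{12345, -2};      // 123.45
Number const t = x.truncate();  // 123
Number const y{-12345, -2};     // -123.45
Number const u = y.truncate();  // -123, truncation is toward zero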
std::string
to_string(Number const& amount)
{

View File

@@ -25,9 +25,8 @@
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <boost/asio/bind_executor.hpp>
#include <boost/asio/error.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/system/detail/error_code.hpp>
@@ -125,8 +124,8 @@ public:
beast::Journal m_journal;
boost::asio::io_context& m_io_context;
boost::asio::strand<boost::asio::io_context::executor_type> m_strand;
boost::asio::io_service& m_io_service;
boost::asio::io_service::strand m_strand;
boost::asio::ip::tcp::resolver m_resolver;
std::condition_variable m_cv;
@@ -156,12 +155,12 @@ public:
std::deque<Work> m_work;
ResolverAsioImpl(
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
beast::Journal journal)
: m_journal(journal)
, m_io_context(io_context)
, m_strand(boost::asio::make_strand(io_context))
, m_resolver(io_context)
, m_io_service(io_service)
, m_strand(io_service)
, m_resolver(io_service)
, m_asyncHandlersCompleted(true)
, m_stop_called(false)
, m_stopped(true)
@@ -217,14 +216,8 @@ public:
{
if (m_stop_called.exchange(true) == false)
{
boost::asio::dispatch(
m_io_context,
boost::asio::bind_executor(
m_strand,
std::bind(
&ResolverAsioImpl::do_stop,
this,
CompletionCounter(this))));
m_io_service.dispatch(m_strand.wrap(std::bind(
&ResolverAsioImpl::do_stop, this, CompletionCounter(this))));
JLOG(m_journal.debug()) << "Queued a stop request";
}
@@ -255,16 +248,12 @@ public:
// TODO NIKB use rvalue references to construct and move
// reducing cost.
boost::asio::dispatch(
m_io_context,
boost::asio::bind_executor(
m_strand,
std::bind(
&ResolverAsioImpl::do_resolve,
this,
names,
handler,
CompletionCounter(this))));
m_io_service.dispatch(m_strand.wrap(std::bind(
&ResolverAsioImpl::do_resolve,
this,
names,
handler,
CompletionCounter(this))));
}
//-------------------------------------------------------------------------
@@ -290,20 +279,19 @@ public:
std::string name,
boost::system::error_code const& ec,
HandlerType handler,
boost::asio::ip::tcp::resolver::results_type results,
boost::asio::ip::tcp::resolver::iterator iter,
CompletionCounter)
{
if (ec == boost::asio::error::operation_aborted)
return;
std::vector<beast::IP::Endpoint> addresses;
auto iter = results.begin();
// If we get an error message back, we don't return any
// results that we may have gotten.
if (!ec)
{
while (iter != results.end())
while (iter != boost::asio::ip::tcp::resolver::iterator())
{
addresses.push_back(
beast::IPAddressConversion::from_asio(*iter));
@@ -313,14 +301,8 @@ public:
handler(name, addresses);
boost::asio::post(
m_io_context,
boost::asio::bind_executor(
m_strand,
std::bind(
&ResolverAsioImpl::do_work,
this,
CompletionCounter(this))));
m_io_service.post(m_strand.wrap(std::bind(
&ResolverAsioImpl::do_work, this, CompletionCounter(this))));
}
HostAndPort
@@ -401,21 +383,16 @@ public:
{
JLOG(m_journal.error()) << "Unable to parse '" << name << "'";
boost::asio::post(
m_io_context,
boost::asio::bind_executor(
m_strand,
std::bind(
&ResolverAsioImpl::do_work,
this,
CompletionCounter(this))));
m_io_service.post(m_strand.wrap(std::bind(
&ResolverAsioImpl::do_work, this, CompletionCounter(this))));
return;
}
boost::asio::ip::tcp::resolver::query query(host, port);
m_resolver.async_resolve(
host,
port,
query,
std::bind(
&ResolverAsioImpl::do_finish,
this,
@@ -446,14 +423,10 @@ public:
if (m_work.size() > 0)
{
boost::asio::post(
m_io_context,
boost::asio::bind_executor(
m_strand,
std::bind(
&ResolverAsioImpl::do_work,
this,
CompletionCounter(this))));
m_io_service.post(m_strand.wrap(std::bind(
&ResolverAsioImpl::do_work,
this,
CompletionCounter(this))));
}
}
}
@@ -462,9 +435,9 @@ public:
//-----------------------------------------------------------------------------
std::unique_ptr<ResolverAsio>
ResolverAsio::New(boost::asio::io_context& io_context, beast::Journal journal)
ResolverAsio::New(boost::asio::io_service& io_service, beast::Journal journal)
{
return std::make_unique<ResolverAsioImpl>(io_context, journal);
return std::make_unique<ResolverAsioImpl>(io_service, journal);
}
//-----------------------------------------------------------------------------

View File

@@ -30,11 +30,9 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <boost/asio/basic_waitable_timer.hpp>
#include <boost/asio/bind_executor.hpp>
#include <boost/asio/buffer.hpp>
#include <boost/asio/error.hpp>
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/udp.hpp>
#include <boost/asio/strand.hpp>
#include <boost/system/detail/error_code.hpp>
@@ -240,11 +238,9 @@ private:
Journal m_journal;
IP::Endpoint m_address;
std::string m_prefix;
boost::asio::io_context m_io_context;
std::optional<boost::asio::executor_work_guard<
boost::asio::io_context::executor_type>>
m_work;
boost::asio::strand<boost::asio::io_context::executor_type> m_strand;
boost::asio::io_service m_io_service;
std::optional<boost::asio::io_service::work> m_work;
boost::asio::io_service::strand m_strand;
boost::asio::basic_waitable_timer<std::chrono::steady_clock> m_timer;
boost::asio::ip::udp::socket m_socket;
std::deque<std::string> m_data;
@@ -268,24 +264,18 @@ public:
: m_journal(journal)
, m_address(address)
, m_prefix(prefix)
, m_work(boost::asio::make_work_guard(m_io_context))
, m_strand(boost::asio::make_strand(m_io_context))
, m_timer(m_io_context)
, m_socket(m_io_context)
, m_work(std::ref(m_io_service))
, m_strand(m_io_service)
, m_timer(m_io_service)
, m_socket(m_io_service)
, m_thread(&StatsDCollectorImp::run, this)
{
}
~StatsDCollectorImp() override
{
try
{
m_timer.cancel();
}
catch (boost::system::system_error const&)
{
// ignored
}
boost::system::error_code ec;
m_timer.cancel(ec);
m_work.reset();
m_thread.join();
@@ -344,10 +334,10 @@ public:
//--------------------------------------------------------------------------
boost::asio::io_context&
get_io_context()
boost::asio::io_service&
get_io_service()
{
return m_io_context;
return m_io_service;
}
std::string const&
@@ -365,14 +355,8 @@ public:
void
post_buffer(std::string&& buffer)
{
boost::asio::dispatch(
m_io_context,
boost::asio::bind_executor(
m_strand,
std::bind(
&StatsDCollectorImp::do_post_buffer,
this,
std::move(buffer))));
m_io_service.dispatch(m_strand.wrap(std::bind(
&StatsDCollectorImp::do_post_buffer, this, std::move(buffer))));
}
// The keepAlive parameter makes sure the buffers sent to
@@ -402,7 +386,8 @@ public:
for (auto const& buffer : buffers)
{
std::string const s(
buffer.data(), boost::asio::buffer_size(buffer));
boost::asio::buffer_cast<char const*>(buffer),
boost::asio::buffer_size(buffer));
std::cerr << s;
}
std::cerr << '\n';
@@ -471,7 +456,7 @@ public:
set_timer()
{
using namespace std::chrono_literals;
m_timer.expires_after(1s);
m_timer.expires_from_now(1s);
m_timer.async_wait(std::bind(
&StatsDCollectorImp::on_timer, this, std::placeholders::_1));
}
@@ -513,13 +498,13 @@ public:
set_timer();
m_io_context.run();
m_io_service.run();
m_socket.shutdown(boost::asio::ip::udp::socket::shutdown_send, ec);
m_socket.close();
m_io_context.poll();
m_io_service.poll();
}
};
@@ -562,12 +547,10 @@ StatsDCounterImpl::~StatsDCounterImpl()
void
StatsDCounterImpl::increment(CounterImpl::value_type amount)
{
boost::asio::dispatch(
m_impl->get_io_context(),
std::bind(
&StatsDCounterImpl::do_increment,
std::static_pointer_cast<StatsDCounterImpl>(shared_from_this()),
amount));
m_impl->get_io_service().dispatch(std::bind(
&StatsDCounterImpl::do_increment,
std::static_pointer_cast<StatsDCounterImpl>(shared_from_this()),
amount));
}
void
@@ -609,12 +592,10 @@ StatsDEventImpl::StatsDEventImpl(
void
StatsDEventImpl::notify(EventImpl::value_type const& value)
{
boost::asio::dispatch(
m_impl->get_io_context(),
std::bind(
&StatsDEventImpl::do_notify,
std::static_pointer_cast<StatsDEventImpl>(shared_from_this()),
value));
m_impl->get_io_service().dispatch(std::bind(
&StatsDEventImpl::do_notify,
std::static_pointer_cast<StatsDEventImpl>(shared_from_this()),
value));
}
void
@@ -644,23 +625,19 @@ StatsDGaugeImpl::~StatsDGaugeImpl()
void
StatsDGaugeImpl::set(GaugeImpl::value_type value)
{
boost::asio::dispatch(
m_impl->get_io_context(),
std::bind(
&StatsDGaugeImpl::do_set,
std::static_pointer_cast<StatsDGaugeImpl>(shared_from_this()),
value));
m_impl->get_io_service().dispatch(std::bind(
&StatsDGaugeImpl::do_set,
std::static_pointer_cast<StatsDGaugeImpl>(shared_from_this()),
value));
}
void
StatsDGaugeImpl::increment(GaugeImpl::difference_type amount)
{
boost::asio::dispatch(
m_impl->get_io_context(),
std::bind(
&StatsDGaugeImpl::do_increment,
std::static_pointer_cast<StatsDGaugeImpl>(shared_from_this()),
amount));
m_impl->get_io_service().dispatch(std::bind(
&StatsDGaugeImpl::do_increment,
std::static_pointer_cast<StatsDGaugeImpl>(shared_from_this()),
amount));
}
void
@@ -736,12 +713,10 @@ StatsDMeterImpl::~StatsDMeterImpl()
void
StatsDMeterImpl::increment(MeterImpl::value_type amount)
{
boost::asio::dispatch(
m_impl->get_io_context(),
std::bind(
&StatsDMeterImpl::do_increment,
std::static_pointer_cast<StatsDMeterImpl>(shared_from_this()),
amount));
m_impl->get_io_service().dispatch(std::bind(
&StatsDMeterImpl::do_increment,
std::static_pointer_cast<StatsDMeterImpl>(shared_from_this()),
amount));
}
void

View File

@@ -25,11 +25,11 @@ namespace IP {
bool
is_private(AddressV4 const& addr)
{
return ((addr.to_uint() & 0xff000000) ==
return ((addr.to_ulong() & 0xff000000) ==
0x0a000000) || // Prefix /8, 10. #.#.#
((addr.to_uint() & 0xfff00000) ==
((addr.to_ulong() & 0xfff00000) ==
0xac100000) || // Prefix /12 172. 16.#.# - 172.31.#.#
((addr.to_uint() & 0xffff0000) ==
((addr.to_ulong() & 0xffff0000) ==
0xc0a80000) || // Prefix /16 192.168.#.#
addr.is_loopback();
}
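A quick sanity check of the prefix masks (comment-only illustration):

// 10.1.2.3   -> (0x0a010203 & 0xff000000) == 0x0a000000  -> private
// 172.16.0.1 -> (0xac100001 & 0xfff00000) == 0xac100000  -> private
// 8.8.8.8    -> matches no prefix and is not loopback    -> public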
@@ -44,7 +44,7 @@ char
get_class(AddressV4 const& addr)
{
static char const* table = "AAAABBCD";
return table[(addr.to_uint() & 0xE0000000) >> 29];
return table[(addr.to_ulong() & 0xE0000000) >> 29];
}
} // namespace IP

View File

@@ -20,8 +20,6 @@
#include <xrpl/beast/net/IPAddressV4.h>
#include <xrpl/beast/net/IPAddressV6.h>
#include <boost/asio/ip/address_v4.hpp>
namespace beast {
namespace IP {
@@ -30,9 +28,7 @@ is_private(AddressV6 const& addr)
{
return (
(addr.to_bytes()[0] & 0xfd) || // TODO fc00::/8 too ?
(addr.is_v4_mapped() &&
is_private(boost::asio::ip::make_address_v4(
boost::asio::ip::v4_mapped, addr))));
(addr.is_v4_mapped() && is_private(addr.to_v4())));
}
bool

View File

@@ -21,8 +21,6 @@
#include <xrpl/beast/net/IPEndpoint.h>
#include <boost/algorithm/string/trim.hpp>
#include <boost/asio/ip/address.hpp>
#include <boost/asio/ip/address_v4.hpp>
#include <boost/system/detail/error_code.hpp>
#include <cctype>
@@ -169,7 +167,7 @@ operator>>(std::istream& is, Endpoint& endpoint)
}
boost::system::error_code ec;
auto addr = boost::asio::ip::make_address(addrStr, ec);
auto addr = Address::from_string(addrStr, ec);
if (ec)
{
is.setstate(std::ios_base::failbit);

View File

@@ -33,9 +33,6 @@
namespace Json {
Value const Value::null;
Int const Value::minInt = Int(~(UInt(-1) / 2));
Int const Value::maxInt = Int(UInt(-1) / 2);
UInt const Value::maxUInt = UInt(-1);
class DefaultValueAllocator : public ValueAllocator
{
@@ -569,6 +566,69 @@ Value::asInt() const
return 0; // unreachable;
}
UInt
Value::asAbsUInt() const
{
switch (type_)
{
case nullValue:
return 0;
case intValue: {
// Doing this conversion through int64 avoids overflow error for
// value_.int_ = -1 * 2^31 i.e. numeric_limits<int>::min().
if (value_.int_ < 0)
return static_cast<std::int64_t>(value_.int_) * -1;
return value_.int_;
}
case uintValue:
return value_.uint_;
case realValue: {
if (value_.real_ < 0)
{
JSON_ASSERT_MESSAGE(
-1 * value_.real_ <= maxUInt,
"Real out of unsigned integer range");
return UInt(-1 * value_.real_);
}
JSON_ASSERT_MESSAGE(
value_.real_ <= maxUInt, "Real out of unsigned integer range");
return UInt(value_.real_);
}
case booleanValue:
return value_.bool_ ? 1 : 0;
case stringValue: {
char const* const str{value_.string_ ? value_.string_ : ""};
auto const temp = beast::lexicalCastThrow<std::int64_t>(str);
if (temp < 0)
{
JSON_ASSERT_MESSAGE(
-1 * temp <= maxUInt,
"String out of unsigned integer range");
return -1 * temp;
}
JSON_ASSERT_MESSAGE(
temp <= maxUInt, "String out of unsigned integer range");
return temp;
}
case arrayValue:
case objectValue:
JSON_ASSERT_MESSAGE(false, "Type is not convertible to uint");
// LCOV_EXCL_START
default:
UNREACHABLE("Json::Value::asAbsInt : invalid type");
// LCOV_EXCL_STOP
}
return 0; // unreachable;
}
Value::UInt
Value::asUInt() const
{
@@ -1001,6 +1061,12 @@ Value::isMember(std::string const& key) const
return isMember(key.c_str());
}
bool
Value::isMember(StaticString const& key) const
{
return isMember(key.c_str());
}
Value::Members
Value::getMemberNames() const
{

View File

@@ -22,8 +22,154 @@
#include <xrpl/ledger/ApplyView.h>
#include <xrpl/protocol/Protocol.h>
#include <limits>
#include <type_traits>
namespace ripple {
namespace directory {
std::uint64_t
createRoot(
ApplyView& view,
Keylet const& directory,
uint256 const& key,
std::function<void(std::shared_ptr<SLE> const&)> const& describe)
{
auto newRoot = std::make_shared<SLE>(directory);
newRoot->setFieldH256(sfRootIndex, directory.key);
describe(newRoot);
STVector256 v;
v.push_back(key);
newRoot->setFieldV256(sfIndexes, v);
view.insert(newRoot);
return std::uint64_t{0};
}
auto
findPreviousPage(ApplyView& view, Keylet const& directory, SLE::ref start)
{
std::uint64_t page = start->getFieldU64(sfIndexPrevious);
auto node = start;
if (page)
{
node = view.peek(keylet::page(directory, page));
if (!node)
{ // LCOV_EXCL_START
LogicError("Directory chain: root back-pointer broken.");
// LCOV_EXCL_STOP
}
}
auto indexes = node->getFieldV256(sfIndexes);
return std::make_tuple(page, node, indexes);
}
std::uint64_t
insertKey(
ApplyView& view,
SLE::ref node,
std::uint64_t page,
bool preserveOrder,
STVector256& indexes,
uint256 const& key)
{
if (preserveOrder)
{
if (std::find(indexes.begin(), indexes.end(), key) != indexes.end())
LogicError("dirInsert: double insertion"); // LCOV_EXCL_LINE
indexes.push_back(key);
}
else
{
// We can't be sure if this page is already sorted because
// it may be a legacy page we haven't yet touched. Take
// the time to sort it.
std::sort(indexes.begin(), indexes.end());
auto pos = std::lower_bound(indexes.begin(), indexes.end(), key);
if (pos != indexes.end() && key == *pos)
LogicError("dirInsert: double insertion"); // LCOV_EXCL_LINE
indexes.insert(pos, key);
}
node->setFieldV256(sfIndexes, indexes);
view.update(node);
return page;
}
std::optional<std::uint64_t>
insertPage(
ApplyView& view,
std::uint64_t page,
SLE::pointer node,
std::uint64_t nextPage,
SLE::ref next,
uint256 const& key,
Keylet const& directory,
std::function<void(std::shared_ptr<SLE> const&)> const& describe)
{
// We rely on modulo arithmetic of unsigned integers (guaranteed in
// [basic.fundamental] paragraph 2) to detect page representation overflow.
// For signed integers this would be UB, hence static_assert here.
static_assert(std::is_unsigned_v<decltype(page)>);
// Defensive check against breaking changes in compiler.
static_assert([]<typename T>(std::type_identity<T>) constexpr -> T {
T tmp = std::numeric_limits<T>::max();
return ++tmp;
}(std::type_identity<decltype(page)>{}) == 0);
++page;
// Check whether we're out of pages.
if (page == 0)
return std::nullopt;
if (!view.rules().enabled(fixDirectoryLimit) &&
page >= dirNodeMaxPages) // Old pages limit
return std::nullopt;
// We are about to create a new node; we'll link it to
// the chain first:
node->setFieldU64(sfIndexNext, page);
view.update(node);
next->setFieldU64(sfIndexPrevious, page);
view.update(next);
// Insert the new key:
STVector256 indexes;
indexes.push_back(key);
node = std::make_shared<SLE>(keylet::page(directory, page));
node->setFieldH256(sfRootIndex, directory.key);
node->setFieldV256(sfIndexes, indexes);
// Save some space by not specifying the value 0 since
// it's the default.
if (page != 1)
node->setFieldU64(sfIndexPrevious, page - 1);
XRPL_ASSERT_PARTS(
!nextPage,
"ripple::directory::insertPage",
"nextPage has default value");
/* Reserved for future use when directory pages may be inserted in
* between two other pages instead of only at the end of the chain.
if (nextPage)
node->setFieldU64(sfIndexNext, nextPage);
*/
describe(node);
view.insert(node);
return page;
}
} // namespace directory
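For the wraparound check insertPage relies on, a minimal standalone sketch (illustrative, not part of this changeset): unsigned increments wrap modulo 2^N by definition, so bumping the maximum page value yields exactly 0, which the function reports as out of pages. The static_assert mirrors the defensive compile-time check above.

// Illustrative sketch only; not part of the diff above.
#include <cstdint>
#include <iostream>
#include <limits>

// Compile-time mirror of insertPage's defensive check: wrapping is
// guaranteed for unsigned types, so max + 1 must equal 0.
static_assert(std::numeric_limits<std::uint64_t>::max() + 1 == 0);

int main()
{
    std::uint64_t page = std::numeric_limits<std::uint64_t>::max();
    ++page; // wraps to 0; for a signed type this would be UB
    if (page == 0)
        std::cout << "out of pages\n";
}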
std::optional<std::uint64_t>
ApplyView::dirAdd(
bool preserveOrder,
@@ -36,89 +182,21 @@ ApplyView::dirAdd(
if (!root)
{
// No root, make it.
root = std::make_shared<SLE>(directory);
root->setFieldH256(sfRootIndex, directory.key);
describe(root);
STVector256 v;
v.push_back(key);
root->setFieldV256(sfIndexes, v);
insert(root);
return std::uint64_t{0};
return directory::createRoot(*this, directory, key, describe);
}
std::uint64_t page = root->getFieldU64(sfIndexPrevious);
auto node = root;
if (page)
{
node = peek(keylet::page(directory, page));
if (!node)
LogicError("Directory chain: root back-pointer broken.");
}
auto indexes = node->getFieldV256(sfIndexes);
auto [page, node, indexes] =
directory::findPreviousPage(*this, directory, root);
// If there's space, we use it:
if (indexes.size() < dirNodeMaxEntries)
{
if (preserveOrder)
{
if (std::find(indexes.begin(), indexes.end(), key) != indexes.end())
LogicError("dirInsert: double insertion");
indexes.push_back(key);
}
else
{
// We can't be sure if this page is already sorted because
// it may be a legacy page we haven't yet touched. Take
// the time to sort it.
std::sort(indexes.begin(), indexes.end());
auto pos = std::lower_bound(indexes.begin(), indexes.end(), key);
if (pos != indexes.end() && key == *pos)
LogicError("dirInsert: double insertion");
indexes.insert(pos, key);
}
node->setFieldV256(sfIndexes, indexes);
update(node);
return page;
return directory::insertKey(
*this, node, page, preserveOrder, indexes, key);
}
// Check whether we're out of pages.
if (++page >= dirNodeMaxPages)
return std::nullopt;
// We are about to create a new node; we'll link it to
// the chain first:
node->setFieldU64(sfIndexNext, page);
update(node);
root->setFieldU64(sfIndexPrevious, page);
update(root);
// Insert the new key:
indexes.clear();
indexes.push_back(key);
node = std::make_shared<SLE>(keylet::page(directory, page));
node->setFieldH256(sfRootIndex, directory.key);
node->setFieldV256(sfIndexes, indexes);
// Save some space by not specifying the value 0 since
// it's the default.
if (page != 1)
node->setFieldU64(sfIndexPrevious, page - 1);
describe(node);
insert(node);
return page;
return directory::insertPage(
*this, page, node, 0, root, key, directory, describe);
}
bool
@@ -148,10 +226,10 @@ ApplyView::emptyDirDelete(Keylet const& directory)
auto nextPage = node->getFieldU64(sfIndexNext);
if (nextPage == rootPage && prevPage != rootPage)
LogicError("Directory chain: fwd link broken");
LogicError("Directory chain: fwd link broken"); // LCOV_EXCL_LINE
if (prevPage == rootPage && nextPage != rootPage)
LogicError("Directory chain: rev link broken");
LogicError("Directory chain: rev link broken"); // LCOV_EXCL_LINE
// Older versions of the code would, in some cases, allow the last
// page to be empty. Remove such pages:
@@ -160,7 +238,10 @@ ApplyView::emptyDirDelete(Keylet const& directory)
auto last = peek(keylet::page(directory, nextPage));
if (!last)
{ // LCOV_EXCL_START
LogicError("Directory chain: fwd link broken.");
// LCOV_EXCL_STOP
}
if (!last->getFieldV256(sfIndexes).empty())
return false;
@@ -232,10 +313,16 @@ ApplyView::dirRemove(
if (page == rootPage)
{
if (nextPage == page && prevPage != page)
{ // LCOV_EXCL_START
LogicError("Directory chain: fwd link broken");
// LCOV_EXCL_STOP
}
if (prevPage == page && nextPage != page)
{ // LCOV_EXCL_START
LogicError("Directory chain: rev link broken");
// LCOV_EXCL_STOP
}
// Older versions of the code would, in some cases,
// allow the last page to be empty. Remove such
@@ -244,7 +331,10 @@ ApplyView::dirRemove(
{
auto last = peek(keylet::page(directory, nextPage));
if (!last)
{ // LCOV_EXCL_START
LogicError("Directory chain: fwd link broken.");
// LCOV_EXCL_STOP
}
if (last->getFieldV256(sfIndexes).empty())
{
@@ -276,10 +366,10 @@ ApplyView::dirRemove(
// This can never happen for nodes other than the root:
if (nextPage == page)
LogicError("Directory chain: fwd link broken");
LogicError("Directory chain: fwd link broken"); // LCOV_EXCL_LINE
if (prevPage == page)
LogicError("Directory chain: rev link broken");
LogicError("Directory chain: rev link broken"); // LCOV_EXCL_LINE
// This node isn't the root, so it can either be in the
// middle of the list, or at the end. Unlink it first
@@ -287,14 +377,14 @@ ApplyView::dirRemove(
// root:
auto prev = peek(keylet::page(directory, prevPage));
if (!prev)
LogicError("Directory chain: fwd link broken.");
LogicError("Directory chain: fwd link broken."); // LCOV_EXCL_LINE
// Fix previous to point to its new next.
prev->setFieldU64(sfIndexNext, nextPage);
update(prev);
auto next = peek(keylet::page(directory, nextPage));
if (!next)
LogicError("Directory chain: rev link broken.");
LogicError("Directory chain: rev link broken."); // LCOV_EXCL_LINE
// Fix next to point to its new previous.
next->setFieldU64(sfIndexPrevious, prevPage);
update(next);
@@ -318,7 +408,10 @@ ApplyView::dirRemove(
// And the root points to the last page:
auto root = peek(keylet::page(directory, rootPage));
if (!root)
{ // LCOV_EXCL_START
LogicError("Directory chain: root link broken.");
// LCOV_EXCL_STOP
}
root->setFieldU64(sfIndexPrevious, prevPage);
update(root);

File diff suppressed because it is too large

View File

@@ -24,7 +24,6 @@
#include <xrpl/net/HTTPClientSSLContext.h>
#include <boost/asio.hpp>
#include <boost/asio/ip/resolver_query_base.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/regex.hpp>
@@ -56,16 +55,16 @@ class HTTPClientImp : public std::enable_shared_from_this<HTTPClientImp>,
{
public:
HTTPClientImp(
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
unsigned short const port,
std::size_t maxResponseSize,
beast::Journal& j)
: mSocket(io_context, httpClientSSLContext->context())
, mResolver(io_context)
: mSocket(io_service, httpClientSSLContext->context())
, mResolver(io_service)
, mHeader(maxClientHeaderBytes)
, mPort(port)
, maxResponseSize_(maxResponseSize)
, mDeadline(io_context)
, mDeadline(io_service)
, j_(j)
{
}
@@ -147,21 +146,18 @@ public:
{
JLOG(j_.trace()) << "Fetch: " << mDeqSites[0];
auto query = std::make_shared<Query>(
auto query = std::make_shared<boost::asio::ip::tcp::resolver::query>(
mDeqSites[0],
std::to_string(mPort),
boost::asio::ip::resolver_query_base::numeric_service);
mQuery = query;
try
{
mDeadline.expires_after(mTimeout);
}
catch (boost::system::system_error const& e)
{
mShutdown = e.code();
mDeadline.expires_from_now(mTimeout, mShutdown);
JLOG(j_.trace()) << "expires_after: " << mShutdown.message();
JLOG(j_.trace()) << "expires_from_now: " << mShutdown.message();
if (!mShutdown)
{
mDeadline.async_wait(std::bind(
&HTTPClientImp::handleDeadline,
shared_from_this(),
@@ -173,9 +169,7 @@ public:
JLOG(j_.trace()) << "Resolving: " << mDeqSites[0];
mResolver.async_resolve(
mQuery->host,
mQuery->port,
mQuery->flags,
*mQuery,
std::bind(
&HTTPClientImp::handleResolve,
shared_from_this(),
@@ -239,7 +233,7 @@ public:
void
handleResolve(
boost::system::error_code const& ecResult,
boost::asio::ip::tcp::resolver::results_type result)
boost::asio::ip::tcp::resolver::iterator itrEndpoint)
{
if (!mShutdown)
{
@@ -261,7 +255,7 @@ public:
boost::asio::async_connect(
mSocket.lowest_layer(),
result,
itrEndpoint,
std::bind(
&HTTPClientImp::handleConnect,
shared_from_this(),
@@ -481,15 +475,13 @@ public:
std::string const& strData = "")
{
boost::system::error_code ecCancel;
try
(void)mDeadline.cancel(ecCancel);
if (ecCancel)
{
mDeadline.cancel();
}
catch (boost::system::system_error const& e)
{
JLOG(j_.trace())
<< "invokeComplete: Deadline cancel error: " << e.what();
ecCancel = e.code();
JLOG(j_.trace()) << "invokeComplete: Deadline cancel error: "
<< ecCancel.message();
}
JLOG(j_.debug()) << "invokeComplete: Deadline popping: "
@@ -523,15 +515,7 @@ private:
bool mSSL;
AutoSocket mSocket;
boost::asio::ip::tcp::resolver mResolver;
struct Query
{
std::string host;
std::string port;
boost::asio::ip::resolver_query_base::flags flags;
};
std::shared_ptr<Query> mQuery;
std::shared_ptr<boost::asio::ip::tcp::resolver::query> mQuery;
boost::asio::streambuf mRequest;
boost::asio::streambuf mHeader;
boost::asio::streambuf mResponse;
@@ -562,7 +546,7 @@ private:
void
HTTPClient::get(
bool bSSL,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
std::deque<std::string> deqSites,
unsigned short const port,
std::string const& strPath,
@@ -575,14 +559,14 @@ HTTPClient::get(
beast::Journal& j)
{
auto client =
std::make_shared<HTTPClientImp>(io_context, port, responseMax, j);
std::make_shared<HTTPClientImp>(io_service, port, responseMax, j);
client->get(bSSL, deqSites, strPath, timeout, complete);
}
void
HTTPClient::get(
bool bSSL,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
std::string strSite,
unsigned short const port,
std::string const& strPath,
@@ -597,14 +581,14 @@ HTTPClient::get(
std::deque<std::string> deqSites(1, strSite);
auto client =
std::make_shared<HTTPClientImp>(io_context, port, responseMax, j);
std::make_shared<HTTPClientImp>(io_service, port, responseMax, j);
client->get(bSSL, deqSites, strPath, timeout, complete);
}
void
HTTPClient::request(
bool bSSL,
boost::asio::io_context& io_context,
boost::asio::io_service& io_service,
std::string strSite,
unsigned short const port,
std::function<void(boost::asio::streambuf& sb, std::string const& strHost)>
@@ -620,7 +604,7 @@ HTTPClient::request(
std::deque<std::string> deqSites(1, strSite);
auto client =
std::make_shared<HTTPClientImp>(io_context, port, responseMax, j);
std::make_shared<HTTPClientImp>(io_service, port, responseMax, j);
client->request(bSSL, deqSites, setRequest, timeout, complete);
}

View File

@@ -36,7 +36,7 @@ namespace BuildInfo {
// and follow the format described at http://semver.org/
//------------------------------------------------------------------------------
// clang-format off
char const* const versionString = "3.0.0-b1"
char const* const versionString = "3.1.0-b1"
// clang-format on
#if defined(DEBUG) || defined(SANITIZER)

View File

@@ -96,6 +96,8 @@ enum class LedgerNameSpace : std::uint16_t {
PERMISSIONED_DOMAIN = 'm',
DELEGATE = 'E',
VAULT = 'V',
LOAN_BROKER = 'l', // lower-case L
LOAN = 'L',
// No longer used or supported. Left here to reserve the space
// to avoid accidental reuse.
@@ -566,6 +568,18 @@ vault(AccountID const& owner, std::uint32_t seq) noexcept
return vault(indexHash(LedgerNameSpace::VAULT, owner, seq));
}
Keylet
loanbroker(AccountID const& owner, std::uint32_t seq) noexcept
{
return loanbroker(indexHash(LedgerNameSpace::LOAN_BROKER, owner, seq));
}
Keylet
loan(uint256 const& loanBrokerID, std::uint32_t loanSeq) noexcept
{
return loan(indexHash(LedgerNameSpace::LOAN, loanBrokerID, loanSeq));
}
Keylet
permissionedDomain(AccountID const& account, std::uint32_t seq) noexcept
{

View File

@@ -172,6 +172,14 @@ InnerObjectFormats::InnerObjectFormats()
{sfBookDirectory, soeREQUIRED},
{sfBookNode, soeREQUIRED},
});
add(sfCounterpartySignature.jsonName,
sfCounterpartySignature.getCode(),
{
{sfSigningPubKey, soeOPTIONAL},
{sfTxnSignature, soeOPTIONAL},
{sfSigners, soeOPTIONAL},
});
}
InnerObjectFormats const&

View File

@@ -321,7 +321,7 @@ STAmount::xrp() const
IOUAmount
STAmount::iou() const
{
if (native() || !holds<Issue>())
if (integral())
Throw<std::logic_error>("Cannot return non-IOU STAmount as IOUAmount");
auto mantissa = static_cast<std::int64_t>(mValue);
@@ -872,7 +872,7 @@ STAmount::isDefault() const
void
STAmount::canonicalize()
{
if (native() || mAsset.holds<MPTIssue>())
if (integral())
{
// native and MPT currency amounts should always have an offset of zero
// log(2^64,10) ~ 19.2
@@ -905,8 +905,10 @@ STAmount::canonicalize()
};
if (native())
set(XRPAmount{num});
else
else if (mAsset.holds<MPTIssue>())
set(MPTAmount{num});
else
Throw<std::runtime_error>("Unknown integral asset type");
mOffset = 0;
}
else
@@ -1135,7 +1137,7 @@ amountFromJson(SField const& name, Json::Value const& v)
}
else
{
parts.mantissa = -value.asInt();
parts.mantissa = value.asAbsUInt();
parts.negative = true;
}
}
@@ -1509,6 +1511,33 @@ canonicalizeRoundStrict(
}
}
STAmount
roundToScale(
STAmount const& value,
std::int32_t scale,
Number::rounding_mode rounding)
{
// Nothing to do for integral types.
if (value.integral())
return value;
// If the value's exponent is greater than or equal to the scale, then
// rounding will do nothing, and might even lose precision, so just return
// the value.
if (value.exponent() >= scale)
return value;
STAmount const referenceValue{
value.asset(), STAmount::cMinValue, scale, value.negative()};
NumberRoundModeGuard mg(rounding);
// With an IOU, the result of addition will be truncated to the
// precision of the larger value, which in this case is referenceValue. Then
// remove the reference value via subtraction, and we're left with the
// rounded value.
return (value + referenceValue) - referenceValue;
}
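The add-then-subtract rounding in roundToScale has a well-known binary analogue; a minimal sketch with plain doubles standing in for the decimal STAmount (illustrative only):

// Illustrative sketch only; binary doubles, not the decimal STAmount.
#include <cstdio>

int main()
{
    // Adding a constant so large that x's fraction bits fall off the end
    // of the 53-bit mantissa forces rounding at the addition; subtracting
    // it back leaves the rounded value. Valid for |x| < 2^51 under the
    // default round-to-nearest mode, much as roundToScale adds a
    // reference value at the target decimal scale under a rounding guard.
    double const reference = 6755399441055744.0; // 2^52 + 2^51
    double const x = 2.5;
    std::printf("%g\n", (x + reference) - reference); // 2 (ties to even)
}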
namespace {
// We need a class that has an interface similar to NumberRoundModeGuard

View File

@@ -188,7 +188,7 @@ numberFromJson(SField const& field, Json::Value const& value)
}
else
{
parts.mantissa = -value.asInt();
parts.mantissa = value.asAbsUInt();
parts.negative = true;
}
}

View File

@@ -688,6 +688,16 @@ STObject::getFieldV256(SField const& field) const
return getFieldByConstRef<STVector256>(field, empty);
}
STObject
STObject::getFieldObject(SField const& field) const
{
STObject const empty{field};
auto ret = getFieldByConstRef<STObject>(field, empty);
if (ret != empty)
ret.applyTemplateFromSField(field);
return ret;
}
STArray const&
STObject::getFieldArray(SField const& field) const
{
@@ -833,6 +843,12 @@ STObject::setFieldArray(SField const& field, STArray const& v)
setFieldUsingAssignment(field, v);
}
void
STObject::setFieldObject(SField const& field, STObject const& v)
{
setFieldUsingAssignment(field, v);
}
Json::Value
STObject::getJson(JsonOptions options) const
{

View File

@@ -200,11 +200,11 @@ STTx::getSigningHash() const
}
Blob
STTx::getSignature() const
STTx::getSignature(STObject const& sigObject)
{
try
{
return getFieldVL(sfTxnSignature);
return sigObject.getFieldVL(sfTxnSignature);
}
catch (std::exception const&)
{
@@ -234,35 +234,66 @@ STTx::getSeqValue() const
}
void
STTx::sign(PublicKey const& publicKey, SecretKey const& secretKey)
STTx::sign(
PublicKey const& publicKey,
SecretKey const& secretKey,
std::optional<std::reference_wrapper<SField const>> signatureTarget)
{
auto const data = getSigningData(*this);
auto const sig = ripple::sign(publicKey, secretKey, makeSlice(data));
setFieldVL(sfTxnSignature, sig);
if (signatureTarget)
{
auto& target = peekFieldObject(*signatureTarget);
target.setFieldVL(sfTxnSignature, sig);
}
else
{
setFieldVL(sfTxnSignature, sig);
}
tid_ = getHash(HashPrefix::transactionID);
}
Expected<void, std::string>
STTx::checkSign(
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules,
STObject const& sigObject) const
{
try
{
// Determine whether we're single- or multi-signing by looking
// at the SigningPubKey. If it's empty we must be
// multi-signing. Otherwise we're single-signing.
Blob const& signingPubKey = sigObject.getFieldVL(sfSigningPubKey);
return signingPubKey.empty()
? checkMultiSign(requireCanonicalSig, rules, sigObject)
: checkSingleSign(requireCanonicalSig, sigObject);
}
catch (std::exception const&)
{
}
return Unexpected("Internal signature check failure.");
}
Expected<void, std::string>
STTx::checkSign(
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules) const
{
try
if (auto const ret = checkSign(requireCanonicalSig, rules, *this); !ret)
return ret;
if (isFieldPresent(sfCounterpartySignature))
{
// Determine whether we're single- or multi-signing by looking
// at the SigningPubKey. If it's empty we must be
// multi-signing. Otherwise we're single-signing.
Blob const& signingPubKey = getFieldVL(sfSigningPubKey);
return signingPubKey.empty()
? checkMultiSign(requireCanonicalSig, rules)
: checkSingleSign(requireCanonicalSig);
auto const counterSig = getFieldObject(sfCounterpartySignature);
if (auto const ret = checkSign(requireCanonicalSig, rules, counterSig);
!ret)
return Unexpected("Counterparty: " + ret.error());
}
catch (std::exception const&)
{
}
return Unexpected("Internal signature check failure.");
return {};
}
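A simplified sketch of the two-level flow in checkSign above, with standalone types replacing STTx and Expected (names are illustrative): check the transaction's own signature object first, then, if a counterparty signature object is attached, run the same routine on it and prefix its error.

// Illustrative sketch only; simplified stand-ins for STTx / Expected.
#include <iostream>
#include <optional>
#include <string>

// std::nullopt means success; a string carries the failure reason.
using CheckResult = std::optional<std::string>;

CheckResult checkOneSigObject(bool sigValid)
{
    return sigValid ? CheckResult{} : CheckResult{"Invalid signature."};
}

CheckResult checkSign(bool selfValid, std::optional<bool> counterpartyValid)
{
    if (auto err = checkOneSigObject(selfValid))
        return err; // the transaction's own signature failed
    if (counterpartyValid) // sfCounterpartySignature present
    {
        if (auto err = checkOneSigObject(*counterpartyValid))
            return "Counterparty: " + *err; // prefix, as in the diff
    }
    return {}; // both checks passed
}

int main()
{
    std::cout << checkSign(true, false).value_or("ok") << '\n';
    // prints: Counterparty: Invalid signature.
}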
Expected<void, std::string>
@@ -382,23 +413,23 @@ STTx::getMetaSQL(
static Expected<void, std::string>
singleSignHelper(
STObject const& signer,
STObject const& sigObject,
Slice const& data,
bool const fullyCanonical)
{
// We don't allow both a non-empty sfSigningPubKey and an sfSigners.
// That would allow the transaction to be signed two ways. So if both
// fields are present the signature is invalid.
if (signer.isFieldPresent(sfSigners))
if (sigObject.isFieldPresent(sfSigners))
return Unexpected("Cannot both single- and multi-sign.");
bool validSig = false;
try
{
auto const spk = signer.getFieldVL(sfSigningPubKey);
auto const spk = sigObject.getFieldVL(sfSigningPubKey);
if (publicKeyType(makeSlice(spk)))
{
Blob const signature = signer.getFieldVL(sfTxnSignature);
Blob const signature = sigObject.getFieldVL(sfTxnSignature);
validSig = verify(
PublicKey(makeSlice(spk)),
data,
@@ -418,12 +449,14 @@ singleSignHelper(
}
Expected<void, std::string>
STTx::checkSingleSign(RequireFullyCanonicalSig requireCanonicalSig) const
STTx::checkSingleSign(
RequireFullyCanonicalSig requireCanonicalSig,
STObject const& sigObject) const
{
auto const data = getSigningData(*this);
bool const fullyCanonical = (getFlags() & tfFullyCanonicalSig) ||
(requireCanonicalSig == STTx::RequireFullyCanonicalSig::yes);
return singleSignHelper(*this, makeSlice(data), fullyCanonical);
return singleSignHelper(sigObject, makeSlice(data), fullyCanonical);
}
Expected<void, std::string>
@@ -440,31 +473,29 @@ STTx::checkBatchSingleSign(
Expected<void, std::string>
multiSignHelper(
STObject const& signerObj,
STObject const& sigObject,
std::optional<AccountID> txnAccountID,
bool const fullyCanonical,
std::function<Serializer(AccountID const&)> makeMsg,
Rules const& rules)
{
// Make sure the MultiSigners are present. Otherwise they are not
// attempting multi-signing and we just have a bad SigningPubKey.
if (!signerObj.isFieldPresent(sfSigners))
if (!sigObject.isFieldPresent(sfSigners))
return Unexpected("Empty SigningPubKey.");
// We don't allow both an sfSigners and an sfTxnSignature. Both fields
// being present would indicate that the transaction is signed both ways.
if (signerObj.isFieldPresent(sfTxnSignature))
if (sigObject.isFieldPresent(sfTxnSignature))
return Unexpected("Cannot both single- and multi-sign.");
STArray const& signers{signerObj.getFieldArray(sfSigners)};
STArray const& signers{sigObject.getFieldArray(sfSigners)};
// There are well known bounds that the number of signers must be within.
if (signers.size() < STTx::minMultiSigners ||
signers.size() > STTx::maxMultiSigners(&rules))
return Unexpected("Invalid Signers array size.");
// We also use the sfAccount field inside the loop. Get it once.
auto const txnAccountID = signerObj.getAccountID(sfAccount);
// Signers must be in sorted order by AccountID.
AccountID lastAccountID(beast::zero);
@@ -472,8 +503,10 @@ multiSignHelper(
{
auto const accountID = signer.getAccountID(sfAccount);
// The account owner may not multisign for themselves.
if (accountID == txnAccountID)
// Usually the account owner may not multisign for themselves. When
// they may, txnAccountID will be unseated, and an unseated optional
// compares unequal to any value.
if (txnAccountID == accountID)
return Unexpected("Invalid multisigner.");
// No duplicate signers allowed.
@@ -489,6 +522,7 @@ multiSignHelper(
// Verify the signature.
bool validSig = false;
std::optional<std::string> errorWhat;
try
{
auto spk = signer.getFieldVL(sfSigningPubKey);
@@ -502,15 +536,16 @@ multiSignHelper(
fullyCanonical);
}
}
catch (std::exception const&)
catch (std::exception const& e)
{
// We assume any problem lies with the signature.
validSig = false;
errorWhat = e.what();
}
if (!validSig)
return Unexpected(
std::string("Invalid signature on account ") +
toBase58(accountID) + ".");
toBase58(accountID) + (errorWhat ? ": " + *errorWhat : std::string{}) + ".");
}
// All signatures verified.
return {};
@@ -532,8 +567,9 @@ STTx::checkBatchMultiSign(
serializeBatch(dataStart, getFlags(), getBatchTransactionIDs());
return multiSignHelper(
batchSigner,
std::nullopt,
fullyCanonical,
[&dataStart](AccountID const& accountID) mutable -> Serializer {
[&dataStart](AccountID const& accountID) -> Serializer {
Serializer s = dataStart;
finishMultiSigningData(accountID, s);
return s;
@@ -544,19 +580,27 @@ STTx::checkBatchMultiSign(
Expected<void, std::string>
STTx::checkMultiSign(
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules) const
Rules const& rules,
STObject const& sigObject) const
{
bool const fullyCanonical = (getFlags() & tfFullyCanonicalSig) ||
(requireCanonicalSig == RequireFullyCanonicalSig::yes);
// Used inside the loop in multiSignHelper to enforce that
// the account owner may not multisign for themselves.
auto const txnAccountID = &sigObject != this
? std::nullopt
: std::optional<AccountID>(getAccountID(sfAccount));
// We can ease the computational load inside the loop a bit by
// pre-constructing part of the data that we hash. Fill a Serializer
// with the stuff that stays constant from signature to signature.
Serializer dataStart = startMultiSigningData(*this);
return multiSignHelper(
*this,
sigObject,
txnAccountID,
fullyCanonical,
[&dataStart](AccountID const& accountID) mutable -> Serializer {
[&dataStart](AccountID const& accountID) -> Serializer {
Serializer s = dataStart;
finishMultiSigningData(accountID, s);
return s;
@@ -569,7 +613,7 @@ STTx::checkMultiSign(
*
* This function returns a vector of transaction IDs by extracting them from
* the field array `sfRawTransactions` within the STTx. If the batch
* transaction IDs have already been computed and cached in `batch_txn_ids_`,
* transaction IDs have already been computed and cached in `batchTxnIds_`,
* it returns the cached vector. Otherwise, it computes the transaction IDs,
* caches them, and then returns the vector.
*
@@ -579,7 +623,7 @@ STTx::checkMultiSign(
* empty and that the size of the computed batch transaction IDs matches the
* size of the `sfRawTransactions` field array.
*/
std::vector<uint256>
std::vector<uint256> const&
STTx::getBatchTransactionIDs() const
{
XRPL_ASSERT(
@@ -588,16 +632,20 @@ STTx::getBatchTransactionIDs() const
XRPL_ASSERT(
getFieldArray(sfRawTransactions).size() != 0,
"STTx::getBatchTransactionIDs : empty raw transactions");
if (batch_txn_ids_.size() != 0)
return batch_txn_ids_;
for (STObject const& rb : getFieldArray(sfRawTransactions))
batch_txn_ids_.push_back(rb.getHash(HashPrefix::transactionID));
// The list of inner ids is built once, then reused on subsequent calls.
// After the list is built, it must always have the same size as the array
// `sfRawTransactions`. The assert below verifies that.
if (batchTxnIds_.size() == 0)
{
for (STObject const& rb : getFieldArray(sfRawTransactions))
batchTxnIds_.push_back(rb.getHash(HashPrefix::transactionID));
}
XRPL_ASSERT(
batch_txn_ids_.size() == getFieldArray(sfRawTransactions).size(),
batchTxnIds_.size() == getFieldArray(sfRawTransactions).size(),
"STTx::getBatchTransactionIDs : batch transaction IDs size mismatch");
return batch_txn_ids_;
return batchTxnIds_;
}
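Rename aside, getBatchTransactionIDs follows a build-once cache pattern; a minimal standalone sketch (string hashes stand in for transaction IDs; names are illustrative):

// Illustrative sketch only; string hashes stand in for transaction IDs.
#include <cassert>
#include <functional>
#include <string>
#include <vector>

struct BatchSketch
{
    std::vector<std::string> rawTransactions; // plays sfRawTransactions
    mutable std::vector<std::size_t> txnIds;  // plays batchTxnIds_

    // Built on the first call, returned by reference afterwards.
    std::vector<std::size_t> const& ids() const
    {
        if (txnIds.empty())
            for (auto const& tx : rawTransactions)
                txnIds.push_back(std::hash<std::string>{}(tx));
        // The cache must always mirror the source array.
        assert(txnIds.size() == rawTransactions.size());
        return txnIds;
    }
};

int main()
{
    BatchSketch const b{{"tx1", "tx2"}, {}};
    assert(b.ids().size() == 2);
}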
//------------------------------------------------------------------------------

View File

@@ -36,7 +36,6 @@
#include <exception>
#include <ostream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>
@@ -220,7 +219,7 @@ parse_Port(ParsedPort& port, Section const& section, std::ostream& log)
{
try
{
port.ip = boost::asio::ip::make_address(*optResult);
port.ip = boost::asio::ip::address::from_string(*optResult);
}
catch (std::exception const&)
{

View File

@@ -747,7 +747,7 @@ public:
// Note that the fee structure for unit tests does not match the fees
// on the production network (October 2019). Unit tests have a base
// reserve of 200 XRP.
env.fund(env.current()->fees().reserve, noripple(alice));
env.fund(env.current()->fees().accountReserve(0), noripple(alice));
env.close();
// Burn a chunk of alice's funds so she only has 1 XRP remaining in

View File

@@ -2553,6 +2553,207 @@ class Batch_test : public beast::unit_test::suite
}
}
void
testLoan(FeatureBitset features)
{
testcase("loan");
bool const lendingBatchEnabled = !std::any_of(
Batch::disabledTxTypes.begin(),
Batch::disabledTxTypes.end(),
[](auto const& disabled) { return disabled == ttLOAN_BROKER_SET; });
using namespace test::jtx;
test::jtx::Env env{
*this,
envconfig(),
features | featureSingleAssetVault | featureLendingProtocol |
featureMPTokensV1};
Account const issuer{"issuer"};
// For simplicity, lender will be the sole actor for the vault &
// brokers.
Account const lender{"lender"};
// Borrower only wants to borrow
Account const borrower{"borrower"};
// Fund the accounts and trust lines with the same amount so that tests
// can use the same values regardless of the asset.
env.fund(XRP(100'000), issuer, noripple(lender, borrower));
env.close();
// Just use an XRP asset
PrettyAsset const asset{xrpIssue(), 1'000'000};
Vault vault{env};
auto const deposit = asset(50'000);
auto const debtMaximumValue = asset(25'000).value();
auto const coverDepositValue = asset(1000).value();
auto [tx, vaultKeylet] =
vault.create({.owner = lender, .asset = asset});
env(tx);
env.close();
BEAST_EXPECT(env.le(vaultKeylet));
env(vault.deposit(
{.depositor = lender, .id = vaultKeylet.key, .amount = deposit}));
env.close();
auto const brokerKeylet =
keylet::loanbroker(lender.id(), env.seq(lender));
{
using namespace loanBroker;
env(set(lender, vaultKeylet.key),
managementFeeRate(TenthBips16(100)),
debtMaximum(debtMaximumValue),
coverRateMinimum(TenthBips32(percentageToTenthBips(10))),
coverRateLiquidation(TenthBips32(percentageToTenthBips(25))));
env(coverDeposit(lender, brokerKeylet.key, coverDepositValue));
env.close();
}
{
using namespace loan;
using namespace std::chrono_literals;
auto const lenderSeq = env.seq(lender);
auto const batchFee = batch::calcBatchFee(env, 0, 2);
auto const loanKeylet = keylet::loan(brokerKeylet.key, 1);
{
auto const [txIDs, batchID] = submitBatch(
env,
lendingBatchEnabled ? temBAD_SIGNATURE
: temINVALID_INNER_BATCH,
batch::outer(lender, lenderSeq, batchFee, tfAllOrNothing),
batch::inner(
env.json(
set(lender, brokerKeylet.key, asset(1000).value()),
// Not allowed to include the counterparty signature
sig(sfCounterpartySignature, borrower),
sig(none),
fee(none),
seq(none)),
lenderSeq + 1),
batch::inner(
pay(lender,
loanKeylet.key,
STAmount{asset, asset(500).value()}),
lenderSeq + 2));
}
{
auto const [txIDs, batchID] = submitBatch(
env,
temINVALID_INNER_BATCH,
batch::outer(lender, lenderSeq, batchFee, tfAllOrNothing),
batch::inner(
env.json(
set(lender, brokerKeylet.key, asset(1000).value()),
// Counterparty must be set
sig(none),
fee(none),
seq(none)),
lenderSeq + 1),
batch::inner(
pay(lender,
loanKeylet.key,
STAmount{asset, asset(500).value()}),
lenderSeq + 2));
}
{
auto const [txIDs, batchID] = submitBatch(
env,
lendingBatchEnabled ? temBAD_SIGNER
: temINVALID_INNER_BATCH,
batch::outer(lender, lenderSeq, batchFee, tfAllOrNothing),
batch::inner(
env.json(
set(lender, brokerKeylet.key, asset(1000).value()),
// Counterparty must sign the outer transaction
counterparty(borrower.id()),
sig(none),
fee(none),
seq(none)),
lenderSeq + 1),
batch::inner(
pay(lender,
loanKeylet.key,
STAmount{asset, asset(500).value()}),
lenderSeq + 2));
}
{
// LoanSet normally charges at least 2x base fee, but since the
// signature check is done by the batch, it only charges the
// base fee.
auto const batchFee = batch::calcBatchFee(env, 1, 2);
auto const [txIDs, batchID] = submitBatch(
env,
lendingBatchEnabled ? TER(tesSUCCESS)
: TER(temINVALID_INNER_BATCH),
batch::outer(lender, lenderSeq, batchFee, tfAllOrNothing),
batch::inner(
env.json(
set(lender, brokerKeylet.key, asset(1000).value()),
counterparty(borrower.id()),
sig(none),
fee(none),
seq(none)),
lenderSeq + 1),
batch::inner(
pay(
// However, this inner transaction will fail,
// because the lender is not allowed to draw
// from the loan
lender,
loanKeylet.key,
STAmount{asset, asset(500).value()}),
lenderSeq + 2),
batch::sig(borrower));
}
env.close();
BEAST_EXPECT(env.le(brokerKeylet));
BEAST_EXPECT(!env.le(loanKeylet));
{
// LoanSet normally charges at least 2x base fee, but since the
// signature check is done by the batch, it only charges the
// base fee.
auto const lenderSeq = env.seq(lender);
auto const batchFee = batch::calcBatchFee(env, 1, 2);
auto const [txIDs, batchID] = submitBatch(
env,
lendingBatchEnabled ? TER(tesSUCCESS)
: TER(temINVALID_INNER_BATCH),
batch::outer(lender, lenderSeq, batchFee, tfAllOrNothing),
batch::inner(
env.json(
set(lender, brokerKeylet.key, asset(1000).value()),
counterparty(borrower.id()),
sig(none),
fee(none),
seq(none)),
lenderSeq + 1),
batch::inner(
manage(lender, loanKeylet.key, tfLoanImpair),
lenderSeq + 2),
batch::sig(borrower));
}
env.close();
BEAST_EXPECT(env.le(brokerKeylet));
if (auto const sleLoan = env.le(loanKeylet); lendingBatchEnabled
? BEAST_EXPECT(sleLoan)
: !BEAST_EXPECT(!sleLoan))
{
BEAST_EXPECT(sleLoan->isFlag(lsfLoanImpaired));
}
}
}
void
testObjectCreateSequence(FeatureBitset features)
{
@@ -4147,6 +4348,7 @@ class Batch_test : public beast::unit_test::suite
testAccountActivation(features);
testAccountSet(features);
testAccountDelete(features);
testLoan(features);
testObjectCreateSequence(features);
testObjectCreateTicket(features);
testObjectCreate3rdParty(features);

Some files were not shown because too many files have changed in this diff