Compare commits

...

32 Commits

Author SHA1 Message Date
Bronek Kozicki
7d90d6311e DO NOT MERGE 2025-08-08 17:52:51 +01:00
JCW
205d2965da Fix name and formatting
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-08-08 16:40:34 +01:00
JCW
c40ca84363 Fix comments
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-08-08 16:38:06 +01:00
JCW
1fd24c31d3 Revert unnecessary change and update unit test
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-08-08 15:53:03 +01:00
JCW
96d0fcfbd1 Add unit tests
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-08-08 15:47:04 +01:00
Bronek Kozicki
73c83dfcad Merge branch 'develop' into a1q123456/optimise-xxhash 2025-08-08 15:26:42 +01:00
JCW
5dcd2e9d06 Fix formatting
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-08-08 15:17:27 +01:00
JCW
7d8a35c97d Address comments
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-08-08 15:13:26 +01:00
Bart
39b5031ab5 Switch Conan 1 commands to Conan 2 and fix credentials (#5655)
This change updates some incorrect Conan commands for Conan 2. As some flags do not exist in Conan 2, such as --settings build_type=[configuration], the commands have been adjusted accordingly. This change further uses the org-level variables and secrets rather than the repo-level ones.
2025-08-08 12:47:36 +00:00
Bart
892876af5e Merge branch 'develop' into a1q123456/optimise-xxhash 2025-08-07 18:28:36 -04:00
Valentin Balaschenko
94decc753b perf: Move mutex to the partition level (#5486)
This change introduces two key optimizations:
* Mutex scope reduction: Limits the lock to individual partitions within `TaggedCache`, reducing contention.
* Decoupling: Removes the tight coupling between `LedgerHistory` and `TaggedCache`, improving modularity and testability.

Lock contention analysis based on eBPF showed significant improvements as a result of this change.
2025-08-07 17:04:07 -04:00
Bart
991891625a Upload Conan dependencies upon merge into develop (#5654)
This change uploads built Conan dependencies to the Conan remote upon merge into the develop branch.

At the moment, whenever Conan dependencies change, we need to remember to manually push them to our Conan remote, so they are cached for future reuse. If we forget to do so, these changed dependencies need to be rebuilt over and over again, which can take a long time.
2025-08-07 06:52:58 -04:00
Bart
69314e6832 refactor: Remove external libraries as they are hosted in our Conan Center Index fork (#5643)
This change:
* Removes the patched Conan recipes from the `external/` directory.
* Adds instructions for contributors on how to obtain our patched recipes.
* Updates the Conan remote name and remote URL (the underlying package repository isn't changed).
* If the remote already exists, updates the URL instead of removing and re-adding.
  * This is not done for the libXRPL job as it still uses Conan 1. This job will be switched to Conan 2 soon.
* Removes duplicate Conan remote CI pipeline steps.
* Overwrites the existing global.conf on macOS and Windows machines, as those do not run CI pipelines in isolation but all share the same Conan installation; appending the same config over and over bloats the file.
2025-08-06 15:46:13 +00:00
Bronek Kozicki
dbeb841b5a docs: Update BUILD.md for Conan 2 (#5478)
This change updates BUILD.md for Conan 2 and adds fixes/workarounds for Apple Clang 17, Clang 20, and CMake 4. It also removes (from BUILD.md only) workarounds for compiler versions we no longer support, e.g. Clang 15, and adds the compilation flag -Wno-deprecated-declarations to enable building with Clang 20 on Linux.
2025-08-06 10:18:41 +00:00
tequ
4eae037fee fix: Ensures canonical order for PriceDataSeries upon PriceOracle creation (#5485)
This change fixes an issue where the order of `PriceDataSeries` was out of sync between when `PriceOracle` was created and when it was updated. Although they are registered in the canonical order when updated, they are created using the order specified in the transaction; this change ensures that they are also registered in the canonical order when created.
2025-08-05 13:08:59 -04:00
JCW
a2a5a97d70 Merge remote-tracking branch 'origin/develop' into a1q123456/optimise-xxhash 2025-08-05 17:36:38 +01:00
Jingchen
b5a63b39d3 refactor: Decouple ledger from xrpld/app (#5492)
This change decouples `ledger` from `xrpld/app`, and therefore fully clears the path to the modularisation of the ledger component. Before this change, `View.cpp` relied on `MPTokenAuthorize::authorize`; this change moves `MPTokenAuthorize::authorize` to `View.cpp` to invert the dependency, making ledger a standalone module.
2025-08-05 15:28:56 +00:00
Denis Angell
6419f9a253 docs: Set up developer environment with specific XCode version (#5645) 2025-08-04 10:54:54 -04:00
Ayaz Salikhov
31c99caa65 Revert "ci: Build all conan dependencies from source for now (#5623)" (#5639)
This reverts commit 9b45b6888b.
2025-07-31 14:01:43 -04:00
Bronek Kozicki
d835e97490 Fix crash in Slot::deletePeer (#5635)
Fix crash due to a recurrent call to `Slot::deletePeer` (via `OverlayImpl::unsquelch`) when a peer is disconnected at just the wrong moment.
2025-07-31 13:08:34 -04:00
Shawn Xie
baf4b8381f fix DeliveredAmount and delivered_amount in transaction metadata for direct MPT transfer (#5569)
The Payment transaction metadata is missing the `DeliveredAmount` field that displays the actual amount delivered to the destination excluding transfer fees. This amendment fixes this problem.
2025-07-29 17:02:33 +00:00
Ayaz Salikhov
9b45b6888b ci: Build all conan dependencies from source for now (#5623) 2025-07-29 15:29:38 +00:00
Bronek Kozicki
096ab3a86d Merge branch 'develop' into a1q123456/optimise-xxhash 2025-07-28 10:12:51 +01:00
Bronek Kozicki
7179ce9c58 Build options cleanup (#5581)
As we no longer support old compiler versions, we are bringing back some warnings by removing no longer relevant `-Wno-...` options.
2025-07-25 15:48:22 -04:00
Bart
921aef9934 Updates Conan dependencies: Boost 1.86 (#5264) 2025-07-25 11:54:02 -04:00
JCW
64f1f2d580 Fix formatting
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-07-25 16:38:56 +01:00
JCW
3f9b724ed8 Fix build error 2025-07-25 16:27:31 +01:00
Jingchen
cad9eba7ed Merge branch 'develop' into a1q123456/optimise-xxhash 2025-07-25 15:57:19 +01:00
Bronek Kozicki
e7a7bb83c1 VaultWithdraw destination account bugfix (#5572)
#5224 added (among other things) a `VaultWithdraw` transaction that allows setting the recipient of the withdrawn funds in the `Destination` transaction field. This technically turns this transaction into a payment, and in some respects the implementation does follow payment rules, e.g. enforcement of `lsfRequireDestTag` or `lsfDepositAuth`, or that an MPT transfer has a destination `MPToken`. However, for IOUs it missed verification that the destination account has a trust line to the asset issuer. Since the default behavior of `accountSendIOU` is to create this trust line (if missing), this is what `VaultWithdraw` currently does. This is incorrect, since the `Destination` might not be interested in holding the asset in question; this basically enables spammy transfers. This change therefore removes the automatic creation of a trust line to the `Destination` account in `VaultWithdraw`.
2025-07-25 13:53:25 +00:00
Bart
5c2a3a2779 refactor: Update rocksdb (#5568)
This change updates RocksDB to its latest version. RocksDB is backward-compatible, so even though this is a major version bump, databases created with previous versions will continue to function.

The external RocksDB folder is removed, as the latest version available via Conan Center no longer needs custom patches.
2025-07-24 14:53:14 -04:00
JCW
dbb989921c Remove iota and ranges to support older gcc 2025-06-06 09:23:55 +01:00
JCW
a0cf51d454 Optimize hash performance by avoiding allocating hash state object 2025-06-04 16:21:34 +01:00
70 changed files with 1569 additions and 1874 deletions

View File

@@ -2,48 +2,25 @@ name: dependencies
inputs:
configuration:
required: true
# An implicit input is the environment variable `build_dir`.
# Implicit inputs are the environment variables `build_dir`, CONAN_REMOTE_URL,
# CONAN_REMOTE_USERNAME, and CONAN_REMOTE_PASSWORD. The latter two are only
# used to upload newly built dependencies to the Conan remote.
runs:
using: composite
steps:
- name: export custom recipes
- name: add Conan remote
if: ${{ env.CONAN_REMOTE_URL != '' }}
shell: bash
run: |
conan export --version 1.1.10 external/snappy
conan export --version 9.7.3 external/rocksdb
conan export --version 4.0.3 external/soci
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash
run: |
if conan remote list | grep -q "ripple"; then
conan remote remove ripple
echo "Removed conan remote ripple"
fi
conan remote add --index 0 ripple "${CONAN_URL}"
echo "Added conan remote ripple at ${CONAN_URL}"
- name: try to authenticate to Ripple Conan remote
if: env.CONAN_LOGIN_USERNAME_RIPPLE != '' && env.CONAN_PASSWORD_RIPPLE != ''
id: remote
shell: bash
run: |
echo "Authenticating to ripple remote..."
conan remote auth ripple --force
conan remote list-users
- name: list missing binaries
id: binaries
shell: bash
# Print the list of dependencies that would need to be built locally.
# A non-empty list means we have "failed" to cache binaries remotely.
run: |
echo missing=$(conan info . --build missing --settings build_type=${{ inputs.configuration }} --json 2>/dev/null | grep '^\[') | tee ${GITHUB_OUTPUT}
echo "Adding Conan remote 'xrplf' at ${{ env.CONAN_REMOTE_URL }}."
conan remote add --index 0 --force xrplf ${{ env.CONAN_REMOTE_URL }}
echo "Listing Conan remotes."
conan remote list
- name: install dependencies
shell: bash
run: |
mkdir ${build_dir}
cd ${build_dir}
mkdir -p ${{ env.build_dir }}
cd ${{ env.build_dir }}
conan install \
--output-folder . \
--build missing \
@@ -51,3 +28,11 @@ runs:
--options:host "&:xrpld=True" \
--settings:all build_type=${{ inputs.configuration }} \
..
- name: upload dependencies
if: ${{ env.CONAN_REMOTE_URL != '' && env.CONAN_REMOTE_USERNAME != '' && env.CONAN_REMOTE_PASSWORD != '' && github.ref_type == 'branch' && github.ref_name == github.event.repository.default_branch }}
shell: bash
run: |
echo "Logging into Conan remote 'xrplf' at ${{ env.CONAN_REMOTE_URL }}."
conan remote login xrplf "${{ env.CONAN_REMOTE_USERNAME }}" --password "${{ env.CONAN_REMOTE_PASSWORD }}"
echo "Uploading dependencies."
conan upload '*' --confirm --check --remote xrplf
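For reference, the remote setup and upload steps above can be reproduced locally. A minimal sketch, assuming `CONAN_REMOTE_URL`, `CONAN_REMOTE_USERNAME`, and `CONAN_REMOTE_PASSWORD` are exported in your shell and that the dependencies have already been built:

```bash
# Register the remote at index 0 so it is consulted before Conan Center.
conan remote add --index 0 --force xrplf "${CONAN_REMOTE_URL}"

# Authenticate, then push every locally cached package to the remote.
# --check verifies package integrity before uploading.
conan remote login xrplf "${CONAN_REMOTE_USERNAME}" --password "${CONAN_REMOTE_PASSWORD}"
conan upload '*' --confirm --check --remote xrplf
```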

View File

@@ -1,8 +1,8 @@
name: Check libXRPL compatibility with Clio
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_REMOTE_URL: ${{ vars.CONAN_REMOTE_URL }}
CONAN_LOGIN_USERNAME_XRPLF: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_PASSWORD_XRPLF: ${{ secrets.CONAN_REMOTE_PASSWORD }}
on:
pull_request:
paths:
@@ -43,20 +43,20 @@ jobs:
shell: bash
run: |
conan export . ${{ steps.channel.outputs.channel }}
- name: Add Ripple Conan remote
- name: Add Conan remote
shell: bash
run: |
echo "Adding Conan remote 'xrplf' at ${{ env.CONAN_REMOTE_URL }}."
conan remote add xrplf ${{ env.CONAN_REMOTE_URL }} --insert 0 --force
echo "Listing Conan remotes."
conan remote list
conan remote remove ripple || true
# Do not quote the URL. An empty string will be accepted (with a non-fatal warning), but a missing argument will not.
conan remote add ripple ${{ env.CONAN_URL }} --insert 0
- name: Parse new version
id: version
shell: bash
run: |
echo version="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" \
| awk -F '"' '{print $2}')" | tee ${GITHUB_OUTPUT}
- name: Try to authenticate to Ripple Conan remote
- name: Try to authenticate to Conan remote
id: remote
shell: bash
run: |
@@ -64,7 +64,7 @@ jobs:
# https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
# https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
# https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
echo outcome=$(conan user --remote ripple --password >&2 \
echo outcome=$(conan user --remote xrplf --password >&2 \
&& echo success || echo failure) | tee ${GITHUB_OUTPUT}
- name: Upload new package
id: upload

View File

@@ -18,9 +18,12 @@ concurrency:
# This part of Conan configuration is specific to this workflow only; we do not want
# to pollute conan/profiles directory with settings which might not work for others
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_REMOTE_URL: ${{ vars.CONAN_REMOTE_URL }}
CONAN_REMOTE_USERNAME: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_REMOTE_PASSWORD: ${{ secrets.CONAN_REMOTE_PASSWORD }}
# This part of the Conan configuration is specific to this workflow only; we
# do not want to pollute the 'conan/profiles' directory with settings that
# might not work for other workflows.
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
@@ -87,25 +90,9 @@ jobs:
clang --version
- name: configure Conan
run : |
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
echo "${CONAN_GLOBAL_CONF}" > $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: export custom recipes
shell: bash
run: |
conan export --version 1.1.10 external/snappy
conan export --version 9.7.3 external/rocksdb
conan export --version 4.0.3 external/soci
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash
run: |
if conan remote list | grep -q "ripple"; then
conan remote remove ripple
echo "Removed conan remote ripple"
fi
conan remote add --index 0 ripple "${CONAN_URL}"
echo "Added conan remote ripple at ${CONAN_URL}"
- name: build dependencies
uses: ./.github/actions/dependencies
with:

View File

@@ -16,12 +16,13 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# This part of Conan configuration is specific to this workflow only; we do not want
# to pollute conan/profiles directory with settings which might not work for others
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_REMOTE_URL: ${{ vars.CONAN_REMOTE_URL }}
CONAN_REMOTE_USERNAME: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_REMOTE_PASSWORD: ${{ secrets.CONAN_REMOTE_PASSWORD }}
# This part of the Conan configuration is specific to this workflow only; we
# do not want to pollute the 'conan/profiles' directory with settings that
# might not work for other workflows.
CONAN_GLOBAL_CONF: |
core.download:parallel={{ os.cpu_count() }}
core.upload:parallel={{ os.cpu_count() }}
@@ -99,7 +100,6 @@ jobs:
run: tar -czf conan.tar.gz -C ${CONAN_HOME} .
- name: build dependencies
uses: ./.github/actions/dependencies
with:
configuration: ${{ matrix.configuration }}
- name: upload archive

View File

@@ -18,12 +18,13 @@ on:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# This part of Conan configuration is specific to this workflow only; we do not want
# to pollute conan/profiles directory with settings which might not work for others
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_REMOTE_URL: ${{ vars.CONAN_REMOTE_URL }}
CONAN_REMOTE_USERNAME: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_REMOTE_PASSWORD: ${{ secrets.CONAN_REMOTE_PASSWORD }}
# This part of the Conan configuration is specific to this workflow only; we
# do not want to pollute the 'conan/profiles' directory with settings that
# might not work for other workflows.
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
@@ -82,25 +83,9 @@ jobs:
- name: configure Conan
shell: bash
run: |
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
echo "${CONAN_GLOBAL_CONF}" > $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: export custom recipes
shell: bash
run: |
conan export --version 1.1.10 external/snappy
conan export --version 9.7.3 external/rocksdb
conan export --version 4.0.3 external/soci
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash
run: |
if conan remote list | grep -q "ripple"; then
conan remote remove ripple
echo "Removed conan remote ripple"
fi
conan remote add --index 0 ripple "${CONAN_URL}"
echo "Added conan remote ripple at ${CONAN_URL}"
- name: build dependencies
uses: ./.github/actions/dependencies
with:

BUILD.md (464 lines changed)
View File

@@ -3,29 +3,29 @@
> These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md).
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
> If you are unfamiliar with Conan, you can read our
> [crash course](./docs/build/conan.md) or the official [Getting Started][3]
> walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/ripple/rippled/releases).
```
```bash
git checkout master
```
For the latest release candidate, choose the `release` branch.
```
```bash
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
```bash
git checkout develop
```
@@ -33,159 +33,295 @@ git checkout develop
See [System Requirements](https://xrpl.org/system-requirements.html).
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
Building rippled generally requires git, Python, Conan, CMake, and a C++
compiler. Some guidance on setting up such a [C++ development environment can be
found here](./docs/build/environment.md).
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.60](https://conan.io/downloads.html)[^1]
- [CMake 3.16](https://cmake.org/download/)
- [Python 3.11](https://www.python.org/downloads/), or higher
- [Conan 2.17](https://conan.io/downloads.html)[^1], or higher
- [CMake 3.22](https://cmake.org/download/)[^2], or higher
[^1]: It is possible to build with Conan 2.x,
but the instructions are significantly different,
which is why we are not recommending it yet.
Notably, the `conan profile update` command is removed in 2.x.
Profiles must be edited by hand.
[^1]: It is possible to build with Conan 1.60+, but the instructions are
significantly different, which is why we are not recommending it.
[^2]: CMake 4 is not yet supported by all dependencies required by this project.
If you are affected by this issue, follow [conan workaround for cmake
4](#workaround-for-cmake-4)
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 11 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
|-------------|-----|
| GCC | 12 |
| Clang | 16 |
| Apple Clang | 16 |
| MSVC | 19.44[^3] |
### Linux
The Ubuntu operating system has received the highest level of
quality assurance, testing, and support.
The Ubuntu Linux distribution has received the highest level of quality
assurance, testing, and support. We also support Red Hat and use Debian
internally.
Here are [sample instructions for setting up a C++ development environment on Linux](./docs/build/environment.md#linux).
Here are [sample instructions for setting up a C++ development environment on
Linux](./docs/build/environment.md#linux).
### Mac
Many rippled engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on macOS](./docs/build/environment.md#macos).
Here are [sample instructions for setting up a C++ development environment on
macOS](./docs/build/environment.md#macos).
### Windows
Windows is not recommended for production use at this time.
Windows is used by some engineers for development only.
- Additionally, 32-bit Windows development is not supported.
[Boost]: https://www.boost.org/
[^3]: Windows is not recommended for production use.
## Steps
### Set Up Conan
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python, Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python,
Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
These instructions assume a basic familiarity with Conan and CMake.
These instructions assume a basic familiarity with Conan and CMake. If you are
unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official
[Getting Started][3] walkthrough.
If you are unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official [Getting Started][3] walkthrough.
#### Default profile
We recommend that you import the provided `conan/profiles/default` profile:
You'll need at least one Conan profile:
```
conan profile new default --detect
```
Update the compiler settings:
```
conan profile update settings.compiler.cppstd=20 default
```
Configure Conan (1.x only) to use recipe revisions:
```
conan config set general.revisions_enabled=1
```
**Linux** developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
```
Ensure inter-operability between `boost::string_view` and `std::string_view` types:
```
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_BEAST_USE_STD_STRING_VIEW"]' default
conan profile update 'env.CXXFLAGS="-DBOOST_BEAST_USE_STD_STRING_VIEW"' default
```bash
conan config install conan/profiles/ -tf $(conan config home)/profiles/
```
If you have other flags in the `conf.tools.build` or `env.CXXFLAGS` sections, make sure to retain the existing flags and append the new ones. You can check them with:
```
conan profile show default
You can check your Conan profile by running:
```bash
conan profile show
```
#### Custom profile
**Windows** developers may need to use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
If the default profile does not work for you and you do not yet have a Conan
profile, you can create one by running:
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```
conan profile update settings.arch=x86_64 default
```
### Multiple compilers
When `/usr/bin/g++` exists on a platform, it is the default cpp compiler. This
default works for some users.
However, if this compiler cannot build rippled or its dependencies, then you can
install another compiler and set Conan and CMake to use it.
Update the `conf.tools.build:compiler_executables` setting in order to set the correct variables (`CMAKE_<LANG>_COMPILER`) in the
generated CMake toolchain file.
For example, on Ubuntu 20, you may have gcc at `/usr/bin/gcc` and g++ at `/usr/bin/g++`; if that is the case, you can select those compilers with:
```
conan profile update 'conf.tools.build:compiler_executables={"c": "/usr/bin/gcc", "cpp": "/usr/bin/g++"}' default
```bash
conan profile detect
```
Replace `/usr/bin/gcc` and `/usr/bin/g++` with paths to the desired compilers.
You may need to make changes to the profile to suit your environment. You can
refer to the provided `conan/profiles/default` profile for inspiration, and you
may also need to apply the required [tweaks](#conan-profile-tweaks) to this
default profile.
It should choose the compiler for dependencies as well,
but not all of them have a Conan recipe that respects this setting (yet).
For the rest, you can set these environment variables.
Replace `<path>` with paths to the desired compilers:
### Patched recipes
- `conan profile update env.CC=<path> default`
- `conan profile update env.CXX=<path> default`
The recipes in Conan Center occasionally need to be patched for compatibility
with the latest version of `rippled`. We maintain a fork of the Conan Center
[here](https://github.com/XRPLF/conan-center-index/) containing the patches.
Export our [Conan recipe for Snappy](./external/snappy).
It does not explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
To ensure our patched recipes are used, you must add our Conan remote at a
higher index than the default Conan Center remote, so it is consulted first. You
can do this by running:
```
# Conan 2.x
conan export --version 1.1.10 external/snappy
```bash
conan remote add --index 0 xrplf "https://conan.ripplex.io"
```
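You can verify that the remote was registered ahead of Conan Center; `xrplf` should appear first in the list:

```bash
conan remote list
```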
Alternatively, you can pull the patched recipes into the repository and use them
locally:
```bash
cd external
git init
git remote add origin git@github.com:XRPLF/conan-center-index.git
git sparse-checkout init
git sparse-checkout set recipes/snappy
git sparse-checkout add recipes/soci
git fetch origin master
git checkout master
cd ..
conan export --version 1.1.10 external/recipes/snappy
conan export --version 4.0.3 external/recipes/soci
```
If we switch to a newer version of a dependency that still requires a patch, you will need to pull in the changes and re-export the updated recipe with the newer version, as sketched below. However, if we switch to a newer version that no longer requires a patch, no action is required on your part, as the new recipe will be automatically pulled from the official Conan Center.
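As an illustration only, suppose snappy moved to a hypothetical 1.1.11 that still needs our patch; the re-export would look like:

```bash
# Hypothetical version bump: refresh the sparse checkout and re-export.
cd external
git fetch origin master
git checkout master
cd ..
conan export --version 1.1.11 external/recipes/snappy
```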
### Conan profile tweaks
#### Missing compiler version
If you see an error similar to the following after running `conan profile show`:
```bash
ERROR: Invalid setting '17' is not a valid 'settings.compiler.version' value.
Possible values are ['5.0', '5.1', '6.0', '6.1', '7.0', '7.3', '8.0', '8.1',
'9.0', '9.1', '10.0', '11.0', '12.0', '13', '13.0', '13.1', '14', '14.0', '15',
'15.0', '16', '16.0']
Read "http://docs.conan.io/2/knowledge/faq.html#error-invalid-setting"
```
you need to amend the list of compiler versions in
`$(conan config home)/settings.yml` by appending the required version number(s)
to the `version` array specific to your compiler. For example:
```yaml
apple-clang:
version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0",
"9.1", "10.0", "11.0", "12.0", "13", "13.0", "13.1", "14",
"14.0", "15", "15.0", "16", "16.0", "17", "17.0"]
```
#### Multiple compilers
If you have multiple compilers installed, make sure to select the one to use in
your default Conan configuration **before** running `conan profile detect`, by
setting the `CC` and `CXX` environment variables.
For example, if you are running MacOS and have [homebrew
LLVM@18](https://formulae.brew.sh/formula/llvm@18), and want to use it as a
compiler in the new Conan profile:
```bash
export CC=$(brew --prefix llvm@18)/bin/clang
export CXX=$(brew --prefix llvm@18)/bin/clang++
conan profile detect
```
Export our [Conan recipe for RocksDB](./external/rocksdb).
It does not override paths to dependencies when building with Visual Studio.
You should also explicitly set the path to the compiler in the profile file,
which helps to avoid errors when `CC` and/or `CXX` are set and disagree with the
selected Conan profile. For example:
```
# Conan 2.x
conan export --version 9.7.3 external/rocksdb
```
```text
[conf]
tools.build:compiler_executables={'c':'/usr/bin/gcc','cpp':'/usr/bin/g++'}
```
Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
#### Multiple profiles
```
# Conan 2.x
conan export --version 4.0.3 external/soci
```
You can manage multiple Conan profiles in the directory
`$(conan config home)/profiles`, for example renaming `default` to a different
name and then creating a new `default` profile for a different compiler.
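A minimal sketch of that workflow (the profile name and compiler paths are only examples):

```bash
cd $(conan config home)/profiles
# Keep the imported profile under a descriptive name...
mv default gcc
# ...then let Conan detect a fresh default for another compiler.
CC=/usr/bin/clang CXX=/usr/bin/clang++ conan profile detect
```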
#### Select language
The default profile created by Conan will typically select a different C++
dialect than the C++20 used by this project. You should set `20` in the profile
line starting with `compiler.cppstd=`. For example:
```bash
sed -i.bak -e 's|^compiler\.cppstd=.*$|compiler.cppstd=20|' $(conan config home)/profiles/default
```
#### Select standard library in Linux
**Linux** developers will commonly have a default Conan [profile][] that
compiles with GCC and links with libstdc++. If you are linking with libstdc++
(see profile setting `compiler.libcxx`), then you will need to choose the
`libstdc++11` ABI:
```bash
sed -i.bak -e 's|^compiler\.libcxx=.*$|compiler.libcxx=libstdc++11|' $(conan config home)/profiles/default
```
#### Select architecture and runtime in Windows
**Windows** developers may need to use the x64 native build tools. An easy way
to do that is to run the shortcut "x64 Native Tools Command Prompt" for the
version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```bash
sed -i.bak -e 's|^arch=.*$|arch=x86_64|' $(conan config home)/profiles/default
```
**Windows** developers also must select static runtime:
```bash
sed -i.bak -e 's|^compiler\.runtime=.*$|compiler.runtime=static|' $(conan config home)/profiles/default
```
#### Workaround for CMake 4
If your system CMake is version 4 rather than 3, you may have to configure the
Conan profile to use CMake version 3 for dependencies, by adding the following
two lines to your profile:
```text
[tool_requires]
!cmake/*: cmake/[>=3 <4]
```
This will force Conan to download and use a locally cached CMake 3 version, and
is needed because some of the dependencies used by this project do not support
CMake 4.
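If you are using the `default` profile, one way to append those lines (an alternative to editing the file by hand):

```bash
cat >> $(conan config home)/profiles/default <<'EOF'

[tool_requires]
!cmake/*: cmake/[>=3 <4]
EOF
```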
#### Clang workaround for grpc
If your compiler is clang, version 19 or later, or apple-clang, version 17 or
later, you may encounter a compilation error while building the `grpc`
dependency:
```text
In file included from .../lib/promise/try_seq.h:26:
.../lib/promise/detail/basic_seq.h:499:38: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
499 | Traits::template CallSeqFactory(f_, *cur_, std::move(arg)));
| ^
```
The workaround for this error is to add two lines to the profile:
```text
[conf]
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
```
#### Workaround for gcc 12
If your compiler is gcc, version 12, and you have enabled the `werr` option, you
may encounter a compilation error such as:
```text
/usr/include/c++/12/bits/char_traits.h:435:56: error: 'void* __builtin_memcpy(void*, const void*, long unsigned int)' accessing 9223372036854775810 or more bytes at offsets [2, 9223372036854775807] and 1 may overlap up to 9223372036854775813 bytes at offset -3 [-Werror=restrict]
435 | return static_cast<char_type*>(__builtin_memcpy(__s1, __s2, __n));
| ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-Wno-restrict']
```
#### Workaround for clang 16
If your compiler is clang, version 16, you may encounter a compilation error
such as:
```text
In file included from .../boost/beast/websocket/stream.hpp:2857:
.../boost/beast/websocket/impl/read.hpp:695:17: error: call to 'async_teardown' is ambiguous
async_teardown(impl.role, impl.stream(),
^~~~~~~~~~~~~~
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
```
### Build and Test
@@ -253,7 +389,6 @@ It patches their CMake to correctly import its dependencies.
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
```
Multi-config generators:
```
@@ -265,13 +400,13 @@ It patches their CMake to correctly import its dependencies.
5. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
you must pass the option `--config` to select the build configuration.
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator, you
must pass the option `--config` to select the build configuration.
Single-config generators:
```
cmake --build . -j $(nproc)
cmake --build .
```
Multi-config generators:
@@ -286,18 +421,22 @@ It patches their CMake to correctly import its dependencies.
Single-config generators:
```
./rippled --unittest
./rippled --unittest --unittest-jobs N
```
Multi-config generators:
```
./Release/rippled --unittest
./Debug/rippled --unittest
./Release/rippled --unittest --unittest-jobs N
./Debug/rippled --unittest --unittest-jobs N
```
The location of `rippled` in your build directory depends on your CMake
generator. Pass `--help` to see the rest of the command line options.
Replace the `--unittest-jobs` parameter N with the desired unit test
concurrency. The recommended setting is half the number of available CPU
cores.
The location of the `rippled` binary in your build directory depends on your
CMake generator. Pass `--help` to see the rest of the command line options.
## Coverage report
@@ -355,7 +494,7 @@ cmake --build . --target coverage
After the `coverage` target is completed, the generated coverage report will be
stored inside the build directory, as either of:
- file named `coverage.`_extension_ , with a suitable extension for the report format, or
- file named `coverage.`_extension_, with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
@@ -363,12 +502,14 @@ stored inside the build directory, as either of:
| Option | Default Value | Description |
| --- | ---| ---|
| `assert` | OFF | Enable assertions.
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | ON | Configure a unity build. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld (`rippled`) application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
@@ -378,22 +519,33 @@ and can be helpful for detecting `#include` omissions.
## Troubleshooting
### Conan
After any updates or changes to dependencies, you may need to do the following:
1. Remove your build directory.
2. Remove the Conan cache:
2. Remove individual libraries from the Conan cache, e.g.
```bash
conan remove 'grpc/*'
```
rm -rf ~/.conan/data
**or**
Remove all libraries from the Conan cache:
```bash
conan remove '*'
```
3. Re-run [conan export](#patched-recipes) if needed.
4. Re-run [conan install](#build-and-test).
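Put together, a typical clean rebuild after a dependency change might look like the sketch below (assuming a `.build` directory and a Release build; adjust to your setup):

```bash
rm -rf .build                 # 1. remove the build directory
conan remove '*' --confirm    # 2. clear the Conan cache
# 3. re-export patched recipes if needed (see "Patched recipes")
mkdir .build && cd .build     # 4. re-run conan install
conan install .. --output-folder . --build missing --settings build_type=Release
```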
### `protobuf/port_def.inc` file not found
### 'protobuf/port_def.inc' file not found
If `cmake --build .` results in an error due to a missing a protobuf file, then you might have generated CMake files for a different `build_type` than the `CMAKE_BUILD_TYPE` you passed to conan.
If `cmake --build .` results in an error due to a missing protobuf file, then
you might have generated CMake files for a different `build_type` than the
`CMAKE_BUILD_TYPE` you passed to Conan.
```
/rippled/.build/pb-xrpl.libpb/xrpl/proto/ripple.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found
@@ -407,54 +559,6 @@ For example, if you want to build Debug:
1. For conan install, pass `--settings build_type=Debug`
2. For cmake, pass `-DCMAKE_BUILD_TYPE=Debug`
### no std::result_of
If your compiler version is recent enough to have removed `std::result_of` as
part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor
definition to your build.
```
conan profile update 'options.boost:extra_b2_flags="define=BOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'conf.tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
```
### call to 'async_teardown' is ambiguous
If you are compiling with an early version of Clang 16, then you might hit
a [regression][6] when compiling C++20 that manifests as an [error in a Boost
header][7]. You can workaround it by adding this preprocessor definition:
```
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
```
### recompile with -fPIC
If you get a linker error suggesting that you recompile Boost with
position-independent code, such as:
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):
requires unsupported dynamic reloc 11; recompile with -fPIC
```
Conan most likely downloaded a bad binary distribution of the dependency.
This seems to be a [bug][1] in Conan just for Boost 1.77.0 compiled with GCC
for Linux. The solution is to build the dependency locally by passing
`--build boost` when calling `conan install`.
```
conan install --build boost ...
```
## Add a Dependency
If you want to experiment with a new package, follow these steps:

View File

@@ -90,28 +90,15 @@ if (MSVC)
-errorreport:none
-machine:X64)
else ()
# HACK : because these need to come first, before any warning demotion
string (APPEND CMAKE_CXX_FLAGS " -Wall -Wdeprecated")
if (wextra)
string (APPEND CMAKE_CXX_FLAGS " -Wextra -Wno-unused-parameter")
endif ()
# not MSVC
target_compile_options (common
INTERFACE
-Wall
-Wdeprecated
$<$<BOOL:${wextra}>:-Wextra -Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
$<$<COMPILE_LANGUAGE:CXX>:
-frtti
-Wnon-virtual-dtor
>
-Wno-sign-compare
-Wno-char-subscripts
-Wno-format
-Wno-unused-local-typedefs
-fstack-protector
$<$<BOOL:${is_gcc}>:
-Wno-unused-but-set-variable
-Wno-deprecated
>
-Wno-sign-compare
-Wno-unused-but-set-variable
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>

View File

@@ -26,9 +26,6 @@ tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
{% if compiler == "apple-clang" and compiler_version >= 17 %}
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
{% endif %}
{% if compiler == "clang" and compiler_version == 16 %}
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
{% endif %}
{% if compiler == "gcc" and compiler_version < 13 %}
tools.build:cxxflags=['-Wno-restrict']
{% endif %}

View File

@@ -104,7 +104,7 @@ class Xrpl(ConanFile):
def requirements(self):
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = {'transitive_headers': True} if conan_version.split('.')[0] == '2' else {}
self.requires('boost/1.83.0', force=True, **transitive_headers_opt)
self.requires('boost/1.86.0', force=True, **transitive_headers_opt)
self.requires('date/3.0.4', **transitive_headers_opt)
self.requires('lz4/1.10.0', force=True)
self.requires('protobuf/3.21.12', force=True)
@@ -112,7 +112,7 @@ class Xrpl(ConanFile):
if self.options.jemalloc:
self.requires('jemalloc/5.3.0')
if self.options.rocksdb:
self.requires('rocksdb/9.7.3')
self.requires('rocksdb/10.0.1')
self.requires('xxhash/0.8.3', **transitive_headers_opt)
exports_sources = (

View File

@@ -10,37 +10,35 @@ platforms: Linux, macOS, or Windows.
Package ecosystems vary across Linux distributions,
so there is no one set of instructions that will work for every Linux user.
These instructions are written for Ubuntu 22.04.
They are largely copied from the [script][1] used to configure our Docker
container for continuous integration.
That script handles many more responsibilities.
These instructions are just the bare minimum to build one configuration of
rippled.
You can check that codebase for other Linux distributions and versions.
If you cannot find yours there,
then we hope that these instructions can at least guide you in the right
direction.
The instructions below are written for Debian 12 (Bookworm).
```
apt update
apt install --yes curl git libssl-dev pipx python3.10-dev python3-pip make g++-11 libprotobuf-dev protobuf-compiler
export GCC_RELEASE=12
sudo apt update
sudo apt install --yes gcc-${GCC_RELEASE} g++-${GCC_RELEASE} python3-pip \
python-is-python3 python3-venv python3-dev curl wget ca-certificates \
git build-essential cmake ninja-build libc6-dev
sudo pip install --break-system-packages conan
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
tar -xzf cmake-3.25.1.tar.gz
rm cmake-3.25.1.tar.gz
cd cmake-3.25.1
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
pipx install 'conan<2'
pipx ensurepath
sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-${GCC_RELEASE} 999
sudo update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-${GCC_RELEASE} 100 \
--slave /usr/bin/g++ g++ /usr/bin/g++-${GCC_RELEASE} \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-${GCC_RELEASE} \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-${GCC_RELEASE} \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-${GCC_RELEASE} \
--slave /usr/bin/gcov gcov /usr/bin/gcov-${GCC_RELEASE} \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${GCC_RELEASE} \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${GCC_RELEASE} \
--slave /usr/bin/lto-dump lto-dump /usr/bin/lto-dump-${GCC_RELEASE}
sudo update-alternatives --auto cc
sudo update-alternatives --auto gcc
```
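Afterwards, you can confirm which compiler the alternatives now point to:

```bash
cc --version
gcc --version
g++ --version
```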
[1]: https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh
If you use a different Linux distribution, we hope the instructions above can
guide you in the right direction. We try to maintain compatibility with all
recent compiler releases, so if you use a rolling distribution like Arch or
CentOS then there is a chance that everything will "just work".
## macOS
@@ -53,6 +51,34 @@ minimum required (see [BUILD.md][]).
clang --version
```
### Install Xcode Specific Version (Optional)
If you develop other applications using Xcode, you might be consistently updating to the newest version of Apple Clang.
This will likely cause issues when building rippled. You may want to install a specific version of Xcode:
1. **Download Xcode**
- Visit [Apple Developer Downloads](https://developer.apple.com/download/more/)
- Sign in with your Apple Developer account
- Search for an Xcode version that includes **Apple Clang (Expected Version)**
- Download the `.xip` file
2. **Install and Configure Xcode**
```bash
# Extract the .xip file and rename for version management
# Example: Xcode_16.2.app
# Move to Applications directory
sudo mv Xcode_16.2.app /Applications/
# Set as default toolchain (persistent)
sudo xcode-select -s /Applications/Xcode_16.2.app/Contents/Developer
# Set as environment variable (temporary)
export DEVELOPER_DIR=/Applications/Xcode_16.2.app/Contents/Developer
```
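You can verify the active toolchain afterwards:

```bash
xcode-select -p   # prints the selected Developer directory
clang --version   # should report the matching Apple Clang version
```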
The command line developer tools should include Git too:
```
@@ -72,10 +98,10 @@ and use it to install Conan:
brew update
brew install xz
brew install pyenv
pyenv install 3.10-dev
pyenv global 3.10-dev
pyenv install 3.11
pyenv global 3.11
eval "$(pyenv init -)"
pip install 'conan<2'
pip install 'conan'
```
Install CMake with Homebrew too:

external/README.md (vendored, 8 lines changed)
View File

@@ -1,14 +1,10 @@
# External Conan recipes
The subdirectories in this directory contain either copies or Conan recipes
of external libraries used by rippled.
The Conan recipes include patches we have not yet pushed upstream.
The subdirectories in this directory contain copies of external libraries used
by rippled.
| Folder | Upstream | Description |
|:----------------|:---------------------------------------------|:------------|
| `antithesis-sdk`| [Project](https://github.com/antithesishq/antithesis-sdk-cpp/) | [Antithesis](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html) SDK for C++ |
| `ed25519-donna` | [Project](https://github.com/floodyberry/ed25519-donna) | [Ed25519](http://ed25519.cr.yp.to/) digital signatures |
| `rocksdb` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/rocksdb) | Fast key/value database. (Supports rotational disks better than NuDB.) |
| `secp256k1` | [Project](https://github.com/bitcoin-core/secp256k1) | ECDSA digital signatures using the **secp256k1** curve |
| `snappy` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/snappy) | "Snappy" lossless compression algorithm. |
| `soci` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/soci) | Abstraction layer for database access. |

View File

@@ -17,6 +17,9 @@ add_library(ed25519 STATIC
)
add_library(ed25519::ed25519 ALIAS ed25519)
target_link_libraries(ed25519 PUBLIC OpenSSL::SSL)
if(NOT MSVC)
target_compile_options(ed25519 PRIVATE -Wno-implicit-fallthrough)
endif()
include(GNUInstallDirs)

View File

@@ -1,12 +0,0 @@
sources:
"9.7.3":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v9.7.3.tar.gz"
sha256: "acfabb989cbfb5b5c4d23214819b059638193ec33dad2d88373c46448d16d38b"
patches:
"9.7.3":
- patch_file: "patches/9.x.x-0001-exclude-thirdparty.patch"
patch_description: "Do not include thirdparty.inc"
patch_type: "portability"
- patch_file: "patches/9.7.3-0001-memory-leak.patch"
patch_description: "Fix a leak of obsolete blob files left open until DB::Close()"
patch_type: "portability"

View File

@@ -1,235 +0,0 @@
import os
import glob
import shutil
from conan import ConanFile
from conan.errors import ConanInvalidConfiguration
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, collect_libs, copy, export_conandata_patches, get, rm, rmdir
from conan.tools.microsoft import check_min_vs, is_msvc, is_msvc_static_runtime
from conan.tools.scm import Version
required_conan_version = ">=1.53.0"
class RocksDBConan(ConanFile):
name = "rocksdb"
description = "A library that provides an embeddable, persistent key-value store for fast storage"
license = ("GPL-2.0-only", "Apache-2.0")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/facebook/rocksdb"
topics = ("database", "leveldb", "facebook", "key-value")
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"lite": [True, False],
"with_gflags": [True, False],
"with_snappy": [True, False],
"with_lz4": [True, False],
"with_zlib": [True, False],
"with_zstd": [True, False],
"with_tbb": [True, False],
"with_jemalloc": [True, False],
"enable_sse": [False, "sse42", "avx2"],
"use_rtti": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"lite": False,
"with_snappy": False,
"with_lz4": False,
"with_zlib": False,
"with_zstd": False,
"with_gflags": False,
"with_tbb": False,
"with_jemalloc": False,
"enable_sse": False,
"use_rtti": False,
}
@property
def _min_cppstd(self):
return "11" if Version(self.version) < "8.8.1" else "17"
@property
def _compilers_minimum_version(self):
return {} if self._min_cppstd == "11" else {
"apple-clang": "10",
"clang": "7",
"gcc": "7",
"msvc": "191",
"Visual Studio": "15",
}
def export_sources(self):
export_conandata_patches(self)
def config_options(self):
if self.settings.os == "Windows":
del self.options.fPIC
if self.settings.arch != "x86_64":
del self.options.with_tbb
if self.settings.build_type == "Debug":
self.options.use_rtti = True # Rtti are used in asserts for debug mode...
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
def requirements(self):
if self.options.with_gflags:
self.requires("gflags/2.2.2")
if self.options.with_snappy:
self.requires("snappy/1.1.10")
if self.options.with_lz4:
self.requires("lz4/1.10.0")
if self.options.with_zlib:
self.requires("zlib/[>=1.2.11 <2]")
if self.options.with_zstd:
self.requires("zstd/1.5.6")
if self.options.get_safe("with_tbb"):
self.requires("onetbb/2021.12.0")
if self.options.with_jemalloc:
self.requires("jemalloc/5.3.0")
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, self._min_cppstd)
minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)
if minimum_version and Version(self.settings.compiler.version) < minimum_version:
raise ConanInvalidConfiguration(
f"{self.ref} requires C++{self._min_cppstd}, which your compiler does not support."
)
if self.settings.arch not in ["x86_64", "ppc64le", "ppc64", "mips64", "armv8"]:
raise ConanInvalidConfiguration("Rocksdb requires 64 bits")
check_min_vs(self, "191")
if self.version == "6.20.3" and \
self.settings.os == "Linux" and \
self.settings.compiler == "gcc" and \
Version(self.settings.compiler.version) < "5":
raise ConanInvalidConfiguration("Rocksdb 6.20.3 is not compilable with gcc <5.") # See https://github.com/facebook/rocksdb/issues/3522
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["FAIL_ON_WARNINGS"] = False
tc.variables["WITH_TESTS"] = False
tc.variables["WITH_TOOLS"] = False
tc.variables["WITH_CORE_TOOLS"] = False
tc.variables["WITH_BENCHMARK_TOOLS"] = False
tc.variables["WITH_FOLLY_DISTRIBUTED_MUTEX"] = False
if is_msvc(self):
tc.variables["WITH_MD_LIBRARY"] = not is_msvc_static_runtime(self)
tc.variables["ROCKSDB_INSTALL_ON_WINDOWS"] = self.settings.os == "Windows"
tc.variables["ROCKSDB_LITE"] = self.options.lite
tc.variables["WITH_GFLAGS"] = self.options.with_gflags
tc.variables["WITH_SNAPPY"] = self.options.with_snappy
tc.variables["WITH_LZ4"] = self.options.with_lz4
tc.variables["WITH_ZLIB"] = self.options.with_zlib
tc.variables["WITH_ZSTD"] = self.options.with_zstd
tc.variables["WITH_TBB"] = self.options.get_safe("with_tbb", False)
tc.variables["WITH_JEMALLOC"] = self.options.with_jemalloc
tc.variables["ROCKSDB_BUILD_SHARED"] = self.options.shared
tc.variables["ROCKSDB_LIBRARY_EXPORTS"] = self.settings.os == "Windows" and self.options.shared
tc.variables["ROCKSDB_DLL" ] = self.settings.os == "Windows" and self.options.shared
tc.variables["USE_RTTI"] = self.options.use_rtti
if not bool(self.options.enable_sse):
tc.variables["PORTABLE"] = True
tc.variables["FORCE_SSE42"] = False
elif self.options.enable_sse == "sse42":
tc.variables["PORTABLE"] = True
tc.variables["FORCE_SSE42"] = True
elif self.options.enable_sse == "avx2":
tc.variables["PORTABLE"] = False
tc.variables["FORCE_SSE42"] = False
# not available yet in CCI
tc.variables["WITH_NUMA"] = False
tc.generate()
deps = CMakeDeps(self)
if self.options.with_jemalloc:
deps.set_property("jemalloc", "cmake_file_name", "JeMalloc")
deps.set_property("jemalloc", "cmake_target_name", "JeMalloc::JeMalloc")
if self.options.with_zstd:
deps.set_property("zstd", "cmake_target_name", "zstd::zstd")
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def _remove_static_libraries(self):
rm(self, "rocksdb.lib", os.path.join(self.package_folder, "lib"))
for lib in glob.glob(os.path.join(self.package_folder, "lib", "*.a")):
if not lib.endswith(".dll.a"):
os.remove(lib)
def _remove_cpp_headers(self):
for path in glob.glob(os.path.join(self.package_folder, "include", "rocksdb", "*")):
if path != os.path.join(self.package_folder, "include", "rocksdb", "c.h"):
if os.path.isfile(path):
os.remove(path)
else:
shutil.rmtree(path)
def package(self):
copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
copy(self, "LICENSE*", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
cmake = CMake(self)
cmake.install()
if self.options.shared:
self._remove_static_libraries()
self._remove_cpp_headers() # Force stable ABI for shared libraries
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
rmdir(self, os.path.join(self.package_folder, "lib", "pkgconfig"))
def package_info(self):
cmake_target = "rocksdb-shared" if self.options.shared else "rocksdb"
self.cpp_info.set_property("cmake_file_name", "RocksDB")
self.cpp_info.set_property("cmake_target_name", f"RocksDB::{cmake_target}")
# TODO: back to global scope in conan v2 once cmake_find_package* generators removed
self.cpp_info.components["librocksdb"].libs = collect_libs(self)
if self.settings.os == "Windows":
self.cpp_info.components["librocksdb"].system_libs = ["shlwapi", "rpcrt4"]
if self.options.shared:
self.cpp_info.components["librocksdb"].defines = ["ROCKSDB_DLL"]
elif self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["librocksdb"].system_libs = ["pthread", "m"]
if self.options.lite:
self.cpp_info.components["librocksdb"].defines.append("ROCKSDB_LITE")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "RocksDB"
self.cpp_info.names["cmake_find_package_multi"] = "RocksDB"
self.cpp_info.components["librocksdb"].names["cmake_find_package"] = cmake_target
self.cpp_info.components["librocksdb"].names["cmake_find_package_multi"] = cmake_target
self.cpp_info.components["librocksdb"].set_property("cmake_target_name", f"RocksDB::{cmake_target}")
if self.options.with_gflags:
self.cpp_info.components["librocksdb"].requires.append("gflags::gflags")
if self.options.with_snappy:
self.cpp_info.components["librocksdb"].requires.append("snappy::snappy")
if self.options.with_lz4:
self.cpp_info.components["librocksdb"].requires.append("lz4::lz4")
if self.options.with_zlib:
self.cpp_info.components["librocksdb"].requires.append("zlib::zlib")
if self.options.with_zstd:
self.cpp_info.components["librocksdb"].requires.append("zstd::zstd")
if self.options.get_safe("with_tbb"):
self.cpp_info.components["librocksdb"].requires.append("onetbb::onetbb")
if self.options.with_jemalloc:
self.cpp_info.components["librocksdb"].requires.append("jemalloc::jemalloc")

View File

@@ -1,319 +0,0 @@
diff --git a/HISTORY.md b/HISTORY.md
index 36d472229..05ad1a202 100644
--- a/HISTORY.md
+++ b/HISTORY.md
@@ -1,6 +1,10 @@
# Rocksdb Change Log
> NOTE: Entries for next release do not go here. Follow instructions in `unreleased_history/README.txt`
+## 9.7.4 (10/31/2024)
+### Bug Fixes
+* Fix a leak of obsolete blob files left open until DB::Close(). This bug was introduced in version 9.4.0.
+
## 9.7.3 (10/16/2024)
### Behavior Changes
* OPTIONS file to be loaded by remote worker is now preserved so that it does not get purged by the primary host. A similar technique as how we are preserving new SST files from getting purged is used for this. min_options_file_numbers_ is tracked like pending_outputs_ is tracked.
diff --git a/db/blob/blob_file_cache.cc b/db/blob/blob_file_cache.cc
index 5f340aadf..1b9faa238 100644
--- a/db/blob/blob_file_cache.cc
+++ b/db/blob/blob_file_cache.cc
@@ -42,6 +42,7 @@ Status BlobFileCache::GetBlobFileReader(
assert(blob_file_reader);
assert(blob_file_reader->IsEmpty());
+ // NOTE: sharing same Cache with table_cache
const Slice key = GetSliceForKey(&blob_file_number);
assert(cache_);
@@ -98,4 +99,13 @@ Status BlobFileCache::GetBlobFileReader(
return Status::OK();
}
+void BlobFileCache::Evict(uint64_t blob_file_number) {
+ // NOTE: sharing same Cache with table_cache
+ const Slice key = GetSliceForKey(&blob_file_number);
+
+ assert(cache_);
+
+ cache_.get()->Erase(key);
+}
+
} // namespace ROCKSDB_NAMESPACE
diff --git a/db/blob/blob_file_cache.h b/db/blob/blob_file_cache.h
index 740e67ada..6858d012b 100644
--- a/db/blob/blob_file_cache.h
+++ b/db/blob/blob_file_cache.h
@@ -36,6 +36,15 @@ class BlobFileCache {
uint64_t blob_file_number,
CacheHandleGuard<BlobFileReader>* blob_file_reader);
+ // Called when a blob file is obsolete to ensure it is removed from the cache
+ // to avoid effectively leaking the open file and associated memory
+ void Evict(uint64_t blob_file_number);
+
+ // Used to identify cache entries for blob files (not normally useful)
+ static const Cache::CacheItemHelper* GetHelper() {
+ return CacheInterface::GetBasicHelper();
+ }
+
private:
using CacheInterface =
BasicTypedCacheInterface<BlobFileReader, CacheEntryRole::kMisc>;
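
Note: the hunks above add BlobFileCache::Evict(), and the version_builder.cc hunk later in this patch wires it into the blob-file metadata deleter, so an obsolete blob file's reader is erased from the shared table/blob cache instead of staying open until DB::Close(). A minimal sketch of that erase-on-obsolete pattern, using illustrative types rather than the RocksDB ones:

#include <cstdint>
#include <memory>
#include <unordered_map>

// Illustrative stand-ins, not the RocksDB types: a reader holding an open
// file handle, cached in a map shared by table and blob files and keyed
// by file number.
struct FileReader
{
    // ... open file handle, prefetched metadata, etc.
};

using FileCache =
    std::unordered_map<std::uint64_t, std::shared_ptr<FileReader>>;

// Called from the blob-file metadata deleter once the file is obsolete:
// erasing the cache entry releases the reader (and its file descriptor)
// as soon as the last outstanding reference drops, rather than leaking it
// until the cache itself is destroyed at DB::Close().
void
evictObsolete(FileCache& cache, std::uint64_t blobFileNumber)
{
    cache.erase(blobFileNumber);
}
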
diff --git a/db/column_family.h b/db/column_family.h
index e4b7adde8..86637736a 100644
--- a/db/column_family.h
+++ b/db/column_family.h
@@ -401,6 +401,7 @@ class ColumnFamilyData {
SequenceNumber earliest_seq);
TableCache* table_cache() const { return table_cache_.get(); }
+ BlobFileCache* blob_file_cache() const { return blob_file_cache_.get(); }
BlobSource* blob_source() const { return blob_source_.get(); }
// See documentation in compaction_picker.h
diff --git a/db/db_impl/db_impl.cc b/db/db_impl/db_impl.cc
index 261593423..06573ac2e 100644
--- a/db/db_impl/db_impl.cc
+++ b/db/db_impl/db_impl.cc
@@ -659,8 +659,9 @@ Status DBImpl::CloseHelper() {
// We need to release them before the block cache is destroyed. The block
// cache may be destroyed inside versions_.reset(), when column family data
// list is destroyed, so leaving handles in table cache after
- // versions_.reset() may cause issues.
- // Here we clean all unreferenced handles in table cache.
+ // versions_.reset() may cause issues. Here we clean all unreferenced handles
+ // in table cache, and (for certain builds/conditions) assert that no obsolete
+ // files are hanging around unreferenced (leak) in the table/blob file cache.
// Now we assume all user queries have finished, so only version set itself
// can possibly hold the blocks from block cache. After releasing unreferenced
// handles here, only handles held by version set left and inside
@@ -668,6 +669,9 @@ Status DBImpl::CloseHelper() {
// time a handle is released, we erase it from the cache too. By doing that,
// we can guarantee that after versions_.reset(), table cache is empty
// so the cache can be safely destroyed.
+#ifndef NDEBUG
+ TEST_VerifyNoObsoleteFilesCached(/*db_mutex_already_held=*/true);
+#endif // !NDEBUG
table_cache_->EraseUnRefEntries();
for (auto& txn_entry : recovered_transactions_) {
@@ -3227,6 +3231,8 @@ Status DBImpl::MultiGetImpl(
s = Status::Aborted();
break;
}
+ // This could be a long-running operation
+ ROCKSDB_THREAD_YIELD_HOOK();
}
// Post processing (decrement reference counts and record statistics)
diff --git a/db/db_impl/db_impl.h b/db/db_impl/db_impl.h
index 5e4fa310b..ccc0abfa7 100644
--- a/db/db_impl/db_impl.h
+++ b/db/db_impl/db_impl.h
@@ -1241,9 +1241,14 @@ class DBImpl : public DB {
static Status TEST_ValidateOptions(const DBOptions& db_options) {
return ValidateOptions(db_options);
}
-
#endif // NDEBUG
+ // In certain configurations, verify that the table/blob file cache only
+ // contains entries for live files, to check for effective leaks of open
+ // files. This can only be called when purging of obsolete files has
+ // "settled," such as during parts of DB Close().
+ void TEST_VerifyNoObsoleteFilesCached(bool db_mutex_already_held) const;
+
// persist stats to column family "_persistent_stats"
void PersistStats();
diff --git a/db/db_impl/db_impl_debug.cc b/db/db_impl/db_impl_debug.cc
index 790a50d7a..67f5b4aaf 100644
--- a/db/db_impl/db_impl_debug.cc
+++ b/db/db_impl/db_impl_debug.cc
@@ -9,6 +9,7 @@
#ifndef NDEBUG
+#include "db/blob/blob_file_cache.h"
#include "db/column_family.h"
#include "db/db_impl/db_impl.h"
#include "db/error_handler.h"
@@ -328,5 +329,49 @@ size_t DBImpl::TEST_EstimateInMemoryStatsHistorySize() const {
InstrumentedMutexLock l(&const_cast<DBImpl*>(this)->stats_history_mutex_);
return EstimateInMemoryStatsHistorySize();
}
+
+void DBImpl::TEST_VerifyNoObsoleteFilesCached(
+ bool db_mutex_already_held) const {
+ // This check is somewhat expensive and obscure to make a part of every
+ // unit test in every build variety. Thus, we only enable it for ASAN builds.
+ if (!kMustFreeHeapAllocations) {
+ return;
+ }
+
+ std::optional<InstrumentedMutexLock> l;
+ if (db_mutex_already_held) {
+ mutex_.AssertHeld();
+ } else {
+ l.emplace(&mutex_);
+ }
+
+ std::vector<uint64_t> live_files;
+ for (auto cfd : *versions_->GetColumnFamilySet()) {
+ if (cfd->IsDropped()) {
+ continue;
+ }
+ // Sneakily add both SST and blob files to the same list
+ cfd->current()->AddLiveFiles(&live_files, &live_files);
+ }
+ std::sort(live_files.begin(), live_files.end());
+
+ auto fn = [&live_files](const Slice& key, Cache::ObjectPtr, size_t,
+ const Cache::CacheItemHelper* helper) {
+ if (helper != BlobFileCache::GetHelper()) {
+ // Skip non-blob files for now
+ // FIXME: diagnose and fix the leaks of obsolete SST files revealed in
+ // unit tests.
+ return;
+ }
+ // See TableCache and BlobFileCache
+ assert(key.size() == sizeof(uint64_t));
+ uint64_t file_number;
+ GetUnaligned(reinterpret_cast<const uint64_t*>(key.data()), &file_number);
+ // Assert file is in sorted live_files
+ assert(
+ std::binary_search(live_files.begin(), live_files.end(), file_number));
+ };
+ table_cache_->ApplyToAllEntries(fn, {});
+}
} // namespace ROCKSDB_NAMESPACE
#endif // NDEBUG
diff --git a/db/db_iter.cc b/db/db_iter.cc
index e02586377..bf4749eb9 100644
--- a/db/db_iter.cc
+++ b/db/db_iter.cc
@@ -540,6 +540,8 @@ bool DBIter::FindNextUserEntryInternal(bool skipping_saved_key,
} else {
iter_.Next();
}
+ // This could be a long-running operation due to tombstones, etc.
+ ROCKSDB_THREAD_YIELD_HOOK();
} while (iter_.Valid());
valid_ = false;
diff --git a/db/table_cache.cc b/db/table_cache.cc
index 71fc29c32..8a5be75e8 100644
--- a/db/table_cache.cc
+++ b/db/table_cache.cc
@@ -164,6 +164,7 @@ Status TableCache::GetTableReader(
}
Cache::Handle* TableCache::Lookup(Cache* cache, uint64_t file_number) {
+ // NOTE: sharing same Cache with BlobFileCache
Slice key = GetSliceForFileNumber(&file_number);
return cache->Lookup(key);
}
@@ -179,6 +180,7 @@ Status TableCache::FindTable(
size_t max_file_size_for_l0_meta_pin, Temperature file_temperature) {
PERF_TIMER_GUARD_WITH_CLOCK(find_table_nanos, ioptions_.clock);
uint64_t number = file_meta.fd.GetNumber();
+ // NOTE: sharing same Cache with BlobFileCache
Slice key = GetSliceForFileNumber(&number);
*handle = cache_.Lookup(key);
TEST_SYNC_POINT_CALLBACK("TableCache::FindTable:0",
diff --git a/db/version_builder.cc b/db/version_builder.cc
index ed8ab8214..c98f53f42 100644
--- a/db/version_builder.cc
+++ b/db/version_builder.cc
@@ -24,6 +24,7 @@
#include <vector>
#include "cache/cache_reservation_manager.h"
+#include "db/blob/blob_file_cache.h"
#include "db/blob/blob_file_meta.h"
#include "db/dbformat.h"
#include "db/internal_stats.h"
@@ -744,12 +745,9 @@ class VersionBuilder::Rep {
return Status::Corruption("VersionBuilder", oss.str());
}
- // Note: we use C++11 for now but in C++14, this could be done in a more
- // elegant way using generalized lambda capture.
- VersionSet* const vs = version_set_;
- const ImmutableCFOptions* const ioptions = ioptions_;
-
- auto deleter = [vs, ioptions](SharedBlobFileMetaData* shared_meta) {
+ auto deleter = [vs = version_set_, ioptions = ioptions_,
+ bc = cfd_ ? cfd_->blob_file_cache()
+ : nullptr](SharedBlobFileMetaData* shared_meta) {
if (vs) {
assert(ioptions);
assert(!ioptions->cf_paths.empty());
@@ -758,6 +756,9 @@ class VersionBuilder::Rep {
vs->AddObsoleteBlobFile(shared_meta->GetBlobFileNumber(),
ioptions->cf_paths.front().path);
}
+ if (bc) {
+ bc->Evict(shared_meta->GetBlobFileNumber());
+ }
delete shared_meta;
};
@@ -766,7 +767,7 @@ class VersionBuilder::Rep {
blob_file_number, blob_file_addition.GetTotalBlobCount(),
blob_file_addition.GetTotalBlobBytes(),
blob_file_addition.GetChecksumMethod(),
- blob_file_addition.GetChecksumValue(), deleter);
+ blob_file_addition.GetChecksumValue(), std::move(deleter));
mutable_blob_file_metas_.emplace(
blob_file_number, MutableBlobFileMetaData(std::move(shared_meta)));
diff --git a/db/version_set.h b/db/version_set.h
index 9336782b1..024f869e7 100644
--- a/db/version_set.h
+++ b/db/version_set.h
@@ -1514,7 +1514,6 @@ class VersionSet {
void GetLiveFilesMetaData(std::vector<LiveFileMetaData>* metadata);
void AddObsoleteBlobFile(uint64_t blob_file_number, std::string path) {
- // TODO: Erase file from BlobFileCache?
obsolete_blob_files_.emplace_back(blob_file_number, std::move(path));
}
diff --git a/include/rocksdb/version.h b/include/rocksdb/version.h
index 2a19796b8..0afa2cab1 100644
--- a/include/rocksdb/version.h
+++ b/include/rocksdb/version.h
@@ -13,7 +13,7 @@
// minor or major version number planned for release.
#define ROCKSDB_MAJOR 9
#define ROCKSDB_MINOR 7
-#define ROCKSDB_PATCH 3
+#define ROCKSDB_PATCH 4
// Do not use these. We made the mistake of declaring macros starting with
// double underscore. Now we have to live with our choice. We'll deprecate these
diff --git a/port/port.h b/port/port.h
index 13aa56d47..141716e5b 100644
--- a/port/port.h
+++ b/port/port.h
@@ -19,3 +19,19 @@
#elif defined(OS_WIN)
#include "port/win/port_win.h"
#endif
+
+#ifdef OS_LINUX
+// A temporary hook into long-running RocksDB threads to support modifying their
+// priority etc. This should become a public API hook once the requirements
+// are better understood.
+extern "C" void RocksDbThreadYield() __attribute__((__weak__));
+#define ROCKSDB_THREAD_YIELD_HOOK() \
+ { \
+ if (RocksDbThreadYield) { \
+ RocksDbThreadYield(); \
+ } \
+ }
+#else
+#define ROCKSDB_THREAD_YIELD_HOOK() \
+ {}
+#endif
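
ROCKSDB_THREAD_YIELD_HOOK() above expands to a guarded call through a weak symbol, so it costs only a null check unless the embedding application supplies a definition. A hedged sketch of what such an override might look like (the sched_yield choice is an assumption, not part of the patch):

#include <sched.h>  // Linux-only, matching the OS_LINUX guard above

// Strong definition overriding the weak declaration in port/port.h. When
// linked into the final binary, long-running MultiGet and iterator loops
// call this between steps, letting the host adjust scheduling; with no
// definition the hook stays a no-op.
extern "C" void
RocksDbThreadYield()
{
    sched_yield();
}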


@@ -1,30 +0,0 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 93b884d..b715cb6 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -106,14 +106,9 @@ endif()
include(CMakeDependentOption)
if(MSVC)
- option(WITH_GFLAGS "build with GFlags" OFF)
option(WITH_XPRESS "build with windows built in compression" OFF)
- option(ROCKSDB_SKIP_THIRDPARTY "skip thirdparty.inc" OFF)
-
- if(NOT ROCKSDB_SKIP_THIRDPARTY)
- include(${CMAKE_CURRENT_SOURCE_DIR}/thirdparty.inc)
- endif()
-else()
+endif()
+if(TRUE)
if(CMAKE_SYSTEM_NAME MATCHES "FreeBSD" AND NOT CMAKE_SYSTEM_NAME MATCHES "kFreeBSD")
# FreeBSD has jemalloc as default malloc
# but it does not have all the jemalloc files in include/...
@@ -126,7 +121,7 @@ else()
endif()
endif()
- if(MINGW)
+ if(MSVC OR MINGW)
option(WITH_GFLAGS "build with GFlags" OFF)
else()
option(WITH_GFLAGS "build with GFlags" ON)


@@ -1,40 +0,0 @@
sources:
"1.1.10":
url: "https://github.com/google/snappy/archive/1.1.10.tar.gz"
sha256: "49d831bffcc5f3d01482340fe5af59852ca2fe76c3e05df0e67203ebbe0f1d90"
"1.1.9":
url: "https://github.com/google/snappy/archive/1.1.9.tar.gz"
sha256: "75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7"
"1.1.8":
url: "https://github.com/google/snappy/archive/1.1.8.tar.gz"
sha256: "16b677f07832a612b0836178db7f374e414f94657c138e6993cbfc5dcc58651f"
"1.1.7":
url: "https://github.com/google/snappy/archive/1.1.7.tar.gz"
sha256: "3dfa02e873ff51a11ee02b9ca391807f0c8ea0529a4924afa645fbf97163f9d4"
patches:
"1.1.10":
- patch_file: "patches/1.1.10-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.10-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"
"1.1.9":
- patch_file: "patches/1.1.9-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.9-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"


@@ -1,89 +0,0 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.scm import Version
import os
required_conan_version = ">=1.54.0"
class SnappyConan(ConanFile):
name = "snappy"
description = "A fast compressor/decompressor"
topics = ("google", "compressor", "decompressor")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/google/snappy"
license = "BSD-3-Clause"
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
}
def export_sources(self):
export_conandata_patches(self)
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SNAPPY_BUILD_TESTS"] = False
if Version(self.version) >= "1.1.8":
tc.variables["SNAPPY_FUZZING_BUILD"] = False
tc.variables["SNAPPY_REQUIRE_AVX"] = False
tc.variables["SNAPPY_REQUIRE_AVX2"] = False
tc.variables["SNAPPY_INSTALL"] = True
if Version(self.version) >= "1.1.9":
tc.variables["SNAPPY_BUILD_BENCHMARKS"] = False
tc.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "Snappy")
self.cpp_info.set_property("cmake_target_name", "Snappy::snappy")
# TODO: back to global scope in conan v2 once cmake_find_package* generators removed
self.cpp_info.components["snappylib"].libs = ["snappy"]
if not self.options.shared:
if self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["snappylib"].system_libs.append("m")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "Snappy"
self.cpp_info.names["cmake_find_package_multi"] = "Snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package"] = "snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package_multi"] = "snappy"
self.cpp_info.components["snappylib"].set_property("cmake_target_name", "Snappy::snappy")


@@ -1,13 +0,0 @@
diff --git a/snappy-stubs-internal.h b/snappy-stubs-internal.h
index 1548ed7..3b4a9f3 100644
--- a/snappy-stubs-internal.h
+++ b/snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#if HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif // HAVE_ATTRIBUTE_ALWAYS_INLINE


@@ -1,13 +0,0 @@
diff --git a/snappy.cc b/snappy.cc
index d414718..e4efb59 100644
--- a/snappy.cc
+++ b/snappy.cc
@@ -1132,7 +1132,7 @@ inline size_t AdvanceToNextTagX86Optimized(const uint8_t** ip_p, size_t* tag) {
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__)
+#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"


@@ -1,14 +0,0 @@
Fixes the following error:
error: inlining failed in call to always_inline size_t snappy::AdvanceToNextTag(const uint8_t**, size_t*): function body can be overwritten at link time
--- snappy-stubs-internal.h
+++ snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#ifdef HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif


@@ -1,12 +0,0 @@
--- CMakeLists.txt
+++ CMakeLists.txt
@@ -69,7 +69,7 @@
- # Use -Werror for clang only.
+if(0)
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
if(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
endif(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
endif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
-
+endif()


@@ -1,12 +0,0 @@
asm clobbers do not work for clang < 9 and apple-clang < 11 (found by SpaceIm)
--- snappy.cc
+++ snappy.cc
@@ -1026,7 +1026,7 @@
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GNUC__) && defined(__x86_64__)
+#if defined(__GNUC__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"


@@ -1,20 +0,0 @@
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -53,8 +53,6 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
add_definitions(-D_HAS_EXCEPTIONS=0)
# Disable RTTI.
- string(REGEX REPLACE "/GR" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GR-")
else(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Use -Wall for clang and gcc.
if(NOT CMAKE_CXX_FLAGS MATCHES "-Wall")
@@ -78,8 +76,6 @@ endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
# Disable RTTI.
- string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
endif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# BUILD_SHARED_LIBS is a standard CMake variable, but we declare it here to make


@@ -1,12 +0,0 @@
sources:
"4.0.3":
url: "https://github.com/SOCI/soci/archive/v4.0.3.tar.gz"
sha256: "4b1ff9c8545c5d802fbe06ee6cd2886630e5c03bf740e269bb625b45cf934928"
patches:
"4.0.3":
- patch_file: "patches/0001-Remove-hardcoded-INSTALL_NAME_DIR-for-relocatable-li.patch"
patch_description: "Generate relocatable libraries on MacOS"
patch_type: "portability"
- patch_file: "patches/0002-Fix-soci_backend.patch"
patch_description: "Fix variable names for dependencies"
patch_type: "conan"


@@ -1,212 +0,0 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.microsoft import is_msvc
from conan.tools.scm import Version
from conan.errors import ConanInvalidConfiguration
import os
required_conan_version = ">=1.55.0"
class SociConan(ConanFile):
name = "soci"
homepage = "https://github.com/SOCI/soci"
url = "https://github.com/conan-io/conan-center-index"
description = "The C++ Database Access Library "
topics = ("mysql", "odbc", "postgresql", "sqlite3")
license = "BSL-1.0"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"empty": [True, False],
"with_sqlite3": [True, False],
"with_db2": [True, False],
"with_odbc": [True, False],
"with_oracle": [True, False],
"with_firebird": [True, False],
"with_mysql": [True, False],
"with_postgresql": [True, False],
"with_boost": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"empty": False,
"with_sqlite3": False,
"with_db2": False,
"with_odbc": False,
"with_oracle": False,
"with_firebird": False,
"with_mysql": False,
"with_postgresql": False,
"with_boost": False,
}
def export_sources(self):
export_conandata_patches(self)
def layout(self):
cmake_layout(self, src_folder="src")
def config_options(self):
if self.settings.os == "Windows":
self.options.rm_safe("fPIC")
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def requirements(self):
if self.options.with_sqlite3:
self.requires("sqlite3/3.47.0")
if self.options.with_odbc and self.settings.os != "Windows":
self.requires("odbc/2.3.11")
if self.options.with_mysql:
self.requires("libmysqlclient/8.1.0")
if self.options.with_postgresql:
self.requires("libpq/15.5")
if self.options.with_boost:
self.requires("boost/1.83.0")
@property
def _minimum_compilers_version(self):
return {
"Visual Studio": "14",
"gcc": "4.8",
"clang": "3.8",
"apple-clang": "8.0"
}
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
compiler = str(self.settings.compiler)
compiler_version = Version(self.settings.compiler.version.value)
if compiler not in self._minimum_compilers_version:
self.output.warning("{} recipe lacks information about the {} compiler support.".format(self.name, self.settings.compiler))
elif compiler_version < self._minimum_compilers_version[compiler]:
raise ConanInvalidConfiguration("{} requires a {} version >= {}".format(self.name, compiler, compiler_version))
prefix = "Dependencies for"
message = "not configured in this conan package."
if self.options.with_db2:
# self.requires("db2/0.0.0") # TODO add support for db2
raise ConanInvalidConfiguration("{} DB2 {} ".format(prefix, message))
if self.options.with_oracle:
# self.requires("oracle_db/0.0.0") # TODO add support for oracle
raise ConanInvalidConfiguration("{} ORACLE {} ".format(prefix, message))
if self.options.with_firebird:
# self.requires("firebird/0.0.0") # TODO add support for firebird
raise ConanInvalidConfiguration("{} firebird {} ".format(prefix, message))
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SOCI_SHARED"] = self.options.shared
tc.variables["SOCI_STATIC"] = not self.options.shared
tc.variables["SOCI_TESTS"] = False
tc.variables["SOCI_CXX11"] = True
tc.variables["SOCI_EMPTY"] = self.options.empty
tc.variables["WITH_SQLITE3"] = self.options.with_sqlite3
tc.variables["WITH_DB2"] = self.options.with_db2
tc.variables["WITH_ODBC"] = self.options.with_odbc
tc.variables["WITH_ORACLE"] = self.options.with_oracle
tc.variables["WITH_FIREBIRD"] = self.options.with_firebird
tc.variables["WITH_MYSQL"] = self.options.with_mysql
tc.variables["WITH_POSTGRESQL"] = self.options.with_postgresql
tc.variables["WITH_BOOST"] = self.options.with_boost
tc.generate()
deps = CMakeDeps(self)
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "LICENSE_1_0.txt", dst=os.path.join(self.package_folder, "licenses"), src=self.source_folder)
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "SOCI")
target_suffix = "" if self.options.shared else "_static"
lib_prefix = "lib" if is_msvc(self) and not self.options.shared else ""
version = Version(self.version)
lib_suffix = "_{}_{}".format(version.major, version.minor) if self.settings.os == "Windows" else ""
# soci_core
self.cpp_info.components["soci_core"].set_property("cmake_target_name", "SOCI::soci_core{}".format(target_suffix))
self.cpp_info.components["soci_core"].libs = ["{}soci_core{}".format(lib_prefix, lib_suffix)]
if self.options.with_boost:
self.cpp_info.components["soci_core"].requires.append("boost::boost")
# soci_empty
if self.options.empty:
self.cpp_info.components["soci_empty"].set_property("cmake_target_name", "SOCI::soci_empty{}".format(target_suffix))
self.cpp_info.components["soci_empty"].libs = ["{}soci_empty{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_empty"].requires = ["soci_core"]
# soci_sqlite3
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].set_property("cmake_target_name", "SOCI::soci_sqlite3{}".format(target_suffix))
self.cpp_info.components["soci_sqlite3"].libs = ["{}soci_sqlite3{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_sqlite3"].requires = ["soci_core", "sqlite3::sqlite3"]
# soci_odbc
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].set_property("cmake_target_name", "SOCI::soci_odbc{}".format(target_suffix))
self.cpp_info.components["soci_odbc"].libs = ["{}soci_odbc{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_odbc"].requires = ["soci_core"]
if self.settings.os == "Windows":
self.cpp_info.components["soci_odbc"].system_libs.append("odbc32")
else:
self.cpp_info.components["soci_odbc"].requires.append("odbc::odbc")
# soci_mysql
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].set_property("cmake_target_name", "SOCI::soci_mysql{}".format(target_suffix))
self.cpp_info.components["soci_mysql"].libs = ["{}soci_mysql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_mysql"].requires = ["soci_core", "libmysqlclient::libmysqlclient"]
# soci_postgresql
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].set_property("cmake_target_name", "SOCI::soci_postgresql{}".format(target_suffix))
self.cpp_info.components["soci_postgresql"].libs = ["{}soci_postgresql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_postgresql"].requires = ["soci_core", "libpq::libpq"]
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "SOCI"
self.cpp_info.names["cmake_find_package_multi"] = "SOCI"
self.cpp_info.components["soci_core"].names["cmake_find_package"] = "soci_core{}".format(target_suffix)
self.cpp_info.components["soci_core"].names["cmake_find_package_multi"] = "soci_core{}".format(target_suffix)
if self.options.empty:
self.cpp_info.components["soci_empty"].names["cmake_find_package"] = "soci_empty{}".format(target_suffix)
self.cpp_info.components["soci_empty"].names["cmake_find_package_multi"] = "soci_empty{}".format(target_suffix)
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package"] = "soci_sqlite3{}".format(target_suffix)
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package_multi"] = "soci_sqlite3{}".format(target_suffix)
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].names["cmake_find_package"] = "soci_odbc{}".format(target_suffix)
self.cpp_info.components["soci_odbc"].names["cmake_find_package_multi"] = "soci_odbc{}".format(target_suffix)
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].names["cmake_find_package"] = "soci_mysql{}".format(target_suffix)
self.cpp_info.components["soci_mysql"].names["cmake_find_package_multi"] = "soci_mysql{}".format(target_suffix)
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].names["cmake_find_package"] = "soci_postgresql{}".format(target_suffix)
self.cpp_info.components["soci_postgresql"].names["cmake_find_package_multi"] = "soci_postgresql{}".format(target_suffix)


@@ -1,39 +0,0 @@
From d491bf7b5040d314ffd0c6310ba01f78ff44c85e Mon Sep 17 00:00:00 2001
From: Rasmus Thomsen <rasmus.thomsen@dampsoft.de>
Date: Fri, 14 Apr 2023 09:16:29 +0200
Subject: [PATCH] Remove hardcoded INSTALL_NAME_DIR for relocatable libraries
on MacOS
---
cmake/SociBackend.cmake | 2 +-
src/core/CMakeLists.txt | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 5d4ef0df..39fe1f77 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -171,7 +171,7 @@ macro(soci_backend NAME)
set_target_properties(${THIS_BACKEND_TARGET}
PROPERTIES
SOVERSION ${${PROJECT_NAME}_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib)
+ )
if(APPLE)
set_target_properties(${THIS_BACKEND_TARGET}
diff --git a/src/core/CMakeLists.txt b/src/core/CMakeLists.txt
index 3e7deeae..f9eae564 100644
--- a/src/core/CMakeLists.txt
+++ b/src/core/CMakeLists.txt
@@ -59,7 +59,6 @@ if (SOCI_SHARED)
PROPERTIES
VERSION ${SOCI_VERSION}
SOVERSION ${SOCI_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib
CLEAN_DIRECT_OUTPUT 1)
endif()
--
2.25.1


@@ -1,24 +0,0 @@
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 0a664667..3fa2ed95 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -31,14 +31,13 @@ macro(soci_backend_deps_found NAME DEPS SUCCESS)
if(NOT DEPEND_FOUND)
list(APPEND DEPS_NOT_FOUND ${dep})
else()
- string(TOUPPER "${dep}" DEPU)
- if( ${DEPU}_INCLUDE_DIR )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIR})
+ if( ${dep}_INCLUDE_DIR )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIR})
endif()
- if( ${DEPU}_INCLUDE_DIRS )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIRS})
+ if( ${dep}_INCLUDE_DIRS )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIRS})
endif()
- list(APPEND DEPS_LIBRARIES ${${DEPU}_LIBRARIES})
+ list(APPEND DEPS_LIBRARIES ${${dep}_LIBRARIES})
endif()
endforeach()


@@ -26,6 +26,7 @@
#include <boost/beast/core/string.hpp>
#include <boost/filesystem.hpp>
#include <fstream>
#include <map>
#include <memory>
#include <mutex>


@@ -21,7 +21,6 @@
#define RIPPLE_BASICS_SHAMAP_HASH_H_INCLUDED
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/partitioned_unordered_map.h>
#include <ostream>


@@ -90,9 +90,6 @@ public:
int
getCacheSize() const;
int
getTrackSize() const;
float
getHitRate();
@@ -170,9 +167,6 @@ public:
bool
retrieve(key_type const& key, T& data);
mutex_type&
peekMutex();
std::vector<key_type>
getKeys() const;
@@ -193,11 +187,14 @@ public:
private:
SharedPointerType
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l);
initialFetch(key_type const& key);
void
collect_metrics();
Mutex&
lockPartition(key_type const& key) const;
private:
struct Stats
{
@@ -300,8 +297,8 @@ private:
[[maybe_unused]] clock_type::time_point const& now,
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&);
std::atomic<int>& allRemovals,
Mutex& partitionLock);
[[nodiscard]] std::thread
sweepHelper(
@@ -310,14 +307,12 @@ private:
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&);
Mutex& partitionLock);
beast::Journal m_journal;
clock_type& m_clock;
Stats m_stats;
mutex_type mutable m_mutex;
// Used for logging
std::string m_name;
@@ -328,10 +323,11 @@ private:
clock_type::duration const m_target_age;
// Number of items cached
int m_cache_count;
std::atomic<int> m_cache_count;
cache_type m_cache; // Hold strong reference to recent objects
std::uint64_t m_hits;
std::uint64_t m_misses;
std::atomic<std::uint64_t> m_hits;
std::atomic<std::uint64_t> m_misses;
mutable std::vector<mutex_type> partitionLocks_;
};
} // namespace ripple


@@ -22,6 +22,7 @@
#include <xrpl/basics/IntrusivePointer.ipp>
#include <xrpl/basics/TaggedCache.h>
#include <xrpl/beast/core/CurrentThreadName.h>
namespace ripple {
@@ -60,6 +61,7 @@ inline TaggedCache<
, m_hits(0)
, m_misses(0)
{
partitionLocks_ = std::vector<mutex_type>(m_cache.partitions());
}
template <
@@ -105,8 +107,13 @@ TaggedCache<
KeyEqual,
Mutex>::size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
std::size_t totalSize = 0;
for (size_t i = 0; i < partitionLocks_.size(); ++i)
{
std::lock_guard<Mutex> lock(partitionLocks_[i]);
totalSize += m_cache.map()[i].size();
}
return totalSize;
}
template <
@@ -129,32 +136,7 @@ TaggedCache<
KeyEqual,
Mutex>::getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
}
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
inline int
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
return m_cache_count.load(std::memory_order_relaxed);
}
template <
@@ -177,9 +159,10 @@ TaggedCache<
KeyEqual,
Mutex>::getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
return m_hits * (100.0f / std::max(1.0f, total));
auto const hits = m_hits.load(std::memory_order_relaxed);
auto const misses = m_misses.load(std::memory_order_relaxed);
float const total = float(hits + misses);
return hits * (100.0f / std::max(1.0f, total));
}
template <
@@ -202,9 +185,12 @@ TaggedCache<
KeyEqual,
Mutex>::clear()
{
std::lock_guard lock(m_mutex);
for (auto& mutex : partitionLocks_)
mutex.lock();
m_cache.clear();
m_cache_count = 0;
for (auto& mutex : partitionLocks_)
mutex.unlock();
m_cache_count.store(0, std::memory_order_relaxed);
}
template <
@@ -227,11 +213,9 @@ TaggedCache<
KeyEqual,
Mutex>::reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
m_cache_count = 0;
m_hits = 0;
m_misses = 0;
clear();
m_hits.store(0, std::memory_order_relaxed);
m_misses.store(0, std::memory_order_relaxed);
}
template <
@@ -255,7 +239,7 @@ TaggedCache<
KeyEqual,
Mutex>::touch_if_exists(KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
std::lock_guard<Mutex> lock(lockPartition(key));
auto const iter(m_cache.find(key));
if (iter == m_cache.end())
{
@@ -297,8 +281,6 @@ TaggedCache<
auto const start = std::chrono::steady_clock::now();
{
std::lock_guard lock(m_mutex);
if (m_target_size == 0 ||
(static_cast<int>(m_cache.size()) <= m_target_size))
{
@@ -330,12 +312,13 @@ TaggedCache<
m_cache.map()[p],
allStuffToSweep[p],
allRemovals,
lock));
partitionLocks_[p]));
}
for (std::thread& worker : workers)
worker.join();
m_cache_count -= allRemovals;
int removals = allRemovals.load(std::memory_order_relaxed);
m_cache_count.fetch_sub(removals, std::memory_order_relaxed);
}
// At this point allStuffToSweep will go out of scope outside the lock
// and decrement the reference count on each strong pointer.
@@ -369,7 +352,8 @@ TaggedCache<
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
std::lock_guard lock(m_mutex);
std::lock_guard<Mutex> lock(lockPartition(key));
auto cit = m_cache.find(key);
@@ -382,7 +366,7 @@ TaggedCache<
if (entry.isCached())
{
--m_cache_count;
m_cache_count.fetch_sub(1, std::memory_order_relaxed);
entry.ptr.convertToWeak();
ret = true;
}
@@ -420,17 +404,16 @@ TaggedCache<
{
// Return canonical value, store if needed, refresh in cache
// Return values: true=we had the data already
std::lock_guard lock(m_mutex);
std::lock_guard<Mutex> lock(lockPartition(key));
auto cit = m_cache.find(key);
if (cit == m_cache.end())
{
m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
return false;
}
@@ -479,12 +462,12 @@ TaggedCache<
data = cachedData;
}
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
return true;
}
entry.ptr = data;
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
return false;
}
@@ -560,10 +543,11 @@ TaggedCache<
KeyEqual,
Mutex>::fetch(key_type const& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
std::lock_guard<Mutex> lock(lockPartition(key));
auto ret = initialFetch(key);
if (!ret)
++m_misses;
m_misses.fetch_add(1, std::memory_order_relaxed);
return ret;
}
@@ -627,8 +611,8 @@ TaggedCache<
Mutex>::insert(key_type const& key)
-> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
std::lock_guard<Mutex> lock(lockPartition(key));
auto [it, inserted] = m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
@@ -668,29 +652,6 @@ TaggedCache<
return true;
}
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::peekMutex() -> mutex_type&
{
return m_mutex;
}
template <
class Key,
class T,
@@ -714,10 +675,13 @@ TaggedCache<
std::vector<key_type> v;
{
std::lock_guard lock(m_mutex);
v.reserve(m_cache.size());
for (auto const& _ : m_cache)
v.push_back(_.first);
for (std::size_t i = 0; i < partitionLocks_.size(); ++i)
{
std::lock_guard<Mutex> lock(partitionLocks_[i]);
for (auto const& entry : m_cache.map()[i])
v.push_back(entry.first);
}
}
return v;
@@ -743,11 +707,12 @@ TaggedCache<
KeyEqual,
Mutex>::rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
auto const hits = m_hits.load(std::memory_order_relaxed);
auto const misses = m_misses.load(std::memory_order_relaxed);
auto const tot = hits + misses;
if (tot == 0)
return 0;
return double(m_hits) / tot;
return 0.0;
return double(hits) / tot;
}
template <
@@ -771,18 +736,16 @@ TaggedCache<
KeyEqual,
Mutex>::fetch(key_type const& digest, Handler const& h)
{
{
std::lock_guard l(m_mutex);
if (auto ret = initialFetch(digest, l))
return ret;
}
std::lock_guard<Mutex> lock(lockPartition(digest));
if (auto ret = initialFetch(digest))
return ret;
auto sle = h();
if (!sle)
return {};
std::lock_guard l(m_mutex);
++m_misses;
m_misses.fetch_add(1, std::memory_order_relaxed);
auto const [it, inserted] =
m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
if (!inserted)
@@ -809,9 +772,10 @@ TaggedCache<
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l)
Mutex>::initialFetch(key_type const& key)
{
std::lock_guard<Mutex> lock(lockPartition(key));
auto cit = m_cache.find(key);
if (cit == m_cache.end())
return {};
@@ -819,7 +783,7 @@ TaggedCache<
Entry& entry = cit->second;
if (entry.isCached())
{
++m_hits;
m_hits.fetch_add(1, std::memory_order_relaxed);
entry.touch(m_clock.now());
return entry.ptr.getStrong();
}
@@ -827,12 +791,13 @@ TaggedCache<
if (entry.isCached())
{
// independent of cache size, so not counted as a hit
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
entry.touch(m_clock.now());
return entry.ptr.getStrong();
}
m_cache.erase(cit);
return {};
}
@@ -861,10 +826,11 @@ TaggedCache<
{
beast::insight::Gauge::value_type hit_rate(0);
{
std::lock_guard lock(m_mutex);
auto const total(m_hits + m_misses);
auto const hits = m_hits.load(std::memory_order_relaxed);
auto const misses = m_misses.load(std::memory_order_relaxed);
auto const total = hits + misses;
if (total != 0)
hit_rate = (m_hits * 100) / total;
hit_rate = (hits * 100) / total;
}
m_stats.hit_rate.set(hit_rate);
}
@@ -895,12 +861,16 @@ TaggedCache<
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
Mutex& partitionLock)
{
return std::thread([&, this]() {
beast::setCurrentThreadName("sweep-KVCache");
int cacheRemovals = 0;
int mapRemovals = 0;
std::lock_guard<Mutex> lock(partitionLock);
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
stuffToSweep.reserve(partition.size());
@@ -984,12 +954,16 @@ TaggedCache<
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
Mutex& partitionLock)
{
return std::thread([&, this]() {
beast::setCurrentThreadName("sweep-KCache");
int cacheRemovals = 0;
int mapRemovals = 0;
std::lock_guard<Mutex> lock(partitionLock);
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
{
@@ -1024,6 +998,29 @@ TaggedCache<
});
}
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
inline Mutex&
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::lockPartition(key_type const& key) const
{
return partitionLocks_[m_cache.partition_index(key)];
}
} // namespace ripple
#endif
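
Taken together, the TaggedCache hunks above replace the single cache-wide m_mutex with one mutex per partition of the underlying partitioned map and make the hit/miss/size counters atomic; a key is mapped to its lock through the partition_index() accessor added in the following hunk. A minimal sketch of the locking pattern, with illustrative names rather than the rippled types:

#include <array>
#include <cstddef>
#include <functional>
#include <mutex>
#include <unordered_map>

// One mutex per partition, chosen by hashing the key, so threads touching
// different partitions no longer contend on a single cache-wide mutex.
template <class Key, class Value, std::size_t Partitions = 16>
class PartitionedMap
{
    std::array<std::unordered_map<Key, Value>, Partitions> maps_;
    mutable std::array<std::mutex, Partitions> locks_;

    std::size_t
    partitionIndex(Key const& key) const
    {
        return std::hash<Key>{}(key) % Partitions;
    }

public:
    void
    insert(Key const& key, Value value)
    {
        auto const i = partitionIndex(key);
        std::lock_guard<std::mutex> lock(locks_[i]);  // lock one partition
        maps_[i].insert_or_assign(key, std::move(value));
    }

    bool
    find(Key const& key, Value& out) const
    {
        auto const i = partitionIndex(key);
        std::lock_guard<std::mutex> lock(locks_[i]);
        auto const it = maps_[i].find(key);
        if (it == maps_[i].end())
            return false;
        out = it->second;
        return true;
    }
};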


@@ -277,6 +277,12 @@ public:
return map_;
}
partition_map_type const&
map() const
{
return map_;
}
iterator
begin()
{
@@ -321,6 +327,12 @@ public:
return cend();
}
std::size_t
partition_index(key_type const& key) const
{
return partitioner(key);
}
private:
template <class T>
void


@@ -3257,7 +3257,6 @@ operator==(aged_unordered_container<
{
if (size() != other.size())
return false;
using EqRng = std::pair<const_iterator, const_iterator>;
for (auto iter(cbegin()), last(cend()); iter != last;)
{
auto const& k(extract(*iter));


@@ -24,32 +24,111 @@
#include <xxhash.h>
#include <array>
#include <cstddef>
#include <new>
#include <type_traits>
#include <cstdint>
#include <optional>
#include <span>
namespace beast {
class xxhasher
{
private:
// requires 64-bit std::size_t
static_assert(sizeof(std::size_t) == 8, "");
public:
using result_type = std::size_t;
static int volatile test;
XXH3_state_t* state_;
private:
static_assert(sizeof(std::size_t) == 8, "requires 64-bit std::size_t");
// Have an internal buffer to avoid the streaming API
// A 64-byte buffer should be big enough for us
static constexpr std::size_t INTERNAL_BUFFER_SIZE = 64;
alignas(64) std::array<std::uint8_t, INTERNAL_BUFFER_SIZE> buffer_;
std::span<std::uint8_t> readBuffer_;
std::span<std::uint8_t> writeBuffer_;
std::optional<XXH64_hash_t> seed_;
XXH3_state_t* state_ = nullptr;
void
resetBuffers()
{
writeBuffer_ = std::span{buffer_};
readBuffer_ = {};
}
void
updateHash(void const* data, std::size_t len)
{
if (writeBuffer_.size() < len)
{
flushToState(data, len);
}
else
{
std::memcpy(writeBuffer_.data(), data, len);
writeBuffer_ = writeBuffer_.subspan(len);
readBuffer_ = std::span{
std::begin(buffer_), buffer_.size() - writeBuffer_.size()};
}
}
static XXH3_state_t*
allocState()
{
auto ret = XXH3_createState();
if (ret == nullptr)
throw std::bad_alloc();
throw std::bad_alloc(); // LCOV_EXCL_LINE
return ret;
}
public:
using result_type = std::size_t;
void
flushToState(void const* data, std::size_t len)
{
if (!state_)
{
state_ = allocState();
if (seed_.has_value())
{
XXH3_64bits_reset_withSeed(state_, *seed_);
}
else
{
XXH3_64bits_reset(state_);
}
}
XXH3_64bits_update(state_, readBuffer_.data(), readBuffer_.size());
resetBuffers();
if (data && len)
{
XXH3_64bits_update(state_, data, len);
}
}
result_type
retrieveHash()
{
if (state_)
{
flushToState(nullptr, 0);
return XXH3_64bits_digest(state_);
}
else
{
if (seed_.has_value())
{
return XXH3_64bits_withSeed(
readBuffer_.data(), readBuffer_.size(), *seed_);
}
else
{
return XXH3_64bits(readBuffer_.data(), readBuffer_.size());
}
}
}
public:
static constexpr auto const endian = boost::endian::order::native;
xxhasher(xxhasher const&) = delete;
@@ -58,43 +137,44 @@ public:
xxhasher()
{
state_ = allocState();
XXH3_64bits_reset(state_);
resetBuffers();
}
~xxhasher() noexcept
{
XXH3_freeState(state_);
if (state_)
{
test = test + 1;
XXH3_freeState(state_);
}
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
explicit xxhasher(Seed seed)
explicit xxhasher(Seed seed) : seed_(seed)
{
state_ = allocState();
XXH3_64bits_reset_withSeed(state_, seed);
resetBuffers();
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
xxhasher(Seed seed, Seed)
xxhasher(Seed seed, Seed) : seed_(seed)
{
state_ = allocState();
XXH3_64bits_reset_withSeed(state_, seed);
resetBuffers();
}
void
operator()(void const* key, std::size_t len) noexcept
{
XXH3_64bits_update(state_, key, len);
updateHash(key, len);
}
explicit
operator std::size_t() noexcept
operator result_type() noexcept
{
return XXH3_64bits_digest(state_);
return retrieveHash();
}
};
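
The rewritten hasher above avoids creating an XXH3 streaming state for small inputs: updates accumulate in a fixed 64-byte buffer and are hashed with the one-shot API at digest time, falling back to the streaming API only when the buffer overflows. A standalone sketch of that fast path (illustrative, not the beast::xxhasher code):

#include <xxhash.h>

#include <cstddef>
#include <cstdint>
#include <cstring>

std::uint64_t
hashTwoPieces(void const* a, std::size_t alen, void const* b, std::size_t blen)
{
    unsigned char buf[64];
    if (alen + blen <= sizeof(buf))
    {
        // Common case for hash-table keys: everything fits in one buffer,
        // so use the cheap one-shot API and never allocate a state.
        std::memcpy(buf, a, alen);
        std::memcpy(buf + alen, b, blen);
        return XXH3_64bits(buf, alen + blen);
    }
    // Rare case: stream the input through an XXH3 state.
    XXH3_state_t* state = XXH3_createState();
    XXH3_64bits_reset(state);
    XXH3_64bits_update(state, a, alen);
    XXH3_64bits_update(state, b, blen);
    std::uint64_t const digest = XXH3_64bits_digest(state);
    XXH3_freeState(state);
    return digest;
}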


@@ -22,7 +22,6 @@
#include <xrpl/basics/ByteUtilities.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/partitioned_unordered_map.h>
#include <cstdint>


@@ -32,13 +32,15 @@
// If you add an amendment here, then do not forget to increment `numFeatures`
// in include/xrpl/protocol/Feature.h.
XRPL_FIX (PriceOracleOrder, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (MPTDeliveredAmount, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (AMMClawbackRounding, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(TokenEscrow, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (EnforceNFTokenTrustlineV2, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (AMMv1_3, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionedDEX, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(SingleAssetVault, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionDelegation, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (PayChanCancelAfter, Supported::yes, VoteBehavior::DefaultNo)
// Check flags in Credential transactions


@@ -509,6 +509,7 @@ TRANSACTION(ttVAULT_WITHDRAW, 69, VaultWithdraw, Delegation::delegatable, ({
{sfVaultID, soeREQUIRED},
{sfAmount, soeREQUIRED, soeMPTSupported},
{sfDestination, soeOPTIONAL},
{sfDestinationTag, soeOPTIONAL},
}))
/** This transaction claws back tokens from a vault. */


@@ -28,6 +28,7 @@
#include <cerrno>
#include <cstddef>
#include <fstream>
#include <ios>
#include <iterator>
#include <optional>
@@ -55,7 +56,7 @@ getFileContents(
return {};
}
ifstream fileStream(fullPath, std::ios::in);
std::ifstream fileStream(fullPath.string(), std::ios::in);
if (!fileStream)
{
@@ -85,7 +86,8 @@ writeFileContents(
using namespace boost::filesystem;
using namespace boost::system::errc;
ofstream fileStream(destPath, std::ios::out | std::ios::trunc);
std::ofstream fileStream(
destPath.string(), std::ios::out | std::ios::trunc);
if (!fileStream)
{
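
The hunks above replace the boost::filesystem stream typedefs with standard streams opened via path.string(). A minimal sketch of the resulting pattern (readAll is a hypothetical helper, not the rippled function):

#include <boost/filesystem.hpp>

#include <fstream>
#include <iterator>
#include <string>

// Open a std::ifstream from a boost::filesystem::path by converting the
// path to a string first, then slurp the whole file.
std::string
readAll(boost::filesystem::path const& p)
{
    std::ifstream in(p.string(), std::ios::in);
    return std::string(
        std::istreambuf_iterator<char>(in),
        std::istreambuf_iterator<char>());
}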


@@ -107,8 +107,9 @@ sliceToHex(Slice const& slice)
}
for (int i = 0; i < slice.size(); ++i)
{
s += "0123456789ABCDEF"[((slice[i] & 0xf0) >> 4)];
s += "0123456789ABCDEF"[((slice[i] & 0x0f) >> 0)];
constexpr char hex[] = "0123456789ABCDEF";
s += hex[((slice[i] & 0xf0) >> 4)];
s += hex[((slice[i] & 0x0f) >> 0)];
}
return s;
}
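
The change above names the digit table once instead of repeating the string literal for each nibble. As a standalone illustration of the byte-to-hex split:

#include <string>

// Each byte splits into a high and a low 4-bit nibble, each of which
// indexes the sixteen-character digit table.
std::string
byteToHex(unsigned char byte)
{
    constexpr char hex[] = "0123456789ABCDEF";
    std::string s;
    s += hex[(byte & 0xf0) >> 4];  // high nibble
    s += hex[byte & 0x0f];         // low nibble
    return s;
}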


@@ -671,12 +671,12 @@ isMemoOkay(STObject const& st, std::string& reason)
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"abcdefghijklmnopqrstuvwxyz");
for (char c : symbols)
for (unsigned char c : symbols)
a[c] = 1;
return a;
}();
for (auto c : *optData)
for (unsigned char c : *optData)
{
if (!allowedSymbols[c])
{


@@ -544,7 +544,7 @@ b58_to_b256_be(std::string_view input, std::span<std::uint8_t> out)
XRPL_ASSERT(
num_b_58_10_coeffs <= b_58_10_coeff.size(),
"ripple::b58_fast::detail::b58_to_b256_be : maximum coeff");
for (auto c : input.substr(0, partial_coeff_len))
for (unsigned char c : input.substr(0, partial_coeff_len))
{
auto cur_val = ::ripple::alphabetReverse[c];
if (cur_val < 0)
@@ -558,7 +558,7 @@ b58_to_b256_be(std::string_view input, std::span<std::uint8_t> out)
{
for (int j = 0; j < num_full_coeffs; ++j)
{
auto c = input[partial_coeff_len + j * 10 + i];
unsigned char c = input[partial_coeff_len + j * 10 + i];
auto cur_val = ::ripple::alphabetReverse[c];
if (cur_val < 0)
{
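
The two hunks above (isMemoOkay and b58_to_b256_be) share one fix: iterating bytes as plain char and using them as a table index goes negative for values of 0x80 and above on platforms where char is signed, which is out-of-bounds (undefined behavior). Converting to unsigned char first keeps the index in 0..255. A minimal illustration (hypothetical table, not the rippled one):

#include <array>

// Safe: always indexed with 0..255.
std::array<int, 256> table{};

int
lookup(char c)
{
    // Without the cast, table[c] could use a negative index when c holds
    // a byte >= 0x80 and char is signed.
    return table[static_cast<unsigned char>(c)];
}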


@@ -678,6 +678,61 @@ private:
oracle.set(
UpdateArg{.series = {{"XRP", "USD", 742, 2}}, .fee = baseFee});
}
for (bool const withFixOrder : {false, true})
{
// Should be same order as creation
Env env(
*this,
withFixOrder ? testable_amendments()
: testable_amendments() - fixPriceOracleOrder);
auto const baseFee =
static_cast<int>(env.current()->fees().base.drops());
auto test = [&](Env& env, DataSeries const& series) {
env.fund(XRP(1'000), owner);
Oracle oracle(
env, {.owner = owner, .series = series, .fee = baseFee});
BEAST_EXPECT(oracle.exists());
auto sle = env.le(keylet::oracle(owner, oracle.documentID()));
BEAST_EXPECT(
sle->getFieldArray(sfPriceDataSeries).size() ==
series.size());
auto const beforeQuoteAssetName1 =
sle->getFieldArray(sfPriceDataSeries)[0]
.getFieldCurrency(sfQuoteAsset)
.getText();
auto const beforeQuoteAssetName2 =
sle->getFieldArray(sfPriceDataSeries)[1]
.getFieldCurrency(sfQuoteAsset)
.getText();
oracle.set(UpdateArg{.series = series, .fee = baseFee});
sle = env.le(keylet::oracle(owner, oracle.documentID()));
auto const afterQuoteAssetName1 =
sle->getFieldArray(sfPriceDataSeries)[0]
.getFieldCurrency(sfQuoteAsset)
.getText();
auto const afterQuoteAssetName2 =
sle->getFieldArray(sfPriceDataSeries)[1]
.getFieldCurrency(sfQuoteAsset)
.getText();
if (env.current()->rules().enabled(fixPriceOracleOrder))
{
BEAST_EXPECT(afterQuoteAssetName1 == beforeQuoteAssetName1);
BEAST_EXPECT(afterQuoteAssetName2 == beforeQuoteAssetName2);
}
else
{
BEAST_EXPECT(afterQuoteAssetName1 != beforeQuoteAssetName1);
BEAST_EXPECT(afterQuoteAssetName2 != beforeQuoteAssetName2);
}
};
test(env, {{"XRP", "USD", 742, 2}, {"XRP", "EUR", 711, 2}});
}
}
void


@@ -229,7 +229,6 @@ class RCLValidations_test : public beast::unit_test::suite
// support for a ledger hash which is already in the trie.
using Seq = RCLValidatedLedger::Seq;
using ID = RCLValidatedLedger::ID;
// Max known ancestors for each ledger
Seq const maxAncestors = 256;


@@ -234,6 +234,28 @@ class Vault_test : public beast::unit_test::suite
env(tx, ter{tecNO_PERMISSION});
}
{
testcase(prefix + " fail to withdraw to zero destination");
auto tx = vault.withdraw(
{.depositor = depositor,
.id = keylet.key,
.amount = asset(1000)});
tx[sfDestination] = "0";
env(tx, ter(temMALFORMED));
}
{
testcase(
prefix +
" fail to withdraw with tag but without destination");
auto tx = vault.withdraw(
{.depositor = depositor,
.id = keylet.key,
.amount = asset(1000)});
tx[sfDestinationTag] = "0";
env(tx, ter(temMALFORMED));
}
if (!asset.raw().native())
{
testcase(
@@ -1335,6 +1357,7 @@ class Vault_test : public beast::unit_test::suite
struct CaseArgs
{
bool enableClawback = true;
bool requireAuth = true;
};
auto testCase = [this](
@@ -1356,16 +1379,20 @@ class Vault_test : public beast::unit_test::suite
Vault vault{env};
MPTTester mptt{env, issuer, mptInitNoFund};
auto const none = LedgerSpecificFlags(0);
mptt.create(
{.flags = tfMPTCanTransfer | tfMPTCanLock |
(args.enableClawback ? lsfMPTCanClawback
: LedgerSpecificFlags(0)) |
tfMPTRequireAuth});
(args.enableClawback ? tfMPTCanClawback : none) |
(args.requireAuth ? tfMPTRequireAuth : none)});
PrettyAsset asset = mptt.issuanceID();
mptt.authorize({.account = owner});
mptt.authorize({.account = issuer, .holder = owner});
mptt.authorize({.account = depositor});
mptt.authorize({.account = issuer, .holder = depositor});
if (args.requireAuth)
{
mptt.authorize({.account = issuer, .holder = owner});
mptt.authorize({.account = issuer, .holder = depositor});
}
env(pay(issuer, depositor, asset(1000)));
env.close();
@@ -1514,6 +1541,100 @@ class Vault_test : public beast::unit_test::suite
}
});
testCase(
[this](
Env& env,
Account const& issuer,
Account const& owner,
Account const& depositor,
PrettyAsset const& asset,
Vault& vault,
MPTTester& mptt) {
testcase(
"MPT 3rd party without MPToken cannot be withdrawal "
"destination");
auto [tx, keylet] =
vault.create({.owner = owner, .asset = asset});
env(tx);
env.close();
tx = vault.deposit(
{.depositor = depositor,
.id = keylet.key,
.amount = asset(100)});
env(tx);
env.close();
{
// Set destination to 3rd party without MPToken
Account charlie{"charlie"};
env.fund(XRP(1000), charlie);
env.close();
tx = vault.withdraw(
{.depositor = depositor,
.id = keylet.key,
.amount = asset(100)});
tx[sfDestination] = charlie.human();
env(tx, ter(tecNO_AUTH));
}
},
{.requireAuth = false});
testCase(
[this](
Env& env,
Account const& issuer,
Account const& owner,
Account const& depositor,
PrettyAsset const& asset,
Vault& vault,
MPTTester& mptt) {
testcase("MPT depositor without MPToken cannot withdraw");
auto [tx, keylet] =
vault.create({.owner = owner, .asset = asset});
env(tx);
env.close();
tx = vault.deposit(
{.depositor = depositor,
.id = keylet.key,
.amount = asset(1000)});
env(tx);
env.close();
{
// Remove depositor's MPToken and withdraw will fail
mptt.authorize(
{.account = depositor, .flags = tfMPTUnauthorize});
env.close();
auto const mptoken =
env.le(keylet::mptoken(mptt.issuanceID(), depositor));
BEAST_EXPECT(mptoken == nullptr);
tx = vault.withdraw(
{.depositor = depositor,
.id = keylet.key,
.amount = asset(100)});
env(tx, ter(tecNO_AUTH));
}
{
// Restore depositor's MPToken and withdraw will succeed
mptt.authorize({.account = depositor});
env.close();
tx = vault.withdraw(
{.depositor = depositor,
.id = keylet.key,
.amount = asset(100)});
env(tx);
}
},
{.requireAuth = false});
testCase([this](
Env& env,
Account const& issuer,
@@ -1803,6 +1924,7 @@ class Vault_test : public beast::unit_test::suite
PrettyAsset const asset = issuer["IOU"];
env.trust(asset(1000), owner);
env.trust(asset(1000), charlie);
env(pay(issuer, owner, asset(200)));
env(rate(issuer, 1.25));
env.close();
@@ -2118,6 +2240,79 @@ class Vault_test : public beast::unit_test::suite
env.close();
});
testCase([&, this](
Env& env,
Account const& owner,
Account const& issuer,
Account const& charlie,
auto,
Vault& vault,
PrettyAsset const& asset,
auto&&...) {
testcase("IOU no trust line to 3rd party");
auto [tx, keylet] = vault.create({.owner = owner, .asset = asset});
env(tx);
env.close();
env(vault.deposit(
{.depositor = owner, .id = keylet.key, .amount = asset(100)}));
env.close();
Account const erin{"erin"};
env.fund(XRP(1000), erin);
env.close();
// Withdraw to 3rd party without trust line
auto const tx1 = [&](ripple::Keylet keylet) {
auto tx = vault.withdraw(
{.depositor = owner,
.id = keylet.key,
.amount = asset(10)});
tx[sfDestination] = erin.human();
return tx;
}(keylet);
env(tx1, ter{tecNO_LINE});
});
testCase([&, this](
Env& env,
Account const& owner,
Account const& issuer,
Account const& charlie,
auto,
Vault& vault,
PrettyAsset const& asset,
auto&&...) {
testcase("IOU no trust line to depositor");
auto [tx, keylet] = vault.create({.owner = owner, .asset = asset});
env(tx);
env.close();
// reset limit, so deposit of all funds will delete the trust line
env.trust(asset(0), owner);
env.close();
env(vault.deposit(
{.depositor = owner, .id = keylet.key, .amount = asset(200)}));
env.close();
auto trustline =
env.le(keylet::line(owner, asset.raw().get<Issue>()));
BEAST_EXPECT(trustline == nullptr);
// Withdraw without trust line, will succeed
auto const tx1 = [&](ripple::Keylet keylet) {
auto tx = vault.withdraw(
{.depositor = owner,
.id = keylet.key,
.amount = asset(10)});
return tx;
}(keylet);
env(tx1);
});
testCase([&, this](
Env& env,
Account const& owner,


@@ -58,10 +58,10 @@ public:
// Insert an item, retrieve it, and age it so it gets purged.
{
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
BEAST_EXPECT(!c.insert(1, "one"));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
{
std::string s;
@@ -72,7 +72,7 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
// Insert an item, maintain a strong pointer, age it, and
@@ -80,7 +80,7 @@ public:
{
BEAST_EXPECT(!c.insert(2, "two"));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
{
auto p = c.fetch(2);
@@ -88,14 +88,14 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
}
// Make sure its gone now that our reference is gone
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
// Insert the same key/value pair and make sure we get the same result
@@ -111,7 +111,7 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
// Put an object in but keep a strong pointer to it, advance the clock a
@@ -121,24 +121,24 @@ public:
// Put an object in
BEAST_EXPECT(!c.insert(4, "four"));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
{
// Keep a strong pointer to it
auto const p1 = c.fetch(4);
BEAST_EXPECT(p1 != nullptr);
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
// Advance the clock a lot
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
// Canonicalize a new object with the same key
auto p2 = std::make_shared<std::string>("four");
BEAST_EXPECT(c.canonicalize_replace_client(4, p2));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
// Make sure we get the original object
BEAST_EXPECT(p1.get() == p2.get());
}
@@ -146,7 +146,7 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
}
};
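The assertions above replace the old getCacheSize()/getTrackSize() pair with the single size() accessor, and pin down the cache's weak-tracking behaviour: an entry kept alive by an external strong pointer survives a sweep and only disappears once the last strong reference is gone. A minimal standalone sketch of that idea (hypothetical illustration, not the TaggedCache implementation):

#include <cassert>
#include <map>
#include <memory>
#include <string>

int main()
{
    // The cache tracks entries through weak pointers.
    std::map<int, std::weak_ptr<std::string>> cache;
    auto strong = std::make_shared<std::string>("two");
    cache.emplace(2, strong);

    // "Sweep": drop entries whose object has expired.
    std::erase_if(cache, [](auto const& kv) { return kv.second.expired(); });
    assert(cache.size() == 1);  // still tracked: a strong pointer exists

    strong.reset();
    std::erase_if(cache, [](auto const& kv) { return kv.second.expired(); });
    assert(cache.empty());  // gone once the last strong reference is gone
}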


@@ -703,10 +703,6 @@ aged_associative_container_test_base::checkContentsRefRef(
Values const& v)
{
using Cont = typename std::remove_reference<C>::type;
using Traits = TestTraits<
Cont::is_unordered::value,
Cont::is_multi::value,
Cont::is_map::value>;
using size_type = typename Cont::size_type;
BEAST_EXPECT(c.size() == v.size());
@@ -761,10 +757,6 @@ typename std::enable_if<!IsUnordered>::type
aged_associative_container_test_base::testConstructEmpty()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Key = typename Traits::Key;
using T = typename Traits::T;
using Clock = typename Traits::Clock;
using Comp = typename Traits::Comp;
using Alloc = typename Traits::Alloc;
using MyComp = typename Traits::MyComp;
@@ -802,10 +794,6 @@ typename std::enable_if<IsUnordered>::type
aged_associative_container_test_base::testConstructEmpty()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Key = typename Traits::Key;
using T = typename Traits::T;
using Clock = typename Traits::Clock;
using Hash = typename Traits::Hash;
using Equal = typename Traits::Equal;
using Alloc = typename Traits::Alloc;
@@ -870,10 +858,6 @@ typename std::enable_if<!IsUnordered>::type
aged_associative_container_test_base::testConstructRange()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Key = typename Traits::Key;
using T = typename Traits::T;
using Clock = typename Traits::Clock;
using Comp = typename Traits::Comp;
using Alloc = typename Traits::Alloc;
using MyComp = typename Traits::MyComp;
@@ -925,10 +909,6 @@ typename std::enable_if<IsUnordered>::type
aged_associative_container_test_base::testConstructRange()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Key = typename Traits::Key;
using T = typename Traits::T;
using Clock = typename Traits::Clock;
using Hash = typename Traits::Hash;
using Equal = typename Traits::Equal;
using Alloc = typename Traits::Alloc;
@@ -996,14 +976,6 @@ typename std::enable_if<!IsUnordered>::type
aged_associative_container_test_base::testConstructInitList()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Key = typename Traits::Key;
using T = typename Traits::T;
using Clock = typename Traits::Clock;
using Comp = typename Traits::Comp;
using Alloc = typename Traits::Alloc;
using MyComp = typename Traits::MyComp;
using MyAlloc = typename Traits::MyAlloc;
typename Traits::ManualClock clock;
// testcase (Traits::name() + " init-list");
@@ -1020,16 +992,6 @@ typename std::enable_if<IsUnordered>::type
aged_associative_container_test_base::testConstructInitList()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Key = typename Traits::Key;
using T = typename Traits::T;
using Clock = typename Traits::Clock;
using Hash = typename Traits::Hash;
using Equal = typename Traits::Equal;
using Alloc = typename Traits::Alloc;
using MyHash = typename Traits::MyHash;
using MyEqual = typename Traits::MyEqual;
using MyAlloc = typename Traits::MyAlloc;
typename Traits::ManualClock clock;
// testcase (Traits::name() + " init-list");
@@ -1050,7 +1012,6 @@ void
aged_associative_container_test_base::testCopyMove()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Alloc = typename Traits::Alloc;
typename Traits::ManualClock clock;
auto const v(Traits::values());
@@ -1121,8 +1082,6 @@ void
aged_associative_container_test_base::testIterator()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Alloc = typename Traits::Alloc;
typename Traits::ManualClock clock;
auto const v(Traits::values());
@@ -1179,8 +1138,6 @@ typename std::enable_if<!IsUnordered>::type
aged_associative_container_test_base::testReverseIterator()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
using Alloc = typename Traits::Alloc;
typename Traits::ManualClock clock;
auto const v(Traits::values());
@@ -1190,7 +1147,6 @@ aged_associative_container_test_base::testReverseIterator()
typename Traits::template Cont<> c{clock};
using iterator = decltype(c.begin());
using const_iterator = decltype(c.cbegin());
using reverse_iterator = decltype(c.rbegin());
using const_reverse_iterator = decltype(c.crbegin());
@@ -1394,7 +1350,6 @@ void
aged_associative_container_test_base::testChronological()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
typename Traits::ManualClock clock;
auto const v(Traits::values());
@@ -1760,7 +1715,6 @@ typename std::enable_if<!IsUnordered>::type
aged_associative_container_test_base::testCompare()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
using Value = typename Traits::Value;
typename Traits::ManualClock clock;
auto const v(Traits::values());
@@ -1832,8 +1786,6 @@ template <bool IsUnordered, bool IsMulti, bool IsMap>
void
aged_associative_container_test_base::testMaybeUnorderedMultiMap()
{
using Traits = TestTraits<IsUnordered, IsMulti, IsMap>;
testConstructEmpty<IsUnordered, IsMulti, IsMap>();
testConstructRange<IsUnordered, IsMulti, IsMap>();
testConstructInitList<IsUnordered, IsMulti, IsMap>();


@@ -0,0 +1,246 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpl/beast/hash/xxhasher.h>
#include <xrpl/beast/unit_test.h>
namespace beast {
int volatile xxhasher::test = 0;
class XXHasher_test : public unit_test::suite
{
public:
void
testWithoutSeed()
{
testcase("Without seed");
xxhasher hasher{};
std::string objectToHash{"Hello, xxHash!"};
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
16042857369214894119ULL);
}
void
testWithSeed()
{
testcase("With seed");
xxhasher hasher{static_cast<std::uint32_t>(102)};
std::string objectToHash{"Hello, xxHash!"};
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
14440132435660934800ULL);
}
void
testWithTwoSeeds()
{
testcase("With two seeds");
xxhasher hasher{
static_cast<std::uint32_t>(102), static_cast<std::uint32_t>(103)};
std::string objectToHash{"Hello, xxHash!"};
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
14440132435660934800ULL);
}
void
testBigObjectWithMultipleSmallUpdatesWithoutSeed()
{
testcase("Big object with multiple small updates without seed");
xxhasher hasher{};
std::string objectToHash{"Hello, xxHash!"};
for (int i = 0; i < 100; i++)
{
hasher(objectToHash.data(), objectToHash.size());
}
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
15296278154063476002ULL);
}
void
testBigObjectWithMultipleSmallUpdatesWithSeed()
{
testcase("Big object with multiple small updates with seed");
xxhasher hasher{static_cast<std::uint32_t>(103)};
std::string objectToHash{"Hello, xxHash!"};
for (int i = 0; i < 100; i++)
{
hasher(objectToHash.data(), objectToHash.size());
}
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
17285302196561698791ULL);
BEAST_EXPECT(xxhasher::test > 0);
}
void
testBigObjectWithSmallAndBigUpdatesWithoutSeed()
{
testcase("Big object with small and big updates without seed");
xxhasher hasher{};
std::string objectToHash{"Hello, xxHash!"};
std::string bigObject;
for (int i = 0; i < 20; i++)
{
bigObject += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
hasher(bigObject.data(), bigObject.size());
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
1865045178324729219ULL);
BEAST_EXPECT(xxhasher::test > 0);
}
void
testBigObjectWithSmallAndBigUpdatesWithSeed()
{
testcase("Big object with small and big updates with seed");
xxhasher hasher{static_cast<std::uint32_t>(103)};
std::string objectToHash{"Hello, xxHash!"};
std::string bigObject;
for (int i = 0; i < 20; i++)
{
bigObject += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
hasher(bigObject.data(), bigObject.size());
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
16189862915636005281ULL);
BEAST_EXPECT(xxhasher::test > 0);
}
void
testBigObjectWithOneUpdateWithoutSeed()
{
testcase("Big object with one update without seed");
xxhasher hasher{};
std::string objectToHash;
for (int i = 0; i < 100; i++)
{
objectToHash += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
15296278154063476002ULL);
BEAST_EXPECT(xxhasher::test > 0);
}
void
testBigObjectWithOneUpdateWithSeed()
{
testcase("Big object with one update with seed");
xxhasher hasher{static_cast<std::uint32_t>(103)};
std::string objectToHash;
for (int i = 0; i < 100; i++)
{
objectToHash += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
17285302196561698791ULL);
BEAST_EXPECT(xxhasher::test > 0);
}
void
testOperatorResultTypeDoesNotChangeInternalState()
{
testcase("Operator result type doesn't change the internal state");
{
xxhasher hasher;
std::string object{"Hello xxhash"};
hasher(object.data(), object.size());
auto xxhashResult1 = static_cast<xxhasher::result_type>(hasher);
auto xxhashResult2 = static_cast<xxhasher::result_type>(hasher);
BEAST_EXPECT(xxhashResult1 == xxhashResult2);
}
{
xxhasher hasher;
std::string object;
for (int i = 0; i < 100; i++)
{
object += "Hello, xxHash!";
}
hasher(object.data(), object.size());
auto xxhashResult1 = hasher.operator xxhasher::result_type();
auto xxhashResult2 = hasher.operator xxhasher::result_type();
BEAST_EXPECT(xxhashResult1 == xxhashResult2);
}
BEAST_EXPECT(xxhasher::test > 0);
}
void
run() override
{
testWithoutSeed();
testWithSeed();
testWithTwoSeeds();
testBigObjectWithMultipleSmallUpdatesWithoutSeed();
testBigObjectWithMultipleSmallUpdatesWithSeed();
testBigObjectWithSmallAndBigUpdatesWithoutSeed();
testBigObjectWithSmallAndBigUpdatesWithSeed();
testBigObjectWithOneUpdateWithoutSeed();
testBigObjectWithOneUpdateWithSeed();
testOperatorResultTypeDoesNotChangeInternalState();
}
};
BEAST_DEFINE_TESTSUITE(XXHasher, beast_core, beast);
} // namespace beast
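The "big object" cases above all rest on one property of the underlying streaming hash: feeding the input as many small updates yields the same digest as hashing the concatenated buffer once. A standalone sketch of that property against xxHash's C API, which xxhasher wraps (assuming <xxhash.h> with XXH3 is available):

#include <xxhash.h>

#include <cassert>
#include <string>

int main()
{
    std::string const piece{"Hello, xxHash!"};
    std::string whole;
    for (int i = 0; i < 100; ++i)
        whole += piece;

    // Incremental: 100 small updates through the streaming state.
    XXH3_state_t* state = XXH3_createState();
    XXH3_64bits_reset(state);
    for (int i = 0; i < 100; ++i)
        XXH3_64bits_update(state, piece.data(), piece.size());
    XXH64_hash_t const incremental = XXH3_64bits_digest(state);
    XXH3_freeState(state);

    // One-shot: the concatenated buffer hashed in a single call.
    assert(incremental == XXH3_64bits(whole.data(), whole.size()));
}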


@@ -313,7 +313,6 @@ class LedgerTrie_test : public beast::unit_test::suite
testSupport()
{
using namespace csf;
using Seq = Ledger::Seq;
LedgerTrie<Ledger> t;
LedgerHistoryHelper h;
@@ -596,7 +595,6 @@ class LedgerTrie_test : public beast::unit_test::suite
testRootRelated()
{
using namespace csf;
using Seq = Ledger::Seq;
// Since the root is a special node that breaks the no-single child
// invariant, do some tests that exercise it.


@@ -805,7 +805,6 @@ class Validations_test : public beast::unit_test::suite
Ledger ledgerACD = h["acd"];
using Seq = Ledger::Seq;
using ID = Ledger::ID;
auto pref = [](Ledger ledger) {
return std::make_pair(ledger.seq(), ledger.id());


@@ -33,6 +33,7 @@
#include <boost/asio.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/ssl/stream.hpp>
#include <boost/beast/core/flat_buffer.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/beast/version.hpp>
@@ -220,9 +221,8 @@ public:
getList_ = [blob = blob, sig, manifest, version](int interval) {
// Build the contents of a version 1 format UNL file
std::stringstream l;
l << "{\"blob\":\"" << blob << "\""
<< ",\"signature\":\"" << sig << "\""
<< ",\"manifest\":\"" << manifest << "\""
l << "{\"blob\":\"" << blob << "\"" << ",\"signature\":\"" << sig
<< "\"" << ",\"manifest\":\"" << manifest << "\""
<< ",\"refresh_interval\": " << interval
<< ",\"version\":" << version << '}';
return l.str();
@@ -257,15 +257,14 @@ public:
std::stringstream l;
for (auto const& info : blobInfo)
{
l << "{\"blob\":\"" << info.blob << "\""
<< ",\"signature\":\"" << info.signature << "\"},";
l << "{\"blob\":\"" << info.blob << "\"" << ",\"signature\":\""
<< info.signature << "\"},";
}
std::string blobs = l.str();
blobs.pop_back();
l.str(std::string());
l << "{\"blobs_v2\": [ " << blobs << "],\"manifest\":\"" << manifest
<< "\""
<< ",\"refresh_interval\": " << interval
<< "\"" << ",\"refresh_interval\": " << interval
<< ",\"version\":" << (version + 1) << '}';
return l.str();
};


@@ -21,6 +21,7 @@
#include <test/jtx/WSClient.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/beast/unit_test/suite.h>
#include <xrpl/protocol/jss.h>
namespace ripple {
@@ -329,12 +330,95 @@ class DeliveredAmount_test : public beast::unit_test::suite
}
}
void
testMPTDeliveredAmountRPC(FeatureBitset features)
{
testcase("MPT DeliveredAmount");
using namespace jtx;
Account const alice("alice");
Account const carol("carol");
Account const bob("bob");
Env env{*this, features};
MPTTester mptAlice(
env, alice, {.holders = {bob, carol}, .close = false});
mptAlice.create(
{.transferFee = 25000,
.ownerCount = 1,
.holderCount = 0,
.flags = tfMPTCanTransfer});
auto const MPT = mptAlice["MPT"];
mptAlice.authorize({.account = bob});
mptAlice.authorize({.account = carol});
// issuer to holder
mptAlice.pay(alice, bob, 10000);
// holder to holder
env(pay(bob, carol, mptAlice.mpt(1000)), txflags(tfPartialPayment));
env.close();
// Get the hash for the most recent transaction.
std::string txHash{
env.tx()->getJson(JsonOptions::none)[jss::hash].asString()};
Json::Value meta = env.rpc("tx", txHash)[jss::result][jss::meta];
if (features[fixMPTDeliveredAmount])
{
BEAST_EXPECT(
meta[sfDeliveredAmount.jsonName] ==
STAmount{MPT(800)}.getJson(JsonOptions::none));
BEAST_EXPECT(
meta[jss::delivered_amount] ==
STAmount{MPT(800)}.getJson(JsonOptions::none));
}
else
{
BEAST_EXPECT(!meta.isMember(sfDeliveredAmount.jsonName));
BEAST_EXPECT(
meta[jss::delivered_amount] == Json::Value("unavailable"));
}
env(pay(bob, carol, MPT(1000)),
sendmax(MPT(1200)),
txflags(tfPartialPayment));
env.close();
txHash = env.tx()->getJson(JsonOptions::none)[jss::hash].asString();
meta = env.rpc("tx", txHash)[jss::result][jss::meta];
if (features[fixMPTDeliveredAmount])
{
BEAST_EXPECT(
meta[sfDeliveredAmount.jsonName] ==
STAmount{MPT(960)}.getJson(JsonOptions::none));
BEAST_EXPECT(
meta[jss::delivered_amount] ==
STAmount{MPT(960)}.getJson(JsonOptions::none));
}
else
{
BEAST_EXPECT(!meta.isMember(sfDeliveredAmount.jsonName));
BEAST_EXPECT(
meta[jss::delivered_amount] == Json::Value("unavailable"));
}
}
public:
void
run() override
{
using namespace test::jtx;
FeatureBitset const all{testable_amendments()};
testTxDeliveredAmountRPC();
testAccountDeliveredAmountSubscribe();
testMPTDeliveredAmountRPC(all - fixMPTDeliveredAmount);
testMPTDeliveredAmountRPC(all);
}
};
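The expected DeliveredAmount values above follow from the transferFee of 25000, on the assumption that an MPT TransferFee is expressed in units of 0.001% (so 25000 = 25%): the sender is debited 1.25 units for every unit delivered. A worked check of the arithmetic:

// Worked numbers behind the expectations above (assumed fee units):
static_assert(25000 / 1000 == 25);       // 25000 * 0.001% = 25% fee
// pay(bob, carol, 1000) + tfPartialPayment: bob is debited at most
// 1000, so carol receives 1000 / 1.25:
static_assert(1000 * 100 / 125 == 800);  // first DeliveredAmount
// sendmax(MPT(1200)): bob may be debited up to 1200, so carol
// receives 1200 / 1.25:
static_assert(1200 * 100 / 125 == 960);  // second DeliveredAmount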


@@ -681,7 +681,7 @@ class ServerStatus_test : public beast::unit_test::suite,
resp["Upgrade"] == "websocket");
BEAST_EXPECT(
resp.find("Connection") != resp.end() &&
resp["Connection"] == "upgrade");
resp["Connection"] == "Upgrade");
}
void


@@ -26,6 +26,8 @@ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#include <boost/filesystem.hpp>
#include <fstream>
namespace ripple {
namespace test {
namespace detail {


@@ -63,8 +63,6 @@ LedgerHistory::insert(
ledger->stateMap().getHash().isNonZero(),
"ripple::LedgerHistory::insert : nonzero hash");
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
bool const alreadyHad = m_ledgers_by_hash.canonicalize_replace_cache(
ledger->info().hash, ledger);
if (validated)
@@ -76,7 +74,6 @@ LedgerHistory::insert(
LedgerHash
LedgerHistory::getLedgerHash(LedgerIndex index)
{
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
if (auto it = mLedgersByIndex.find(index); it != mLedgersByIndex.end())
return it->second;
return {};
@@ -86,13 +83,11 @@ std::shared_ptr<Ledger const>
LedgerHistory::getLedgerBySeq(LedgerIndex index)
{
{
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
auto it = mLedgersByIndex.find(index);
if (it != mLedgersByIndex.end())
{
uint256 hash = it->second;
sl.unlock();
return getLedgerByHash(hash);
}
}
@@ -108,7 +103,6 @@ LedgerHistory::getLedgerBySeq(LedgerIndex index)
{
// Add this ledger to the local tracking by index
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
XRPL_ASSERT(
ret->isImmutable(),
@@ -458,8 +452,6 @@ LedgerHistory::builtLedger(
XRPL_ASSERT(
!hash.isZero(), "ripple::LedgerHistory::builtLedger : nonzero hash");
std::unique_lock sl(m_consensus_validated.peekMutex());
auto entry = std::make_shared<cv_entry>();
m_consensus_validated.canonicalize_replace_client(index, entry);
@@ -500,8 +492,6 @@ LedgerHistory::validatedLedger(
!hash.isZero(),
"ripple::LedgerHistory::validatedLedger : nonzero hash");
std::unique_lock sl(m_consensus_validated.peekMutex());
auto entry = std::make_shared<cv_entry>();
m_consensus_validated.canonicalize_replace_client(index, entry);
@@ -535,10 +525,9 @@ LedgerHistory::validatedLedger(
bool
LedgerHistory::fixIndex(LedgerIndex ledgerIndex, LedgerHash const& ledgerHash)
{
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
auto ledger = m_ledgers_by_hash.fetch(ledgerHash);
auto it = mLedgersByIndex.find(ledgerIndex);
if ((it != mLedgersByIndex.end()) && (it->second != ledgerHash))
if (ledger && (it != mLedgersByIndex.end()) && (it->second != ledgerHash))
{
it->second = ledgerHash;
return false;


@@ -79,7 +79,7 @@
#include <chrono>
#include <condition_variable>
#include <cstring>
#include <iostream>
#include <fstream>
#include <limits>
#include <mutex>
#include <optional>


@@ -39,7 +39,7 @@
#include <google/protobuf/stubs/common.h>
#include <cstdlib>
#include <iostream>
#include <fstream>
#include <stdexcept>
#include <utility>


@@ -315,14 +315,14 @@ escrowCreatePreclaimHelper<MPTIssue>(
// authorized
auto const& mptIssue = amount.get<MPTIssue>();
if (auto const ter =
requireAuth(ctx.view, mptIssue, account, MPTAuthType::WeakAuth);
requireAuth(ctx.view, mptIssue, account, AuthType::WeakAuth);
ter != tesSUCCESS)
return ter;
// If the issuer has requireAuth set, check if the destination is
// authorized
if (auto const ter =
requireAuth(ctx.view, mptIssue, dest, MPTAuthType::WeakAuth);
requireAuth(ctx.view, mptIssue, dest, AuthType::WeakAuth);
ter != tesSUCCESS)
return ter;
@@ -746,7 +746,7 @@ escrowFinishPreclaimHelper<MPTIssue>(
// authorized
auto const& mptIssue = amount.get<MPTIssue>();
if (auto const ter =
requireAuth(ctx.view, mptIssue, dest, MPTAuthType::WeakAuth);
requireAuth(ctx.view, mptIssue, dest, AuthType::WeakAuth);
ter != tesSUCCESS)
return ter;
@@ -1259,7 +1259,7 @@ escrowCancelPreclaimHelper<MPTIssue>(
// authorized
auto const& mptIssue = amount.get<MPTIssue>();
if (auto const ter =
requireAuth(ctx.view, mptIssue, account, MPTAuthType::WeakAuth);
requireAuth(ctx.view, mptIssue, account, AuthType::WeakAuth);
ter != tesSUCCESS)
return ter;


@@ -175,126 +175,18 @@ MPTokenAuthorize::createMPToken(
return tesSUCCESS;
}
TER
MPTokenAuthorize::authorize(
ApplyView& view,
beast::Journal journal,
MPTAuthorizeArgs const& args)
{
auto const sleAcct = view.peek(keylet::account(args.account));
if (!sleAcct)
return tecINTERNAL;
// If the account that submitted the tx is a holder
// Note: `account_` is holder's account
// `holderID` is NOT used
if (!args.holderID)
{
// When a holder wants to unauthorize/delete an MPT, the ledger must
// - delete mptokenKey from owner directory
// - delete the MPToken
if (args.flags & tfMPTUnauthorize)
{
auto const mptokenKey =
keylet::mptoken(args.mptIssuanceID, args.account);
auto const sleMpt = view.peek(mptokenKey);
if (!sleMpt || (*sleMpt)[sfMPTAmount] != 0)
return tecINTERNAL; // LCOV_EXCL_LINE
if (!view.dirRemove(
keylet::ownerDir(args.account),
(*sleMpt)[sfOwnerNode],
sleMpt->key(),
false))
return tecINTERNAL; // LCOV_EXCL_LINE
adjustOwnerCount(view, sleAcct, -1, journal);
view.erase(sleMpt);
return tesSUCCESS;
}
// A potential holder wants to authorize/hold an MPT, the ledger must:
// - add the new mptokenKey to the owner directory
// - create the MPToken object for the holder
// The reserve that is required to create the MPToken. Note
// that although the reserve increases with every item
// an account owns, in the case of MPTokens we only
// *enforce* a reserve if the user owns more than two
// items. This is similar to the reserve requirements of trust lines.
std::uint32_t const uOwnerCount = sleAcct->getFieldU32(sfOwnerCount);
XRPAmount const reserveCreate(
(uOwnerCount < 2) ? XRPAmount(beast::zero)
: view.fees().accountReserve(uOwnerCount + 1));
if (args.priorBalance < reserveCreate)
return tecINSUFFICIENT_RESERVE;
auto const mptokenKey =
keylet::mptoken(args.mptIssuanceID, args.account);
auto mptoken = std::make_shared<SLE>(mptokenKey);
if (auto ter = dirLink(view, args.account, mptoken))
return ter; // LCOV_EXCL_LINE
(*mptoken)[sfAccount] = args.account;
(*mptoken)[sfMPTokenIssuanceID] = args.mptIssuanceID;
(*mptoken)[sfFlags] = 0;
view.insert(mptoken);
// Update owner count.
adjustOwnerCount(view, sleAcct, 1, journal);
return tesSUCCESS;
}
auto const sleMptIssuance =
view.read(keylet::mptIssuance(args.mptIssuanceID));
if (!sleMptIssuance)
return tecINTERNAL;
// If the account that submitted this tx is the issuer of the MPT
// Note: `account_` is issuer's account
// `holderID` is holder's account
if (args.account != (*sleMptIssuance)[sfIssuer])
return tecINTERNAL;
auto const sleMpt =
view.peek(keylet::mptoken(args.mptIssuanceID, *args.holderID));
if (!sleMpt)
return tecINTERNAL;
std::uint32_t const flagsIn = sleMpt->getFieldU32(sfFlags);
std::uint32_t flagsOut = flagsIn;
// Issuer wants to unauthorize the holder, unset lsfMPTAuthorized on
// their MPToken
if (args.flags & tfMPTUnauthorize)
flagsOut &= ~lsfMPTAuthorized;
// Issuer wants to authorize a holder, set lsfMPTAuthorized on their
// MPToken
else
flagsOut |= lsfMPTAuthorized;
if (flagsIn != flagsOut)
sleMpt->setFieldU32(sfFlags, flagsOut);
view.update(sleMpt);
return tesSUCCESS;
}
TER
MPTokenAuthorize::doApply()
{
auto const& tx = ctx_.tx;
return authorize(
return authorizeMPToken(
ctx_.view(),
mPriorBalance,
tx[sfMPTokenIssuanceID],
account_,
ctx_.journal,
{.priorBalance = mPriorBalance,
.mptIssuanceID = tx[sfMPTokenIssuanceID],
.account = account_,
.flags = tx.getFlags(),
.holderID = tx[~sfHolder]});
tx.getFlags(),
tx[~sfHolder]);
}
} // namespace ripple


@@ -48,12 +48,6 @@ public:
static TER
preclaim(PreclaimContext const& ctx);
static TER
authorize(
ApplyView& view,
beast::Journal journal,
MPTAuthorizeArgs const& args);
static TER
createMPToken(
ApplyView& view,


@@ -580,7 +580,16 @@ Payment::doApply()
auto res = accountSend(
pv, account_, dstAccountID, amountDeliver, ctx_.journal);
if (res == tesSUCCESS)
{
pv.apply(ctx_.rawView());
// If the actual amount delivered is different from the original
// amount due to partial payment or transfer fee, we need to update
// DeliveredAmount using the actual delivered amount
if (view().rules().enabled(fixMPTDeliveredAmount) &&
amountDeliver != dstAmount)
ctx_.deliver(amountDeliver);
}
else if (res == tecINSUFFICIENT_FUNDS || res == tecPATH_DRY)
res = tecPATH_PARTIAL;


@@ -209,6 +209,17 @@ SetOracle::doApply()
{
auto const oracleID = keylet::oracle(account_, ctx_.tx[sfOracleDocumentID]);
auto populatePriceData = [](STObject& priceData, STObject const& entry) {
setPriceDataInnerObjTemplate(priceData);
priceData.setFieldCurrency(
sfBaseAsset, entry.getFieldCurrency(sfBaseAsset));
priceData.setFieldCurrency(
sfQuoteAsset, entry.getFieldCurrency(sfQuoteAsset));
priceData.setFieldU64(sfAssetPrice, entry.getFieldU64(sfAssetPrice));
if (entry.isFieldPresent(sfScale))
priceData.setFieldU8(sfScale, entry.getFieldU8(sfScale));
};
if (auto sle = ctx_.view().peek(oracleID))
{
// update
@@ -249,15 +260,7 @@ SetOracle::doApply()
{
// add a token pair with the price
STObject priceData{sfPriceData};
setPriceDataInnerObjTemplate(priceData);
priceData.setFieldCurrency(
sfBaseAsset, entry.getFieldCurrency(sfBaseAsset));
priceData.setFieldCurrency(
sfQuoteAsset, entry.getFieldCurrency(sfQuoteAsset));
priceData.setFieldU64(
sfAssetPrice, entry.getFieldU64(sfAssetPrice));
if (entry.isFieldPresent(sfScale))
priceData.setFieldU8(sfScale, entry.getFieldU8(sfScale));
populatePriceData(priceData, entry);
pairs.emplace(key, std::move(priceData));
}
}
@@ -285,7 +288,26 @@ SetOracle::doApply()
sle->setFieldVL(sfProvider, ctx_.tx[sfProvider]);
if (ctx_.tx.isFieldPresent(sfURI))
sle->setFieldVL(sfURI, ctx_.tx[sfURI]);
auto const& series = ctx_.tx.getFieldArray(sfPriceDataSeries);
STArray series;
if (!ctx_.view().rules().enabled(fixPriceOracleOrder))
{
series = ctx_.tx.getFieldArray(sfPriceDataSeries);
}
else
{
std::map<std::pair<Currency, Currency>, STObject> pairs;
for (auto const& entry : ctx_.tx.getFieldArray(sfPriceDataSeries))
{
auto const key = tokenPairKey(entry);
STObject priceData{sfPriceData};
populatePriceData(priceData, entry);
pairs.emplace(key, std::move(priceData));
}
for (auto& iter : pairs)
series.push_back(std::move(iter.second));
}
sle->setFieldArray(sfPriceDataSeries, series);
sle->setFieldVL(sfAssetClass, ctx_.tx[sfAssetClass]);
sle->setFieldU32(sfLastUpdateTime, ctx_.tx[sfLastUpdateTime]);
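With fixPriceOracleOrder enabled, the series is rebuilt through a std::map keyed on the token pair, so the stored PriceDataSeries comes out in one canonical order no matter how the transaction ordered its entries. A simplified sketch of that design choice (plain strings standing in for Currency, int for the STObject payload):

#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main()
{
    using Pair = std::pair<std::string, std::string>;  // (base, quote)
    std::map<Pair, int> pairs;
    pairs.emplace(Pair{"XRP", "USD"}, 2);
    pairs.emplace(Pair{"BTC", "USD"}, 1);  // inserted second, sorts first

    // std::map iterates in key order, independent of insertion order.
    std::vector<int> series;
    for (auto& [key, priceData] : pairs)
        series.push_back(priceData);
    assert(series.front() == 1);  // BTC/USD precedes XRP/USD
}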


@@ -210,12 +210,12 @@ VaultDeposit::doApply()
auto sleMpt = view().read(keylet::mptoken(mptIssuanceID, account_));
if (!sleMpt)
{
if (auto const err = MPTokenAuthorize::authorize(
if (auto const err = authorizeMPToken(
view(),
ctx_.journal,
{.priorBalance = mPriorBalance,
.mptIssuanceID = mptIssuanceID->value(),
.account = account_});
mPriorBalance,
mptIssuanceID->value(),
account_,
ctx_.journal);
!isTesSuccess(err))
return err;
}
@@ -223,15 +223,15 @@ VaultDeposit::doApply()
// If the vault is private, set the authorized flag for the vault owner
if (vault->isFlag(tfVaultPrivate))
{
if (auto const err = MPTokenAuthorize::authorize(
if (auto const err = authorizeMPToken(
view(),
mPriorBalance, // priorBalance
mptIssuanceID->value(), // mptIssuanceID
sleIssuance->at(sfIssuer), // account
ctx_.journal,
{
.priorBalance = mPriorBalance,
.mptIssuanceID = mptIssuanceID->value(),
.account = sleIssuance->at(sfIssuer),
.holderID = account_,
});
{}, // flags
account_ // holderID
);
!isTesSuccess(err))
return err;
}


@@ -52,9 +52,19 @@ VaultWithdraw::preflight(PreflightContext const& ctx)
return temBAD_AMOUNT;
if (auto const destination = ctx.tx[~sfDestination];
destination && *destination == beast::zero)
destination.has_value())
{
JLOG(ctx.j.debug()) << "VaultWithdraw: zero/empty destination account.";
if (*destination == beast::zero)
{
JLOG(ctx.j.debug())
<< "VaultWithdraw: zero/empty destination account.";
return temMALFORMED;
}
}
else if (ctx.tx.isFieldPresent(sfDestinationTag))
{
JLOG(ctx.j.debug()) << "VaultWithdraw: sfDestinationTag is set but "
"sfDestination is not";
return temMALFORMED;
}
@@ -123,33 +133,39 @@ VaultWithdraw::preclaim(PreclaimContext const& ctx)
// Withdrawal to a 3rd party destination account is essentially a transfer,
// via shares in the vault. Enforce all the usual asset transfer checks.
AuthType authType = AuthType::Legacy;
if (account != dstAcct)
{
auto const sleDst = ctx.view.read(keylet::account(dstAcct));
if (sleDst == nullptr)
return tecNO_DST;
if (sleDst->getFlags() & lsfRequireDestTag)
if (sleDst->isFlag(lsfRequireDestTag) &&
!ctx.tx.isFieldPresent(sfDestinationTag))
return tecDST_TAG_NEEDED; // Cannot send without a tag
if (sleDst->getFlags() & lsfDepositAuth)
if (sleDst->isFlag(lsfDepositAuth))
{
if (!ctx.view.exists(keylet::depositPreauth(dstAcct, account)))
return tecNO_PERMISSION;
}
// The destination account must have consented to receive the asset by
// creating a RippleState or MPToken
authType = AuthType::StrongAuth;
}
// Destination MPToken must exist (if asset is an MPT)
if (auto const ter = requireAuth(ctx.view, vaultAsset, dstAcct);
// Destination MPToken (for an MPT) or trust line (for an IOU) must exist
// if not sending to Account.
if (auto const ter = requireAuth(ctx.view, vaultAsset, dstAcct, authType);
!isTesSuccess(ter))
return ter;
// Cannot withdraw from a Vault an Asset frozen for the destination account
if (isFrozen(ctx.view, dstAcct, vaultAsset))
return vaultAsset.holds<Issue>() ? tecFROZEN : tecLOCKED;
if (auto const ret = checkFrozen(ctx.view, dstAcct, vaultAsset))
return ret;
if (isFrozen(ctx.view, account, vaultShare))
return tecLOCKED;
if (auto const ret = checkFrozen(ctx.view, account, vaultShare))
return ret;
return tesSUCCESS;
}


@@ -175,6 +175,29 @@ isFrozen(
asset.value());
}
[[nodiscard]] inline TER
checkFrozen(ReadView const& view, AccountID const& account, Issue const& issue)
{
return isFrozen(view, account, issue) ? (TER)tecFROZEN : (TER)tesSUCCESS;
}
[[nodiscard]] inline TER
checkFrozen(
ReadView const& view,
AccountID const& account,
MPTIssue const& mptIssue)
{
return isFrozen(view, account, mptIssue) ? (TER)tecLOCKED : (TER)tesSUCCESS;
}
[[nodiscard]] inline TER
checkFrozen(ReadView const& view, AccountID const& account, Asset const& asset)
{
return std::visit(
[&](auto const& issue) { return checkFrozen(view, account, issue); },
asset.value());
}
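These helpers fold the isFrozen check and the asset-specific error code into a single TER, so callers such as VaultWithdraw::preclaim above collapse two branches into one early return (a non-tesSUCCESS TER tests true in a condition). A toy model of the pattern, with illustrative error values rather than the real codes:

#include <cassert>

enum TER { tesSUCCESS = 0, tecFROZEN = 1, tecLOCKED = 2 };  // illustrative

TER
checkFrozenToy(bool frozen, bool isMPT)
{
    // IOUs report tecFROZEN, MPTs report tecLOCKED.
    return frozen ? (isMPT ? tecLOCKED : tecFROZEN) : tesSUCCESS;
}

TER
preclaimToy(bool dstFrozen, bool isMPT)
{
    // Pattern from VaultWithdraw::preclaim: a nonzero TER tests true.
    if (auto const ret = checkFrozenToy(dstFrozen, isMPT))
        return ret;
    return tesSUCCESS;
}

int main()
{
    assert(preclaimToy(true, false) == tecFROZEN);
    assert(preclaimToy(true, true) == tecLOCKED);
    assert(preclaimToy(false, true) == tesSUCCESS);
}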
[[nodiscard]] bool
isAnyFrozen(
ReadView const& view,
@@ -577,6 +600,16 @@ addEmptyHolding(
asset.value());
}
[[nodiscard]] TER
authorizeMPToken(
ApplyView& view,
XRPAmount const& priorBalance,
MPTID const& mptIssuanceID,
AccountID const& account,
beast::Journal journal,
std::uint32_t flags = 0,
std::optional<AccountID> holderID = std::nullopt);
// VFALCO NOTE Both STAmount parameters should just
// be "Amount", a unit-less number.
//
@@ -725,19 +758,40 @@ transferXRP(
STAmount const& amount,
beast::Journal j);
/* Check if MPToken exists:
* - StrongAuth - before checking lsfMPTRequireAuth is set
* - WeakAuth - after checking if lsfMPTRequireAuth is set
/* Check if MPToken (for MPT) or trust line (for IOU) exists:
* - StrongAuth - before checking if authorization is required
* - WeakAuth
* for MPT - after checking lsfMPTRequireAuth flag
* for IOU - do not check if trust line exists
* - Legacy
* for MPT - before checking lsfMPTRequireAuth flag i.e. same as StrongAuth
* for IOU - do not check if trust line exists i.e. same as WeakAuth
*/
enum class MPTAuthType : bool { StrongAuth = true, WeakAuth = false };
enum class AuthType { StrongAuth, WeakAuth, Legacy };
/** Check if the account lacks required authorization.
*
* Return tecNO_AUTH or tecNO_LINE if it does
* and tesSUCCESS otherwise.
* Return tecNO_AUTH or tecNO_LINE if it does
* and tesSUCCESS otherwise.
*
* If StrongAuth then return tecNO_LINE if the RippleState doesn't exist. Return
* tecNO_AUTH if lsfRequireAuth is set on the issuer's AccountRoot, and the
* RippleState does exist, and the RippleState is not authorized.
*
* If WeakAuth then return tecNO_AUTH if lsfRequireAuth is set, and the
* RippleState exists, and is not authorized. Return tecNO_LINE if
* lsfRequireAuth is set and the RippleState doesn't exist. Consequently, if
* WeakAuth and lsfRequireAuth is *not* set, this function will return
* tesSUCCESS even if RippleState does *not* exist.
*
* The default "Legacy" auth type is equivalent to WeakAuth.
*/
[[nodiscard]] TER
requireAuth(ReadView const& view, Issue const& issue, AccountID const& account);
requireAuth(
ReadView const& view,
Issue const& issue,
AccountID const& account,
AuthType authType = AuthType::Legacy);
/** Check if the account lacks required authorization.
*
@@ -751,32 +805,33 @@ requireAuth(ReadView const& view, Issue const& issue, AccountID const& account);
* purely defensive, as we currently do not allow such vaults to be created.
*
* If StrongAuth then return tecNO_AUTH if MPToken doesn't exist or
* lsfMPTRequireAuth is set and MPToken is not authorized. If WeakAuth then
* return tecNO_AUTH if lsfMPTRequireAuth is set and MPToken doesn't exist or is
* not authorized (explicitly or via credentials, if DomainID is set in
* MPTokenIssuance). Consequently, if WeakAuth and lsfMPTRequireAuth is *not*
* set, this function will return true even if MPToken does *not* exist.
* lsfMPTRequireAuth is set and MPToken is not authorized.
*
* If WeakAuth then return tecNO_AUTH if lsfMPTRequireAuth is set and MPToken
* doesn't exist or is not authorized (explicitly or via credentials, if
* DomainID is set in MPTokenIssuance). Consequently, if WeakAuth and
lsfMPTRequireAuth is *not* set, this function will return tesSUCCESS even if
* MPToken does *not* exist.
*
* The default "Legacy" auth type is equivalent to StrongAuth.
*/
[[nodiscard]] TER
requireAuth(
ReadView const& view,
MPTIssue const& mptIssue,
AccountID const& account,
MPTAuthType authType = MPTAuthType::StrongAuth,
AuthType authType = AuthType::Legacy,
int depth = 0);
[[nodiscard]] TER inline requireAuth(
ReadView const& view,
Asset const& asset,
AccountID const& account,
MPTAuthType authType = MPTAuthType::StrongAuth)
AuthType authType = AuthType::Legacy)
{
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue_) {
if constexpr (std::is_same_v<TIss, Issue>)
return requireAuth(view, issue_, account);
else
return requireAuth(view, issue_, account, authType);
return requireAuth(view, issue_, account, authType);
},
asset.value());
}
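The AuthType parameter lets one call site state whether the holding must already exist: VaultWithdraw::preclaim keeps Legacy for self-withdrawal but switches to StrongAuth for a third-party destination, which is why the "IOU no trust line to 3rd party" test above fails with tecNO_LINE while the "no trust line to depositor" case succeeds. A toy model of the IOU branch (illustrative values, not the real TER codes):

#include <cassert>

enum class AuthType { StrongAuth, WeakAuth, Legacy };

enum TER { tesSUCCESS = 0, tecNO_LINE = 1 };  // illustrative values

TER
requireAuthToy(bool lineExists, AuthType authType)
{
    // IOU case: only a strong check demands that the line exist.
    if (!lineExists && authType == AuthType::StrongAuth)
        return tecNO_LINE;
    return tesSUCCESS;
}

int main()
{
    // Self-withdrawal (Legacy): a deleted trust line is tolerated and
    // re-created on payment.
    assert(requireAuthToy(false, AuthType::Legacy) == tesSUCCESS);
    // Third-party destination (StrongAuth): a missing line is an error.
    assert(requireAuthToy(false, AuthType::StrongAuth) == tecNO_LINE);
}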


@@ -18,7 +18,6 @@
//==============================================================================
#include <xrpld/app/misc/CredentialHelpers.h>
#include <xrpld/app/tx/detail/MPTokenAuthorize.h>
#include <xrpld/ledger/ReadView.h>
#include <xrpld/ledger/View.h>
@@ -505,8 +504,8 @@ accountHolds(
if (zeroIfUnauthorized == ahZERO_IF_UNAUTHORIZED &&
view.rules().enabled(featureSingleAssetVault))
{
if (auto const err = requireAuth(
view, mptIssue, account, MPTAuthType::StrongAuth);
if (auto const err =
requireAuth(view, mptIssue, account, AuthType::StrongAuth);
!isTesSuccess(err))
amount.clear(mptIssue);
}
@@ -1215,12 +1214,115 @@ addEmptyHolding(
if (view.peek(keylet::mptoken(mptID, accountID)))
return tecDUPLICATE;
return MPTokenAuthorize::authorize(
view,
journal,
{.priorBalance = priorBalance,
.mptIssuanceID = mptID,
.account = accountID});
return authorizeMPToken(view, priorBalance, mptID, accountID, journal);
}
[[nodiscard]] TER
authorizeMPToken(
ApplyView& view,
XRPAmount const& priorBalance,
MPTID const& mptIssuanceID,
AccountID const& account,
beast::Journal journal,
std::uint32_t flags,
std::optional<AccountID> holderID)
{
auto const sleAcct = view.peek(keylet::account(account));
if (!sleAcct)
return tecINTERNAL;
// If the account that submitted the tx is a holder
// Note: `account_` is holder's account
// `holderID` is NOT used
if (!holderID)
{
// When a holder wants to unauthorize/delete an MPT, the ledger must
// - delete mptokenKey from owner directory
// - delete the MPToken
if (flags & tfMPTUnauthorize)
{
auto const mptokenKey = keylet::mptoken(mptIssuanceID, account);
auto const sleMpt = view.peek(mptokenKey);
if (!sleMpt || (*sleMpt)[sfMPTAmount] != 0)
return tecINTERNAL; // LCOV_EXCL_LINE
if (!view.dirRemove(
keylet::ownerDir(account),
(*sleMpt)[sfOwnerNode],
sleMpt->key(),
false))
return tecINTERNAL; // LCOV_EXCL_LINE
adjustOwnerCount(view, sleAcct, -1, journal);
view.erase(sleMpt);
return tesSUCCESS;
}
// A potential holder wants to authorize/hold an MPT, the ledger must:
// - add the new mptokenKey to the owner directory
// - create the MPToken object for the holder
// The reserve that is required to create the MPToken. Note
// that although the reserve increases with every item
// an account owns, in the case of MPTokens we only
// *enforce* a reserve if the user owns more than two
// items. This is similar to the reserve requirements of trust lines.
std::uint32_t const uOwnerCount = sleAcct->getFieldU32(sfOwnerCount);
XRPAmount const reserveCreate(
(uOwnerCount < 2) ? XRPAmount(beast::zero)
: view.fees().accountReserve(uOwnerCount + 1));
if (priorBalance < reserveCreate)
return tecINSUFFICIENT_RESERVE;
auto const mptokenKey = keylet::mptoken(mptIssuanceID, account);
auto mptoken = std::make_shared<SLE>(mptokenKey);
if (auto ter = dirLink(view, account, mptoken))
return ter; // LCOV_EXCL_LINE
(*mptoken)[sfAccount] = account;
(*mptoken)[sfMPTokenIssuanceID] = mptIssuanceID;
(*mptoken)[sfFlags] = 0;
view.insert(mptoken);
// Update owner count.
adjustOwnerCount(view, sleAcct, 1, journal);
return tesSUCCESS;
}
auto const sleMptIssuance = view.read(keylet::mptIssuance(mptIssuanceID));
if (!sleMptIssuance)
return tecINTERNAL;
// If the account that submitted this tx is the issuer of the MPT
// Note: `account_` is issuer's account
// `holderID` is holder's account
if (account != (*sleMptIssuance)[sfIssuer])
return tecINTERNAL;
auto const sleMpt = view.peek(keylet::mptoken(mptIssuanceID, *holderID));
if (!sleMpt)
return tecINTERNAL;
std::uint32_t const flagsIn = sleMpt->getFieldU32(sfFlags);
std::uint32_t flagsOut = flagsIn;
// Issuer wants to unauthorize the holder, unset lsfMPTAuthorized on
// their MPToken
if (flags & tfMPTUnauthorize)
flagsOut &= ~lsfMPTAuthorized;
// Issuer wants to authorize a holder, set lsfMPTAuthorized on their
// MPToken
else
flagsOut |= lsfMPTAuthorized;
if (flagsIn != flagsOut)
sleMpt->setFieldU32(sfFlags, flagsOut);
view.update(sleMpt);
return tesSUCCESS;
}
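Worked numbers for the reserve rule above, assuming network settings of 1 XRP base reserve and 0.2 XRP per owned item (the real values come from view.fees()): the first two objects are free, and from the third onward the account must cover the reserve for ownerCount + 1 items.

#include <cassert>
#include <cstdint>

std::uint64_t
accountReserveDrops(std::uint32_t ownerCount)
{
    // Assumed settings: 1 XRP base + 0.2 XRP per item, in drops.
    return 1'000'000 + ownerCount * 200'000ULL;
}

std::uint64_t
reserveToCreateMPToken(std::uint32_t ownerCount)
{
    // Mirrors authorizeMPToken: no reserve is enforced until the
    // account already owns at least two items.
    return ownerCount < 2 ? 0 : accountReserveDrops(ownerCount + 1);
}

int main()
{
    assert(reserveToCreateMPToken(0) == 0);          // first item: free
    assert(reserveToCreateMPToken(1) == 0);          // second item: free
    assert(reserveToCreateMPToken(2) == 1'600'000);  // 1 + 3 * 0.2 XRP
}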
TER
@@ -1418,13 +1520,14 @@ removeEmptyHolding(
if (mptoken->at(sfMPTAmount) != 0)
return tecHAS_OBLIGATIONS;
return MPTokenAuthorize::authorize(
return authorizeMPToken(
view,
{}, // priorBalance
mptID,
accountID,
journal,
{.priorBalance = {},
.mptIssuanceID = mptID,
.account = accountID,
.flags = tfMPTUnauthorize});
tfMPTUnauthorize // flags
);
}
TER
@@ -2298,15 +2401,27 @@ transferXRP(
}
TER
requireAuth(ReadView const& view, Issue const& issue, AccountID const& account)
requireAuth(
ReadView const& view,
Issue const& issue,
AccountID const& account,
AuthType authType)
{
if (isXRP(issue) || issue.account == account)
return tesSUCCESS;
auto const trustLine =
view.read(keylet::line(account, issue.account, issue.currency));
// If account has no line, and this is a strong check, fail
if (!trustLine && authType == AuthType::StrongAuth)
return tecNO_LINE;
// If this is a weak or legacy check, or if the account has a line, fail if
// auth is required and not set on the line
if (auto const issuerAccount = view.read(keylet::account(issue.account));
issuerAccount && (*issuerAccount)[sfFlags] & lsfRequireAuth)
{
if (auto const trustLine =
view.read(keylet::line(account, issue.account, issue.currency)))
if (trustLine)
return ((*trustLine)[sfFlags] &
((account > issue.account) ? lsfLowAuth : lsfHighAuth))
? tesSUCCESS
@@ -2322,7 +2437,7 @@ requireAuth(
ReadView const& view,
MPTIssue const& mptIssue,
AccountID const& account,
MPTAuthType authType,
AuthType authType,
int depth)
{
auto const mptID = keylet::mptIssuance(mptIssue.getMptID());
@@ -2357,7 +2472,7 @@ requireAuth(
if (auto const err = std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
return requireAuth(view, issue, account);
return requireAuth(view, issue, account, authType);
else
return requireAuth(
view, issue, account, authType, depth + 1);
@@ -2372,7 +2487,8 @@ requireAuth(
auto const sleToken = view.read(mptokenID);
// if account has no MPToken, fail
if (!sleToken && authType == MPTAuthType::StrongAuth)
if (!sleToken &&
(authType == AuthType::StrongAuth || authType == AuthType::Legacy))
return tecNO_AUTH;
// Note, this check is not amendment-gated because DomainID will be always
@@ -2484,15 +2600,12 @@ enforceMPTokenAuthorization(
XRPL_ASSERT(
maybeDomainID.has_value() && sleToken == nullptr,
"ripple::enforceMPTokenAuthorization : new MPToken for domain");
if (auto const err = MPTokenAuthorize::authorize(
if (auto const err = authorizeMPToken(
view,
j,
{
.priorBalance = priorBalance,
.mptIssuanceID = mptIssuanceID,
.account = account,
.flags = 0,
});
priorBalance, // priorBalance
mptIssuanceID, // mptIssuanceID
account, // account
j);
!isTesSuccess(err))
return err;


@@ -446,6 +446,8 @@ Slot<clock_type>::deletePeer(PublicKey const& validator, id_t id, bool erase)
auto it = peers_.find(id);
if (it != peers_.end())
{
std::vector<Peer::id_t> toUnsquelch;
JLOG(journal_.trace())
<< "deletePeer: " << Slice(validator) << " " << id << " selected "
<< (it->second.state == PeerState::Selected) << " considered "
@@ -457,7 +459,7 @@ Slot<clock_type>::deletePeer(PublicKey const& validator, id_t id, bool erase)
for (auto& [k, v] : peers_)
{
if (v.state == PeerState::Squelched)
handler_.unsquelch(validator, k);
toUnsquelch.push_back(k);
v.state = PeerState::Counting;
v.count = 0;
v.expire = now;
@@ -479,6 +481,10 @@ Slot<clock_type>::deletePeer(PublicKey const& validator, id_t id, bool erase)
if (erase)
peers_.erase(it);
// Must be after peers_.erase(it)
for (auto const& k : toUnsquelch)
handler_.unsquelch(validator, k);
}
}
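The change above defers the unsquelch callbacks until after peers_.erase(it), so the handler can no longer re-enter the slot or observe the peers_ map mid-mutation. A standalone sketch of the collect-then-notify pattern (hypothetical types; Slot itself is templated on a clock and a squelch handler):

#include <cassert>
#include <functional>
#include <map>
#include <vector>

void
deleteEntry(
    std::map<int, bool>& squelched,  // peer id -> currently squelched?
    int id,
    std::function<void(int)> const& unsquelch)
{
    // First pass: record who needs the callback, mutate state only.
    std::vector<int> toUnsquelch;
    for (auto& [peer, isSquelched] : squelched)
    {
        if (isSquelched)
        {
            toUnsquelch.push_back(peer);
            isSquelched = false;
        }
    }
    squelched.erase(id);

    // Callbacks run last, after all container mutation is done.
    for (int peer : toUnsquelch)
        unsquelch(peer);
}

int main()
{
    std::map<int, bool> peers{{1, true}, {2, false}, {3, true}};
    std::vector<int> notified;
    deleteEntry(peers, 2, [&](int p) { notified.push_back(p); });
    assert(peers.size() == 2);
    assert((notified == std::vector<int>{1, 3}));
}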


@@ -1423,7 +1423,12 @@ OverlayImpl::updateSlotAndSquelch(
if (!strand_.running_in_this_thread())
return post(
strand_,
[this, key, validator, peers = std::move(peers), type]() mutable {
// Must capture copies of reference parameters (i.e. key, validator)
[this,
key = key,
validator = validator,
peers = std::move(peers),
type]() mutable {
updateSlotAndSquelch(key, validator, std::move(peers), type);
});
@@ -1444,9 +1449,12 @@ OverlayImpl::updateSlotAndSquelch(
return;
if (!strand_.running_in_this_thread())
return post(strand_, [this, key, validator, peer, type]() {
updateSlotAndSquelch(key, validator, peer, type);
});
return post(
strand_,
// Must capture copies of reference parameters (i.e. key, validator)
[this, key = key, validator = validator, peer, type]() {
updateSlotAndSquelch(key, validator, peer, type);
});
slots_.updateSlotAndSquelch(key, validator, peer, type, [&]() {
reportInboundTraffic(TrafficCount::squelch_ignored, 0);

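Both hunks above make the posted lambda copy its reference parameters explicitly (key = key, validator = validator): the referents die when the enclosing function returns, so a by-reference capture would dangle by the time the strand runs the work. A minimal sketch of the hazard and the fix, using a plain queue in place of the asio strand:

#include <cassert>
#include <functional>
#include <string>
#include <vector>

std::vector<std::function<void()>> queue;  // stand-in for the strand

void
post(std::function<void()> f)
{
    queue.push_back(std::move(f));
}

void
update(std::string const& key, std::string& out)
{
    // [key = key] stores a copy of the referent; capturing by reference
    // ([&] or [&key]) would leave a dangling reference once update()
    // returns and key's referent is destroyed.
    post([key = key, &out] { out = key; });
}

int main()
{
    std::string out;
    {
        std::string key{"validation"};
        update(key, out);
    }  // the string key is destroyed before the queued work runs
    for (auto& f : queue)
        f();
    assert(out == "validation");
}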

@@ -114,7 +114,7 @@ getCountsJson(Application& app, int minObjectCount)
ret[jss::treenode_cache_size] =
app.getNodeFamily().getTreeNodeCache()->getCacheSize();
ret[jss::treenode_track_size] =
app.getNodeFamily().getTreeNodeCache()->getTrackSize();
static_cast<int>(app.getNodeFamily().getTreeNodeCache()->size());
std::string uptime;
auto s = UptimeClock::now();


@@ -44,7 +44,6 @@ doLogLevel(RPC::JsonContext& context)
Logs::toString(Logs::fromSeverity(context.app.logs().threshold()));
std::vector<std::pair<std::string, std::string>> logTable(
context.app.logs().partition_severities());
using stringPair = std::map<std::string, std::string>::value_type;
for (auto const& [k, v] : logTable)
lev[k] = v;