Compare commits


15 Commits

Author SHA1 Message Date
Bronek Kozicki
e874c4061e Improvements 2025-08-08 11:38:26 +01:00
Bart
892876af5e Merge branch 'develop' into a1q123456/optimise-xxhash 2025-08-07 18:28:36 -04:00
Valentin Balaschenko
94decc753b perf: Move mutex to the partition level (#5486)
This change introduces two key optimizations:
* Mutex scope reduction: Limits the lock to individual partitions within `TaggedCache`, reducing contention.
* Decoupling: Removes the tight coupling between `LedgerHistory` and `TaggedCache`, improving modularity and testability.

Lock contention analysis based on eBPF showed significant improvements as a result of this change.
2025-08-07 17:04:07 -04:00
Bart
991891625a Upload Conan dependencies upon merge into develop (#5654)
This change uploads built Conan dependencies to the Conan remote upon merge into the develop branch.

At the moment, whenever Conan dependencies change, we need to remember to manually push them to our Conan remote, so they are cached for future reuse. If we forget to do so, these changed dependencies need to be rebuilt over and over again, which can take a long time.
2025-08-07 06:52:58 -04:00
Bart
69314e6832 refactor: Remove external libraries as they are hosted in our Conan Center Index fork (#5643)
This change:
* Removes the patched Conan recipes from the `external/` directory.
* Adds instructions for contributors on how to obtain our patched recipes.
* Updates the Conan remote name and remote URL (the underlying package repository isn't changed).
* If the remote already exists, updates the URL instead of removing and re-adding.
  * This is not done for the libXRPL job as it still uses Conan 1. This job will be switched to Conan 2 soon.
* Removes duplicate Conan remote CI pipeline steps.
* Overwrites the existing global.conf on macOS and Windows machines, as those do not run CI pipelines in isolation but all share the same Conan installation; appending the same config over and over bloats the file.
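The global.conf fix above (overwrite rather than append) comes down to shell redirection; a minimal sketch, not the actual CI step:

```shell
# Why the CI step switched from '>>' (append) to '>' (overwrite):
# repeated appends duplicate the config, while overwrite keeps it stable.
CONF=$(mktemp)
echo "core.download:parallel=8" >> "$CONF"
echo "core.download:parallel=8" >> "$CONF"   # a second CI run appends again
wc -l < "$CONF"                              # 2 lines: the file has bloated
echo "core.download:parallel=8" > "$CONF"    # overwrite instead
wc -l < "$CONF"                              # 1 line, however often it runs
rm -f "$CONF"
```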
2025-08-06 15:46:13 +00:00
Bronek Kozicki
dbeb841b5a docs: Update BUILD.md for Conan 2 (#5478)
This change updates BUILD.md for Conan 2 and adds fixes/workarounds for Apple Clang 17, Clang 20, and CMake 4. It also removes (from BUILD.md only) workarounds for compiler versions we no longer support, e.g. Clang 15, and adds the compilation flag -Wno-deprecated-declarations to enable building with Clang 20 on Linux.
2025-08-06 10:18:41 +00:00
tequ
4eae037fee fix: Ensures canonical order for PriceDataSeries upon PriceOracle creation (#5485)
This change fixes an issue where the order of `PriceDataSeries` was out of sync between `PriceOracle` creation and update. Although the entries are registered in the canonical order when updated, they were created using the order specified in the transaction; this change ensures that they are also registered in the canonical order when created.
2025-08-05 13:08:59 -04:00
JCW
a2a5a97d70 Merge remote-tracking branch 'origin/develop' into a1q123456/optimise-xxhash 2025-08-05 17:36:38 +01:00
Jingchen
b5a63b39d3 refactor: Decouple ledger from xrpld/app (#5492)
This change decouples `ledger` from `xrpld/app`, fully clearing the path to the modularisation of the ledger component. Before this change, `View.cpp` relied on `MPTokenAuthorize::authorize`; this change moves `MPTokenAuthorize::authorize` to `View.cpp` to invert the dependency, making ledger a standalone module.
2025-08-05 15:28:56 +00:00
Bronek Kozicki
096ab3a86d Merge branch 'develop' into a1q123456/optimise-xxhash 2025-07-28 10:12:51 +01:00
JCW
64f1f2d580 Fix formatting
Signed-off-by: JCW <a1q123456@users.noreply.github.com>
2025-07-25 16:38:56 +01:00
JCW
3f9b724ed8 Fix build error 2025-07-25 16:27:31 +01:00
Jingchen
cad9eba7ed Merge branch 'develop' into a1q123456/optimise-xxhash 2025-07-25 15:57:19 +01:00
JCW
dbb989921c Remove iota and ranges to support older gcc 2025-06-06 09:23:55 +01:00
JCW
a0cf51d454 Optimize hash performance by avoiding allocating hash state object 2025-06-04 16:21:34 +01:00
54 changed files with 2117 additions and 4445 deletions

View File

@@ -6,29 +6,17 @@ inputs:
runs:
using: composite
steps:
- name: export custom recipes
shell: bash
run: |
conan export --version 1.1.10 external/snappy
conan export --version 4.0.3 external/soci
- name: add Ripple Conan remote
- name: add Conan remote
if: env.CONAN_URL != ''
shell: bash
run: |
if conan remote list | grep -q "ripple"; then
conan remote remove ripple
echo "Removed conan remote ripple"
if conan remote list | grep -q 'xrplf'; then
conan remote update --index 0 --url ${CONAN_URL} xrplf
echo "Updated Conan remote 'xrplf' to ${CONAN_URL}."
else
conan remote add --index 0 xrplf ${CONAN_URL}
echo "Added new Conan remote 'xrplf' at ${CONAN_URL}."
fi
conan remote add --index 0 ripple "${CONAN_URL}"
echo "Added conan remote ripple at ${CONAN_URL}"
- name: try to authenticate to Ripple Conan remote
if: env.CONAN_LOGIN_USERNAME_RIPPLE != '' && env.CONAN_PASSWORD_RIPPLE != ''
id: remote
shell: bash
run: |
echo "Authenticating to ripple remote..."
conan remote auth ripple --force
conan remote list-users
- name: list missing binaries
id: binaries
shell: bash
@@ -48,3 +36,11 @@ runs:
--options:host "&:xrpld=True" \
--settings:all build_type=${{ inputs.configuration }} \
..
- name: upload dependencies
if: ${{ env.CONAN_URL != '' && env.CONAN_LOGIN_USERNAME_XRPLF != '' && env.CONAN_PASSWORD_XRPLF != '' && github.ref_type == 'branch' && github.ref_name == github.event.repository.default_branch }}
shell: bash
run: |
echo "Logging into Conan remote 'xrplf' at ${CONAN_URL}."
conan remote login xrplf "${{ env.CONAN_LOGIN_USERNAME_XRPLF }}" --password "${{ env.CONAN_PASSWORD_XRPLF }}"
echo "Uploading dependencies for configuration '${{ inputs.configuration }}'."
conan upload --all --confirm --remote xrplf . --settings build_type=${{ inputs.configuration }}

View File

@@ -1,8 +1,8 @@
name: Check libXRPL compatibility with Clio
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_URL: https://conan.ripplex.io
CONAN_LOGIN_USERNAME_XRPLF: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_XRPLF: ${{ secrets.CONAN_TOKEN }}
on:
pull_request:
paths:
@@ -43,20 +43,20 @@ jobs:
shell: bash
run: |
conan export . ${{ steps.channel.outputs.channel }}
- name: Add Ripple Conan remote
- name: Add Conan remote
shell: bash
run: |
conan remote list
conan remote remove ripple || true
conan remote remove xrplf || true
# Do not quote the URL. An empty string will be accepted (with a non-fatal warning), but a missing argument will not.
conan remote add ripple ${{ env.CONAN_URL }} --insert 0
conan remote add xrplf ${{ env.CONAN_URL }} --insert 0
- name: Parse new version
id: version
shell: bash
run: |
echo version="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" \
| awk -F '"' '{print $2}')" | tee ${GITHUB_OUTPUT}
- name: Try to authenticate to Ripple Conan remote
- name: Try to authenticate to Conan remote
id: remote
shell: bash
run: |
@@ -64,7 +64,7 @@ jobs:
# https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
# https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
# https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
echo outcome=$(conan user --remote ripple --password >&2 \
echo outcome=$(conan user --remote xrplf --password >&2 \
&& echo success || echo failure) | tee ${GITHUB_OUTPUT}
- name: Upload new package
id: upload

View File

@@ -18,9 +18,9 @@ concurrency:
# This part of Conan configuration is specific to this workflow only; we do not want
# to pollute conan/profiles directory with settings which might not work for others
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_URL: https://conan.ripplex.io
CONAN_LOGIN_USERNAME_XRPLF: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_XRPLF: ${{ secrets.CONAN_TOKEN }}
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
@@ -87,24 +87,9 @@ jobs:
clang --version
- name: configure Conan
run : |
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
echo "${CONAN_GLOBAL_CONF}" > $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: export custom recipes
shell: bash
run: |
conan export --version 1.1.10 external/snappy
conan export --version 4.0.3 external/soci
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash
run: |
if conan remote list | grep -q "ripple"; then
conan remote remove ripple
echo "Removed conan remote ripple"
fi
conan remote add --index 0 ripple "${CONAN_URL}"
echo "Added conan remote ripple at ${CONAN_URL}"
- name: build dependencies
uses: ./.github/actions/dependencies
with:

View File

@@ -19,9 +19,9 @@ concurrency:
# This part of Conan configuration is specific to this workflow only; we do not want
# to pollute conan/profiles directory with settings which might not work for others
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_URL: https://conan.ripplex.io
CONAN_LOGIN_USERNAME_XRPLF: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_XRPLF: ${{ secrets.CONAN_TOKEN }}
CONAN_GLOBAL_CONF: |
core.download:parallel={{ os.cpu_count() }}
core.upload:parallel={{ os.cpu_count() }}

View File

@@ -21,9 +21,9 @@ concurrency:
# This part of Conan configuration is specific to this workflow only; we do not want
# to pollute conan/profiles directory with settings which might not work for others
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
CONAN_URL: https://conan.ripplex.io
CONAN_LOGIN_USERNAME_XRPLF: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_XRPLF: ${{ secrets.CONAN_TOKEN }}
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
@@ -82,24 +82,9 @@ jobs:
- name: configure Conan
shell: bash
run: |
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
echo "${CONAN_GLOBAL_CONF}" > $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: export custom recipes
shell: bash
run: |
conan export --version 1.1.10 external/snappy
conan export --version 4.0.3 external/soci
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash
run: |
if conan remote list | grep -q "ripple"; then
conan remote remove ripple
echo "Removed conan remote ripple"
fi
conan remote add --index 0 ripple "${CONAN_URL}"
echo "Added conan remote ripple at ${CONAN_URL}"
- name: build dependencies
uses: ./.github/actions/dependencies
with:

BUILD.md
View File

@@ -3,29 +3,29 @@
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
> If you are unfamiliar with Conan, you can read our
> [crash course](./docs/build/conan.md) or the official [Getting Started][3]
> walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/ripple/rippled/releases).
```
```bash
git checkout master
```
For the latest release candidate, choose the `release` branch.
```
```bash
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
```bash
git checkout develop
```
@@ -33,151 +33,295 @@ git checkout develop
See [System Requirements](https://xrpl.org/system-requirements.html).
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
Building rippled generally requires git, Python, Conan, CMake, and a C++
compiler. Some guidance on setting up such a [C++ development environment can be
found here](./docs/build/environment.md).
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.60](https://conan.io/downloads.html)[^1]
- [CMake 3.16](https://cmake.org/download/)
- [Python 3.11](https://www.python.org/downloads/), or higher
- [Conan 2.17](https://conan.io/downloads.html)[^1], or higher
- [CMake 3.22](https://cmake.org/download/)[^2], or higher
[^1]: It is possible to build with Conan 2.x,
but the instructions are significantly different,
which is why we are not recommending it yet.
Notably, the `conan profile update` command is removed in 2.x.
Profiles must be edited by hand.
[^1]: It is possible to build with Conan 1.60+, but the instructions are
significantly different, which is why we are not recommending it.
[^2]: CMake 4 is not yet supported by all dependencies required by this project.
If you are affected by this issue, follow the [workaround for CMake
4](#workaround-for-cmake-4).
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 11 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
|-------------|-----|
| GCC | 12 |
| Clang | 16 |
| Apple Clang | 16 |
| MSVC | 19.44[^3] |
### Linux
The Ubuntu operating system has received the highest level of
quality assurance, testing, and support.
The Ubuntu Linux distribution has received the highest level of quality
assurance, testing, and support. We also support Red Hat and use Debian
internally.
Here are [sample instructions for setting up a C++ development environment on Linux](./docs/build/environment.md#linux).
Here are [sample instructions for setting up a C++ development environment on
Linux](./docs/build/environment.md#linux).
### Mac
Many rippled engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on macOS](./docs/build/environment.md#macos).
Here are [sample instructions for setting up a C++ development environment on
macOS](./docs/build/environment.md#macos).
### Windows
Windows is not recommended for production use at this time.
Windows is used by some engineers for development only.
- Additionally, 32-bit Windows development is not supported.
[Boost]: https://www.boost.org/
[^3]: Windows is not recommended for production use.
## Steps
### Set Up Conan
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python, Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python,
Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
These instructions assume a basic familiarity with Conan and CMake.
These instructions assume a basic familiarity with Conan and CMake. If you are
unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official
[Getting Started][3] walkthrough.
If you are unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official [Getting Started][3] walkthrough.
#### Default profile
We recommend that you import the provided `conan/profiles/default` profile:
You'll need at least one Conan profile:
```
conan profile new default --detect
```
Update the compiler settings:
```
conan profile update settings.compiler.cppstd=20 default
```
Configure Conan (1.x only) to use recipe revisions:
```
conan config set general.revisions_enabled=1
```
**Linux** developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
```
Ensure inter-operability between `boost::string_view` and `std::string_view` types:
```
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_BEAST_USE_STD_STRING_VIEW"]' default
conan profile update 'env.CXXFLAGS="-DBOOST_BEAST_USE_STD_STRING_VIEW"' default
```bash
conan config install conan/profiles/ -tf $(conan config home)/profiles/
```
If you have other flags in the `conf.tools.build` or `env.CXXFLAGS` sections, make sure to retain the existing flags and append the new ones. You can check them with:
```
conan profile show default
You can check your Conan profile by running:
```bash
conan profile show
```
#### Custom profile
**Windows** developers may need to use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
If the default profile does not work for you and you do not yet have a Conan
profile, you can create one by running:
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```
conan profile update settings.arch=x86_64 default
```
### Multiple compilers
When `/usr/bin/g++` exists on a platform, it is the default cpp compiler. This
default works for some users.
However, if this compiler cannot build rippled or its dependencies, then you can
install another compiler and set Conan and CMake to use it.
Update the `conf.tools.build:compiler_executables` setting in order to set the correct variables (`CMAKE_<LANG>_COMPILER`) in the
generated CMake toolchain file.
For example, on Ubuntu 20, you may have gcc at `/usr/bin/gcc` and g++ at `/usr/bin/g++`; if that is the case, you can select those compilers with:
```
conan profile update 'conf.tools.build:compiler_executables={"c": "/usr/bin/gcc", "cpp": "/usr/bin/g++"}' default
```bash
conan profile detect
```
Replace `/usr/bin/gcc` and `/usr/bin/g++` with paths to the desired compilers.
You may need to make changes to the profile to suit your environment. You can
refer to the provided `conan/profiles/default` profile for inspiration, and you
may also need to apply the required [tweaks](#conan-profile-tweaks) to this
default profile.
It should choose the compiler for dependencies as well,
but not all of them have a Conan recipe that respects this setting (yet).
For the rest, you can set these environment variables.
Replace `<path>` with paths to the desired compilers:
### Patched recipes
- `conan profile update env.CC=<path> default`
- `conan profile update env.CXX=<path> default`
The recipes in Conan Center occasionally need to be patched for compatibility
with the latest version of `rippled`. We maintain a fork of the Conan Center
[here](https://github.com/XRPLF/conan-center-index/) containing the patches.
Export our [Conan recipe for Snappy](./external/snappy).
It does not explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
To ensure our patched recipes are used, you must add our Conan remote at a
higher index than the default Conan Center remote, so it is consulted first. You
can do this by running:
```
# Conan 2.x
conan export --version 1.1.10 external/snappy
```bash
conan remote add --index 0 xrplf "https://conan.ripplex.io"
```
Alternatively, you can pull the patched recipes into the repository and use them
locally:
```bash
cd external
git init
git remote add origin git@github.com:XRPLF/conan-center-index.git
git sparse-checkout init
git sparse-checkout set recipes/snappy
git sparse-checkout add recipes/soci
git fetch origin master
git checkout master
conan export --version 1.1.10 external/recipes/snappy
conan export --version 4.0.3 external/recipes/soci
```
If we switch to a newer version of a dependency that still requires a patch,
you will need to pull in the changes and re-export the updated recipe with the
newer version. However, if we switch to a newer version that no longer requires
a patch, no action is required on your part, as the new recipe will be
automatically pulled from the official Conan Center.
### Conan profile tweaks
#### Missing compiler version
If you see an error similar to the following after running `conan profile show`:
```bash
ERROR: Invalid setting '17' is not a valid 'settings.compiler.version' value.
Possible values are ['5.0', '5.1', '6.0', '6.1', '7.0', '7.3', '8.0', '8.1',
'9.0', '9.1', '10.0', '11.0', '12.0', '13', '13.0', '13.1', '14', '14.0', '15',
'15.0', '16', '16.0']
Read "http://docs.conan.io/2/knowledge/faq.html#error-invalid-setting"
```
you need to amend the list of compiler versions in
`$(conan config home)/settings.yml`, by appending the required version number(s)
to the `version` array specific for your compiler. For example:
```yaml
apple-clang:
version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0",
"9.1", "10.0", "11.0", "12.0", "13", "13.0", "13.1", "14",
"14.0", "15", "15.0", "16", "16.0", "17", "17.0"]
```
#### Multiple compilers
If you have multiple compilers installed, make sure to select the one to use in
your default Conan configuration **before** running `conan profile detect`, by
setting the `CC` and `CXX` environment variables.
For example, if you are running macOS and have [homebrew
LLVM@18](https://formulae.brew.sh/formula/llvm@18), and want to use it as the
compiler in the new Conan profile:
```bash
export CC=$(brew --prefix llvm@18)/bin/clang
export CXX=$(brew --prefix llvm@18)/bin/clang++
conan profile detect
```
Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
You should also explicitly set the path to the compiler in the profile file,
which helps to avoid errors when `CC` and/or `CXX` are set and disagree with the
selected Conan profile. For example:
```
# Conan 2.x
conan export --version 4.0.3 external/soci
```
```text
[conf]
tools.build:compiler_executables={'c':'/usr/bin/gcc','cpp':'/usr/bin/g++'}
```
#### Multiple profiles
You can manage multiple Conan profiles in the directory
`$(conan config home)/profiles`, for example renaming `default` to a different
name and then creating a new `default` profile for a different compiler.
#### Select language
The default profile created by Conan will typically select a different C++
dialect than the C++20 used by this project. You should set `20` in the profile
line starting with `compiler.cppstd=`. For example:
```bash
sed -i.bak -e 's|^compiler\.cppstd=.*$|compiler.cppstd=20|' $(conan config home)/profiles/default
```
#### Select standard library in Linux
**Linux** developers will commonly have a default Conan [profile][] that
compiles with GCC and links with libstdc++. If you are linking with libstdc++
(see profile setting `compiler.libcxx`), then you will need to choose the
`libstdc++11` ABI:
```bash
sed -i.bak -e 's|^compiler\.libcxx=.*$|compiler.libcxx=libstdc++11|' $(conan config home)/profiles/default
```
#### Select architecture and runtime in Windows
**Windows** developers may need to use the x64 native build tools. An easy way
to do that is to run the shortcut "x64 Native Tools Command Prompt" for the
version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```bash
sed -i.bak -e 's|^arch=.*$|arch=x86_64|' $(conan config home)/profiles/default
```
**Windows** developers also must select static runtime:
```bash
sed -i.bak -e 's|^compiler\.runtime=.*$|compiler.runtime=static|' $(conan config home)/profiles/default
```
#### Workaround for CMake 4
If your system CMake is version 4 rather than 3, you may have to configure your
Conan profile to use CMake version 3 for dependencies, by adding the following
two lines to your profile:
```text
[tool_requires]
!cmake/*: cmake/[>=3 <4]
```
This will force Conan to download and use a locally cached CMake 3 version, and
is needed because some of the dependencies used by this project do not support
CMake 4.
#### Clang workaround for grpc
If your compiler is clang, version 19 or later, or apple-clang, version 17 or
later, you may encounter a compilation error while building the `grpc`
dependency:
```text
In file included from .../lib/promise/try_seq.h:26:
.../lib/promise/detail/basic_seq.h:499:38: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
499 | Traits::template CallSeqFactory(f_, *cur_, std::move(arg)));
| ^
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
```
#### Workaround for gcc 12
If your compiler is gcc, version 12, and you have enabled `werr` option, you may
encounter a compilation error such as:
```text
/usr/include/c++/12/bits/char_traits.h:435:56: error: 'void* __builtin_memcpy(void*, const void*, long unsigned int)' accessing 9223372036854775810 or more bytes at offsets [2, 9223372036854775807] and 1 may overlap up to 9223372036854775813 bytes at offset -3 [-Werror=restrict]
435 | return static_cast<char_type*>(__builtin_memcpy(__s1, __s2, __n));
| ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-Wno-restrict']
```
#### Workaround for clang 16
If your compiler is clang, version 16, you may encounter compilation error such
as:
```text
In file included from .../boost/beast/websocket/stream.hpp:2857:
.../boost/beast/websocket/impl/read.hpp:695:17: error: call to 'async_teardown' is ambiguous
async_teardown(impl.role, impl.stream(),
^~~~~~~~~~~~~~
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
```
### Build and Test
@@ -245,7 +389,6 @@ It patches their CMake to correctly import its dependencies.
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
```
Multi-config generators:
```
@@ -257,13 +400,13 @@ It patches their CMake to correctly import its dependencies.
5. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
you must pass the option `--config` to select the build configuration.
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator, you
must pass the option `--config` to select the build configuration.
Single-config generators:
```
cmake --build . -j $(nproc)
cmake --build .
```
Multi-config generators:
@@ -278,18 +421,22 @@ It patches their CMake to correctly import its dependencies.
Single-config generators:
```
./rippled --unittest
./rippled --unittest --unittest-jobs N
```
Multi-config generators:
```
./Release/rippled --unittest
./Debug/rippled --unittest
./Release/rippled --unittest --unittest-jobs N
./Debug/rippled --unittest --unittest-jobs N
```
The location of `rippled` in your build directory depends on your CMake
generator. Pass `--help` to see the rest of the command line options.
Replace the `--unittest-jobs` parameter N with the desired unit test
concurrency. The recommended setting is half the number of available CPU
cores.
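The half-of-cores setting can be computed inline; a sketch assuming a Linux shell with `nproc` (the command is echoed here rather than run, since the `rippled` path depends on your generator):

```shell
# Pick N as half the available CPU cores, with a floor of 1.
N=$(( $(nproc) / 2 ))
if [ "$N" -lt 1 ]; then N=1; fi
echo "./rippled --unittest --unittest-jobs $N"
```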
The location of the `rippled` binary in your build directory depends on your
CMake generator. Pass `--help` to see the rest of the command line options.
## Coverage report
@@ -347,7 +494,7 @@ cmake --build . --target coverage
After the `coverage` target is completed, the generated coverage report will be
stored inside the build directory, as either of:
- file named `coverage.`_extension_ , with a suitable extension for the report format, or
- file named `coverage.`_extension_, with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
@@ -355,12 +502,14 @@ stored inside the build directory, as either of:
| Option | Default Value | Description |
| --- | ---| ---|
| `assert` | OFF | Enable assertions.
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | ON | Configure a unity build. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld (`rippled`) application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
@@ -375,12 +524,28 @@ and can be helpful for detecting `#include` omissions.
After any updates or changes to dependencies, you may need to do the following:
1. Remove your build directory.
2. Remove the Conan cache: `conan remove "*" -c`
3. Re-run [conan install](#build-and-test).
2. Remove individual libraries from the Conan cache, e.g.
### 'protobuf/port_def.inc' file not found
```bash
conan remove 'grpc/*'
```
If `cmake --build .` results in an error due to a missing a protobuf file, then you might have generated CMake files for a different `build_type` than the `CMAKE_BUILD_TYPE` you passed to conan.
**or**
Remove all libraries from Conan cache:
```bash
conan remove '*'
```
3. Re-run [conan export](#patched-recipes) if needed.
4. Re-run [conan install](#build-and-test).
### `protobuf/port_def.inc` file not found
If `cmake --build .` results in an error due to a missing protobuf file, then
you might have generated CMake files for a different `build_type` than the
`CMAKE_BUILD_TYPE` you passed to Conan.
```
/rippled/.build/pb-xrpl.libpb/xrpl/proto/ripple.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found

View File

@@ -10,37 +10,35 @@ platforms: Linux, macOS, or Windows.
Package ecosystems vary across Linux distributions,
so there is no one set of instructions that will work for every Linux user.
These instructions are written for Ubuntu 22.04.
They are largely copied from the [script][1] used to configure our Docker
container for continuous integration.
That script handles many more responsibilities.
These instructions are just the bare minimum to build one configuration of
rippled.
You can check that codebase for other Linux distributions and versions.
If you cannot find yours there,
then we hope that these instructions can at least guide you in the right
direction.
The instructions below are written for Debian 12 (Bookworm).
```
apt update
apt install --yes curl git libssl-dev pipx python3.10-dev python3-pip make g++-11 libprotobuf-dev protobuf-compiler
export GCC_RELEASE=12
sudo apt update
sudo apt install --yes gcc-${GCC_RELEASE} g++-${GCC_RELEASE} python3-pip \
python-is-python3 python3-venv python3-dev curl wget ca-certificates \
git build-essential cmake ninja-build libc6-dev
sudo pip install --break-system-packages conan
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
tar -xzf cmake-3.25.1.tar.gz
rm cmake-3.25.1.tar.gz
cd cmake-3.25.1
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
pipx install 'conan<2'
pipx ensurepath
sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-${GCC_RELEASE} 999
sudo update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-${GCC_RELEASE} 100 \
--slave /usr/bin/g++ g++ /usr/bin/g++-${GCC_RELEASE} \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-${GCC_RELEASE} \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-${GCC_RELEASE} \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-${GCC_RELEASE} \
--slave /usr/bin/gcov gcov /usr/bin/gcov-${GCC_RELEASE} \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${GCC_RELEASE} \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${GCC_RELEASE} \
--slave /usr/bin/lto-dump lto-dump /usr/bin/lto-dump-${GCC_RELEASE}
sudo update-alternatives --auto cc
sudo update-alternatives --auto gcc
```
[1]: https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh
If you use a different Linux distribution, we hope the instructions above can
at least guide you in the right direction. We try to maintain compatibility
with all recent compiler releases, so if you use a rolling distribution such
as Arch, or a recent release of e.g. CentOS, then there is a good chance that
everything will "just work".
## macOS
@@ -100,10 +98,10 @@ and use it to install Conan:
brew update
brew install xz
brew install pyenv
pyenv install 3.10-dev
pyenv global 3.10-dev
pyenv install 3.11
pyenv global 3.11
eval "$(pyenv init -)"
pip install 'conan<2'
pip install 'conan'
```
Install CMake with Homebrew too:

external/README.md vendored

@@ -1,14 +1,10 @@
# External Conan recipes
The subdirectories in this directory contain either copies or Conan recipes
of external libraries used by rippled.
The Conan recipes include patches we have not yet pushed upstream.
The subdirectories in this directory contain copies of external libraries used
by rippled.
| Folder | Upstream | Description |
|:----------------|:---------------------------------------------|:------------|
| `antithesis-sdk`| [Project](https://github.com/antithesishq/antithesis-sdk-cpp/) | [Antithesis](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html) SDK for C++ |
| `ed25519-donna` | [Project](https://github.com/floodyberry/ed25519-donna) | [Ed25519](http://ed25519.cr.yp.to/) digital signatures |
| `rocksdb` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/rocksdb) | Fast key/value database. (Supports rotational disks better than NuDB.) |
| `secp256k1` | [Project](https://github.com/bitcoin-core/secp256k1) | ECDSA digital signatures using the **secp256k1** curve |
| `snappy` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/snappy) | "Snappy" lossless compression algorithm. |
| `soci` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/soci) | Abstraction layer for database access. |


@@ -1,40 +0,0 @@
sources:
"1.1.10":
url: "https://github.com/google/snappy/archive/1.1.10.tar.gz"
sha256: "49d831bffcc5f3d01482340fe5af59852ca2fe76c3e05df0e67203ebbe0f1d90"
"1.1.9":
url: "https://github.com/google/snappy/archive/1.1.9.tar.gz"
sha256: "75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7"
"1.1.8":
url: "https://github.com/google/snappy/archive/1.1.8.tar.gz"
sha256: "16b677f07832a612b0836178db7f374e414f94657c138e6993cbfc5dcc58651f"
"1.1.7":
url: "https://github.com/google/snappy/archive/1.1.7.tar.gz"
sha256: "3dfa02e873ff51a11ee02b9ca391807f0c8ea0529a4924afa645fbf97163f9d4"
patches:
"1.1.10":
- patch_file: "patches/1.1.10-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.10-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"
"1.1.9":
- patch_file: "patches/1.1.9-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.9-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"


@@ -1,89 +0,0 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.scm import Version
import os
required_conan_version = ">=1.54.0"
class SnappyConan(ConanFile):
name = "snappy"
description = "A fast compressor/decompressor"
topics = ("google", "compressor", "decompressor")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/google/snappy"
license = "BSD-3-Clause"
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
}
def export_sources(self):
export_conandata_patches(self)
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SNAPPY_BUILD_TESTS"] = False
if Version(self.version) >= "1.1.8":
tc.variables["SNAPPY_FUZZING_BUILD"] = False
tc.variables["SNAPPY_REQUIRE_AVX"] = False
tc.variables["SNAPPY_REQUIRE_AVX2"] = False
tc.variables["SNAPPY_INSTALL"] = True
if Version(self.version) >= "1.1.9":
tc.variables["SNAPPY_BUILD_BENCHMARKS"] = False
tc.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "Snappy")
self.cpp_info.set_property("cmake_target_name", "Snappy::snappy")
# TODO: back to global scope in conan v2 once cmake_find_package* generators removed
self.cpp_info.components["snappylib"].libs = ["snappy"]
if not self.options.shared:
if self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["snappylib"].system_libs.append("m")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "Snappy"
self.cpp_info.names["cmake_find_package_multi"] = "Snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package"] = "snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package_multi"] = "snappy"
self.cpp_info.components["snappylib"].set_property("cmake_target_name", "Snappy::snappy")


@@ -1,13 +0,0 @@
diff --git a/snappy-stubs-internal.h b/snappy-stubs-internal.h
index 1548ed7..3b4a9f3 100644
--- a/snappy-stubs-internal.h
+++ b/snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#if HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif // HAVE_ATTRIBUTE_ALWAYS_INLINE


@@ -1,13 +0,0 @@
diff --git a/snappy.cc b/snappy.cc
index d414718..e4efb59 100644
--- a/snappy.cc
+++ b/snappy.cc
@@ -1132,7 +1132,7 @@ inline size_t AdvanceToNextTagX86Optimized(const uint8_t** ip_p, size_t* tag) {
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__)
+#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"


@@ -1,14 +0,0 @@
Fixes the following error:
error: inlining failed in call to always_inline size_t snappy::AdvanceToNextTag(const uint8_t**, size_t*): function body can be overwritten at link time
--- snappy-stubs-internal.h
+++ snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#ifdef HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif


@@ -1,12 +0,0 @@
--- CMakeLists.txt
+++ CMakeLists.txt
@@ -69,7 +69,7 @@
- # Use -Werror for clang only.
+if(0)
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
if(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
endif(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
endif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
-
+endif()


@@ -1,12 +0,0 @@
asm clobbers do not work for clang < 9 and apple-clang < 11 (found by SpaceIm)
--- snappy.cc
+++ snappy.cc
@@ -1026,7 +1026,7 @@
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GNUC__) && defined(__x86_64__)
+#if defined(__GNUC__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"


@@ -1,20 +0,0 @@
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -53,8 +53,6 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
add_definitions(-D_HAS_EXCEPTIONS=0)
# Disable RTTI.
- string(REGEX REPLACE "/GR" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GR-")
else(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Use -Wall for clang and gcc.
if(NOT CMAKE_CXX_FLAGS MATCHES "-Wall")
@@ -78,8 +76,6 @@ endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
# Disable RTTI.
- string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
endif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# BUILD_SHARED_LIBS is a standard CMake variable, but we declare it here to make


@@ -1,12 +0,0 @@
sources:
"4.0.3":
url: "https://github.com/SOCI/soci/archive/v4.0.3.tar.gz"
sha256: "4b1ff9c8545c5d802fbe06ee6cd2886630e5c03bf740e269bb625b45cf934928"
patches:
"4.0.3":
- patch_file: "patches/0001-Remove-hardcoded-INSTALL_NAME_DIR-for-relocatable-li.patch"
patch_description: "Generate relocatable libraries on MacOS"
patch_type: "portability"
- patch_file: "patches/0002-Fix-soci_backend.patch"
patch_description: "Fix variable names for dependencies"
patch_type: "conan"


@@ -1,212 +0,0 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.microsoft import is_msvc
from conan.tools.scm import Version
from conan.errors import ConanInvalidConfiguration
import os
required_conan_version = ">=1.55.0"
class SociConan(ConanFile):
name = "soci"
homepage = "https://github.com/SOCI/soci"
url = "https://github.com/conan-io/conan-center-index"
description = "The C++ Database Access Library "
topics = ("mysql", "odbc", "postgresql", "sqlite3")
license = "BSL-1.0"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"empty": [True, False],
"with_sqlite3": [True, False],
"with_db2": [True, False],
"with_odbc": [True, False],
"with_oracle": [True, False],
"with_firebird": [True, False],
"with_mysql": [True, False],
"with_postgresql": [True, False],
"with_boost": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"empty": False,
"with_sqlite3": False,
"with_db2": False,
"with_odbc": False,
"with_oracle": False,
"with_firebird": False,
"with_mysql": False,
"with_postgresql": False,
"with_boost": False,
}
def export_sources(self):
export_conandata_patches(self)
def layout(self):
cmake_layout(self, src_folder="src")
def config_options(self):
if self.settings.os == "Windows":
self.options.rm_safe("fPIC")
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def requirements(self):
if self.options.with_sqlite3:
self.requires("sqlite3/3.47.0")
if self.options.with_odbc and self.settings.os != "Windows":
self.requires("odbc/2.3.11")
if self.options.with_mysql:
self.requires("libmysqlclient/8.1.0")
if self.options.with_postgresql:
self.requires("libpq/15.5")
if self.options.with_boost:
self.requires("boost/1.86.0")
@property
def _minimum_compilers_version(self):
return {
"Visual Studio": "14",
"gcc": "4.8",
"clang": "3.8",
"apple-clang": "8.0"
}
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
compiler = str(self.settings.compiler)
compiler_version = Version(self.settings.compiler.version.value)
if compiler not in self._minimum_compilers_version:
self.output.warning("{} recipe lacks information about the {} compiler support.".format(self.name, self.settings.compiler))
elif compiler_version < self._minimum_compilers_version[compiler]:
raise ConanInvalidConfiguration("{} requires a {} version >= {}".format(self.name, compiler, compiler_version))
prefix = "Dependencies for"
message = "not configured in this conan package."
if self.options.with_db2:
# self.requires("db2/0.0.0") # TODO add support for db2
raise ConanInvalidConfiguration("{} DB2 {} ".format(prefix, message))
if self.options.with_oracle:
# self.requires("oracle_db/0.0.0") # TODO add support for oracle
raise ConanInvalidConfiguration("{} ORACLE {} ".format(prefix, message))
if self.options.with_firebird:
# self.requires("firebird/0.0.0") # TODO add support for firebird
raise ConanInvalidConfiguration("{} firebird {} ".format(prefix, message))
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SOCI_SHARED"] = self.options.shared
tc.variables["SOCI_STATIC"] = not self.options.shared
tc.variables["SOCI_TESTS"] = False
tc.variables["SOCI_CXX11"] = True
tc.variables["SOCI_EMPTY"] = self.options.empty
tc.variables["WITH_SQLITE3"] = self.options.with_sqlite3
tc.variables["WITH_DB2"] = self.options.with_db2
tc.variables["WITH_ODBC"] = self.options.with_odbc
tc.variables["WITH_ORACLE"] = self.options.with_oracle
tc.variables["WITH_FIREBIRD"] = self.options.with_firebird
tc.variables["WITH_MYSQL"] = self.options.with_mysql
tc.variables["WITH_POSTGRESQL"] = self.options.with_postgresql
tc.variables["WITH_BOOST"] = self.options.with_boost
tc.generate()
deps = CMakeDeps(self)
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "LICENSE_1_0.txt", dst=os.path.join(self.package_folder, "licenses"), src=self.source_folder)
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "SOCI")
target_suffix = "" if self.options.shared else "_static"
lib_prefix = "lib" if is_msvc(self) and not self.options.shared else ""
version = Version(self.version)
lib_suffix = "_{}_{}".format(version.major, version.minor) if self.settings.os == "Windows" else ""
# soci_core
self.cpp_info.components["soci_core"].set_property("cmake_target_name", "SOCI::soci_core{}".format(target_suffix))
self.cpp_info.components["soci_core"].libs = ["{}soci_core{}".format(lib_prefix, lib_suffix)]
if self.options.with_boost:
self.cpp_info.components["soci_core"].requires.append("boost::headers")
# soci_empty
if self.options.empty:
self.cpp_info.components["soci_empty"].set_property("cmake_target_name", "SOCI::soci_empty{}".format(target_suffix))
self.cpp_info.components["soci_empty"].libs = ["{}soci_empty{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_empty"].requires = ["soci_core"]
# soci_sqlite3
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].set_property("cmake_target_name", "SOCI::soci_sqlite3{}".format(target_suffix))
self.cpp_info.components["soci_sqlite3"].libs = ["{}soci_sqlite3{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_sqlite3"].requires = ["soci_core", "sqlite3::sqlite3"]
# soci_odbc
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].set_property("cmake_target_name", "SOCI::soci_odbc{}".format(target_suffix))
self.cpp_info.components["soci_odbc"].libs = ["{}soci_odbc{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_odbc"].requires = ["soci_core"]
if self.settings.os == "Windows":
self.cpp_info.components["soci_odbc"].system_libs.append("odbc32")
else:
self.cpp_info.components["soci_odbc"].requires.append("odbc::odbc")
# soci_mysql
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].set_property("cmake_target_name", "SOCI::soci_mysql{}".format(target_suffix))
self.cpp_info.components["soci_mysql"].libs = ["{}soci_mysql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_mysql"].requires = ["soci_core", "libmysqlclient::libmysqlclient"]
# soci_postgresql
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].set_property("cmake_target_name", "SOCI::soci_postgresql{}".format(target_suffix))
self.cpp_info.components["soci_postgresql"].libs = ["{}soci_postgresql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_postgresql"].requires = ["soci_core", "libpq::libpq"]
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "SOCI"
self.cpp_info.names["cmake_find_package_multi"] = "SOCI"
self.cpp_info.components["soci_core"].names["cmake_find_package"] = "soci_core{}".format(target_suffix)
self.cpp_info.components["soci_core"].names["cmake_find_package_multi"] = "soci_core{}".format(target_suffix)
if self.options.empty:
self.cpp_info.components["soci_empty"].names["cmake_find_package"] = "soci_empty{}".format(target_suffix)
self.cpp_info.components["soci_empty"].names["cmake_find_package_multi"] = "soci_empty{}".format(target_suffix)
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package"] = "soci_sqlite3{}".format(target_suffix)
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package_multi"] = "soci_sqlite3{}".format(target_suffix)
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].names["cmake_find_package"] = "soci_odbc{}".format(target_suffix)
self.cpp_info.components["soci_odbc"].names["cmake_find_package_multi"] = "soci_odbc{}".format(target_suffix)
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].names["cmake_find_package"] = "soci_mysql{}".format(target_suffix)
self.cpp_info.components["soci_mysql"].names["cmake_find_package_multi"] = "soci_mysql{}".format(target_suffix)
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].names["cmake_find_package"] = "soci_postgresql{}".format(target_suffix)
self.cpp_info.components["soci_postgresql"].names["cmake_find_package_multi"] = "soci_postgresql{}".format(target_suffix)


@@ -1,39 +0,0 @@
From d491bf7b5040d314ffd0c6310ba01f78ff44c85e Mon Sep 17 00:00:00 2001
From: Rasmus Thomsen <rasmus.thomsen@dampsoft.de>
Date: Fri, 14 Apr 2023 09:16:29 +0200
Subject: [PATCH] Remove hardcoded INSTALL_NAME_DIR for relocatable libraries
on MacOS
---
cmake/SociBackend.cmake | 2 +-
src/core/CMakeLists.txt | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 5d4ef0df..39fe1f77 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -171,7 +171,7 @@ macro(soci_backend NAME)
set_target_properties(${THIS_BACKEND_TARGET}
PROPERTIES
SOVERSION ${${PROJECT_NAME}_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib)
+ )
if(APPLE)
set_target_properties(${THIS_BACKEND_TARGET}
diff --git a/src/core/CMakeLists.txt b/src/core/CMakeLists.txt
index 3e7deeae..f9eae564 100644
--- a/src/core/CMakeLists.txt
+++ b/src/core/CMakeLists.txt
@@ -59,7 +59,6 @@ if (SOCI_SHARED)
PROPERTIES
VERSION ${SOCI_VERSION}
SOVERSION ${SOCI_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib
CLEAN_DIRECT_OUTPUT 1)
endif()
--
2.25.1


@@ -1,24 +0,0 @@
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 0a664667..3fa2ed95 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -31,14 +31,13 @@ macro(soci_backend_deps_found NAME DEPS SUCCESS)
if(NOT DEPEND_FOUND)
list(APPEND DEPS_NOT_FOUND ${dep})
else()
- string(TOUPPER "${dep}" DEPU)
- if( ${DEPU}_INCLUDE_DIR )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIR})
+ if( ${dep}_INCLUDE_DIR )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIR})
endif()
- if( ${DEPU}_INCLUDE_DIRS )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIRS})
+ if( ${dep}_INCLUDE_DIRS )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIRS})
endif()
- list(APPEND DEPS_LIBRARIES ${${DEPU}_LIBRARIES})
+ list(APPEND DEPS_LIBRARIES ${${dep}_LIBRARIES})
endif()
endforeach()


@@ -21,7 +21,6 @@
#define RIPPLE_BASICS_SHAMAP_HASH_H_INCLUDED
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/partitioned_unordered_map.h>
#include <ostream>


@@ -90,9 +90,6 @@ public:
int
getCacheSize() const;
int
getTrackSize() const;
float
getHitRate();
@@ -170,9 +167,6 @@ public:
bool
retrieve(key_type const& key, T& data);
mutex_type&
peekMutex();
std::vector<key_type>
getKeys() const;
@@ -193,11 +187,14 @@ public:
private:
SharedPointerType
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l);
initialFetch(key_type const& key);
void
collect_metrics();
Mutex&
lockPartition(key_type const& key) const;
private:
struct Stats
{
@@ -300,8 +297,8 @@ private:
[[maybe_unused]] clock_type::time_point const& now,
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&);
std::atomic<int>& allRemovals,
Mutex& partitionLock);
[[nodiscard]] std::thread
sweepHelper(
@@ -310,14 +307,12 @@ private:
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&);
Mutex& partitionLock);
beast::Journal m_journal;
clock_type& m_clock;
Stats m_stats;
mutex_type mutable m_mutex;
// Used for logging
std::string m_name;
@@ -328,10 +323,11 @@ private:
clock_type::duration const m_target_age;
// Number of items cached
int m_cache_count;
std::atomic<int> m_cache_count;
cache_type m_cache; // Hold strong reference to recent objects
std::uint64_t m_hits;
std::uint64_t m_misses;
std::atomic<std::uint64_t> m_hits;
std::atomic<std::uint64_t> m_misses;
mutable std::vector<mutex_type> partitionLocks_;
};
} // namespace ripple


@@ -22,6 +22,7 @@
#include <xrpl/basics/IntrusivePointer.ipp>
#include <xrpl/basics/TaggedCache.h>
#include <xrpl/beast/core/CurrentThreadName.h>
namespace ripple {
@@ -60,6 +61,7 @@ inline TaggedCache<
, m_hits(0)
, m_misses(0)
{
partitionLocks_ = std::vector<mutex_type>(m_cache.partitions());
}
template <
@@ -105,8 +107,13 @@ TaggedCache<
KeyEqual,
Mutex>::size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
std::size_t totalSize = 0;
for (size_t i = 0; i < partitionLocks_.size(); ++i)
{
std::lock_guard<Mutex> lock(partitionLocks_[i]);
totalSize += m_cache.map()[i].size();
}
return totalSize;
}
template <
@@ -129,32 +136,7 @@ TaggedCache<
KeyEqual,
Mutex>::getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
}
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
inline int
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
return m_cache_count.load(std::memory_order_relaxed);
}
template <
@@ -177,9 +159,10 @@ TaggedCache<
KeyEqual,
Mutex>::getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
return m_hits * (100.0f / std::max(1.0f, total));
auto const hits = m_hits.load(std::memory_order_relaxed);
auto const misses = m_misses.load(std::memory_order_relaxed);
float const total = float(hits + misses);
return hits * (100.0f / std::max(1.0f, total));
}
template <
@@ -202,9 +185,12 @@ TaggedCache<
KeyEqual,
Mutex>::clear()
{
std::lock_guard lock(m_mutex);
for (auto& mutex : partitionLocks_)
mutex.lock();
m_cache.clear();
m_cache_count = 0;
for (auto& mutex : partitionLocks_)
mutex.unlock();
m_cache_count.store(0, std::memory_order_relaxed);
}
template <
@@ -227,11 +213,9 @@ TaggedCache<
KeyEqual,
Mutex>::reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
m_cache_count = 0;
m_hits = 0;
m_misses = 0;
clear();
m_hits.store(0, std::memory_order_relaxed);
m_misses.store(0, std::memory_order_relaxed);
}
template <
@@ -255,7 +239,7 @@ TaggedCache<
KeyEqual,
Mutex>::touch_if_exists(KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
std::lock_guard<Mutex> lock(lockPartition(key));
auto const iter(m_cache.find(key));
if (iter == m_cache.end())
{
@@ -297,8 +281,6 @@ TaggedCache<
auto const start = std::chrono::steady_clock::now();
{
std::lock_guard lock(m_mutex);
if (m_target_size == 0 ||
(static_cast<int>(m_cache.size()) <= m_target_size))
{
@@ -330,12 +312,13 @@ TaggedCache<
m_cache.map()[p],
allStuffToSweep[p],
allRemovals,
lock));
partitionLocks_[p]));
}
for (std::thread& worker : workers)
worker.join();
m_cache_count -= allRemovals;
int removals = allRemovals.load(std::memory_order_relaxed);
m_cache_count.fetch_sub(removals, std::memory_order_relaxed);
}
// At this point allStuffToSweep will go out of scope outside the lock
// and decrement the reference count on each strong pointer.
@@ -369,7 +352,8 @@ TaggedCache<
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
std::lock_guard lock(m_mutex);
std::lock_guard<Mutex> lock(lockPartition(key));
auto cit = m_cache.find(key);
@@ -382,7 +366,7 @@ TaggedCache<
if (entry.isCached())
{
--m_cache_count;
m_cache_count.fetch_sub(1, std::memory_order_relaxed);
entry.ptr.convertToWeak();
ret = true;
}
@@ -420,17 +404,16 @@ TaggedCache<
{
// Return canonical value, store if needed, refresh in cache
// Return values: true=we had the data already
std::lock_guard lock(m_mutex);
std::lock_guard<Mutex> lock(lockPartition(key));
auto cit = m_cache.find(key);
if (cit == m_cache.end())
{
m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
return false;
}
@@ -479,12 +462,12 @@ TaggedCache<
data = cachedData;
}
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
return true;
}
entry.ptr = data;
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
return false;
}
@@ -560,10 +543,11 @@ TaggedCache<
KeyEqual,
Mutex>::fetch(key_type const& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
std::lock_guard<Mutex> lock(lockPartition(key));
auto ret = initialFetch(key);
if (!ret)
++m_misses;
m_misses.fetch_add(1, std::memory_order_relaxed);
return ret;
}
@@ -627,8 +611,8 @@ TaggedCache<
Mutex>::insert(key_type const& key)
-> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
std::lock_guard<Mutex> lock(lockPartition(key));
auto [it, inserted] = m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
@@ -668,29 +652,6 @@ TaggedCache<
return true;
}
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
inline auto
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::peekMutex() -> mutex_type&
{
return m_mutex;
}
template <
class Key,
class T,
@@ -714,10 +675,13 @@ TaggedCache<
std::vector<key_type> v;
{
std::lock_guard lock(m_mutex);
v.reserve(m_cache.size());
for (auto const& _ : m_cache)
v.push_back(_.first);
for (std::size_t i = 0; i < partitionLocks_.size(); ++i)
{
std::lock_guard<Mutex> lock(partitionLocks_[i]);
for (auto const& entry : m_cache.map()[i])
v.push_back(entry.first);
}
}
return v;
@@ -743,11 +707,12 @@ TaggedCache<
KeyEqual,
Mutex>::rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
auto const hits = m_hits.load(std::memory_order_relaxed);
auto const misses = m_misses.load(std::memory_order_relaxed);
auto const tot = hits + misses;
if (tot == 0)
return 0;
return double(m_hits) / tot;
return 0.0;
return double(hits) / tot;
}
template <
@@ -771,18 +736,16 @@ TaggedCache<
KeyEqual,
Mutex>::fetch(key_type const& digest, Handler const& h)
{
{
std::lock_guard l(m_mutex);
if (auto ret = initialFetch(digest, l))
return ret;
}
std::lock_guard<Mutex> lock(lockPartition(digest));
if (auto ret = initialFetch(digest))
return ret;
auto sle = h();
if (!sle)
return {};
std::lock_guard l(m_mutex);
++m_misses;
m_misses.fetch_add(1, std::memory_order_relaxed);
auto const [it, inserted] =
m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
if (!inserted)
@@ -809,9 +772,10 @@ TaggedCache<
SharedPointerType,
Hash,
KeyEqual,
Mutex>::
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l)
Mutex>::initialFetch(key_type const& key)
{
std::lock_guard<Mutex> lock(lockPartition(key));
auto cit = m_cache.find(key);
if (cit == m_cache.end())
return {};
@@ -819,7 +783,7 @@ TaggedCache<
Entry& entry = cit->second;
if (entry.isCached())
{
++m_hits;
m_hits.fetch_add(1, std::memory_order_relaxed);
entry.touch(m_clock.now());
return entry.ptr.getStrong();
}
@@ -827,12 +791,13 @@ TaggedCache<
if (entry.isCached())
{
// independent of cache size, so not counted as a hit
++m_cache_count;
m_cache_count.fetch_add(1, std::memory_order_relaxed);
entry.touch(m_clock.now());
return entry.ptr.getStrong();
}
m_cache.erase(cit);
return {};
}
@@ -861,10 +826,11 @@ TaggedCache<
{
beast::insight::Gauge::value_type hit_rate(0);
{
std::lock_guard lock(m_mutex);
auto const total(m_hits + m_misses);
auto const hits = m_hits.load(std::memory_order_relaxed);
auto const misses = m_misses.load(std::memory_order_relaxed);
auto const total = hits + misses;
if (total != 0)
hit_rate = (m_hits * 100) / total;
hit_rate = (hits * 100) / total;
}
m_stats.hit_rate.set(hit_rate);
}
@@ -895,12 +861,16 @@ TaggedCache<
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
Mutex& partitionLock)
{
return std::thread([&, this]() {
beast::setCurrentThreadName("sweep-KVCache");
int cacheRemovals = 0;
int mapRemovals = 0;
std::lock_guard<Mutex> lock(partitionLock);
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
stuffToSweep.reserve(partition.size());
@@ -984,12 +954,16 @@ TaggedCache<
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
Mutex& partitionLock)
{
return std::thread([&, this]() {
beast::setCurrentThreadName("sweep-KCache");
int cacheRemovals = 0;
int mapRemovals = 0;
std::lock_guard<Mutex> lock(partitionLock);
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
{
@@ -1024,6 +998,29 @@ TaggedCache<
});
}
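The sweep threads above collect swept entries into `stuffToSweep` so their destructors run after the partition lock is released. A self-contained sketch of that idiom (hypothetical `sweepExpired` helper, not the real sweep code): move the doomed `shared_ptr`s into a local vector under the lock, and let the vector destroy them once the guard goes out of scope.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

// Erase "expired" entries (ones nobody else holds) while locked, but
// defer the actual destruction until after the lock is released.
std::size_t
sweepExpired(
    std::unordered_map<int, std::shared_ptr<std::string>>& cache,
    std::mutex& m)
{
    std::vector<std::shared_ptr<std::string>> toDestroy;
    std::size_t removed = 0;
    {
        std::lock_guard lock(m);
        // Keep references to all the stuff we sweep so that we can
        // destroy it outside the lock.
        toDestroy.reserve(cache.size());
        for (auto it = cache.begin(); it != cache.end();)
        {
            if (it->second.use_count() == 1)  // only the cache holds it
            {
                toDestroy.push_back(std::move(it->second));
                it = cache.erase(it);
                ++removed;
            }
            else
                ++it;
        }
    }  // lock released here; destructors run as toDestroy unwinds
    return removed;
}
```

Keeping destructor work outside the critical section matters when value destructors are expensive, since it shortens the window in which other threads contend on the partition lock.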
template <
class Key,
class T,
bool IsKeyCache,
class SharedWeakUnionPointer,
class SharedPointerType,
class Hash,
class KeyEqual,
class Mutex>
inline Mutex&
TaggedCache<
Key,
T,
IsKeyCache,
SharedWeakUnionPointer,
SharedPointerType,
Hash,
KeyEqual,
Mutex>::lockPartition(key_type const& key) const
{
return partitionLocks_[m_cache.partition_index(key)];
}
} // namespace ripple
#endif
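`lockPartition` maps a key to its partition's mutex, so only operations on the same partition contend. A minimal sketch of the idea (hypothetical `PartitionedMap`, with a plain modulo partitioner standing in for `partition_index`):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <functional>
#include <mutex>
#include <string>
#include <unordered_map>

// Each key hashes to one of N partitions, each guarded by its own
// mutex, so writers to different partitions never block each other.
template <class Key, class Value, std::size_t N = 16>
class PartitionedMap
{
    std::array<std::mutex, N> locks_;
    std::array<std::unordered_map<Key, Value>, N> maps_;

    std::size_t
    index(Key const& k) const
    {
        return std::hash<Key>{}(k) % N;  // stand-in for partition_index
    }

public:
    void
    insert(Key const& k, Value v)
    {
        auto const i = index(k);
        std::lock_guard lock(locks_[i]);  // lock only this partition
        maps_[i].insert_or_assign(k, std::move(v));
    }

    bool
    contains(Key const& k)
    {
        auto const i = index(k);
        std::lock_guard lock(locks_[i]);
        return maps_[i].count(k) != 0;
    }
};
```

The invariant to preserve is that the same partitioner drives both the lock choice and the bucket choice — which is why the diff exposes `partition_index` on the map rather than hashing separately.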

View File

@@ -277,6 +277,12 @@ public:
return map_;
}
partition_map_type const&
map() const
{
return map_;
}
iterator
begin()
{
@@ -321,6 +327,12 @@ public:
return cend();
}
std::size_t
partition_index(key_type const& key) const
{
return partitioner(key);
}
private:
template <class T>
void

View File

@@ -24,32 +24,111 @@
#include <xxhash.h>
#include <array>
#include <cstddef>
#include <new>
#include <type_traits>
#include <cstdint>
#include <optional>
#include <span>
namespace beast {
class xxhasher
{
private:
// requires 64-bit std::size_t
static_assert(sizeof(std::size_t) == 8, "");
public:
using result_type = std::size_t;
XXH3_state_t* state_;
private:
static_assert(sizeof(std::size_t) == 8, "requires 64-bit std::size_t");
// Buffer small inputs internally to avoid the streaming API.
// A 64-byte buffer should be big enough for us.
static constexpr std::size_t INTERNAL_BUFFER_SIZE = 64;
alignas(64) std::array<std::uint8_t, INTERNAL_BUFFER_SIZE> buffer_;
std::span<std::uint8_t> readBuffer_;
std::span<std::uint8_t> writeBuffer_;
std::optional<XXH64_hash_t> seed_;
XXH3_state_t* state_ = nullptr;
void
resetBuffers()
{
writeBuffer_ = std::span{buffer_};
readBuffer_ = {};
}
void
updateHash(void const* data, std::size_t len)
{
if (writeBuffer_.size() < len)
{
flushToState(data, len);
}
else
{
std::memcpy(writeBuffer_.data(), data, len);
writeBuffer_ = writeBuffer_.subspan(len);
readBuffer_ = std::span{
std::begin(buffer_), buffer_.size() - writeBuffer_.size()};
}
}
static XXH3_state_t*
allocState()
{
auto ret = XXH3_createState();
if (ret == nullptr)
throw std::bad_alloc();
throw std::bad_alloc(); // LCOV_EXCL_LINE
return ret;
}
public:
using result_type = std::size_t;
void
flushToState(void const* data, std::size_t len)
{
if (!state_)
{
state_ = allocState();
if (seed_.has_value())
{
XXH3_64bits_reset_withSeed(state_, *seed_);
}
else
{
XXH3_64bits_reset(state_);
}
}
XXH3_64bits_update(state_, readBuffer_.data(), readBuffer_.size());
resetBuffers();
if (data && len)
{
XXH3_64bits_update(state_, data, len);
}
}
result_type
retrieveHash()
{
if (state_)
{
flushToState(nullptr, 0);
return XXH3_64bits_digest(state_);
}
else
{
if (seed_.has_value())
{
return XXH3_64bits_withSeed(
readBuffer_.data(), readBuffer_.size(), *seed_);
}
else
{
return XXH3_64bits(readBuffer_.data(), readBuffer_.size());
}
}
}
public:
static constexpr auto const endian = boost::endian::order::native;
xxhasher(xxhasher const&) = delete;
@@ -58,43 +137,43 @@ public:
xxhasher()
{
state_ = allocState();
XXH3_64bits_reset(state_);
resetBuffers();
}
~xxhasher() noexcept
{
XXH3_freeState(state_);
if (state_)
{
XXH3_freeState(state_);
}
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
explicit xxhasher(Seed seed)
explicit xxhasher(Seed seed) : seed_(seed)
{
state_ = allocState();
XXH3_64bits_reset_withSeed(state_, seed);
resetBuffers();
}
template <
class Seed,
std::enable_if_t<std::is_unsigned<Seed>::value>* = nullptr>
xxhasher(Seed seed, Seed)
xxhasher(Seed seed, Seed) : seed_(seed)
{
state_ = allocState();
XXH3_64bits_reset_withSeed(state_, seed);
resetBuffers();
}
void
operator()(void const* key, std::size_t len) noexcept
{
XXH3_64bits_update(state_, key, len);
updateHash(key, len);
}
explicit
operator std::size_t() noexcept
operator result_type() noexcept
{
return XXH3_64bits_digest(state_);
return retrieveHash();
}
};
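The rewritten `xxhasher` buffers small updates and only falls back to the streaming state when the buffer overflows; both paths must yield the same digest. A sketch of that buffer-then-flush structure, deliberately using incremental FNV-1a instead of XXH3 so it stays self-contained (names here are illustrative, not the real API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>

// Buffer small updates in a fixed array; fall back to incremental
// hashing only once the buffer would overflow. One-shot and chunked
// feeding of the same bytes must produce identical digests.
class BufferedHasher
{
    static constexpr std::size_t BufSize = 64;
    std::uint8_t buf_[BufSize];
    std::size_t used_ = 0;
    std::uint64_t state_ = 14695981039346656037ull;  // FNV offset basis
    bool streaming_ = false;

    void
    mix(void const* data, std::size_t len)
    {
        auto const* p = static_cast<std::uint8_t const*>(data);
        for (std::size_t i = 0; i < len; ++i)
        {
            state_ ^= p[i];
            state_ *= 1099511628211ull;  // FNV prime
        }
    }

public:
    void
    update(void const* data, std::size_t len)
    {
        if (!streaming_ && used_ + len <= BufSize)
        {
            std::memcpy(buf_ + used_, data, len);  // cheap buffered path
            used_ += len;
            return;
        }
        if (!streaming_)
        {
            mix(buf_, used_);  // flush what was buffered, in order
            streaming_ = true;
        }
        mix(data, len);
    }

    std::uint64_t
    digest()
    {
        if (!streaming_)
        {
            mix(buf_, used_);
            streaming_ = true;
        }
        return state_;
    }
};
```

The payoff in the real class is larger: staying on the buffered path lets `retrieveHash` use the XXH3 one-shot functions, avoiding state allocation and the streaming API entirely for short inputs.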

View File

@@ -22,7 +22,6 @@
#include <xrpl/basics/ByteUtilities.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/partitioned_unordered_map.h>
#include <cstdint>

View File

@@ -32,6 +32,7 @@
// If you add an amendment here, then do not forget to increment `numFeatures`
// in include/xrpl/protocol/Feature.h.
XRPL_FIX (PriceOracleOrder, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (MPTDeliveredAmount, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (AMMClawbackRounding, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(TokenEscrow, Supported::yes, VoteBehavior::DefaultNo)
@@ -39,7 +40,7 @@ XRPL_FIX (EnforceNFTokenTrustlineV2, Supported::yes, VoteBehavior::DefaultNo
XRPL_FIX (AMMv1_3, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionedDEX, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(SingleAssetVault, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(SingleAssetVault, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionDelegation, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (PayChanCancelAfter, Supported::yes, VoteBehavior::DefaultNo)
// Check flags in Credential transactions

View File

@@ -678,6 +678,61 @@ private:
oracle.set(
UpdateArg{.series = {{"XRP", "USD", 742, 2}}, .fee = baseFee});
}
for (bool const withFixOrder : {false, true})
{
// Should be same order as creation
Env env(
*this,
withFixOrder ? testable_amendments()
: testable_amendments() - fixPriceOracleOrder);
auto const baseFee =
static_cast<int>(env.current()->fees().base.drops());
auto test = [&](Env& env, DataSeries const& series) {
env.fund(XRP(1'000), owner);
Oracle oracle(
env, {.owner = owner, .series = series, .fee = baseFee});
BEAST_EXPECT(oracle.exists());
auto sle = env.le(keylet::oracle(owner, oracle.documentID()));
BEAST_EXPECT(
sle->getFieldArray(sfPriceDataSeries).size() ==
series.size());
auto const beforeQuoteAssetName1 =
sle->getFieldArray(sfPriceDataSeries)[0]
.getFieldCurrency(sfQuoteAsset)
.getText();
auto const beforeQuoteAssetName2 =
sle->getFieldArray(sfPriceDataSeries)[1]
.getFieldCurrency(sfQuoteAsset)
.getText();
oracle.set(UpdateArg{.series = series, .fee = baseFee});
sle = env.le(keylet::oracle(owner, oracle.documentID()));
auto const afterQuoteAssetName1 =
sle->getFieldArray(sfPriceDataSeries)[0]
.getFieldCurrency(sfQuoteAsset)
.getText();
auto const afterQuoteAssetName2 =
sle->getFieldArray(sfPriceDataSeries)[1]
.getFieldCurrency(sfQuoteAsset)
.getText();
if (env.current()->rules().enabled(fixPriceOracleOrder))
{
BEAST_EXPECT(afterQuoteAssetName1 == beforeQuoteAssetName1);
BEAST_EXPECT(afterQuoteAssetName2 == beforeQuoteAssetName2);
}
else
{
BEAST_EXPECT(afterQuoteAssetName1 != beforeQuoteAssetName1);
BEAST_EXPECT(afterQuoteAssetName2 != beforeQuoteAssetName2);
}
};
test(env, {{"XRP", "USD", 742, 2}, {"XRP", "EUR", 711, 2}});
}
}
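The test above checks that with `fixPriceOracleOrder` an oracle update preserves the stored order of the price data series, while without it the order can change. A hypothetical sketch of that gated behavior (illustrative types only, not the real `Oracle` code): with the fix, the update walks the stored series and refreshes prices in place; without it, the series is rebuilt from the update payload.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Series = std::vector<std::pair<std::string, int>>;  // (quoteAsset, price)

// fixEnabled == true: keep stored order, refresh prices in place.
// fixEnabled == false: rebuild from the payload, order may differ.
Series
applyUpdate(Series const& stored, Series const& update, bool fixEnabled)
{
    if (!fixEnabled)
        return update;  // rebuilt wholesale: order follows the payload

    std::map<std::string, int> newPrices(update.begin(), update.end());
    Series result = stored;
    for (auto& [asset, price] : result)
        if (auto it = newPrices.find(asset); it != newPrices.end())
            price = it->second;  // refresh in place, order unchanged
    return result;
}
```

This mirrors the test's before/after comparison of `sfQuoteAsset` at indices 0 and 1: equal under the fix, different without it.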
void

View File

@@ -58,10 +58,10 @@ public:
// Insert an item, retrieve it, and age it so it gets purged.
{
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
BEAST_EXPECT(!c.insert(1, "one"));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
{
std::string s;
@@ -72,7 +72,7 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
// Insert an item, maintain a strong pointer, age it, and
@@ -80,7 +80,7 @@ public:
{
BEAST_EXPECT(!c.insert(2, "two"));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
{
auto p = c.fetch(2);
@@ -88,14 +88,14 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
}
// Make sure it's gone now that our reference is gone
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
// Insert the same key/value pair and make sure we get the same result
@@ -111,7 +111,7 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
// Put an object in but keep a strong pointer to it, advance the clock a
@@ -121,24 +121,24 @@ public:
// Put an object in
BEAST_EXPECT(!c.insert(4, "four"));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
{
// Keep a strong pointer to it
auto const p1 = c.fetch(4);
BEAST_EXPECT(p1 != nullptr);
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
// Advance the clock a lot
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
// Canonicalize a new object with the same key
auto p2 = std::make_shared<std::string>("four");
BEAST_EXPECT(c.canonicalize_replace_client(4, p2));
BEAST_EXPECT(c.getCacheSize() == 1);
BEAST_EXPECT(c.getTrackSize() == 1);
BEAST_EXPECT(c.size() == 1);
// Make sure we get the original object
BEAST_EXPECT(p1.get() == p2.get());
}
@@ -146,7 +146,7 @@ public:
++clock;
c.sweep();
BEAST_EXPECT(c.getCacheSize() == 0);
BEAST_EXPECT(c.getTrackSize() == 0);
BEAST_EXPECT(c.size() == 0);
}
}
};
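The `canonicalize_replace_client` checks above hinge on one behavior: if the cache already holds a live object for the key, the caller's pointer is swapped for the cached one, so every holder shares a single instance. A free-standing sketch of that pattern over a plain `weak_ptr` map (hypothetical helper, not the real `TaggedCache` method):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Returns true if the caller's pointer was replaced by an existing
// cached object (the "replace client" semantics the test checks).
template <class Key, class T>
bool
canonicalize(
    std::unordered_map<Key, std::weak_ptr<T>>& cache,
    Key const& key,
    std::shared_ptr<T>& value)
{
    if (auto existing = cache[key].lock())
    {
        value = existing;  // client adopts the canonical object
        return true;
    }
    cache[key] = value;  // first holder becomes the canonical object
    return false;
}
```

This is why `p1.get() == p2.get()` holds in the test: the second insert with the same key does not create a second live object.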

View File

@@ -0,0 +1,201 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpl/beast/hash/xxhasher.h>
#include <xrpl/beast/unit_test.h>
namespace beast {
class XXHasher_test : public unit_test::suite
{
public:
void
testWithoutSeed()
{
testcase("Without Seed");
xxhasher hasher{};
std::string objectToHash{"Hello, xxHash!"};
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
16042857369214894119ULL);
}
void
testWithSeed()
{
testcase("With Seed");
xxhasher hasher{static_cast<std::uint32_t>(102)};
std::string objectToHash{"Hello, xxHash!"};
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
14440132435660934800ULL);
}
void
testWithTwoSeeds()
{
testcase("With Two Seeds");
xxhasher hasher{
static_cast<std::uint32_t>(102), static_cast<std::uint32_t>(103)};
std::string objectToHash{"Hello, xxHash!"};
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
14440132435660934800ULL);
}
void
testBigObjectWithMultiupleSmallUpdatesWithoutSeed()
{
testcase("Big Object With Multiple Small Updates Without Seed");
xxhasher hasher{};
std::string objectToHash{"Hello, xxHash!"};
for (int i = 0; i < 100; i++)
{
hasher(objectToHash.data(), objectToHash.size());
}
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
15296278154063476002ULL);
}
void
testBigObjectWithMultiupleSmallUpdatesWithSeed()
{
testcase("Big Object With Multiple Small Updates With Seed");
xxhasher hasher{static_cast<std::uint32_t>(103)};
std::string objectToHash{"Hello, xxHash!"};
for (int i = 0; i < 100; i++)
{
hasher(objectToHash.data(), objectToHash.size());
}
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
17285302196561698791ULL);
}
void
testBigObjectWithSmallAndBigUpdatesWithoutSeed()
{
testcase("Big Object With Small And Big Updates Without Seed");
xxhasher hasher{};
std::string objectToHash{"Hello, xxHash!"};
std::string bigObject;
for (int i = 0; i < 20; i++)
{
bigObject += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
hasher(bigObject.data(), bigObject.size());
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
1865045178324729219ULL);
}
void
testBigObjectWithSmallAndBigUpdatesWithSeed()
{
testcase("Big Object With Small And Big Updates With Seed");
xxhasher hasher{static_cast<std::uint32_t>(103)};
std::string objectToHash{"Hello, xxHash!"};
std::string bigObject;
for (int i = 0; i < 20; i++)
{
bigObject += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
hasher(bigObject.data(), bigObject.size());
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
16189862915636005281ULL);
}
void
testBigObjectWithOneUpdateWithoutSeed()
{
testcase("Big Object With One Update Without Seed");
xxhasher hasher{};
std::string objectToHash;
for (int i = 0; i < 100; i++)
{
objectToHash += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
15296278154063476002ULL);
}
void
testBigObjectWithOneUpdateWithSeed()
{
testcase("Big Object With One Update With Seed");
xxhasher hasher{static_cast<std::uint32_t>(103)};
std::string objectToHash;
for (int i = 0; i < 100; i++)
{
objectToHash += "Hello, xxHash!";
}
hasher(objectToHash.data(), objectToHash.size());
BEAST_EXPECT(
static_cast<xxhasher::result_type>(hasher) ==
17285302196561698791ULL);
}
void
run() override
{
testWithoutSeed();
testWithSeed();
testWithTwoSeeds();
testBigObjectWithMultiupleSmallUpdatesWithoutSeed();
testBigObjectWithMultiupleSmallUpdatesWithSeed();
testBigObjectWithSmallAndBigUpdatesWithoutSeed();
testBigObjectWithSmallAndBigUpdatesWithSeed();
testBigObjectWithOneUpdateWithoutSeed();
testBigObjectWithOneUpdateWithSeed();
}
};
BEAST_DEFINE_TESTSUITE(XXHasher, beast_core, beast);
} // namespace beast

View File

@@ -1,980 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <test/jtx/Env.h>
#include <xrpld/overlay/Peer.h>
#include <xrpld/overlay/ReduceRelayCommon.h>
#include <xrpld/overlay/Slot.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/protocol/SecretKey.h>
#include <chrono>
#include <cstdint>
#include <functional>
#include <optional>
#include <vector>
namespace ripple {
namespace test {
class TestHandler : public reduce_relay::SquelchHandler
{
public:
using squelch_method =
std::function<void(PublicKey const&, Peer::id_t, std::uint32_t)>;
using squelchAll_method = std::function<
void(PublicKey const&, std::uint32_t, std::function<void(Peer::id_t)>)>;
using unsquelch_method = std::function<void(PublicKey const&, Peer::id_t)>;
squelch_method squelch_f_;
squelchAll_method squelchAll_f_;
unsquelch_method unsquelch_f_;
TestHandler(
squelch_method const& squelch_f,
squelchAll_method const& squelchAll_f,
unsquelch_method const& unsquelch_f)
: squelch_f_(squelch_f)
, squelchAll_f_(squelchAll_f)
, unsquelch_f_(unsquelch_f)
{
}
TestHandler(TestHandler& copy)
{
squelch_f_ = copy.squelch_f_;
squelchAll_f_ = copy.squelchAll_f_;
unsquelch_f_ = copy.unsquelch_f_;
}
void
squelch(PublicKey const& validator, Peer::id_t peer, std::uint32_t duration)
const override
{
squelch_f_(validator, peer, duration);
}
void
squelchAll(
PublicKey const& validator,
std::uint32_t duration,
std::function<void(Peer::id_t)> callback) override
{
squelchAll_f_(validator, duration, callback);
}
void
unsquelch(PublicKey const& validator, Peer::id_t peer) const override
{
unsquelch_f_(validator, peer);
}
};
class EnhancedSquelchingTestSlots : public reduce_relay::Slots
{
using Slots = reduce_relay::Slots;
public:
EnhancedSquelchingTestSlots(
Logs& logs,
reduce_relay::SquelchHandler& handler,
Config const& config,
reduce_relay::Slots::clock_type& clock)
: Slots(logs, handler, config, clock)
{
}
Slots::slots_map const&
getSlots(bool trusted) const
{
if (trusted)
return trustedSlots_;
return untrustedSlots_;
}
hash_map<PublicKey, ValidatorInfo> const&
getConsideredValidators()
{
return consideredValidators_;
}
std::optional<PublicKey>
updateConsideredValidator(PublicKey const& validator, Peer::id_t peerID)
{
return Slots::updateConsideredValidator(validator, peerID);
}
void
squelchValidator(PublicKey const& validatorKey, Peer::id_t peerID)
{
Slots::registerSquelchedValidator(validatorKey, peerID);
}
bool
validatorSquelched(PublicKey const& validatorKey)
{
return Slots::expireAndIsValidatorSquelched(validatorKey);
}
bool
peerSquelched(PublicKey const& validatorKey, Peer::id_t peerID)
{
return Slots::expireAndIsPeerSquelched(validatorKey, peerID);
}
};
class enhanced_squelch_test : public beast::unit_test::suite
{
public:
TestHandler::squelch_method noop_squelch =
[&](PublicKey const&, Peer::id_t, std::uint32_t) {
BEAST_EXPECTS(false, "unexpected call to squelch handler");
};
TestHandler::squelchAll_method noop_squelchAll =
[&](PublicKey const&, std::uint32_t, std::function<void(Peer::id_t)>) {
BEAST_EXPECTS(false, "unexpected call to squelchAll handler");
};
TestHandler::unsquelch_method noop_unsquelch = [&](PublicKey const&,
Peer::id_t) {
BEAST_EXPECTS(false, "unexpected call to unsquelch handler");
};
// noop_handler is passed as a placeholder Handler to slots
TestHandler noop_handler = {
noop_squelch,
noop_squelchAll,
noop_unsquelch,
};
jtx::Env env_;
enhanced_squelch_test() : env_(*this)
{
env_.app().config().VP_REDUCE_RELAY_ENHANCED_SQUELCH_ENABLE = true;
}
void
testConfig()
{
testcase("Test Config - enabled enhanced squelching");
Config c;
std::string toLoad(R"rippleConfig(
[reduce_relay]
vp_enhanced_squelch_enable=1
)rippleConfig");
c.loadFromString(toLoad);
BEAST_EXPECT(c.VP_REDUCE_RELAY_ENHANCED_SQUELCH_ENABLE == true);
toLoad = R"rippleConfig(
[reduce_relay]
vp_enhanced_squelch_enable=0
)rippleConfig";
c.loadFromString(toLoad);
BEAST_EXPECT(c.VP_REDUCE_RELAY_ENHANCED_SQUELCH_ENABLE == false);
toLoad = R"rippleConfig(
[reduce_relay]
)rippleConfig";
c.loadFromString(toLoad);
BEAST_EXPECT(c.VP_REDUCE_RELAY_ENHANCED_SQUELCH_ENABLE == false);
}
/** Tests tracking for squelched validators and peers */
void
testSquelchTracking()
{
testcase("squelchTracking");
Peer::id_t const squelchedPeerID = 0;
Peer::id_t const newPeerID = 1;
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), noop_handler, env_.app().config(), stopwatch);
auto const publicKey = randomKeyPair(KeyType::ed25519).first;
// a new key should not be squelched
BEAST_EXPECTS(
!slots.validatorSquelched(publicKey), "validator squelched");
slots.squelchValidator(publicKey, squelchedPeerID);
// after squelching a peer, the validator must be squelched
BEAST_EXPECTS(
slots.validatorSquelched(publicKey), "validator not squelched");
// the peer must also be squelched
BEAST_EXPECTS(
slots.peerSquelched(publicKey, squelchedPeerID),
"peer not squelched");
// a new peer must not be squelched
BEAST_EXPECTS(
!slots.peerSquelched(publicKey, newPeerID), "new peer squelched");
// advance the manual clock to after expiration
stopwatch.advance(
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT +
std::chrono::seconds{11});
// validator squelch should expire
BEAST_EXPECTS(
!slots.validatorSquelched(publicKey),
"validator squelched after expiry");
// peer squelch should also expire
BEAST_EXPECTS(
!slots.peerSquelched(publicKey, squelchedPeerID),
"peer squelched after expiry");
}
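The squelch-tracking test exercises per-validator and per-peer squelch state with lazy, clock-driven expiry. A sketch of that bookkeeping with illustrative types (an integer clock and string keys stand in for `Stopwatch` and `PublicKey`; these names are not the real API):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Each squelched validator records its squelched peers plus a deadline;
// queries drop entries lazily once the squelch has expired.
struct SquelchBook
{
    struct Entry
    {
        std::set<int> peers;
        long expiresAt = 0;
    };
    std::map<std::string, Entry> entries;

    void
    squelch(std::string const& validator, int peer, long now, long duration)
    {
        auto& e = entries[validator];
        e.peers.insert(peer);
        e.expiresAt = now + duration;
    }

    bool
    validatorSquelched(std::string const& validator, long now)
    {
        auto it = entries.find(validator);
        if (it == entries.end())
            return false;
        if (now > it->second.expiresAt)
        {
            entries.erase(it);  // lazy expiry on lookup
            return false;
        }
        return true;
    }

    bool
    peerSquelched(std::string const& validator, int peer, long now)
    {
        return validatorSquelched(validator, now) &&
            entries[validator].peers.count(peer) != 0;
    }
};
```

As in the test: squelching one peer marks the validator squelched, other peers stay unsquelched, and advancing the clock past the deadline clears both.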
void
testUpdateValidatorSlot_newValidator()
{
testcase("updateValidatorSlot_newValidator");
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), noop_handler, env_.app().config(), stopwatch);
Peer::id_t const peerID = 1;
auto const validator = randomKeyPair(KeyType::ed25519).first;
uint256 const message{0};
slots.updateUntrustedValidatorSlot(message, validator, peerID);
// adding an untrusted slot does not affect trusted slots
BEAST_EXPECTS(
slots.getSlots(true).size() == 0, "trusted slots changed");
// we expect that the validator was not added to untrusted slots
BEAST_EXPECTS(
slots.getSlots(false).size() == 0, "untrusted slot changed");
// we expect that the validator was added to the consideration list
BEAST_EXPECTS(
slots.getConsideredValidators().contains(validator),
"new validator was not considered");
}
void
testUpdateValidatorSlot_squelchedValidator()
{
testcase("testUpdateValidatorSlot_squelchedValidator");
Peer::id_t const squelchedPeerID = 0;
Peer::id_t const newPeerID = 1;
auto const validator = randomKeyPair(KeyType::ed25519).first;
TestHandler::squelch_method const squelch_f =
[&](PublicKey const& key, Peer::id_t id, std::uint32_t duration) {
BEAST_EXPECTS(
key == validator,
"squelch called for unknown validator key");
BEAST_EXPECTS(
id == newPeerID, "squelch called for the wrong peer");
};
TestHandler handler{squelch_f, noop_squelchAll, noop_unsquelch};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
slots.squelchValidator(validator, squelchedPeerID);
// this should not trigger squelch assertions, the peer is squelched
slots.updateUntrustedValidatorSlot(
sha512Half(validator), validator, squelchedPeerID);
slots.updateUntrustedValidatorSlot(
sha512Half(validator), validator, newPeerID);
// the squelched peer remained squelched
BEAST_EXPECTS(
slots.peerSquelched(validator, squelchedPeerID),
"peer not squelched");
// because the validator was squelched, the new peer was also squelched
BEAST_EXPECTS(
slots.peerSquelched(validator, newPeerID),
"new peer was not squelched");
// a squelched validator must not be considered
BEAST_EXPECTS(
!slots.getConsideredValidators().contains(validator),
"squelched validator was added for consideration");
}
void
testUpdateValidatorSlot_slotsFull()
{
testcase("updateValidatorSlot_slotsFull");
Peer::id_t const peerID = 1;
// while there are open untrusted slots, no calls should be made to
// squelch any validators
TestHandler handler{noop_handler};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
// saturate validator slots
auto const validators = fillUntrustedSlots(slots);
// adding an untrusted slot does not affect trusted slots
BEAST_EXPECTS(
slots.getSlots(true).size() == 0, "trusted slots changed");
// simulate additional messages from already selected validators
for (auto const& validator : validators)
for (int i = 0; i < reduce_relay::MAX_MESSAGE_THRESHOLD; ++i)
slots.updateUntrustedValidatorSlot(
sha512Half(validator) + static_cast<uint256>(i),
validator,
peerID);
// an untrusted slot was added for each validator
BEAST_EXPECT(
slots.getSlots(false).size() == reduce_relay::MAX_UNTRUSTED_SLOTS);
for (auto const& validator : validators)
BEAST_EXPECTS(
!slots.validatorSquelched(validator),
"selected validator was squelched");
auto const newValidator = randomKeyPair(KeyType::ed25519).first;
// once slots are full squelchAll must be called for new peer/validator
handler.squelchAll_f_ = [&](PublicKey const& key,
std::uint32_t,
std::function<void(Peer::id_t)> callback) {
BEAST_EXPECTS(
key == newValidator, "unexpected validator squelched");
callback(peerID);
};
slots.updateUntrustedValidatorSlot(
sha512Half(newValidator), newValidator, peerID);
// Once the slots are saturated every other validator is squelched
BEAST_EXPECTS(
slots.validatorSquelched(newValidator),
"untrusted validator not squelched");
BEAST_EXPECTS(
slots.peerSquelched(newValidator, peerID),
"peer for untrusted validator not squelched");
}
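The slots-full test pins down the admission rule: while untrusted slots remain, validators are admitted; once full, a new validator is squelched across all its peers via `squelchAll`. A sketch of that rule with illustrative names (not the real `Slots` interface):

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <string>

// The first maxSlots distinct untrusted validators get a slot; once
// full, newcomers are squelched instead of being admitted.
class UntrustedSlots
{
    std::size_t const maxSlots_;
    std::set<std::string> slots_;
    std::set<std::string> squelched_;

public:
    explicit UntrustedSlots(std::size_t maxSlots) : maxSlots_(maxSlots)
    {
    }

    // Returns true if the validator holds (or is given) a slot.
    bool
    admit(std::string const& validator)
    {
        if (slots_.count(validator))
            return true;  // already selected: never squelched
        if (slots_.size() < maxSlots_)
        {
            slots_.insert(validator);
            return true;
        }
        squelched_.insert(validator);  // stand-in for squelchAll(...)
        return false;
    }

    bool
    squelched(std::string const& validator) const
    {
        return squelched_.count(validator) != 0;
    }
};
```

This matches the test's two assertions: selected validators are never squelched no matter how many messages they send, while the first validator past `MAX_UNTRUSTED_SLOTS` is squelched together with its peer.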
void
testDeleteIdlePeers_deleteIdleSlots()
{
testcase("deleteIdlePeers");
TestHandler handler{noop_handler};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
auto keys = fillUntrustedSlots(slots);
// verify that squelchAll is called for each idled slot validator
handler.squelchAll_f_ = [&](PublicKey const& actualKey,
std::uint32_t duration,
std::function<void(Peer::id_t)> callback) {
for (auto it = keys.begin(); it != keys.end(); ++it)
{
if (*it == actualKey)
{
keys.erase(it);
return;
}
}
BEAST_EXPECTS(false, "unexpected key passed to squelchAll");
};
BEAST_EXPECTS(
slots.getSlots(false).size() == reduce_relay::MAX_UNTRUSTED_SLOTS,
"unexpected number of untrusted slots");
// advance the manual clock to after slot expiration
stopwatch.advance(
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT +
std::chrono::seconds{1});
slots.deleteIdlePeers();
BEAST_EXPECTS(
slots.getSlots(false).size() == 0,
"unexpected number of untrusted slots");
BEAST_EXPECTS(keys.empty(), "not all validators were squelched");
}
void
testDeleteIdlePeers_deleteIdleUntrustedPeer()
{
testcase("deleteIdleUntrustedPeer");
Peer::id_t const peerID = 1;
Peer::id_t const peerID2 = 2;
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), noop_handler, env_.app().config(), stopwatch);
// fill one untrusted validator slot
auto const validator = fillUntrustedSlots(slots, 1)[0];
BEAST_EXPECTS(
slots.getSlots(false).size() == 1,
"unexpected number of untrusted slots");
slots.updateSlotAndSquelch(
sha512Half(validator) + static_cast<uint256>(100),
validator,
peerID,
false);
slots.updateSlotAndSquelch(
sha512Half(validator) + static_cast<uint256>(100),
validator,
peerID2,
false);
slots.deletePeer(peerID, true);
auto const slotPeers = getUntrustedSlotPeers(validator, slots);
BEAST_EXPECTS(
slotPeers.size() == 1, "untrusted validator slot is missing");
BEAST_EXPECTS(
!slotPeers.contains(peerID),
"peer was not removed from untrusted slots");
BEAST_EXPECTS(
slotPeers.contains(peerID2),
"peer was removed from untrusted slots");
}
/** Test that untrusted validator slots are correctly updated by
* updateSlotAndSquelch
*/
void
testUpdateSlotAndSquelch_untrustedValidator()
{
testcase("updateUntrustedValidatorSlot");
TestHandler handler{noop_handler};
handler.squelch_f_ = [](PublicKey const&, Peer::id_t, std::uint32_t) {};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
// peers that will be source of validator messages
std::vector<Peer::id_t> peers = {};
// prepare n+1 peers, we expect the n+1st peer will be squelched
for (int i = 0; i <
env_.app().config().VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS + 1;
++i)
peers.push_back(i);
auto const validator = fillUntrustedSlots(slots, 1)[0];
// Squelching logic resets all counters each time a new peer is added
// Therefore we need to populate counters for each peer before sending
// new messages
for (auto const& peer : peers)
{
auto const now = stopwatch.now();
slots.updateSlotAndSquelch(
sha512Half(validator) +
static_cast<uint256>(now.time_since_epoch().count()),
validator,
peer,
false);
stopwatch.advance(std::chrono::milliseconds{10});
}
// simulate new, unique validator messages sent by peers
for (auto const& peer : peers)
for (int i = 0; i < reduce_relay::MAX_MESSAGE_THRESHOLD + 1; ++i)
{
auto const now = stopwatch.now();
slots.updateSlotAndSquelch(
sha512Half(validator) +
static_cast<uint256>(now.time_since_epoch().count()),
validator,
peer,
false);
stopwatch.advance(std::chrono::milliseconds{10});
}
auto const slotPeers = getUntrustedSlotPeers(validator, slots);
BEAST_EXPECTS(
slotPeers.size() ==
env_.app().config().VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS +
1,
"untrusted validator slot is missing");
int selected = 0;
int squelched = 0;
for (auto const& [_, info] : slotPeers)
{
switch (info.state)
{
case reduce_relay::PeerState::Selected:
++selected;
break;
case reduce_relay::PeerState::Squelched:
++squelched;
break;
case reduce_relay::PeerState::Counting:
BEAST_EXPECTS(
false, "peer should not be in counting state");
}
}
BEAST_EXPECTS(squelched == 1, "expected one squelched peer");
BEAST_EXPECTS(
selected ==
env_.app().config().VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS,
"wrong number of peers selected");
}
void
testUpdateConsideredValidator_new()
{
testcase("testUpdateConsideredValidator_new");
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), noop_handler, env_.app().config(), stopwatch);
// insert some random validator key
auto const validator = randomKeyPair(KeyType::ed25519).first;
Peer::id_t const peerID = 0;
Peer::id_t const peerID2 = 1;
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID),
"validator was selected with insufficient number of peers");
BEAST_EXPECTS(
slots.getConsideredValidators().contains(validator),
"new validator was not added for consideration");
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID),
"validator was selected with insufficient number of peers");
// expect that a peer will be registered once as a message source
BEAST_EXPECTS(
slots.getConsideredValidators().at(validator).peers.size() == 1,
"duplicate peer was registered");
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID2),
"validator was selected with insufficient number of peers");
// expect that each distinct peer will be registered
BEAST_EXPECTS(
slots.getConsideredValidators().at(validator).peers.size() == 2,
"distinct peers were not registered");
}
void
testUpdateConsideredValidator_idle()
{
testcase("testUpdateConsideredValidator_idle");
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), noop_handler, env_.app().config(), stopwatch);
// insert some random validator key
auto const validator = randomKeyPair(KeyType::ed25519).first;
Peer::id_t peerID = 0;
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID),
"validator was selected with insufficient number of peers");
BEAST_EXPECTS(
slots.getConsideredValidators().contains(validator),
"new validator was not added for consideration");
auto const state = slots.getConsideredValidators().at(validator);
// simulate a validator sending a new message before the idle timer
stopwatch.advance(reduce_relay::PEER_IDLED - std::chrono::seconds(1));
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID),
"validator was selected with insufficient number of peers");
auto const newState = slots.getConsideredValidators().at(validator);
BEAST_EXPECTS(
state.count + 1 == newState.count,
"non-idling validator was updated");
// simulate a validator idling
stopwatch.advance(reduce_relay::PEER_IDLED + std::chrono::seconds(1));
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID),
"validator was selected with insufficient number of peers");
}
void
testUpdateConsideredValidator_selectQualifying()
{
testcase("testUpdateConsideredValidator_selectQualifying");
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), noop_handler, env_.app().config(), stopwatch);
// insert some random validator key
auto const validator = randomKeyPair(KeyType::ed25519).first;
Peer::id_t peerID = 0;
for (int i = 0; i < reduce_relay::MAX_MESSAGE_THRESHOLD - 1; ++i)
{
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID),
"validator was selected before reaching message threshold");
stopwatch.advance(
reduce_relay::PEER_IDLED - std::chrono::seconds(1));
}
auto const consideredValidator =
slots.updateConsideredValidator(validator, peerID);
BEAST_EXPECTS(
consideredValidator && *consideredValidator == validator,
"expected validator was not selected");
// expect that the selected validator was removed from the considered list
BEAST_EXPECTS(
!slots.getConsideredValidators().contains(validator),
"selected validator was not removed from considered list");
}
void
testCleanConsideredValidators_resetIdle()
{
testcase("testCleanConsideredValidators_resetIdle");
auto const validator = randomKeyPair(KeyType::ed25519).first;
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), noop_handler, env_.app().config(), stopwatch);
// send enough messages for a slot to meet peer requirements
for (int i = 0;
i < env_.app().config().VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS;
++i)
slots.updateUntrustedValidatorSlot(
sha512Half(validator) + static_cast<uint256>(i), validator, i);
// send enough messages from some peer to be one message away from
// meeting the selection criteria
for (int i = 0; i < reduce_relay::MAX_MESSAGE_THRESHOLD -
(env_.app()
.config()
.VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS +
1);
++i)
slots.updateUntrustedValidatorSlot(
sha512Half(validator) + static_cast<uint256>(i), validator, 0);
BEAST_EXPECTS(
slots.getConsideredValidators().at(validator).count ==
reduce_relay::MAX_MESSAGE_THRESHOLD - 1,
"considered validator information is in an invalid state");
BEAST_EXPECTS(
slots.getConsideredValidators().at(validator).peers.size() ==
env_.app().config().VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS,
"considered validator information is in an invalid state");
stopwatch.advance(reduce_relay::PEER_IDLED + std::chrono::seconds{1});
// deleteIdlePeers must reset the progress of a validator that idled
slots.deleteIdlePeers();
slots.updateUntrustedValidatorSlot(
sha512Half(validator) + static_cast<uint256>(1), validator, 0);
// we expect that the validator was not selected
BEAST_EXPECTS(
slots.getSlots(false).size() == 0, "untrusted slot was created");
BEAST_EXPECTS(
slots.getConsideredValidators().at(validator).count == 1,
"considered validator information is in an invalid state");
BEAST_EXPECTS(
slots.getConsideredValidators().at(validator).peers.size() == 1,
"considered validator information is in an invalid state");
}
void
testCleanConsideredValidators_deletePoorlyConnected()
{
testcase("cleanConsideredValidators_deletePoorlyConnected");
auto const validator = randomKeyPair(KeyType::ed25519).first;
Peer::id_t const peerID = 0;
TestHandler handler{noop_handler};
// verify that squelchAll is called for poorly connected validator
handler.squelchAll_f_ = [&](PublicKey const& actualKey,
std::uint32_t duration,
std::function<void(Peer::id_t)> callback) {
BEAST_EXPECTS(
actualKey == validator, "unexpected key passed to squelchAll");
callback(peerID);
};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
// send enough messages from a single peer to exceed the message threshold
for (int i = 0; i < 2 * reduce_relay::MAX_MESSAGE_THRESHOLD + 1; ++i)
slots.updateUntrustedValidatorSlot(
sha512Half(validator) + static_cast<uint256>(i),
validator,
peerID);
stopwatch.advance(reduce_relay::PEER_IDLED + std::chrono::seconds{1});
// deleteIdlePeers must squelch the validator as it failed to reach
// peering requirements
slots.deleteIdlePeers();
BEAST_EXPECTS(
slots.getConsideredValidators().size() == 0,
"poorly connected validator was not deleted");
}
void
testCleanConsideredValidators_deleteSilent()
{
testcase("cleanConsideredValidators_deleteSilent");
// insert some random validator keys
auto const idleValidator = randomKeyPair(KeyType::ed25519).first;
auto const validator = randomKeyPair(KeyType::ed25519).first;
Peer::id_t const peerID = 0;
TestHandler handler{noop_handler};
// verify that squelchAll is called for idle validator
handler.squelchAll_f_ = [&](PublicKey const& actualKey,
std::uint32_t duration,
std::function<void(Peer::id_t)> callback) {
BEAST_EXPECTS(
actualKey == idleValidator,
"unexpected key passed to squelchAll");
callback(peerID);
};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
BEAST_EXPECTS(
!slots.updateConsideredValidator(idleValidator, peerID),
"validator was selected with insufficient number of peers");
BEAST_EXPECTS(
slots.getConsideredValidators().contains(idleValidator),
"new validator was not added for consideration");
// simulate a validator idling
stopwatch.advance(
reduce_relay::MAX_UNTRUSTED_VALIDATOR_IDLE +
std::chrono::seconds(1));
BEAST_EXPECTS(
!slots.updateConsideredValidator(validator, peerID),
"validator was selected with insufficient number of peers");
slots.deleteIdlePeers();
BEAST_EXPECTS(
!slots.getConsideredValidators().contains(idleValidator),
"late validator was not removed");
BEAST_EXPECTS(
slots.getConsideredValidators().contains(validator),
"timely validator was removed");
}
void
testSquelchUntrustedValidator_consideredListCleared()
{
testcase("testSquelchUntrustedValidator_consideredListCleared");
auto const validator = randomKeyPair(KeyType::ed25519).first;
Peer::id_t const peerID = 0;
TestHandler handler{noop_handler};
// verify that squelchAll is called for the squelched validator
handler.squelchAll_f_ = [&](PublicKey const& actualKey,
std::uint32_t duration,
std::function<void(Peer::id_t)> callback) {
BEAST_EXPECTS(
actualKey == validator, "unexpected key passed to squelchAll");
};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
// add the validator to the considered list
slots.updateUntrustedValidatorSlot(
sha512Half(validator), validator, peerID);
BEAST_EXPECTS(
slots.getConsideredValidators().contains(validator),
"validator was not added to considered list");
slots.squelchUntrustedValidator(validator);
BEAST_EXPECTS(
!slots.getConsideredValidators().contains(validator),
"validator was not removed from considered list");
}
void
testSquelchUntrustedValidator_slotEvicted()
{
testcase("testSquelchUntrustedValidator_slotEvicted");
TestHandler handler{noop_handler};
TestStopwatch stopwatch;
EnhancedSquelchingTestSlots slots(
env_.app().logs(), handler, env_.app().config(), stopwatch);
// assign a slot to the untrusted validator
auto const validators = fillUntrustedSlots(slots, 1);
// verify that squelchAll is called for the evicted validator
handler.squelchAll_f_ = [&](PublicKey const& actualKey,
std::uint32_t duration,
std::function<void(Peer::id_t)> callback) {
BEAST_EXPECTS(
actualKey == validators[0],
"unexpected key passed to squelchAll");
};
BEAST_EXPECTS(
slots.getSlots(false).contains(validators[0]),
"a slot was not assigned to a validator");
slots.squelchUntrustedValidator(validators[0]);
BEAST_EXPECTS(
!slots.getSlots(false).contains(validators[0]),
"a slot was not evicted");
}
private:
/** A helper method to fill untrusted slots of a given Slots instance
* with random validator messages. */
std::vector<PublicKey>
fillUntrustedSlots(
EnhancedSquelchingTestSlots& slots,
int64_t maxSlots = reduce_relay::MAX_UNTRUSTED_SLOTS)
{
std::vector<PublicKey> keys;
for (int i = 0; i < maxSlots; ++i)
{
auto const validator = randomKeyPair(KeyType::ed25519).first;
keys.push_back(validator);
for (int j = 0; j <
env_.app().config().VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS;
++j)
// send enough messages so that a validator slot is selected
for (int k = 0; k < reduce_relay::MAX_MESSAGE_THRESHOLD; ++k)
slots.updateUntrustedValidatorSlot(
sha512Half(validator) + static_cast<uint256>(k),
validator,
j);
}
return keys;
}
std::unordered_map<Peer::id_t, reduce_relay::Slot::PeerInfo>
getUntrustedSlotPeers(
PublicKey const& validator,
EnhancedSquelchingTestSlots const& slots)
{
auto const& it = slots.getSlots(false).find(validator);
if (it == slots.getSlots(false).end())
return {};
auto r = std::unordered_map<Peer::id_t, reduce_relay::Slot::PeerInfo>();
for (auto const& [id, info] : it->second.getPeers())
r.emplace(id, info);
return r;
}
void
run() override
{
testConfig();
testSquelchTracking();
testUpdateValidatorSlot_newValidator();
testUpdateValidatorSlot_slotsFull();
testUpdateValidatorSlot_squelchedValidator();
testDeleteIdlePeers_deleteIdleSlots();
testDeleteIdlePeers_deleteIdleUntrustedPeer();
testUpdateSlotAndSquelch_untrustedValidator();
testUpdateConsideredValidator_new();
testUpdateConsideredValidator_idle();
testUpdateConsideredValidator_selectQualifying();
testCleanConsideredValidators_deleteSilent();
testCleanConsideredValidators_resetIdle();
testCleanConsideredValidators_deletePoorlyConnected();
testSquelchUntrustedValidator_consideredListCleared();
testSquelchUntrustedValidator_slotEvicted();
}
};
BEAST_DEFINE_TESTSUITE(enhanced_squelch, overlay, ripple);
} // namespace test
} // namespace ripple


@@ -17,22 +17,24 @@
*/
//==============================================================================
#include <test/jtx.h>
#include <test/jtx/Env.h>
#include <xrpld/overlay/Message.h>
#include <xrpld/overlay/Peer.h>
#include <xrpld/overlay/ReduceRelayCommon.h>
#include <xrpld/overlay/Slot.h>
#include <xrpld/overlay/SquelchStore.h>
#include <xrpld/overlay/Squelch.h>
#include <xrpld/overlay/detail/Handshake.h>
#include <xrpl/basics/random.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/messages.h>
#include <boost/thread.hpp>
#include <algorithm>
#include <chrono>
#include <iterator>
#include <iostream>
#include <numeric>
#include <optional>
@@ -42,21 +44,6 @@ namespace test {
using namespace std::chrono;
template <class Clock>
class extended_manual_clock : public beast::manual_clock<Clock>
{
public:
using typename beast::manual_clock<Clock>::duration;
using typename beast::manual_clock<Clock>::time_point;
void
randAdvance(std::chrono::milliseconds min, std::chrono::milliseconds max)
{
auto ms = ripple::rand_int(min.count(), max.count());
this->advance(std::chrono::milliseconds(ms));
}
};
class Link;
using MessageSPtr = std::shared_ptr<Message>;
@@ -67,7 +54,6 @@ using SquelchCB =
std::function<void(PublicKey const&, PeerWPtr const&, std::uint32_t)>;
using UnsquelchCB = std::function<void(PublicKey const&, PeerWPtr const&)>;
using LinkIterCB = std::function<void(Link&, MessageSPtr)>;
using TestStopwatch = extended_manual_clock<std::chrono::steady_clock>;
static constexpr std::uint32_t MAX_PEERS = 10;
static constexpr std::uint32_t MAX_VALIDATORS = 10;
@@ -205,6 +191,52 @@ public:
}
};
/** Manually advanced clock. */
class ManualClock
{
public:
typedef uint64_t rep;
typedef std::milli period;
typedef std::chrono::duration<std::uint32_t, period> duration;
typedef std::chrono::time_point<ManualClock> time_point;
inline static bool const is_steady = false;
static void
advance(duration d) noexcept
{
now_ += d;
}
static void
randAdvance(milliseconds min, milliseconds max)
{
now_ += randDuration(min, max);
}
static void
reset() noexcept
{
now_ = time_point(seconds(0));
}
static time_point
now() noexcept
{
return now_;
}
static duration
randDuration(milliseconds min, milliseconds max)
{
return duration(milliseconds(rand_int(min.count(), max.count())));
}
explicit ManualClock() = default;
private:
inline static time_point now_ = time_point(seconds(0));
};
/** Simulate server's OverlayImpl */
class Overlay
{
@@ -217,20 +249,12 @@ public:
uint256 const& key,
PublicKey const& validator,
Peer::id_t id,
SquelchCB f) = 0;
SquelchCB f,
protocol::MessageType type = protocol::mtVALIDATION) = 0;
virtual void deleteIdlePeers(UnsquelchCB) = 0;
virtual void deletePeer(Peer::id_t, UnsquelchCB) = 0;
TestStopwatch&
clock()
{
return clock_;
}
protected:
TestStopwatch clock_;
};
class Validator;
@@ -433,39 +457,19 @@ private:
std::uint16_t id_ = 0;
};
class BaseSquelchingTestSlots : public reduce_relay::Slots
{
using Slots = reduce_relay::Slots;
public:
BaseSquelchingTestSlots(
Logs& logs,
reduce_relay::SquelchHandler& handler,
Config const& config,
reduce_relay::Slots::clock_type& clock)
: Slots(logs, handler, config, clock)
{
}
Slots::slots_map const&
getSlots() const
{
return trustedSlots_;
}
};
class PeerSim : public PeerPartial, public std::enable_shared_from_this<PeerSim>
{
public:
using id_t = Peer::id_t;
PeerSim(Overlay& overlay, beast::Journal journal)
: overlay_(overlay), squelchStore_(journal, overlay_.clock())
: overlay_(overlay), squelch_(journal)
{
id_ = sid_++;
}
~PeerSim() = default;
Peer::id_t
id_t
id() const override
{
return id_;
@@ -483,7 +487,7 @@ public:
{
auto validator = m->getValidatorKey();
assert(validator);
if (squelchStore_.isSquelched(*validator))
if (!squelch_.expireSquelch(*validator))
return;
overlay_.updateSlotAndSquelch({}, *validator, id(), f);
@@ -495,17 +499,18 @@ public:
{
auto validator = squelch.validatorpubkey();
PublicKey key(Slice(validator.data(), validator.size()));
squelchStore_.handleSquelch(
key,
squelch.squelch(),
std::chrono::seconds{squelch.squelchduration()});
if (squelch.squelch())
squelch_.addSquelch(
key, std::chrono::seconds{squelch.squelchduration()});
else
squelch_.removeSquelch(key);
}
private:
inline static Peer::id_t sid_ = 0;
Peer::id_t id_;
inline static id_t sid_ = 0;
id_t id_;
Overlay& overlay_;
reduce_relay::SquelchStore squelchStore_;
reduce_relay::Squelch<ManualClock> squelch_;
};
class OverlaySim : public Overlay, public reduce_relay::SquelchHandler
@@ -513,9 +518,10 @@ class OverlaySim : public Overlay, public reduce_relay::SquelchHandler
using Peers = std::unordered_map<Peer::id_t, PeerSPtr>;
public:
using clock_type = TestStopwatch;
using id_t = Peer::id_t;
using clock_type = ManualClock;
OverlaySim(Application& app)
: slots_(app.logs(), *this, app.config(), clock_), logs_(app.logs())
: slots_(app.logs(), *this, app.config()), logs_(app.logs())
{
}
@@ -525,21 +531,15 @@ public:
clear()
{
peers_.clear();
clock_.advance(hours(1));
ManualClock::advance(hours(1));
slots_.deleteIdlePeers();
}
std::uint16_t
inState(PublicKey const& validator, reduce_relay::PeerState state)
{
auto const& it = slots_.getSlots().find(validator);
if (it != slots_.getSlots().end())
return std::count_if(
it->second.getPeers().begin(),
it->second.getPeers().end(),
[&](auto const& it) { return (it.second.state == state); });
return 0;
auto res = slots_.inState(validator, state);
return res ? *res : 0;
}
void
@@ -547,14 +547,15 @@ public:
uint256 const& key,
PublicKey const& validator,
Peer::id_t id,
SquelchCB f) override
SquelchCB f,
protocol::MessageType type = protocol::mtVALIDATION) override
{
squelch_ = f;
slots_.updateSlotAndSquelch(key, validator, id, true);
slots_.updateSlotAndSquelch(key, validator, id, type);
}
void
deletePeer(Peer::id_t id, UnsquelchCB f) override
deletePeer(id_t id, UnsquelchCB f) override
{
unsquelch_ = f;
slots_.deletePeer(id, true);
@@ -631,55 +632,40 @@ public:
bool
isCountingState(PublicKey const& validator)
{
auto const& it = slots_.getSlots().find(validator);
if (it != slots_.getSlots().end())
return it->second.getState() == reduce_relay::SlotState::Counting;
return false;
return slots_.inState(validator, reduce_relay::SlotState::Counting);
}
std::set<Peer::id_t>
std::set<id_t>
getSelected(PublicKey const& validator)
{
auto const& it = slots_.getSlots().find(validator);
if (it == slots_.getSlots().end())
return {};
std::set<Peer::id_t> r;
for (auto const& [id, info] : it->second.getPeers())
if (info.state == reduce_relay::PeerState::Selected)
r.insert(id);
return r;
return slots_.getSelected(validator);
}
bool
isSelected(PublicKey const& validator, Peer::id_t peer)
{
auto selected = getSelected(validator);
auto selected = slots_.getSelected(validator);
return selected.find(peer) != selected.end();
}
Peer::id_t
id_t
getSelectedPeer(PublicKey const& validator)
{
auto selected = getSelected(validator);
auto selected = slots_.getSelected(validator);
assert(selected.size());
return *selected.begin();
}
std::unordered_map<Peer::id_t, reduce_relay::Slot::PeerInfo>
std::unordered_map<
id_t,
std::tuple<
reduce_relay::PeerState,
std::uint16_t,
std::uint32_t,
std::uint32_t>>
getPeers(PublicKey const& validator)
{
auto const& it = slots_.getSlots().find(validator);
if (it == slots_.getSlots().end())
return {};
auto r = std::unordered_map<Peer::id_t, reduce_relay::Slot::PeerInfo>();
for (auto const& [id, info] : it->second.getPeers())
r.emplace(std::make_pair(id, info));
return r;
return slots_.getPeers(validator);
}
std::uint16_t
@@ -698,31 +684,17 @@ private:
if (auto it = peers_.find(id); it != peers_.end())
squelch_(validator, it->second, squelchDuration);
}
void
squelchAll(
PublicKey const& validator,
std::uint32_t duration,
std::function<void(Peer::id_t)> callback) override
{
for (auto const& [id, peer] : peers_)
{
squelch_(validator, peer, duration);
callback(id);
}
}
void
unsquelch(PublicKey const& validator, Peer::id_t id) const override
{
if (auto it = peers_.find(id); it != peers_.end())
unsquelch_(validator, it->second);
}
SquelchCB squelch_;
UnsquelchCB unsquelch_;
Peers peers_;
Peers peersCache_;
BaseSquelchingTestSlots slots_;
reduce_relay::Slots<ManualClock> slots_;
Logs& logs_;
};
@@ -854,8 +826,12 @@ public:
LinkIterCB link,
std::uint16_t nValidators = MAX_VALIDATORS,
std::uint32_t nMessages = MAX_MESSAGES,
bool purge = true)
bool purge = true,
bool resetClock = true)
{
if (resetClock)
ManualClock::reset();
if (purge)
{
purgePeers();
@@ -864,8 +840,7 @@ public:
for (int m = 0; m < nMessages; ++m)
{
overlay_.clock().randAdvance(
milliseconds(1800), milliseconds(2200));
ManualClock::randAdvance(milliseconds(1800), milliseconds(2200));
for_rand(0, nValidators, [&](std::uint32_t v) {
validators_[v].for_links(link);
});
@@ -899,7 +874,8 @@ public:
for (auto& [_, v] : peers)
{
(void)_;
if (v.state == reduce_relay::PeerState::Squelched)
if (std::get<reduce_relay::PeerState>(v) ==
reduce_relay::PeerState::Squelched)
return false;
}
}
@@ -911,9 +887,10 @@ private:
std::vector<Validator> validators_;
};
class base_squelch_test : public beast::unit_test::suite
class reduce_relay_test : public beast::unit_test::suite
{
using Slot = reduce_relay::Slot;
using Slot = reduce_relay::Slot<ManualClock>;
using id_t = Peer::id_t;
protected:
void
@@ -924,7 +901,8 @@ protected:
<< "num peers " << (int)network_.overlay().getNumPeers()
<< std::endl;
for (auto& [k, v] : peers)
std::cout << k << ":" << to_string(v.state) << " ";
std::cout << k << ":" << (int)std::get<reduce_relay::PeerState>(v)
<< " ";
std::cout << std::endl;
}
@@ -962,7 +940,7 @@ protected:
Peer::id_t peer_;
std::uint16_t validator_;
std::optional<PublicKey> key_;
TestStopwatch::time_point time_;
time_point<ManualClock> time_;
bool handled_ = false;
};
@@ -974,12 +952,12 @@ protected:
{
std::unordered_map<EventType, Event> events{
{LinkDown, {}}, {PeerDisconnected, {}}};
auto lastCheck = network_.overlay().clock().now();
time_point<ManualClock> lastCheck = ManualClock::now();
network_.reset();
network_.propagate([&](Link& link, MessageSPtr m) {
auto& validator = link.validator();
auto const now = network_.overlay().clock().now();
auto now = ManualClock::now();
bool squelched = false;
std::stringstream str;
@@ -1003,8 +981,7 @@ protected:
str << s << " ";
if (log)
std::cout
<< (double)std::chrono::duration_cast<milliseconds>(
now.time_since_epoch())
<< (double)reduce_relay::epoch<milliseconds>(now)
.count() /
1000.
<< " random, squelched, validator: " << validator.id()
@@ -1096,17 +1073,10 @@ protected:
event.isSelected_ =
network_.overlay().isSelected(*event.key_, event.peer_);
auto peers = network_.overlay().getPeers(*event.key_);
auto d =
std::chrono::duration_cast<std::chrono::milliseconds>(
now.time_since_epoch())
.count() -
std::chrono::duration_cast<std::chrono::milliseconds>(
peers[event.peer_].lastMessage.time_since_epoch())
.count();
auto d = reduce_relay::epoch<milliseconds>(now).count() -
std::get<3>(peers[event.peer_]);
mustHandle = event.isSelected_ &&
d > milliseconds(reduce_relay::PEER_IDLED).count() &&
d > milliseconds(reduce_relay::IDLED).count() &&
network_.overlay().inState(
*event.key_, reduce_relay::PeerState::Squelched) >
0 &&
@@ -1128,7 +1098,7 @@ protected:
}
if (event.state_ == State::WaitReset ||
(event.state_ == State::On &&
(now - event.time_ > (reduce_relay::PEER_IDLED + seconds(2)))))
(now - event.time_ > (reduce_relay::IDLED + seconds(2)))))
{
bool handled =
event.state_ == State::WaitReset || !event.handled_;
@@ -1158,6 +1128,7 @@ protected:
checkCounting(PublicKey const& validator, bool isCountingState)
{
auto countingState = network_.overlay().isCountingState(validator);
BEAST_EXPECT(countingState == isCountingState);
return countingState == isCountingState;
}
@@ -1188,7 +1159,7 @@ protected:
testPeerUnsquelchedTooSoon(bool log)
{
doTest("Peer Unsquelched Too Soon", log, [this](bool log) {
BEAST_EXPECT(propagateNoSquelch(log, 1, false, false));
BEAST_EXPECT(propagateNoSquelch(log, 1, false, false, false));
});
}
@@ -1198,17 +1169,17 @@ protected:
void
testPeerUnsquelched(bool log)
{
network_.overlay().clock().advance(seconds(601));
ManualClock::advance(seconds(601));
doTest("Peer Unsquelched", log, [this](bool log) {
BEAST_EXPECT(propagateNoSquelch(log, 2, true, true));
BEAST_EXPECT(propagateNoSquelch(log, 2, true, true, false));
});
}
/** Propagate enough messages to generate one squelch event */
bool
propagateAndSquelch(bool log, bool purge = true)
propagateAndSquelch(bool log, bool purge = true, bool resetClock = true)
{
int squelchEvents = 0;
int n = 0;
network_.propagate(
[&](Link& link, MessageSPtr message) {
std::uint16_t squelched = 0;
@@ -1228,21 +1199,21 @@ protected:
env_.app()
.config()
.VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS);
squelchEvents++;
n++;
}
},
1,
reduce_relay::MAX_MESSAGE_THRESHOLD + 2,
purge);
purge,
resetClock);
auto selected = network_.overlay().getSelected(network_.validator(0));
BEAST_EXPECT(
selected.size() ==
env_.app().config().VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS);
BEAST_EXPECT(squelchEvents == 1); // only one selection round
BEAST_EXPECT(n == 1); // only one selection round
auto res = checkCounting(network_.validator(0), false);
BEAST_EXPECT(res);
return squelchEvents == 1 && res;
return n == 1 && res;
}
/** Send fewer message so that squelch event is not generated */
@@ -1251,7 +1222,8 @@ protected:
bool log,
std::uint16_t nMessages,
bool countingState,
bool purge = true)
bool purge = true,
bool resetClock = true)
{
bool squelched = false;
network_.propagate(
@@ -1267,9 +1239,9 @@ protected:
},
1,
nMessages,
purge);
purge,
resetClock);
auto res = checkCounting(network_.validator(0), countingState);
BEAST_EXPECT(res);
return !squelched && res;
}
@@ -1280,9 +1252,9 @@ protected:
testNewPeer(bool log)
{
doTest("New Peer", log, [this](bool log) {
BEAST_EXPECT(propagateAndSquelch(log, true));
BEAST_EXPECT(propagateAndSquelch(log, true, false));
network_.addPeer();
BEAST_EXPECT(propagateNoSquelch(log, 1, true, false));
BEAST_EXPECT(propagateNoSquelch(log, 1, true, false, false));
});
}
@@ -1292,8 +1264,8 @@ protected:
testSelectedPeerDisconnects(bool log)
{
doTest("Selected Peer Disconnects", log, [this](bool log) {
network_.overlay().clock().advance(seconds(601));
BEAST_EXPECT(propagateAndSquelch(log, true));
ManualClock::advance(seconds(601));
BEAST_EXPECT(propagateAndSquelch(log, true, false));
auto id = network_.overlay().getSelectedPeer(network_.validator(0));
std::uint16_t unsquelched = 0;
network_.overlay().deletePeer(
@@ -1316,16 +1288,15 @@ protected:
testSelectedPeerStopsRelaying(bool log)
{
doTest("Selected Peer Stops Relaying", log, [this](bool log) {
network_.overlay().clock().advance(seconds(601));
BEAST_EXPECT(propagateAndSquelch(log, true));
network_.overlay().clock().advance(
reduce_relay::PEER_IDLED + seconds(1));
ManualClock::advance(seconds(601));
BEAST_EXPECT(propagateAndSquelch(log, true, false));
ManualClock::advance(reduce_relay::IDLED + seconds(1));
std::uint16_t unsquelched = 0;
network_.overlay().deleteIdlePeers(
[&](PublicKey const& key, PeerWPtr const& peer) {
unsquelched++;
});
auto peers = network_.overlay().getPeers(network_.validator(0));
BEAST_EXPECT(
unsquelched ==
MAX_PEERS -
@@ -1342,11 +1313,12 @@ protected:
testSquelchedPeerDisconnects(bool log)
{
doTest("Squelched Peer Disconnects", log, [this](bool log) {
network_.overlay().clock().advance(seconds(601));
BEAST_EXPECT(propagateAndSquelch(log, true));
ManualClock::advance(seconds(601));
BEAST_EXPECT(propagateAndSquelch(log, true, false));
auto peers = network_.overlay().getPeers(network_.validator(0));
auto it = std::find_if(peers.begin(), peers.end(), [&](auto it) {
return it.second.state == reduce_relay::PeerState::Squelched;
return std::get<reduce_relay::PeerState>(it.second) ==
reduce_relay::PeerState::Squelched;
});
assert(it != peers.end());
std::uint16_t unsquelched = 0;
@@ -1497,34 +1469,29 @@ vp_base_squelch_max_selected_peers=2
testBaseSquelchReady(bool log)
{
doTest("BaseSquelchReady", log, [&](bool log) {
auto createSlots =
[&](bool baseSquelchEnabled,
TestStopwatch stopwatch) -> reduce_relay::Slots {
ManualClock::reset();
auto createSlots = [&](bool baseSquelchEnabled)
-> reduce_relay::Slots<ManualClock> {
env_.app().config().VP_REDUCE_RELAY_BASE_SQUELCH_ENABLE =
baseSquelchEnabled;
return reduce_relay::Slots(
env_.app().logs(),
network_.overlay(),
env_.app().config(),
stopwatch);
return reduce_relay::Slots<ManualClock>(
env_.app().logs(), network_.overlay(), env_.app().config());
};
TestStopwatch stopwatch;
// base squelching must not be ready if squelching is disabled
BEAST_EXPECT(!createSlots(false, stopwatch).baseSquelchReady());
BEAST_EXPECT(!createSlots(false).baseSquelchReady());
// base squelch must not be ready as not enough time passed from
// bootup
BEAST_EXPECT(!createSlots(true, stopwatch).baseSquelchReady());
BEAST_EXPECT(!createSlots(true).baseSquelchReady());
stopwatch.advance(reduce_relay::WAIT_ON_BOOTUP + minutes{1});
ManualClock::advance(reduce_relay::WAIT_ON_BOOTUP + minutes{1});
// base squelch enabled and bootup time passed
BEAST_EXPECT(createSlots(true, stopwatch).baseSquelchReady());
BEAST_EXPECT(createSlots(true).baseSquelchReady());
// even if time passed, base squelching must not be ready if turned
// off in the config
BEAST_EXPECT(!createSlots(false, stopwatch).baseSquelchReady());
BEAST_EXPECT(!createSlots(false).baseSquelchReady());
});
}
@@ -1547,7 +1514,7 @@ vp_base_squelch_max_selected_peers=2
auto peers = network_.overlay().getPeers(network_.validator(0));
// first message changes Slot state to Counting and is not counted,
// hence '-1'.
BEAST_EXPECT(peers[0].count == (nMessages - 1));
BEAST_EXPECT(std::get<1>(peers[0]) == (nMessages - 1));
// add duplicate
uint256 key(nMessages - 1);
network_.overlay().updateSlotAndSquelch(
@@ -1557,10 +1524,9 @@ vp_base_squelch_max_selected_peers=2
[&](PublicKey const&, PeerWPtr, std::uint32_t) {});
// confirm the same number of messages
peers = network_.overlay().getPeers(network_.validator(0));
BEAST_EXPECT(peers[0].count == (nMessages - 1));
BEAST_EXPECT(std::get<1>(peers[0]) == (nMessages - 1));
// advance the clock
network_.overlay().clock().advance(
reduce_relay::PEER_IDLED + seconds(1));
ManualClock::advance(reduce_relay::IDLED + seconds(1));
network_.overlay().updateSlotAndSquelch(
key,
network_.validator(0),
@@ -1568,7 +1534,7 @@ vp_base_squelch_max_selected_peers=2
[&](PublicKey const&, PeerWPtr, std::uint32_t) {});
peers = network_.overlay().getPeers(network_.validator(0));
// confirm message number increased
BEAST_EXPECT(peers[0].count == nMessages);
BEAST_EXPECT(std::get<1>(peers[0]) == nMessages);
});
}
@@ -1584,15 +1550,6 @@ vp_base_squelch_max_selected_peers=2
if (duration > maxDuration_)
maxDuration_ = duration;
}
void
squelchAll(
PublicKey const&,
std::uint32_t,
std::function<void(Peer::id_t)>) override
{
}
void
unsquelch(PublicKey const&, Peer::id_t) const override
{
@@ -1609,11 +1566,8 @@ vp_base_squelch_max_selected_peers=2
auto run = [&](int npeers) {
handler.maxDuration_ = 0;
reduce_relay::Slots slots(
env_.app().logs(),
handler,
env_.app().config(),
network_.overlay().clock());
reduce_relay::Slots<ManualClock> slots(
env_.app().logs(), handler, env_.app().config());
// 1st message from a new peer switches the slot
// to counting state and resets the counts of all peers +
// MAX_MESSAGE_THRESHOLD + 1 messages to reach the threshold
@@ -1628,11 +1582,14 @@ vp_base_squelch_max_selected_peers=2
std::uint64_t mid = m * 1000 + peer;
uint256 const message{mid};
slots.updateSlotAndSquelch(
message, validator, peer, true);
message,
validator,
peer,
protocol::MessageType::mtVALIDATION);
}
}
// make Slot's internal hash router expire all messages
network_.overlay().clock().advance(hours(1));
ManualClock::advance(hours(1));
};
using namespace reduce_relay;
@@ -1746,7 +1703,7 @@ vp_base_squelch_max_selected_peers=2
Network network_;
public:
base_squelch_test()
reduce_relay_test()
: env_(*this, jtx::envconfig([](std::unique_ptr<Config> cfg) {
cfg->VP_REDUCE_RELAY_BASE_SQUELCH_ENABLE = true;
cfg->VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS = 6;
@@ -1775,7 +1732,7 @@ public:
}
};
class base_squelch_simulate_test : public base_squelch_test
class reduce_relay_simulate_test : public reduce_relay_test
{
void
testRandom(bool log)
@@ -1791,8 +1748,8 @@ class base_squelch_simulate_test : public base_squelch_test
}
};
BEAST_DEFINE_TESTSUITE(base_squelch, ripple_data, ripple);
BEAST_DEFINE_TESTSUITE_MANUAL(base_squelch_simulate, ripple_data, ripple);
BEAST_DEFINE_TESTSUITE(reduce_relay, ripple_data, ripple);
BEAST_DEFINE_TESTSUITE_MANUAL(reduce_relay_simulate, ripple_data, ripple);
} // namespace test


@@ -1,164 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <test/jtx/Env.h>
#include <xrpld/overlay/ReduceRelayCommon.h>
#include <xrpld/overlay/SquelchStore.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/protocol/PublicKey.h>
#include <chrono>
namespace ripple {
namespace test {
class TestSquelchStore : public reduce_relay::SquelchStore
{
public:
TestSquelchStore(beast::Journal journal, TestStopwatch& clock)
: reduce_relay::SquelchStore(journal, clock)
{
}
hash_map<PublicKey, TestStopwatch::time_point> const&
getSquelched() const
{
return squelched_;
}
};
class squelch_store_test : public beast::unit_test::suite
{
using seconds = std::chrono::seconds;
public:
jtx::Env env_;
squelch_store_test() : env_(*this)
{
}
void
testHandleSquelch()
{
testcase("SquelchStore handleSquelch");
TestStopwatch clock;
auto store = TestSquelchStore(env_.journal, clock);
auto const validator = randomKeyPair(KeyType::ed25519).first;
// attempt to squelch the peer with a too small duration
store.handleSquelch(
validator, true, reduce_relay::MIN_UNSQUELCH_EXPIRE - seconds{1});
// the peer must not be squelched
BEAST_EXPECTS(!store.isSquelched(validator), "peer is squelched");
// attempt to squelch the peer with a too big duration
store.handleSquelch(
validator,
true,
reduce_relay::MAX_UNSQUELCH_EXPIRE_PEERS + seconds{1});
// the peer must not be squelched
BEAST_EXPECTS(!store.isSquelched(validator), "peer is squelched");
// squelch the peer with a good duration
store.handleSquelch(
validator, true, reduce_relay::MIN_UNSQUELCH_EXPIRE + seconds{1});
// the peer for the validator should be squelched
BEAST_EXPECTS(
store.isSquelched(validator),
"peer and validator are not squelched");
// unsquelch the validator
store.handleSquelch(validator, false, seconds{0});
BEAST_EXPECTS(!store.isSquelched(validator), "peer is squelched");
}
void
testIsSquelched()
{
testcase("SquelchStore IsSquelched");
TestStopwatch clock;
auto store = TestSquelchStore(env_.journal, clock);
auto const validator = randomKeyPair(KeyType::ed25519).first;
auto const duration = reduce_relay::MIN_UNSQUELCH_EXPIRE + seconds{1};
store.handleSquelch(validator, true, duration);
BEAST_EXPECTS(
store.isSquelched(validator),
"peer and validator are not squelched");
clock.advance(duration + seconds{1});
// after the squelch duration elapses, the peer must no longer be squelched
BEAST_EXPECTS(
!store.isSquelched(validator), "peer and validator are squelched");
}
void
testClearExpiredSquelches()
{
testcase("SquelchStore testClearExpiredSquelches");
TestStopwatch clock;
auto store = TestSquelchStore(env_.journal, clock);
auto const validator = randomKeyPair(KeyType::ed25519).first;
auto const duration = reduce_relay::MIN_UNSQUELCH_EXPIRE + seconds{1};
store.handleSquelch(validator, true, duration);
BEAST_EXPECTS(
store.getSquelched().size() == 1,
"validators were not registered in the store");
clock.advance(duration + seconds{1});
auto const validator2 = randomKeyPair(KeyType::ed25519).first;
auto const duration2 = reduce_relay::MIN_UNSQUELCH_EXPIRE + seconds{2};
store.handleSquelch(validator2, true, duration2);
BEAST_EXPECTS(
!store.getSquelched().contains(validator),
"expired squelch was not deleted");
BEAST_EXPECTS(
store.getSquelched().contains(validator2),
"validators were not registered in the store");
}
void
run() override
{
testHandleSquelch();
testIsSquelched();
testClearExpiredSquelches();
}
};
BEAST_DEFINE_TESTSUITE(squelch_store, ripple_data, ripple);
} // namespace test
} // namespace ripple


@@ -63,8 +63,6 @@ LedgerHistory::insert(
ledger->stateMap().getHash().isNonZero(),
"ripple::LedgerHistory::insert : nonzero hash");
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
bool const alreadyHad = m_ledgers_by_hash.canonicalize_replace_cache(
ledger->info().hash, ledger);
if (validated)
@@ -76,7 +74,6 @@ LedgerHistory::insert(
LedgerHash
LedgerHistory::getLedgerHash(LedgerIndex index)
{
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
if (auto it = mLedgersByIndex.find(index); it != mLedgersByIndex.end())
return it->second;
return {};
@@ -86,13 +83,11 @@ std::shared_ptr<Ledger const>
LedgerHistory::getLedgerBySeq(LedgerIndex index)
{
{
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
auto it = mLedgersByIndex.find(index);
if (it != mLedgersByIndex.end())
{
uint256 hash = it->second;
sl.unlock();
return getLedgerByHash(hash);
}
}
@@ -108,7 +103,6 @@ LedgerHistory::getLedgerBySeq(LedgerIndex index)
{
// Add this ledger to the local tracking by index
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
XRPL_ASSERT(
ret->isImmutable(),
@@ -458,8 +452,6 @@ LedgerHistory::builtLedger(
XRPL_ASSERT(
!hash.isZero(), "ripple::LedgerHistory::builtLedger : nonzero hash");
std::unique_lock sl(m_consensus_validated.peekMutex());
auto entry = std::make_shared<cv_entry>();
m_consensus_validated.canonicalize_replace_client(index, entry);
@@ -500,8 +492,6 @@ LedgerHistory::validatedLedger(
!hash.isZero(),
"ripple::LedgerHistory::validatedLedger : nonzero hash");
std::unique_lock sl(m_consensus_validated.peekMutex());
auto entry = std::make_shared<cv_entry>();
m_consensus_validated.canonicalize_replace_client(index, entry);
@@ -535,10 +525,9 @@ LedgerHistory::validatedLedger(
bool
LedgerHistory::fixIndex(LedgerIndex ledgerIndex, LedgerHash const& ledgerHash)
{
std::unique_lock sl(m_ledgers_by_hash.peekMutex());
auto ledger = m_ledgers_by_hash.fetch(ledgerHash);
auto it = mLedgersByIndex.find(ledgerIndex);
if ((it != mLedgersByIndex.end()) && (it->second != ledgerHash))
if (ledger && (it != mLedgersByIndex.end()) && (it->second != ledgerHash))
{
it->second = ledgerHash;
return false;


@@ -175,126 +175,18 @@ MPTokenAuthorize::createMPToken(
return tesSUCCESS;
}
TER
MPTokenAuthorize::authorize(
ApplyView& view,
beast::Journal journal,
MPTAuthorizeArgs const& args)
{
auto const sleAcct = view.peek(keylet::account(args.account));
if (!sleAcct)
return tecINTERNAL;
// If the account that submitted the tx is a holder
// Note: `account_` is holder's account
// `holderID` is NOT used
if (!args.holderID)
{
// When a holder wants to unauthorize/delete a MPT, the ledger must
// - delete mptokenKey from owner directory
// - delete the MPToken
if (args.flags & tfMPTUnauthorize)
{
auto const mptokenKey =
keylet::mptoken(args.mptIssuanceID, args.account);
auto const sleMpt = view.peek(mptokenKey);
if (!sleMpt || (*sleMpt)[sfMPTAmount] != 0)
return tecINTERNAL; // LCOV_EXCL_LINE
if (!view.dirRemove(
keylet::ownerDir(args.account),
(*sleMpt)[sfOwnerNode],
sleMpt->key(),
false))
return tecINTERNAL; // LCOV_EXCL_LINE
adjustOwnerCount(view, sleAcct, -1, journal);
view.erase(sleMpt);
return tesSUCCESS;
}
// A potential holder wants to authorize/hold a mpt, the ledger must:
// - add the new mptokenKey to the owner directory
// - create the MPToken object for the holder
// The reserve that is required to create the MPToken. Note
// that although the reserve increases with every item
// an account owns, in the case of MPTokens we only
// *enforce* a reserve if the user owns more than two
// items. This is similar to the reserve requirements of trust lines.
std::uint32_t const uOwnerCount = sleAcct->getFieldU32(sfOwnerCount);
XRPAmount const reserveCreate(
(uOwnerCount < 2) ? XRPAmount(beast::zero)
: view.fees().accountReserve(uOwnerCount + 1));
if (args.priorBalance < reserveCreate)
return tecINSUFFICIENT_RESERVE;
auto const mptokenKey =
keylet::mptoken(args.mptIssuanceID, args.account);
auto mptoken = std::make_shared<SLE>(mptokenKey);
if (auto ter = dirLink(view, args.account, mptoken))
return ter; // LCOV_EXCL_LINE
(*mptoken)[sfAccount] = args.account;
(*mptoken)[sfMPTokenIssuanceID] = args.mptIssuanceID;
(*mptoken)[sfFlags] = 0;
view.insert(mptoken);
// Update owner count.
adjustOwnerCount(view, sleAcct, 1, journal);
return tesSUCCESS;
}
auto const sleMptIssuance =
view.read(keylet::mptIssuance(args.mptIssuanceID));
if (!sleMptIssuance)
return tecINTERNAL;
// If the account that submitted this tx is the issuer of the MPT
// Note: `account_` is issuer's account
// `holderID` is holder's account
if (args.account != (*sleMptIssuance)[sfIssuer])
return tecINTERNAL;
auto const sleMpt =
view.peek(keylet::mptoken(args.mptIssuanceID, *args.holderID));
if (!sleMpt)
return tecINTERNAL;
std::uint32_t const flagsIn = sleMpt->getFieldU32(sfFlags);
std::uint32_t flagsOut = flagsIn;
// Issuer wants to unauthorize the holder, unset lsfMPTAuthorized on
// their MPToken
if (args.flags & tfMPTUnauthorize)
flagsOut &= ~lsfMPTAuthorized;
// Issuer wants to authorize a holder, set lsfMPTAuthorized on their
// MPToken
else
flagsOut |= lsfMPTAuthorized;
if (flagsIn != flagsOut)
sleMpt->setFieldU32(sfFlags, flagsOut);
view.update(sleMpt);
return tesSUCCESS;
}
TER
MPTokenAuthorize::doApply()
{
auto const& tx = ctx_.tx;
return authorize(
return authorizeMPToken(
ctx_.view(),
mPriorBalance,
tx[sfMPTokenIssuanceID],
account_,
ctx_.journal,
{.priorBalance = mPriorBalance,
.mptIssuanceID = tx[sfMPTokenIssuanceID],
.account = account_,
.flags = tx.getFlags(),
.holderID = tx[~sfHolder]});
tx.getFlags(),
tx[~sfHolder]);
}
} // namespace ripple


@@ -48,12 +48,6 @@ public:
static TER
preclaim(PreclaimContext const& ctx);
static TER
authorize(
ApplyView& view,
beast::Journal journal,
MPTAuthorizeArgs const& args);
static TER
createMPToken(
ApplyView& view,


@@ -209,6 +209,17 @@ SetOracle::doApply()
{
auto const oracleID = keylet::oracle(account_, ctx_.tx[sfOracleDocumentID]);
auto populatePriceData = [](STObject& priceData, STObject const& entry) {
setPriceDataInnerObjTemplate(priceData);
priceData.setFieldCurrency(
sfBaseAsset, entry.getFieldCurrency(sfBaseAsset));
priceData.setFieldCurrency(
sfQuoteAsset, entry.getFieldCurrency(sfQuoteAsset));
priceData.setFieldU64(sfAssetPrice, entry.getFieldU64(sfAssetPrice));
if (entry.isFieldPresent(sfScale))
priceData.setFieldU8(sfScale, entry.getFieldU8(sfScale));
};
if (auto sle = ctx_.view().peek(oracleID))
{
// update
@@ -249,15 +260,7 @@ SetOracle::doApply()
{
// add a token pair with the price
STObject priceData{sfPriceData};
setPriceDataInnerObjTemplate(priceData);
priceData.setFieldCurrency(
sfBaseAsset, entry.getFieldCurrency(sfBaseAsset));
priceData.setFieldCurrency(
sfQuoteAsset, entry.getFieldCurrency(sfQuoteAsset));
priceData.setFieldU64(
sfAssetPrice, entry.getFieldU64(sfAssetPrice));
if (entry.isFieldPresent(sfScale))
priceData.setFieldU8(sfScale, entry.getFieldU8(sfScale));
populatePriceData(priceData, entry);
pairs.emplace(key, std::move(priceData));
}
}
@@ -285,7 +288,26 @@ SetOracle::doApply()
sle->setFieldVL(sfProvider, ctx_.tx[sfProvider]);
if (ctx_.tx.isFieldPresent(sfURI))
sle->setFieldVL(sfURI, ctx_.tx[sfURI]);
auto const& series = ctx_.tx.getFieldArray(sfPriceDataSeries);
STArray series;
if (!ctx_.view().rules().enabled(fixPriceOracleOrder))
{
series = ctx_.tx.getFieldArray(sfPriceDataSeries);
}
else
{
std::map<std::pair<Currency, Currency>, STObject> pairs;
for (auto const& entry : ctx_.tx.getFieldArray(sfPriceDataSeries))
{
auto const key = tokenPairKey(entry);
STObject priceData{sfPriceData};
populatePriceData(priceData, entry);
pairs.emplace(key, std::move(priceData));
}
for (auto const& iter : pairs)
series.push_back(std::move(iter.second));
}
sle->setFieldArray(sfPriceDataSeries, series);
sle->setFieldVL(sfAssetClass, ctx_.tx[sfAssetClass]);
sle->setFieldU32(sfLastUpdateTime, ctx_.tx[sfLastUpdateTime]);


@@ -210,12 +210,12 @@ VaultDeposit::doApply()
auto sleMpt = view().read(keylet::mptoken(mptIssuanceID, account_));
if (!sleMpt)
{
if (auto const err = MPTokenAuthorize::authorize(
if (auto const err = authorizeMPToken(
view(),
ctx_.journal,
{.priorBalance = mPriorBalance,
.mptIssuanceID = mptIssuanceID->value(),
.account = account_});
mPriorBalance,
mptIssuanceID->value(),
account_,
ctx_.journal);
!isTesSuccess(err))
return err;
}
@@ -223,15 +223,15 @@ VaultDeposit::doApply()
// If the vault is private, set the authorized flag for the vault owner
if (vault->isFlag(tfVaultPrivate))
{
if (auto const err = MPTokenAuthorize::authorize(
if (auto const err = authorizeMPToken(
view(),
mPriorBalance, // priorBalance
mptIssuanceID->value(), // mptIssuanceID
sleIssuance->at(sfIssuer), // account
ctx_.journal,
{
.priorBalance = mPriorBalance,
.mptIssuanceID = mptIssuanceID->value(),
.account = sleIssuance->at(sfIssuer),
.holderID = account_,
});
{}, // flags
account_ // holderID
);
!isTesSuccess(err))
return err;
}


@@ -254,9 +254,6 @@ public:
std::size_t VP_REDUCE_RELAY_SQUELCH_MAX_SELECTED_PEERS = 5;
///////////////// END OF TEMPORARY CODE BLOCK /////////////////////
// Enable enhanced squelching of unique untrusted validator messages
bool VP_REDUCE_RELAY_ENHANCED_SQUELCH_ENABLE = false;
// Transaction reduce-relay feature
bool TX_REDUCE_RELAY_ENABLE = false;
// If tx reduce-relay feature is disabled


@@ -775,9 +775,6 @@ Config::loadFromString(std::string const& fileContents)
"greater than or equal to 3");
///////////////// !!END OF TEMPORARY CODE BLOCK!! /////////////////////
VP_REDUCE_RELAY_ENHANCED_SQUELCH_ENABLE =
sec.value_or("vp_enhanced_squelch_enable", false);
TX_REDUCE_RELAY_ENABLE = sec.value_or("tx_enable", false);
TX_REDUCE_RELAY_METRICS = sec.value_or("tx_metrics", false);
TX_REDUCE_RELAY_MIN_PEERS = sec.value_or("tx_min_peers", 20);


@@ -600,6 +600,16 @@ addEmptyHolding(
asset.value());
}
[[nodiscard]] TER
authorizeMPToken(
ApplyView& view,
XRPAmount const& priorBalance,
MPTID const& mptIssuanceID,
AccountID const& account,
beast::Journal journal,
std::uint32_t flags = 0,
std::optional<AccountID> holderID = std::nullopt);
// VFALCO NOTE Both STAmount parameters should just
// be "Amount", a unit-less number.
//


@@ -18,7 +18,6 @@
//==============================================================================
#include <xrpld/app/misc/CredentialHelpers.h>
#include <xrpld/app/tx/detail/MPTokenAuthorize.h>
#include <xrpld/ledger/ReadView.h>
#include <xrpld/ledger/View.h>
@@ -1215,12 +1214,115 @@ addEmptyHolding(
if (view.peek(keylet::mptoken(mptID, accountID)))
return tecDUPLICATE;
return MPTokenAuthorize::authorize(
view,
journal,
{.priorBalance = priorBalance,
.mptIssuanceID = mptID,
.account = accountID});
return authorizeMPToken(view, priorBalance, mptID, accountID, journal);
}
[[nodiscard]] TER
authorizeMPToken(
ApplyView& view,
XRPAmount const& priorBalance,
MPTID const& mptIssuanceID,
AccountID const& account,
beast::Journal journal,
std::uint32_t flags,
std::optional<AccountID> holderID)
{
auto const sleAcct = view.peek(keylet::account(account));
if (!sleAcct)
return tecINTERNAL;
// If the account that submitted the tx is a holder
// Note: `account_` is holder's account
// `holderID` is NOT used
if (!holderID)
{
// When a holder wants to unauthorize/delete an MPT, the ledger must
// - delete mptokenKey from owner directory
// - delete the MPToken
if (flags & tfMPTUnauthorize)
{
auto const mptokenKey = keylet::mptoken(mptIssuanceID, account);
auto const sleMpt = view.peek(mptokenKey);
if (!sleMpt || (*sleMpt)[sfMPTAmount] != 0)
return tecINTERNAL; // LCOV_EXCL_LINE
if (!view.dirRemove(
keylet::ownerDir(account),
(*sleMpt)[sfOwnerNode],
sleMpt->key(),
false))
return tecINTERNAL; // LCOV_EXCL_LINE
adjustOwnerCount(view, sleAcct, -1, journal);
view.erase(sleMpt);
return tesSUCCESS;
}
// A potential holder wants to authorize/hold an MPT, the ledger must:
// - add the new mptokenKey to the owner directory
// - create the MPToken object for the holder
// The reserve that is required to create the MPToken. Note
// that although the reserve increases with every item
// an account owns, in the case of MPTokens we only
// *enforce* a reserve if the user owns more than two
// items. This is similar to the reserve requirements of trust lines.
std::uint32_t const uOwnerCount = sleAcct->getFieldU32(sfOwnerCount);
XRPAmount const reserveCreate(
(uOwnerCount < 2) ? XRPAmount(beast::zero)
: view.fees().accountReserve(uOwnerCount + 1));
if (priorBalance < reserveCreate)
return tecINSUFFICIENT_RESERVE;
auto const mptokenKey = keylet::mptoken(mptIssuanceID, account);
auto mptoken = std::make_shared<SLE>(mptokenKey);
if (auto ter = dirLink(view, account, mptoken))
return ter; // LCOV_EXCL_LINE
(*mptoken)[sfAccount] = account;
(*mptoken)[sfMPTokenIssuanceID] = mptIssuanceID;
(*mptoken)[sfFlags] = 0;
view.insert(mptoken);
// Update owner count.
adjustOwnerCount(view, sleAcct, 1, journal);
return tesSUCCESS;
}
auto const sleMptIssuance = view.read(keylet::mptIssuance(mptIssuanceID));
if (!sleMptIssuance)
return tecINTERNAL;
// If the account that submitted this tx is the issuer of the MPT
// Note: `account_` is issuer's account
// `holderID` is holder's account
if (account != (*sleMptIssuance)[sfIssuer])
return tecINTERNAL;
auto const sleMpt = view.peek(keylet::mptoken(mptIssuanceID, *holderID));
if (!sleMpt)
return tecINTERNAL;
std::uint32_t const flagsIn = sleMpt->getFieldU32(sfFlags);
std::uint32_t flagsOut = flagsIn;
// Issuer wants to unauthorize the holder, unset lsfMPTAuthorized on
// their MPToken
if (flags & tfMPTUnauthorize)
flagsOut &= ~lsfMPTAuthorized;
// Issuer wants to authorize a holder, set lsfMPTAuthorized on their
// MPToken
else
flagsOut |= lsfMPTAuthorized;
if (flagsIn != flagsOut)
sleMpt->setFieldU32(sfFlags, flagsOut);
view.update(sleMpt);
return tesSUCCESS;
}
TER
@@ -1418,13 +1520,14 @@ removeEmptyHolding(
if (mptoken->at(sfMPTAmount) != 0)
return tecHAS_OBLIGATIONS;
return MPTokenAuthorize::authorize(
return authorizeMPToken(
view,
{}, // priorBalance
mptID,
accountID,
journal,
{.priorBalance = {},
.mptIssuanceID = mptID,
.account = accountID,
.flags = tfMPTUnauthorize});
tfMPTUnauthorize // flags
);
}
TER
@@ -2497,15 +2600,12 @@ enforceMPTokenAuthorization(
XRPL_ASSERT(
maybeDomainID.has_value() && sleToken == nullptr,
"ripple::enforceMPTokenAuthorization : new MPToken for domain");
if (auto const err = MPTokenAuthorize::authorize(
if (auto const err = authorizeMPToken(
view,
j,
{
.priorBalance = priorBalance,
.mptIssuanceID = mptIssuanceID,
.account = account,
.flags = 0,
});
priorBalance, // priorBalance
mptIssuanceID, // mptIssuanceID
account, // account
j);
!isTesSuccess(err))
return err;

View File

@@ -72,6 +72,7 @@ Previous-Ledger: q4aKbP7sd5wv+EXArwCmQiWZhq9AwBl2p/hCtpGJNsc=
##### Example HTTP Upgrade Response (Success)
```
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
@@ -101,9 +102,9 @@ Content-Type: application/json
#### Standard Fields
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `User-Agent` | :heavy_check_mark: | |
The `User-Agent` field indicates the version of the software that the
peer that is making the HTTP request is using. No semantic meaning is
@@ -112,9 +113,9 @@ specify the version of the software that is used.
See [RFC2616 &sect;14.43](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.43).
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Server` | | :heavy_check_mark: |
The `Server` field indicates the version of the software that the
peer that is processing the HTTP request is using. No semantic meaning is
@@ -123,18 +124,18 @@ specify the version of the software that is used.
See [RFC2616 &sect;14.38](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.38).
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Connection` | :heavy_check_mark: | :heavy_check_mark: |
The `Connection` field should have a value of `Upgrade` to indicate that a
request to upgrade the connection is being performed.
See [RFC2616 &sect;14.10](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.10).
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Upgrade` | :heavy_check_mark: | :heavy_check_mark: |
The `Upgrade` field is part of the standard connection upgrade mechanism and
must be present in both requests and responses. It is used to negotiate the
@@ -155,11 +156,12 @@ equal to 2 and the minor is greater than or equal to 0.
See [RFC 2616 &sect;14.42](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.42)
#### Custom Fields
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Connect-As` | :heavy_check_mark: | :heavy_check_mark: |
The mandatory `Connect-As` field is used to specify the type of connection
that is being requested.
@@ -173,9 +175,10 @@ elements specified in the request. If a server processing a request does not
recognize any of the connection types, the request should fail with an
appropriate HTTP error code (e.g. by sending an HTTP 400 "Bad Request" response).
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Remote-IP` | :white_check_mark: | :white_check_mark: |
The optional `Remote-IP` field contains the string representation of the IP
address of the remote end of the connection as seen from the peer that is
@@ -184,9 +187,10 @@ sending the field.
By observing values of this field from a sufficient number of different
servers, a peer making outgoing connections can deduce its own IP address.
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Local-IP` | :white_check_mark: | :white_check_mark: |
The optional `Local-IP` field contains the string representation of the IP
address that the peer sending the field believes to be its own.
@@ -194,9 +198,10 @@ address that the peer sending the field believes to be its own.
Servers receiving this field can detect IP address mismatches, which may
indicate a potential man-in-the-middle attack.
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Network-ID` | :white_check_mark: | :white_check_mark: |
The optional `Network-ID` can be used to identify to which of several
[parallel networks](https://xrpl.org/parallel-networks.html) the server
@@ -212,9 +217,10 @@ If a server configured to join one network receives a connection request from a
server configured to join another network, the request should fail with an
appropriate HTTP error code (e.g. by sending an HTTP 400 "Bad Request" response).
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Network-Time` | :white_check_mark: | :white_check_mark: |
The optional `Network-Time` field reports the current [time](https://xrpl.org/basic-data-types.html#specifying-time)
according to the sender's internal clock.
@@ -226,18 +232,20 @@ each other with an appropriate HTTP error code (e.g. by sending an HTTP 400
It is highly recommended that servers synchronize their clocks using time
synchronization software. For more on this topic, please visit [ntp.org](http://www.ntp.org/).
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Public-Key` | :heavy_check_mark: | :heavy_check_mark: |
The mandatory `Public-Key` field identifies the sending server's public key,
encoded in base58 using the standard encoding for node public keys.
See: <https://xrpl.org/base58-encodings.html>
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Server-Domain` | :white_check_mark: | :white_check_mark: |
The optional `Server-Domain` field allows a server to report the domain that
it is operating under. The value is configured by the server administrator in
@@ -251,9 +259,10 @@ under the specified domain and locating the public key of this server under the
Sending a malformed domain will prevent a connection from being established.
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Session-Signature` | :heavy_check_mark: | :heavy_check_mark: |
The `Session-Signature` field is mandatory and is used to secure the peer link
against certain types of attack. For more details see "Session Signature" below.
@@ -263,35 +272,36 @@ should support both **Base64** and **HEX** encoding for this value.
For more details on this field, please see **Session Signature** below.
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Crawl` | :white_check_mark: | :white_check_mark: |
The optional `Crawl` field can be used by a server to indicate whether peers
should include it in crawl reports.
The field can take two values:
- **`Public`**: The server's IP address and port should be included in crawl
reports.
- **`Private`**: The server's IP address and port should not be included in
crawl reports. _This is the default, if the field is omitted._
For more on the Peer Crawler, please visit <https://xrpl.org/peer-crawler.html>.
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Closed-Ledger` | :white_check_mark: | :white_check_mark: |
If present, identifies the hash of the last ledger that the sending server
considers to be closed.
The value is encoded as **HEX**, but implementations should support both
**Base64** and **HEX** encoding for this value for legacy purposes.
| Field Name | Request | Response |
|--------------------- |:-----------------: |:-----------------: |
| `Previous-Ledger` | :white_check_mark: | :white_check_mark: |
If present, identifies the hash of the parent ledger that the sending server
considers to be closed.
@@ -307,6 +317,7 @@ and values in both requests and responses.
Implementations should not reject requests because of the presence of fields
that they do not understand.
### Session Signature
Even for SSL/TLS encrypted connections, it is possible for an attacker to mount
transferred between A and B and will not be able to intelligently tamper with the
message stream between Alice and Bob, although she may still be able to inject
delays or terminate the link.
## Peer-to-Peer Traffic Routing
### Squelching
Validator Squelching is a network feature that reduces redundant message traffic by intelligently selecting a small subset of peers to listen to for each validator. Messages from non-selected peers are temporarily ignored, or "squelched." This process significantly cuts down on processing overhead and provides dynamic fault tolerance by allowing the network to ignore misbehaving peers without a permanent ban. The system continuously re-evaluates peer performance to adapt to changing network conditions.
### Components
The squelching architecture is built on five key classes:
- `SquelchStore.h`: A low-level, timed key-value store that maps a validator to its squelch expiration timestamp.
- `Slot.h/Slot`: Manages the state for a single validator, tracking all peers that relay its messages. It operates in a Counting state to gather peer performance data and a Selected state after choosing the best peers and squelching the rest. It handles peer disconnections and idleness to keep the selection optimal.
- `Slot.h/Slots`: The central container that manages all active Slot instances. It applies different policies for trusted and untrusted validators, evaluates candidates for the limited untrusted slots, and runs periodic cleanup routines.
- `OverlayImpl.h`: Integrates the squelching system with the network, capturing events and dispatching them to the Slots container in a thread-safe manner.
- `PeerImp.h`: Handles squelch messages and calls `OverlayImpl` when it receives proposal or validation messages.
### Component Dependency
The component dependencies follow a clear hierarchy from the network layer down to individual peers:
- `OverlayImpl`: The top-level component that owns a single instance of Slots and the currently connected peers.
- `Slots`: This central orchestrator owns and manages a collection of many Slot instances.
- `Slot`: Each Slot represents a single validator and manages the state of all PeerImp instances that relay messages for it.
- `PeerImp`: Represents a connected peer and owns its own instance of SquelchStore to manage its local squelch state for various validators.
### The Squelching Lifecycle
When a message from a validator arrives, it is dispatched to the appropriate Slot. The Slot, initially in a Counting state, tracks message volume from each peer. Once enough data is gathered, it triggers a selection, randomly choosing a small number of the best-performing peers. The Slot then instructs a SquelchHandler to squelch all non-selected peers for a calculated duration and transitions to a Selected state. The system continuously monitors for network changes, such as a selected peer disconnecting, which causes the Slot to reset to the Counting state and begin a new evaluation.
### Trusted vs. Untrusted Validators
The system applies different policies based on validator trust status:
- **Trusted Validators**: Are granted a Slot immediately to optimize traffic from known-good sources.
- **Untrusted Validators**: Are handled more cautiously, especially when the Enhanced Squelching feature is enabled. They must compete for a fixed number of limited slots by first proving their reliability in a "consideration pool." Validators that fail to gain a slot or become idle are aggressively squelched across all peers. This can also be triggered by a network-wide consensus to ignore a specific untrusted validator.
This dual-policy approach optimizes trusted traffic while robustly protecting the network from potentially malicious or unknown validators.
## XRP Ledger Clustering
A cluster consists of more than one XRP Ledger server under common
administration that share load information, distribute cryptography
operations, and provide greater response consistency.
@@ -409,7 +378,7 @@ Cluster nodes share information about their internal load status. Cluster
nodes do not have to verify the cryptographic signatures on messages
received from other cluster nodes.
### Configuration
A server's public key can be determined from the output of the `server_info`
command. The key is in the `pubkey_node` value, and is a text string
@@ -435,7 +404,7 @@ New spokes can be added as follows:
- Restart each hub, one by one
- Restart the spoke
### Transaction Behavior
When a transaction is received from a cluster member, several normal checks
are bypassed:
@@ -451,7 +420,7 @@ does not meet its current relay fee. It is preferable to keep the cluster
in agreement and permit confirmation from one cluster member to more
reliably indicate the transaction's acceptance by the cluster.
### Server Load Information
Cluster members exchange information on their server's load level. The load
level is essentially the amount by which the normal fee levels are multiplied
@@ -462,7 +431,7 @@ fee, is the highest of its local load level, the network load level, and the
cluster load level. The cluster load level is the median load level reported
by a cluster member.
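The rule described here — the effective multiplier is the highest of the local, network, and cluster load levels, with the cluster level being the median of levels reported by cluster members — can be sketched as follows. The helper functions are hypothetical illustrations, not rippled code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch: median of the load levels reported by cluster members.
inline std::uint32_t
clusterLoadLevel(std::vector<std::uint32_t> levels)
{
    if (levels.empty())
        return 0;
    auto const mid = levels.begin() + levels.size() / 2;
    std::nth_element(levels.begin(), mid, levels.end());
    return *mid;  // upper median for an even count
}

// The level a server actually applies to its fees.
inline std::uint32_t
effectiveLoadLevel(
    std::uint32_t local,
    std::uint32_t network,
    std::vector<std::uint32_t> const& clusterLevels)
{
    return std::max({local, network, clusterLoadLevel(clusterLevels)});
}
```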
### Gossip
## Gossip ##
Gossip is the mechanism by which cluster members share information about
endpoints (typically IPv4 addresses) that are imposing unusually high load
@@ -477,7 +446,7 @@ the servers in a cluster. With gossip, if he chooses to use the same IP
address to impose load on more than one server, he will find that the amount
of load he can impose before getting disconnected is much lower.
### Monitoring
## Monitoring ##
The `peers` command will report on the status of the cluster. The `cluster`
object will contain one entry for each member of the cluster (either configured


@@ -21,7 +21,6 @@
#define RIPPLE_OVERLAY_REDUCERELAYCOMMON_H_INCLUDED
#include <chrono>
#include <cstdint>
namespace ripple {
@@ -40,31 +39,19 @@ static constexpr auto MIN_UNSQUELCH_EXPIRE = std::chrono::seconds{300};
static constexpr auto MAX_UNSQUELCH_EXPIRE_DEFAULT = std::chrono::seconds{600};
static constexpr auto SQUELCH_PER_PEER = std::chrono::seconds(10);
static constexpr auto MAX_UNSQUELCH_EXPIRE_PEERS = std::chrono::seconds{3600};
// No message received threshold before identifying a peer as idled
static constexpr auto PEER_IDLED = std::chrono::seconds{8};
static constexpr auto IDLED = std::chrono::seconds{8};
// Message count threshold to start selecting peers as the source
// of messages from the validator. We add peers who reach
// MIN_MESSAGE_THRESHOLD to considered pool once MAX_SELECTED_PEERS
// reach MAX_MESSAGE_THRESHOLD.
static constexpr uint16_t MIN_MESSAGE_THRESHOLD = 19;
static constexpr uint16_t MAX_MESSAGE_THRESHOLD = 20;
// Max selected peers to choose as the source of messages from validator
static constexpr uint16_t MAX_SELECTED_PEERS = 5;
// Max number of untrusted slots the server will maintain
static constexpr uint16_t MAX_UNTRUSTED_SLOTS = 30;
// The maximum number of seconds an untrusted validator can go without sending a
// validation message. After this, a validator may be squelched
static constexpr auto MAX_UNTRUSTED_VALIDATOR_IDLE = std::chrono::seconds{30};
// Wait before reduce-relay feature is enabled on boot up to let
// the server establish peer connections
static constexpr auto WAIT_ON_BOOTUP = std::chrono::minutes{10};
// Maximum size of the aggregated transaction hashes per peer.
// Once we get to high tps throughput, this cap will prevent
// TMTransactions from exceeding the current protocol message
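The counting scheme that `MIN_MESSAGE_THRESHOLD`, `MAX_MESSAGE_THRESHOLD`, and `MAX_SELECTED_PEERS` describe can be illustrated with a simplified model. The `CountingSlot` type below is hypothetical; the real logic lives in `reduce_relay::Slot`:

```cpp
#include <cstdint>
#include <map>
#include <set>

// Simplified sketch: a peer enters the considered pool once it delivers
// more than MIN_MESSAGE_THRESHOLD unique messages, and source selection
// is triggered once MAX_SELECTED_PEERS peers exceed MAX_MESSAGE_THRESHOLD.
struct CountingSlot
{
    static constexpr std::uint16_t minThreshold = 19;  // MIN_MESSAGE_THRESHOLD
    static constexpr std::uint16_t maxThreshold = 20;  // MAX_MESSAGE_THRESHOLD
    static constexpr std::uint16_t maxSelected = 5;    // MAX_SELECTED_PEERS

    std::map<int, std::uint16_t> counts;  // peer id -> unique message count
    std::set<int> considered;             // candidate source peers
    std::uint16_t reachedThreshold = 0;

    // Count one unique message from `peer`; returns true once enough
    // peers have crossed the threshold to run source selection.
    bool onMessage(int peer)
    {
        auto& c = counts[peer];
        if (++c > minThreshold)
            considered.insert(peer);
        if (c == maxThreshold + 1)
            ++reachedThreshold;
        return reachedThreshold >= maxSelected;
    }
};
```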

File diff suppressed because it is too large.

src/xrpld/overlay/Squelch.h (new file, 129 lines)

@@ -0,0 +1,129 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2020 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_OVERLAY_SQUELCH_H_INCLUDED
#define RIPPLE_OVERLAY_SQUELCH_H_INCLUDED
#include <xrpld/overlay/ReduceRelayCommon.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/protocol/PublicKey.h>
#include <algorithm>
#include <chrono>
#include <functional>
namespace ripple {
namespace reduce_relay {
/** Maintains squelching of relaying messages from validators */
template <typename clock_type>
class Squelch
{
using time_point = typename clock_type::time_point;
public:
explicit Squelch(beast::Journal journal) : journal_(journal)
{
}
virtual ~Squelch() = default;
/** Squelch validation/proposal relaying for the validator
* @param validator The validator's public key
* @param squelchDuration Squelch duration in seconds
* @return false if invalid squelch duration
*/
bool
addSquelch(
PublicKey const& validator,
std::chrono::seconds const& squelchDuration);
/** Remove the squelch
* @param validator The validator's public key
*/
void
removeSquelch(PublicKey const& validator);
/** Remove expired squelch
* @param validator Validator's public key
* @return true if removed or doesn't exist, false if still active
*/
bool
expireSquelch(PublicKey const& validator);
private:
/** Maintains the list of squelched relaying to downstream peers.
* Expiration time is included in the TMSquelch message. */
hash_map<PublicKey, time_point> squelched_;
beast::Journal const journal_;
};
template <typename clock_type>
bool
Squelch<clock_type>::addSquelch(
PublicKey const& validator,
std::chrono::seconds const& squelchDuration)
{
if (squelchDuration >= MIN_UNSQUELCH_EXPIRE &&
squelchDuration <= MAX_UNSQUELCH_EXPIRE_PEERS)
{
squelched_[validator] = clock_type::now() + squelchDuration;
return true;
}
JLOG(journal_.error()) << "squelch: invalid squelch duration "
<< squelchDuration.count();
// unsquelch if invalid duration
removeSquelch(validator);
return false;
}
template <typename clock_type>
void
Squelch<clock_type>::removeSquelch(PublicKey const& validator)
{
squelched_.erase(validator);
}
template <typename clock_type>
bool
Squelch<clock_type>::expireSquelch(PublicKey const& validator)
{
auto now = clock_type::now();
auto const& it = squelched_.find(validator);
if (it == squelched_.end())
return true;
else if (it->second > now)
return false;
// squelch expired
squelched_.erase(it);
return true;
}
} // namespace reduce_relay
} // namespace ripple
#endif // RIPPLE_OVERLAY_SQUELCH_H_INCLUDED
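A rough standalone model of this class's expiry semantics is sketched below, using a manually advanced clock in place of rippled's `clock_type` and `std::string` keys in place of `PublicKey` (both are stand-ins for illustration; the duration bounds mirror `MIN_UNSQUELCH_EXPIRE` and `MAX_UNSQUELCH_EXPIRE_PEERS`):

```cpp
#include <chrono>
#include <map>
#include <string>

// Manually advanced clock so expiry can be exercised deterministically.
struct ManualClock
{
    using duration = std::chrono::seconds;
    using time_point = std::chrono::time_point<ManualClock, duration>;
    static inline time_point current{};
    static time_point now() { return current; }
};

// Simplified mirror of Squelch: add/expire with bounded durations.
template <typename clock_type>
class SquelchSketch
{
    std::map<std::string, typename clock_type::time_point> squelched_;

public:
    bool addSquelch(std::string const& validator, std::chrono::seconds d)
    {
        using namespace std::chrono;
        if (d < seconds{300} || d > seconds{3600})  // cf. MIN/MAX bounds
        {
            squelched_.erase(validator);  // unsquelch on invalid duration
            return false;
        }
        squelched_[validator] = clock_type::now() + d;
        return true;
    }

    // true if no active squelch remains (expired entries are erased)
    bool expireSquelch(std::string const& validator)
    {
        auto it = squelched_.find(validator);
        if (it == squelched_.end())
            return true;
        if (it->second > clock_type::now())
            return false;
        squelched_.erase(it);  // squelch expired
        return true;
    }
};
```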


@@ -1,146 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_OVERLAY_SQUELCH_H_INCLUDED
#define RIPPLE_OVERLAY_SQUELCH_H_INCLUDED
#include <xrpl/basics/Log.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/protocol/PublicKey.h>
#include <chrono>
namespace ripple {
namespace reduce_relay {
/**
* @brief Manages the temporary suppression ("squelching") of validators.
*
* @details This class provides a mechanism to temporarily ignore messages from
* specific validators for a defined duration. It tracks which
* validators are currently squelched and handles the
* expiration of the squelch period. The use of an
* abstract clock allows for deterministic testing of time-based
* squelch logic.
*/
class SquelchStore
{
using clock_type = beast::abstract_clock<std::chrono::steady_clock>;
using time_point = typename clock_type::time_point;
public:
explicit SquelchStore(beast::Journal journal, clock_type& clock)
: journal_(journal), clock_(clock)
{
}
virtual ~SquelchStore() = default;
/**
* @brief Manages the squelch status of a validator.
*
* @details This function acts as the primary public interface for
* controlling a validator's squelch state. Based on the `squelch` flag, it
* either adds a new squelch entry for the specified duration or removes an
* existing one. This function also clears all expired squelches.
*
* @param validator The public key of the validator to manage.
* @param squelch If `true`, the validator will be squelched. If `false`,
* any existing squelch will be removed.
* @param duration The duration in seconds for the squelch. This value is
* only used when `squelch` is `true`.
*/
void
handleSquelch(
PublicKey const& validator,
bool squelch,
std::chrono::seconds duration);
/**
* @brief Checks if a validator is currently squelched.
*
* @details This function checks if the validator's squelch has expired.
*
* @param validator The public key of the validator to check.
* @return `true` if a non-expired squelch entry exists for the
* validator, `false` otherwise.
*/
bool
isSquelched(PublicKey const& validator) const;
// The following field is protected for unit tests.
protected:
/**
* @brief The core data structure mapping a validator's public key to the
* time point when their squelch expires.
*/
hash_map<PublicKey, time_point> squelched_;
private:
/**
* @brief Internal implementation to add or update a squelch entry.
*
* @details Calculates the expiration time point by adding the duration to
* the current time and inserts or overwrites the entry for the validator in
* the `squelched_` map.
*
* @param validator The public key of the validator to squelch.
* @param squelchDuration The duration for which the validator should be
* squelched.
*/
void
add(PublicKey const& validator,
std::chrono::seconds const& squelchDuration);
/**
* @brief Internal implementation to remove a squelch entry.
*
* @details Erases the squelch entry for the given validator from the
* `squelched_` map, effectively unsquelching it.
*
* @param validator The public key of the validator to unsquelch.
*/
void
remove(PublicKey const& validator);
/**
* @brief Internal implementation to remove all expired squelches.
*
* @details Erases all squelch entries whose expiration is in the past.
*/
void
removeExpired();
/**
* @brief The logging interface used by this store.
*/
beast::Journal const journal_;
/**
* @brief A reference to the clock used for all time-based operations,
* allowing for deterministic testing via dependency injection.
*/
clock_type& clock_;
};
} // namespace reduce_relay
} // namespace ripple
#endif // RIPPLED_SQUELCH_H


@@ -24,12 +24,10 @@
#include <xrpld/app/rdb/RelationalDatabase.h>
#include <xrpld/app/rdb/Wallet.h>
#include <xrpld/overlay/Cluster.h>
#include <xrpld/overlay/Peer.h>
#include <xrpld/overlay/detail/ConnectAttempt.h>
#include <xrpld/overlay/detail/OverlayImpl.h>
#include <xrpld/overlay/detail/PeerImp.h>
#include <xrpld/overlay/detail/TrafficCount.h>
#include <xrpld/overlay/detail/Tuning.h>
#include <xrpld/overlay/predicates.h>
#include <xrpld/peerfinder/make_Manager.h>
#include <xrpld/rpc/handlers/GetCounts.h>
#include <xrpld/rpc/json_body.h>
@@ -41,7 +39,9 @@
#include <xrpl/protocol/STTx.h>
#include <xrpl/server/SimpleWriter.h>
#include <functional>
#include <boost/algorithm/string/predicate.hpp>
#include "xrpld/overlay/detail/TrafficCount.h"
namespace ripple {
@@ -142,7 +142,7 @@ OverlayImpl::OverlayImpl(
, m_resolver(resolver)
, next_id_(1)
, timer_count_(0)
, slots_(app.logs(), *this, app.config(), stopwatch())
, slots_(app.logs(), *this, app.config())
, m_stats(
std::bind(&OverlayImpl::collect_metrics, this),
collector,
@@ -578,22 +578,17 @@ OverlayImpl::stop()
void
OverlayImpl::onWrite(beast::PropertyStream::Map& stream)
{
beast::PropertyStream::Set set("traffic", stream);
auto const stats = m_traffic.getCounts();
for (auto const& pair : stats)
{
beast::PropertyStream::Set set("traffic", stream);
auto const stats = m_traffic.getCounts();
for (auto const& pair : stats)
{
beast::PropertyStream::Map item(set);
item["category"] = pair.second.name;
item["bytes_in"] = std::to_string(pair.second.bytesIn.load());
item["messages_in"] = std::to_string(pair.second.messagesIn.load());
item["bytes_out"] = std::to_string(pair.second.bytesOut.load());
item["messages_out"] =
std::to_string(pair.second.messagesOut.load());
}
beast::PropertyStream::Map item(set);
item["category"] = pair.second.name;
item["bytes_in"] = std::to_string(pair.second.bytesIn.load());
item["messages_in"] = std::to_string(pair.second.messagesIn.load());
item["bytes_out"] = std::to_string(pair.second.bytesOut.load());
item["messages_out"] = std::to_string(pair.second.messagesOut.load());
}
slots_.onWrite(stream);
}
//------------------------------------------------------------------------------
@@ -1415,24 +1410,12 @@ OverlayImpl::squelch(
}
}
void
OverlayImpl::squelchAll(
PublicKey const& validator,
uint32_t squelchDuration,
std::function<void(Peer::id_t)> report)
{
for_each([&](std::shared_ptr<PeerImp>&& p) {
p->send(makeSquelchMessage(validator, true, squelchDuration));
report(p->id());
});
}
void
OverlayImpl::updateSlotAndSquelch(
uint256 const& key,
PublicKey const& validator,
std::set<Peer::id_t>&& peers,
bool isTrusted)
protocol::MessageType type)
{
if (!slots_.baseSquelchReady())
return;
@@ -1445,18 +1428,14 @@ OverlayImpl::updateSlotAndSquelch(
key = key,
validator = validator,
peers = std::move(peers),
isTrusted]() mutable {
updateSlotAndSquelch(
key, validator, std::move(peers), isTrusted);
type]() mutable {
updateSlotAndSquelch(key, validator, std::move(peers), type);
});
for (auto id : peers)
slots_.updateSlotAndSquelch(
key,
validator,
id,
[&]() { reportInboundTraffic(TrafficCount::squelch_ignored, 0); },
isTrusted);
slots_.updateSlotAndSquelch(key, validator, id, type, [&]() {
reportInboundTraffic(TrafficCount::squelch_ignored, 0);
});
}
void
@@ -1464,7 +1443,7 @@ OverlayImpl::updateSlotAndSquelch(
uint256 const& key,
PublicKey const& validator,
Peer::id_t peer,
bool isTrusted)
protocol::MessageType type)
{
if (!slots_.baseSquelchReady())
return;
@@ -1473,64 +1452,15 @@ OverlayImpl::updateSlotAndSquelch(
return post(
strand_,
// Must capture copies of reference parameters (i.e. key, validator)
[this, key = key, validator = validator, peer, isTrusted]() {
updateSlotAndSquelch(key, validator, peer, isTrusted);
[this, key = key, validator = validator, peer, type]() {
updateSlotAndSquelch(key, validator, peer, type);
});
slots_.updateSlotAndSquelch(
key,
validator,
peer,
[&]() { reportInboundTraffic(TrafficCount::squelch_ignored, 0); },
isTrusted);
}
void
OverlayImpl::updateUntrustedValidatorSlot(
uint256 const& key,
PublicKey const& validator,
Peer::id_t peer)
{
if (!slots_.enhancedSquelchReady())
return;
if (!strand_.running_in_this_thread())
return post(
strand_,
// Must capture copies of reference parameters (i.e. key, validator)
[this, key = key, validator = validator, peer]() {
updateUntrustedValidatorSlot(key, validator, peer);
});
slots_.updateUntrustedValidatorSlot(key, validator, peer, [&]() {
slots_.updateSlotAndSquelch(key, validator, peer, type, [&]() {
reportInboundTraffic(TrafficCount::squelch_ignored, 0);
});
}
void
OverlayImpl::handleUntrustedSquelch(PublicKey const& validator)
{
if (!strand_.running_in_this_thread())
return post(
strand_,
std::bind(&OverlayImpl::handleUntrustedSquelch, this, validator));
auto count = 0;
// we can get the total number of peers with size(), however that would have
// to acquire another lock on peers. Instead, count the number of peers in
// the same loop, as we're already iterating all peers.
auto total = 0;
for_each([&](std::shared_ptr<PeerImp>&& p) {
++total;
if (p->isSquelched(validator))
++count;
});
// if majority of peers squelched the validator
if (count >= total - 1)
slots_.squelchUntrustedValidator(validator);
}
void
OverlayImpl::deletePeer(Peer::id_t id)
{


@@ -24,7 +24,6 @@
#include <xrpld/core/Job.h>
#include <xrpld/overlay/Message.h>
#include <xrpld/overlay/Overlay.h>
#include <xrpld/overlay/Peer.h>
#include <xrpld/overlay/Slot.h>
#include <xrpld/overlay/detail/Handshake.h>
#include <xrpld/overlay/detail/TrafficCount.h>
@@ -49,7 +48,6 @@
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <functional>
#include <memory>
#include <mutex>
#include <optional>
@@ -105,7 +103,7 @@ private:
boost::asio::io_service& io_service_;
std::optional<boost::asio::io_service::work> work_;
boost::asio::io_service::strand strand_;
std::recursive_mutex mutable mutex_; // VFALCO use std::mutex
mutable std::recursive_mutex mutex_; // VFALCO use std::mutex
std::condition_variable_any cond_;
std::weak_ptr<Timer> timer_;
boost::container::flat_map<Child*, std::weak_ptr<Child>> list_;
@@ -124,7 +122,7 @@ private:
std::atomic<uint64_t> peerDisconnects_{0};
std::atomic<uint64_t> peerDisconnectsCharges_{0};
reduce_relay::Slots slots_;
reduce_relay::Slots<UptimeClock> slots_;
// Transaction reduce-relay metrics
metrics::TxMetrics txMetrics_;
@@ -394,90 +392,35 @@ public:
return setup_.networkID;
}
/**
* @brief Processes a message from a validator received via multiple peers.
*
* @details This function serves as a thread-safe entry point to the
* squelching system.
*
* @param key The unique hash of the message.
* @param validator The public key of the validator.
* @param peers A set of peer IDs that relayed this message.
* @param isTrusted `true` if the message is from a trusted validator.
/** Updates message count for validator/peer. Sends TMSquelch if the number
* of messages for N peers reaches threshold T. A message is counted
* if a peer receives the message for the first time and if
* the message has been relayed.
* @param key Unique message's key
* @param validator Validator's public key
* @param peers Peers' id to update the slots for
* @param type Received protocol message type
*/
void
updateSlotAndSquelch(
uint256 const& key,
PublicKey const& validator,
std::set<Peer::id_t>&& peers,
bool isTrusted);
protocol::MessageType type);
/**
* @brief Processes a message from a validator received via a single peer.
*
* @details This function is a thread-safe entry point for handling a
* message from a single peer. It ensures the squelching feature is ready
* and serializes the call onto the `strand_`. It then invokes the
* underlying `Slots::updateSlotAndSquelch` method to process the message.
*
* @param key The unique hash of the message.
* @param validator The public key of the validator.
* @param peer The ID of the peer that relayed this message.
* @param isTrusted `true` if the message is from a trusted validator.
/** Overload to reduce allocation in case of single peer
*/
void
updateSlotAndSquelch(
uint256 const& key,
PublicKey const& validator,
Peer::id_t peer,
bool isTrusted);
protocol::MessageType type);
/**
* @brief Processes a message specifically for the untrusted validator slot
* logic.
*
* @details This function is the thread-safe entry point for the enhanced
* squelching feature, which manages a limited number of slots for
* untrusted validators. It ensures the feature is ready, posts the work to
* the `strand_`, and then calls the underlying
* `Slots::updateUntrustedValidatorSlot` to handle the slot admission and
* evaluation logic.
*
* @param key The unique hash of the message.
* @param validator The public key of the untrusted validator.
* @param peer The ID of the peer that relayed this message.
*/
void
updateUntrustedValidatorSlot(
uint256 const& key,
PublicKey const& validator,
Peer::id_t peer);
/**
* @brief Handles a squelch message for an untrusted validator.
*
* @details This function is called when this node receives a message
* indicating that a peer is squelching an untrusted validator. It
* tallies how many of its own connected peers have also squelched the
* validator. If a majority of peers agree, this node takes definitive local
* action by calling `Slots::squelchUntrustedValidator`, effectively joining
* the consensus to silence the validator.
*
* @param validator The public key of the untrusted validator being
* squelched.
*/
void
handleUntrustedSquelch(PublicKey const& validator);
/**
* @brief Handles the deletion of a peer from the overlay network.
*
* @details This function provides a thread-safe entry point for removing a
* peer. It ensures the operation is executed on the correct strand and
* then delegates the logic to `Slots::deletePeer`, which notifies all
* active slots about the peer's removal.
*
* @param id The ID of the peer to be deleted.
/** Called when the peer is deleted. If the peer was selected to be the
* source of messages from the validator then squelched peers have to be
* unsquelched.
* @param id Peer's id
*/
void
deletePeer(Peer::id_t id);
@@ -508,12 +451,6 @@ private:
Peer::id_t const id,
std::uint32_t squelchDuration) const override;
void
squelchAll(
PublicKey const& validator,
std::uint32_t squelchDuration,
std::function<void(Peer::id_t)>) override;
void
unsquelch(PublicKey const& validator, Peer::id_t id) const override;
@@ -540,7 +477,7 @@ private:
/** Handles validator list requests.
Using a /vl/<hex-encoded public key> URL, will retrieve the
latest validator list (or UNL) that this node has for that
latest valdiator list (or UNL) that this node has for that
public key, if the node trusts that public key.
@return true if the request was handled.
@@ -618,13 +555,8 @@ private:
void
sendTxQueue();
/**
* @brief Triggers the cleanup of idle peers and stale slots.
*
* @details This function is a thread-safe wrapper that executes
* `Slots::deleteIdlePeers` to perform the necessary cleanup of inactive
* peers, stale slots, and unviable validator candidates.
*/
/** Check if peers stopped relaying messages
* and if slots stopped receiving messages from the validator */
void
deleteIdlePeers();


@@ -29,7 +29,6 @@
#include <xrpld/app/misc/ValidatorList.h>
#include <xrpld/app/tx/apply.h>
#include <xrpld/overlay/Cluster.h>
#include <xrpld/overlay/ReduceRelayCommon.h>
#include <xrpld/overlay/detail/PeerImp.h>
#include <xrpld/overlay/detail/Tuning.h>
#include <xrpld/perflog/PerfLog.h>
@@ -45,7 +44,6 @@
#include <boost/beast/core/ostream.hpp>
#include <algorithm>
#include <chrono>
#include <memory>
#include <mutex>
#include <numeric>
@@ -97,7 +95,7 @@ PeerImp::PeerImp(
, publicKey_(publicKey)
, lastPingTime_(clock_type::now())
, creationTime_(clock_type::now())
, squelchStore_(app_.journal("SquelchStore"), stopwatch())
, squelch_(app_.journal("Squelch"))
, usage_(consumer)
, fee_{Resource::feeTrivialPeer, ""}
, slot_(slot)
@@ -248,8 +246,8 @@ PeerImp::send(std::shared_ptr<Message> const& m)
if (detaching_)
return;
auto const validator = m->getValidatorKey();
if (validator && isSquelched(*validator))
auto validator = m->getValidatorKey();
if (validator && !squelch_.expireSquelch(*validator))
{
overlay_.reportOutboundTraffic(
TrafficCount::category::squelch_suppressed,
@@ -267,7 +265,7 @@ PeerImp::send(std::shared_ptr<Message> const& m)
TrafficCount::category::total,
static_cast<int>(m->getBuffer(compressionEnabled_).size()));
auto const sendq_size = send_queue_.size();
auto sendq_size = send_queue_.size();
if (sendq_size < Tuning::targetSendQueue)
{
@@ -572,12 +570,6 @@ PeerImp::hasRange(std::uint32_t uMin, std::uint32_t uMax)
(uMax <= maxLedger_);
}
bool
PeerImp::isSquelched(PublicKey const& validator) const
{
return squelchStore_.isSquelched(validator);
}
//------------------------------------------------------------------------------
void
@@ -1707,6 +1699,21 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMProposeSet> const& m)
// suppression for 30 seconds to avoid doing a relatively expensive lookup
// every time a spam packet is received
PublicKey const publicKey{makeSlice(set.nodepubkey())};
auto const isTrusted = app_.validators().trusted(publicKey);
// If the operator has specified that untrusted proposals be dropped then
// this happens here, i.e. before further wasting CPU verifying the signature
// of an untrusted key
if (!isTrusted)
{
// report untrusted proposal messages
overlay_.reportInboundTraffic(
TrafficCount::category::proposal_untrusted,
Message::messageSize(*m));
if (app_.config().RELAY_UNTRUSTED_PROPOSALS == -1)
return;
}
uint256 const proposeHash{set.currenttxhash()};
uint256 const prevLedger{set.previousledger()};
@@ -1721,18 +1728,15 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMProposeSet> const& m)
publicKey.slice(),
sig);
auto const isTrusted = app_.validators().trusted(publicKey);
if (auto const& [added, relayed] =
if (auto [added, relayed] =
app_.getHashRouter().addSuppressionPeerWithStatus(suppression, id_);
!added)
{
// Count unique messages (Slots has its own 'HashRouter'), which a peer
// receives within IDLED seconds since the message has been relayed.
if (relayed &&
(stopwatch().now() - *relayed) < reduce_relay::PEER_IDLED)
if (relayed && (stopwatch().now() - *relayed) < reduce_relay::IDLED)
overlay_.updateSlotAndSquelch(
suppression, publicKey, id_, isTrusted);
suppression, publicKey, id_, protocol::mtPROPOSE_LEDGER);
// report duplicate proposal messages
overlay_.reportInboundTraffic(
@@ -1746,16 +1750,6 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMProposeSet> const& m)
if (!isTrusted)
{
overlay_.reportInboundTraffic(
TrafficCount::category::proposal_untrusted,
Message::messageSize(*m));
// If the operator has specified that untrusted proposals be dropped
// then this happens here, i.e. before further wasting CPU verifying the
// signature of an untrusted key
if (app_.config().RELAY_UNTRUSTED_PROPOSALS == -1)
return;
if (tracking_.load() == Tracking::diverged)
{
JLOG(p_journal_.debug())
@@ -2364,6 +2358,20 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMValidation> const& m)
auto const isTrusted =
app_.validators().trusted(val->getSignerPublic());
// If the operator has specified that untrusted validations be
// dropped then this happens here, i.e. before further wasting CPU
// verifying the signature of an untrusted key
if (!isTrusted)
{
// increase untrusted validations received
overlay_.reportInboundTraffic(
TrafficCount::category::validation_untrusted,
Message::messageSize(*m));
if (app_.config().RELAY_UNTRUSTED_VALIDATIONS == -1)
return;
}
auto key = sha512Half(makeSlice(m->validation()));
auto [added, relayed] =
@@ -2374,10 +2382,9 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMValidation> const& m)
// Count unique messages (Slots has its own 'HashRouter'), which a
// peer receives within IDLED seconds since the message has been
// relayed.
if (relayed &&
(stopwatch().now() - *relayed) < reduce_relay::PEER_IDLED)
if (relayed && (stopwatch().now() - *relayed) < reduce_relay::IDLED)
overlay_.updateSlotAndSquelch(
key, val->getSignerPublic(), id_, isTrusted);
key, val->getSignerPublic(), id_, protocol::mtVALIDATION);
// increase duplicate validations received
overlay_.reportInboundTraffic(
@@ -2388,25 +2395,6 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMValidation> const& m)
return;
}
// at this point the message is guaranteed to be unique
if (!isTrusted)
{
overlay_.reportInboundTraffic(
TrafficCount::category::validation_untrusted,
Message::messageSize(*m));
// If the operator has specified that untrusted validations be
// dropped then this happens here, i.e. before further wasting CPU
// verifying the signature of an untrusted key
// TODO: Deprecate RELAY_UNTRUSTED_VALIDATIONS config once enhanced
// squelching is the defacto routing algorithm.
if (app_.config().RELAY_UNTRUSTED_VALIDATIONS == -1)
return;
overlay_.updateUntrustedValidatorSlot(
key, val->getSignerPublic(), id_);
}
if (!isTrusted && (tracking_.load() == Tracking::diverged))
{
JLOG(p_journal_.debug())
@@ -2716,7 +2704,6 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMSquelch> const& m)
fee_.update(Resource::feeInvalidData, "squelch no pubkey");
return;
}
auto validator = m->validatorpubkey();
auto const slice{makeSlice(validator)};
if (!publicKeyType(slice))
@@ -2734,27 +2721,15 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMSquelch> const& m)
return;
}
auto duration = std::chrono::seconds{
m->has_squelchduration() ? m->squelchduration() : 0};
if (m->squelch() &&
(duration < reduce_relay::MIN_UNSQUELCH_EXPIRE ||
duration > reduce_relay::MAX_UNSQUELCH_EXPIRE_PEERS))
{
std::uint32_t duration =
m->has_squelchduration() ? m->squelchduration() : 0;
if (!m->squelch())
squelch_.removeSquelch(key);
else if (!squelch_.addSquelch(key, std::chrono::seconds{duration}))
fee_.update(Resource::feeInvalidData, "squelch duration");
return;
}
JLOG(p_journal_.debug())
<< "onMessage: TMSquelch " << (!m->squelch() ? "un" : "")
<< "squelch message; validator: " << slice << "peer: " << id()
<< " duration: " << duration.count();
squelchStore_.handleSquelch(key, m->squelch(), duration);
// if the squelch is for an untrusted validator
if (m->squelch() && !app_.validators().trusted(key))
overlay_.handleUntrustedSquelch(key);
<< "onMessage: TMSquelch " << slice << " " << id() << " " << duration;
}
//--------------------------------------------------------------------------
@@ -3038,7 +3013,7 @@ PeerImp::checkPropose(
peerPos.suppressionID(),
peerPos.publicKey(),
std::move(haveMessage),
isTrusted);
protocol::mtPROPOSE_LEDGER);
}
}
@@ -3074,7 +3049,7 @@ PeerImp::checkValidation(
key,
val->getSignerPublic(),
std::move(haveMessage),
val->isTrusted());
protocol::mtVALIDATION);
}
}
}


@@ -23,8 +23,7 @@
#include <xrpld/app/consensus/RCLCxPeerPos.h>
#include <xrpld/app/ledger/detail/LedgerReplayMsgHandler.h>
#include <xrpld/app/misc/HashRouter.h>
#include <xrpld/overlay/Peer.h>
#include <xrpld/overlay/SquelchStore.h>
#include <xrpld/overlay/Squelch.h>
#include <xrpld/overlay/detail/OverlayImpl.h>
#include <xrpld/overlay/detail/ProtocolVersion.h>
#include <xrpld/peerfinder/PeerfinderManager.h>
@@ -117,7 +116,7 @@ private:
clock_type::time_point lastPingTime_;
clock_type::time_point const creationTime_;
reduce_relay::SquelchStore squelchStore_;
reduce_relay::Squelch<UptimeClock> squelch_;
// Notes on thread locking:
//
@@ -441,13 +440,6 @@ public:
return txReduceRelayEnabled_;
}
/** Check if a given validator is squelched.
* @param validator Validator's public key
* @return true if squelch exists and it is not expired. False otherwise.
*/
bool
isSquelched(PublicKey const& validator) const;
private:
void
close();
@@ -688,7 +680,7 @@ PeerImp::PeerImp(
, publicKey_(publicKey)
, lastPingTime_(clock_type::now())
, creationTime_(clock_type::now())
, squelchStore_(app_.journal("SquelchStore"), stopwatch())
, squelch_(app_.journal("Squelch"))
, usage_(usage)
, fee_{Resource::feeTrivialPeer}
, slot_(std::move(slot))


@@ -1,722 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpld/overlay/Peer.h>
#include <xrpld/overlay/ReduceRelayCommon.h>
#include <xrpld/overlay/Slot.h>
#include <xrpl/basics/Log.h>
#include <xrpl/basics/UnorderedContainers.h>
#include <xrpl/basics/chrono.h>
#include <xrpl/basics/random.h>
#include <xrpl/beast/container/aged_unordered_map.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/beast/utility/PropertyStream.h>
#include <xrpl/protocol/PublicKey.h>
#include <chrono>
#include <cstddef>
#include <optional>
#include <sstream>
#include <string>
#include <vector>
namespace ripple {
namespace reduce_relay {
void
Slot::deleteIdlePeer(PublicKey const& validator)
{
using namespace std::chrono;
auto const now = clock_.now();
for (auto it = peers_.begin(); it != peers_.end();)
{
auto const& peer = it->second;
auto const id = it->first;
++it;
if (now - peer.lastMessage > reduce_relay::PEER_IDLED)
{
JLOG(journal_.trace())
<< "deleteIdlePeer: deleting idle peer "
<< formatLogMessage(validator, id)
<< " peer_state: " << to_string(peer.state)
<< " idle for: " << (now - peer.lastMessage).count();
deletePeer(validator, id, false);
}
}
}
void
Slot::update(
PublicKey const& validator,
Peer::id_t id,
ignored_squelch_callback report)
{
using namespace std::chrono;
auto const now = clock_.now();
auto const it = peers_.find(id);
// First message from this peer
if (it == peers_.end())
{
JLOG(journal_.trace())
<< "update: adding new slot" << formatLogMessage(validator, id);
peers_.emplace(std::make_pair(
id,
PeerInfo{
.state = PeerState::Counting,
.count = 0,
.expire = now,
.lastMessage = now,
.timesSelected = 0}));
initCounting();
return;
}
// Message from a peer with expired squelch
if (it->second.state == PeerState::Squelched && now > it->second.expire)
{
JLOG(journal_.trace())
<< "update: squelch expired" << formatLogMessage(validator, id);
it->second.state = PeerState::Counting;
it->second.lastMessage = now;
initCounting();
return;
}
auto& peer = it->second;
peer.lastMessage = now;
// report if we received a message from a squelched peer
if (peer.state == PeerState::Squelched)
report();
if (getState() != SlotState::Counting || peer.state == PeerState::Squelched)
return;
if (++peer.count > reduce_relay::MIN_MESSAGE_THRESHOLD)
considered_.insert(id);
if (peer.count == (reduce_relay::MAX_MESSAGE_THRESHOLD + 1))
++reachedThreshold_;
if (now - lastSelected_ > 2 * reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT)
{
JLOG(journal_.warn())
<< "update: resetting due to inactivity"
<< formatLogMessage(validator, id) << " inactive for: "
<< duration_cast<seconds>(now - lastSelected_).count();
initCounting();
return;
}
if (reachedThreshold_ == maxSelectedPeers_)
{
// Randomly select maxSelectedPeers_ peers from considered.
// Exclude peers that have been idling > PEER_IDLED -
// it's possible that deleteIdlePeer() has not been called yet.
// If the number of remaining peers != maxSelectedPeers_
// then reset the Counting state and let deleteIdlePeer() handle
// the idled peers.
std::unordered_set<Peer::id_t> selected;
std::stringstream str;
while (selected.size() != maxSelectedPeers_ && considered_.size() != 0)
{
auto const i =
considered_.size() == 1 ? 0 : rand_int(considered_.size() - 1);
auto const it = std::next(considered_.begin(), i);
auto const id = *it;
considered_.erase(it);
auto const& peersIt = peers_.find(id);
if (peersIt == peers_.end())
{
JLOG(journal_.error()) << "update: peer not found"
<< formatLogMessage(validator, id);
continue;
}
if (now - peersIt->second.lastMessage < reduce_relay::PEER_IDLED)
{
selected.insert(id);
str << id << " ";
}
}
if (selected.size() != maxSelectedPeers_)
{
JLOG(journal_.error()) << "update: selection failed"
<< formatLogMessage(validator, std::nullopt);
initCounting();
return;
}
lastSelected_ = now;
JLOG(journal_.trace()) << "update: selected peers "
<< formatLogMessage(validator, std::nullopt)
<< " peers: " << str.str();
XRPL_ASSERT(
peers_.size() >= maxSelectedPeers_,
"ripple::reduce_relay::Slot::update : minimum peers");
// squelch peers which are not selected and
// not already squelched
str.str("");  // reset contents; stringstream::clear() only resets error flags
for (auto& [k, v] : peers_)
{
v.count = 0;
if (selected.find(k) != selected.end())
{
v.state = PeerState::Selected;
++v.timesSelected;
}
else if (v.state != PeerState::Squelched)
{
if (journal_.trace())
str << k << " ";
v.state = PeerState::Squelched;
std::chrono::seconds duration =
getSquelchDuration(peers_.size() - maxSelectedPeers_);
v.expire = now + duration;
handler_.squelch(validator, k, duration.count());
}
}
JLOG(journal_.trace()) << "update: squelched peers "
<< formatLogMessage(validator, std::nullopt)
<< " peers: " << str.str();
considered_.clear();
reachedThreshold_ = 0;
state_ = SlotState::Selected;
}
}
std::chrono::seconds
Slot::getSquelchDuration(std::size_t npeers) const
{
using namespace std::chrono;
auto m = std::max(
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT,
seconds{reduce_relay::SQUELCH_PER_PEER * npeers});
if (m > reduce_relay::MAX_UNSQUELCH_EXPIRE_PEERS)
{
m = reduce_relay::MAX_UNSQUELCH_EXPIRE_PEERS;
JLOG(journal_.warn())
<< "getSquelchDuration: unexpected squelch duration " << npeers;
}
return seconds{
ripple::rand_int(reduce_relay::MIN_UNSQUELCH_EXPIRE / 1s, m / 1s)};
}
void
Slot::deletePeer(PublicKey const& validator, Peer::id_t id, bool erase)
{
auto it = peers_.find(id);
if (it == peers_.end())
return;
std::vector<Peer::id_t> toUnsquelch;
auto const now = clock_.now();
if (it->second.state == PeerState::Selected)
{
JLOG(journal_.debug())
<< "deletePeer: unsquelching selected peer "
<< formatLogMessage(validator, id)
<< " peer_state: " << to_string(it->second.state)
<< " considered: " << (considered_.find(id) != considered_.end())
<< " erase: " << erase;
for (auto& [k, v] : peers_)
{
if (v.state == PeerState::Squelched)
toUnsquelch.push_back(k);
v.state = PeerState::Counting;
v.count = 0;
v.expire = now;
}
considered_.clear();
reachedThreshold_ = 0;
state_ = SlotState::Counting;
}
else if (considered_.contains(id))
{
if (it->second.count > reduce_relay::MAX_MESSAGE_THRESHOLD)
--reachedThreshold_;
considered_.erase(id);
}
it->second.lastMessage = now;
it->second.count = 0;
if (erase)
peers_.erase(it);
// Must be after peers_.erase(it)
for (auto const& k : toUnsquelch)
handler_.unsquelch(validator, k);
}
void
Slot::onWrite(beast::PropertyStream::Map& stream) const
{
auto const now = clock_.now();
stream["state"] = to_string(getState());
stream["reachedThreshold"] = reachedThreshold_;
stream["considered"] = considered_.size();
stream["lastSelected"] =
duration_cast<std::chrono::seconds>(now - lastSelected_).count();
stream["isTrusted"] = isTrusted_;
beast::PropertyStream::Set peers("peers", stream);
for (auto const& [id, info] : peers_)
{
beast::PropertyStream::Map item(peers);
item["id"] = id;
item["count"] = info.count;
item["expire"] =
duration_cast<std::chrono::seconds>(info.expire - now).count();
item["lastMessage"] =
duration_cast<std::chrono::seconds>(now - info.lastMessage).count();
item["timesSelected"] = info.timesSelected;
item["state"] = to_string(info.state);
}
}
void
Slot::initCounting()
{
state_ = SlotState::Counting;
considered_.clear();
reachedThreshold_ = 0;
for (auto& [_, peer] : peers_)
{
(void)_;
peer.count = 0;
}
}
std::string
Slot::formatLogMessage(PublicKey const& validator, std::optional<Peer::id_t> id)
const
{
std::stringstream ss;
ss << "validator: " << toBase58(TokenType::NodePublic, validator);
if (id)
ss << " peer: " << *id;
ss << " trusted: " << isTrusted_;
ss << " slot_state: " << to_string(getState());
return ss.str();
}
// --------------------------------- Slots --------------------------------- //
bool
Slots::reduceRelayReady()
{
if (!reduceRelayReady_)
reduceRelayReady_ =
std::chrono::duration_cast<std::chrono::minutes>(
clock_.now().time_since_epoch()) > reduce_relay::WAIT_ON_BOOTUP;
return reduceRelayReady_;
}
void
Slots::registerSquelchedValidator(
PublicKey const& validatorKey,
Peer::id_t peerID)
{
peersWithSquelchedValidators_[validatorKey].insert(peerID);
}
bool
Slots::expireAndIsValidatorSquelched(PublicKey const& validatorKey)
{
beast::expire(
peersWithSquelchedValidators_,
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT);
return peersWithSquelchedValidators_.find(validatorKey) !=
peersWithSquelchedValidators_.end();
}
bool
Slots::expireAndIsPeerSquelched(
PublicKey const& validatorKey,
Peer::id_t peerID)
{
beast::expire(
peersWithSquelchedValidators_,
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT);
auto const it = peersWithSquelchedValidators_.find(validatorKey);
// if validator was not squelched, the peer was also not squelched
if (it == peersWithSquelchedValidators_.end())
return false;
// if a peer is found the squelch for it has not expired
return it->second.find(peerID) != it->second.end();
}
bool
Slots::expireAndIsPeerMessageCached(uint256 const& key, Peer::id_t id)
{
beast::expire(peersWithMessage_, reduce_relay::PEER_IDLED);
// return true if the ID was already cached, i.e. it was not newly inserted
if (key.isNonZero())
return !peersWithMessage_[key].insert(id).second;
return false;
}
void
Slots::updateSlotAndSquelch(
uint256 const& key,
PublicKey const& validator,
Peer::id_t id,
typename Slot::ignored_squelch_callback report,
bool isTrusted)
{
if (expireAndIsPeerMessageCached(key, id))
return;
// If we receive a message from a trusted validator, either update an
// existing slot or insert a new one. If we are not running enhanced
// squelching, also deduplicate untrusted validator messages
if (isTrusted || !enhancedSquelchEnabled_)
{
// if enhanced squelching is disabled, keep untrusted validator slots
// separately from trusted ones
auto it = (isTrusted ? trustedSlots_ : untrustedSlots_)
.emplace(std::make_pair(
validator,
Slot(
handler_,
logs_.journal("Slot"),
maxSelectedPeers_,
isTrusted,
clock_)))
.first;
it->second.update(validator, id, report);
}
else
{
auto it = untrustedSlots_.find(validator);
// If we received a message from a validator that is not
// selected, and is not squelched, there is nothing to do. It
// will be squelched later when `updateValidatorSlot` is called.
if (it == untrustedSlots_.end())
return;
it->second.update(validator, id, report);
}
}
void
Slots::updateUntrustedValidatorSlot(
uint256 const& key,
PublicKey const& validator,
Peer::id_t id,
typename Slot::ignored_squelch_callback report)
{
// We received a message from an already selected validator;
// we can ignore this message
if (untrustedSlots_.find(validator) != untrustedSlots_.end())
return;
// Did we receive a message from an already squelched validator?
// This could happen in a few cases:
// 1. The squelch for a particular peer happened to expire before our
// local squelch.
// 2. We received a message from a new peer that did not receive the
// squelch request.
// 3. The peer is ignoring our squelch request and we have not sent
// the control message in a while.
// In all of these cases we can only send the peer a squelch request again.
if (expireAndIsValidatorSquelched(validator))
{
if (!expireAndIsPeerSquelched(validator, id))
{
JLOG(journal_.debug())
<< "updateUntrustedValidatorSlot: received a message from a "
"squelched validator "
<< "validator: " << toBase58(TokenType::NodePublic, validator)
<< " peer: " << id;
registerSquelchedValidator(validator, id);
handler_.squelch(
validator,
id,
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT.count());
}
return;
}
// Do we have any available slots for additional untrusted validators?
// This could happen in a few cases:
// 1. We received a message from a new untrusted validator, but we
// are at capacity.
// 2. We received a message from a previously squelched validator.
// In all of these cases we send a squelch message to all peers.
// The validator may still be considered by the selector. However, it
// will eventually be cleaned up and squelched.
if (untrustedSlots_.size() == reduce_relay::MAX_UNTRUSTED_SLOTS)
{
JLOG(journal_.debug())
<< "updateUntrustedValidatorSlot: slots full squelching validator "
<< "validator: " << toBase58(TokenType::NodePublic, validator);
handler_.squelchAll(
validator,
MAX_UNSQUELCH_EXPIRE_DEFAULT.count(),
[&](Peer::id_t id) { registerSquelchedValidator(validator, id); });
return;
}
if (auto const v = updateConsideredValidator(validator, id))
{
JLOG(journal_.debug())
<< "updateUntrustedValidatorSlot: selected untrusted validator "
<< "validator: " << toBase58(TokenType::NodePublic, *v);
untrustedSlots_.emplace(std::make_pair(
*v,
Slot(
handler_,
logs_.journal("Slot"),
maxSelectedPeers_,
false,
clock_)));
}
// When we reach MAX_UNTRUSTED_SLOTS, don't explicitly clean them.
// Since we stop updating their counters, they will idle, and will be
// removed and squelched.
}
std::optional<PublicKey>
Slots::updateConsideredValidator(PublicKey const& validator, Peer::id_t peer)
{
auto const now = clock_.now();
auto it = consideredValidators_.find(validator);
if (it == consideredValidators_.end())
{
consideredValidators_.emplace(std::make_pair(
validator,
ValidatorInfo{
.count = 1,
.lastMessage = now,
.peers = {peer},
}));
return std::nullopt;
}
it->second.peers.insert(peer);
it->second.lastMessage = now;
++it->second.count;
// if the validator has not met selection criteria yet
if (it->second.count < reduce_relay::MAX_MESSAGE_THRESHOLD)
return std::nullopt;
auto const key = it->first;
consideredValidators_.erase(it);
return key;
}
void
Slots::squelchUntrustedValidator(PublicKey const& validator)
{
JLOG(journal_.info())
<< "squelchUntrustedValidator: squelching untrusted validator: "
<< toBase58(TokenType::NodePublic, validator);
// To prevent the validator from being reinserted, squelch it before
// removing it from consideration and from the slots
handler_.squelchAll(
validator, MAX_UNSQUELCH_EXPIRE_DEFAULT.count(), [&](Peer::id_t id) {
registerSquelchedValidator(validator, id);
});
consideredValidators_.erase(validator);
untrustedSlots_.erase(validator);
}
void
Slots::deletePeer(Peer::id_t id, bool erase)
{
auto const f = [&](slots_map& slots) {
for (auto& [validator, slot] : slots)
slot.deletePeer(validator, id, erase);
};
f(trustedSlots_);
f(untrustedSlots_);
}
void
Slots::deleteIdlePeers()
{
auto const f = [&](slots_map& slots) {
auto const now = clock_.now();
for (auto it = slots.begin(); it != slots.end();)
{
auto const& validator = it->first;
auto& slot = it->second;
slot.deleteIdlePeer(validator);
// delete the slot if the untrusted slot no longer meets the
// selection criteria or it has not been selected for a while
if ((!slot.isTrusted_ &&
slot.getPeers().size() < maxSelectedPeers_) ||
now - it->second.getLastSelected() >
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT)
{
JLOG(journal_.trace())
<< "deleteIdlePeers: deleting "
<< (slot.isTrusted_ ? "trusted" : "untrusted") << " slot "
<< toBase58(TokenType::NodePublic, it->first) << " reason: "
<< (now - it->second.getLastSelected() >
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT
? " inactive "
: " insufficient peers");
// If an untrusted validator slot idled (peers stopped sending
// messages for this validator), squelch it
if (!it->second.isTrusted_)
handler_.squelchAll(
it->first,
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT.count(),
[&](Peer::id_t id) {
registerSquelchedValidator(it->first, id);
});
it = slots.erase(it);
}
else
++it;
}
};
f(trustedSlots_);
f(untrustedSlots_);
// Remove and squelch all validators that the selector deemed unsuitable.
// There might be some good validators in this set that "lapsed"; however,
// since these are untrusted validators, we are not concerned
for (auto const& validator : cleanConsideredValidators())
handler_.squelchAll(
validator,
reduce_relay::MAX_UNSQUELCH_EXPIRE_DEFAULT.count(),
[&](Peer::id_t id) { registerSquelchedValidator(validator, id); });
}
std::vector<PublicKey>
Slots::cleanConsideredValidators()
{
auto const now = clock_.now();
std::vector<PublicKey> keys;
std::stringstream ss;
for (auto it = consideredValidators_.begin();
it != consideredValidators_.end();)
{
if (now - it->second.lastMessage >
reduce_relay::MAX_UNTRUSTED_VALIDATOR_IDLE)
{
keys.push_back(it->first);
ss << " " << toBase58(TokenType::NodePublic, it->first);
it = consideredValidators_.erase(it);
}
// The validator idled for some reason; reset its progress
else if (now - it->second.lastMessage > reduce_relay::PEER_IDLED)
{
it->second.reset();
++it;
}
else
++it;
}
if (keys.size() > 0)
{
JLOG(journal_.info())
<< "cleanConsideredValidators: removed considered validators "
<< ss.str();
}
return keys;
}
void
Slots::onWrite(beast::PropertyStream::Map& stream) const
{
auto const writeSlot = [](beast::PropertyStream::Set& set,
hash_map<PublicKey, Slot> const& slots) {
for (auto const& [validator, slot] : slots)
{
beast::PropertyStream::Map item(set);
item["validator"] = toBase58(TokenType::NodePublic, validator);
slot.onWrite(item);
}
};
beast::PropertyStream::Map slots("slots", stream);
{
beast::PropertyStream::Set set("trusted", slots);
writeSlot(set, trustedSlots_);
}
{
beast::PropertyStream::Set set("untrusted", slots);
writeSlot(set, untrustedSlots_);
}
{
beast::PropertyStream::Set set("considered", slots);
auto const now = clock_.now();
for (auto const& [validator, info] : consideredValidators_)
{
beast::PropertyStream::Map item(set);
item["validator"] = toBase58(TokenType::NodePublic, validator);
item["lastMessage"] =
std::chrono::duration_cast<std::chrono::seconds>(
now - info.lastMessage)
.count();
item["messageCount"] = info.count;
item["peers"] = info.peers.size();
}
}
}
} // namespace reduce_relay
} // namespace ripple


@@ -1,101 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpld/overlay/ReduceRelayCommon.h>
#include <xrpld/overlay/SquelchStore.h>
#include <xrpl/basics/Log.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/protocol/PublicKey.h>
#include <chrono>
#include <unordered_map>
namespace ripple {
namespace reduce_relay {
void
SquelchStore::handleSquelch(
PublicKey const& validator,
bool squelch,
std::chrono::seconds duration)
{
// Remove all expired squelches. This call lives here because it is on the
// least critical execution path and avoids the need for periodic cleanup
// calls.
removeExpired();
if (squelch)
{
// This should never trigger. The squelch duration is validated in
// PeerImp::onMessage(TMSquelch). However, if an invalid duration is
// somehow passed, log it as an error
if ((duration < reduce_relay::MIN_UNSQUELCH_EXPIRE ||
duration > reduce_relay::MAX_UNSQUELCH_EXPIRE_PEERS))
{
JLOG(journal_.error())
<< "SquelchStore: invalid squelch duration validator: "
<< Slice(validator) << " duration: " << duration.count();
return;
}
add(validator, duration);
return;
}
remove(validator);
}
bool
SquelchStore::isSquelched(PublicKey const& validator) const
{
auto const now = clock_.now();
auto const it = squelched_.find(validator);
if (it == squelched_.end())
return false;
return it->second > now;
}
void
SquelchStore::add(
PublicKey const& validator,
std::chrono::seconds const& duration)
{
squelched_[validator] = clock_.now() + duration;
}
void
SquelchStore::remove(PublicKey const& validator)
{
squelched_.erase(validator);
}
void
SquelchStore::removeExpired()
{
auto const now = clock_.now();
std::erase_if(
squelched_, [&](auto const& entry) { return entry.second < now; });
}
} // namespace reduce_relay
} // namespace ripple


@@ -114,7 +114,7 @@ getCountsJson(Application& app, int minObjectCount)
ret[jss::treenode_cache_size] =
app.getNodeFamily().getTreeNodeCache()->getCacheSize();
     ret[jss::treenode_track_size] =
-        app.getNodeFamily().getTreeNodeCache()->getTrackSize();
+        static_cast<int>(app.getNodeFamily().getTreeNodeCache()->size());
std::string uptime;
auto s = UptimeClock::now();