Compare commits

..

16 Commits

Author SHA1 Message Date
Pratik Mankawde
f2ec087fe2 build only
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-01-23 11:21:43 +00:00
Pratik Mankawde
a6b0671f21 cleanup one more include
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-01-22 14:07:06 +00:00
Pratik Mankawde
9d08722cd0 minor change
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-01-22 12:43:18 +00:00
Pratik Mankawde
6e17998acb ordering changes
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-01-22 12:15:18 +00:00
Pratik Mankawde
72ad34d214 more changes
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-01-22 11:46:20 +00:00
Pratik Mankawde
c9dd2d73e2 Merge remote-tracking branch 'origin/develop' into pratik/Move-includes-to-cpp-files 2026-01-22 11:05:03 +00:00
Pratik Mankawde
63d91b24b9 first itr
Signed-off-by: Pratik Mankawde <3397372+pratikmankawde@users.noreply.github.com>
2026-01-22 11:03:40 +00:00
David Fuelling
211054baff docs: Update Ripple Bug Bounty public key (#6258)
The Ripple Bug Bounty program recently changed the public keys that security researchers can use to encrypt vulnerability reports and messages submitted to the program. This information was updated on https://ripple.com/legal/bug-bounty/, and this PR updates `SECURITY.md` to match.
2026-01-21 19:55:56 -05:00
Bart
4fd4e93b3e ci: Add missing commit hash to Conan recipe version (#6256)
During several iterations of development of https://github.com/XRPLF/rippled/pull/6235, the commit hash was supposed to be moved into the `run:` statement, but it slipped through the cracks and did not get added. This change adds the commit hash as a suffix to the Conan recipe version.
2026-01-21 19:17:05 -05:00
Ayaz Salikhov
4cd6cc3e01 fix: Include <functional> header in Number.h (#6254)
The `Number.h` header now uses `std::reference_wrapper` from `<functional>`, but the corresponding include was missing, causing downstream build problems. This change adds the missing include.
2026-01-21 18:52:22 -05:00
Bart
a37c556079 ci: Upload Conan recipe for merges into develop and commits to release (#6235)
This change uploads the `libxrpl` library as a Conan recipe to our remote when (i) a merge is made into the `develop` branch, (ii) a commit is pushed to a PR that targets a `release*` branch, or (iii) a versioned tag is applied. Clio is only notified in the second case. The user and channel are no longer used when uploading the recipe.

Specific changes are:
* A `generate-version` action is added, which extracts the build version from `BuildInfo.cpp` and appends the short 7-character commit hash to it for merges into the `develop` branch and for commits to a PR that targets a `release*` branch. When a tag is applied, however, the tag itself is used as the version. This functionality has been turned into a separate action as we will use the same versioning logic for creating .rpm and .deb packages, as well as Docker images.
* An `upload-recipe` action is added, which calls the `generate-version` action and further handles the uploading of the recipe to Conan.
* This action is called by the `on-pr` and `on-trigger` workflows, as well as by a new `on-tag` workflow.

The reason for this change is that we have downstream uses for the `libxrpl` library, but currently only upload the recipe to check for compatibility with Clio when making commits to a PR that targets the release branch.
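
For illustration, the commands below roughly approximate what the new `upload-recipe` workflow does when run from a checkout; the remote name `xrplf` is taken from the workflow defaults, and the resulting version string is only an example.

```bash
# Rough local sketch of the recipe upload (illustrative, under the assumptions above).
# Assumes a Conan remote named "xrplf" is configured and you are already logged in.
VERSION="$(grep 'versionString =' src/libxrpl/protocol/BuildInfo.cpp | awk -F '"' '{print $2}')"
VERSION="${VERSION}-$(git rev-parse --short=7 HEAD)"   # append the 7-character commit hash
conan export .                                         # export the xrpl recipe locally
conan upload --confirm --check --remote=xrplf "xrpl/${VERSION}"
```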
2026-01-21 17:31:44 -05:00
Pratik Mankawde
5e808794d8 Limit reply size on TMGetObjectByHash queries (#6110)
`PeerImp` processes `TMGetObjectByHash` queries in an unbounded per-request loop that performs a `NodeStore` fetch and appends the retrieved data to the reply for each queried object, with no local count cap or reply-byte budget. `NodeStore` fetches are expensive in large numbers and can slow down overall processing, so this change adds an upper cap on the response size.
2026-01-21 09:19:53 -05:00
Bart
12c0d67ff6 ci: remove 'master' branch as a trigger (#6234)
This change removes the `master` branch as a trigger for the CI pipelines and updates comments accordingly. It also fixes the pre-commit workflow so that it runs on all release branches.
2026-01-16 15:01:53 -05:00
Ed Hennis
00d3cee6cc Improve ledger_entry lookups for fee, amendments, NUNL, and hashes (#5644)
These "fixed location" objects can be found in multiple ways:

1. The lookup parameters use the same format as other ledger objects, but the only valid values are true or the actual index of the object:
  - Amendments: "amendments" : true
  - FeeSettings: "fee" : true
  - NegativeUNL: "nunl" : true
  - LedgerHashes: "hashes" : true (For the "short" list. See below.)

2. With RPC API >= 3, using special-case values for "index", such as "index" : "amendments". These use the same names as above. Note that for "hashes", this option will only return the recent ledger hashes / "short" skip list.

3. LedgerHashes has two types: "short", which stores recent ledger hashes, and "long", which stores the flag ledger hashes for a particular ledger range.
  - To find a "long" LedgerHashes object, request '"hashes" : <ledger sequence>'. <ledger sequence> must be a number that evaluates to an unsigned integer.
  - To find the "short" LedgerHashes object, request "hashes": true as with the other fixed objects.

The following queries are all functionally equivalent:

  - "amendments" : true
  - "index" : "amendments" (API >=3 only)
  - "amendments" : "7DB0788C020F02780A673DC74757F23823FA3014C1866E72CC4CD8B226CD6EF4"
  - "index" : "7DB0788C020F02780A673DC74757F23823FA3014C1866E72CC4CD8B226CD6EF4"

Finally, whether or not the object is found, if a valid index is computed, that index will be returned. This can be used to confirm that the query was valid, or to save the index for future use.
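
As a concrete illustration of the first form above, a JSON-RPC request for the Amendments object could look as follows; the local endpoint and port are assumptions, not part of this change.

```bash
# Minimal ledger_entry request for the Amendments object (endpoint and port assumed).
curl -s -X POST http://localhost:5005/ -d '{
  "method": "ledger_entry",
  "params": [{ "amendments": true, "ledger_index": "validated" }]
}'
```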
2026-01-16 12:26:30 -05:00
Pratik Mankawde
96d17b7f66 ci: Add sanitizers to CI builds (#5996)
This change adds support for sanitizer build options in the CI build workflow. Currently `asan+ubsan` is enabled, while `tsan+ubsan` remains disabled because more changes are required.
2026-01-15 16:18:14 +00:00
Ayaz Salikhov
ec44347ffc test: Use gtest instead of doctest (#6216)
This change switches the test suite from the doctest framework to the gtest framework.
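
For illustration, gtest binaries accept the standard Google Test command-line flags; the test binary name below is hypothetical.

```bash
# Hypothetical example: run a filtered subset of tests from a gtest-based binary.
./xrpl_test --gtest_filter='Number.*' --gtest_color=yes
```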
2026-01-15 08:36:13 -05:00
114 changed files with 3101 additions and 1622 deletions

View File

@@ -28,6 +28,7 @@ ignoreRegExpList:
- /[\['"`]-[DWw][a-zA-Z0-9_-]+['"`\]]/g # compile flags
suggestWords:
- xprl->xrpl
- xprld->xrpld
- unsynched->unsynced
- synched->synced
- synch->sync
@@ -61,6 +62,7 @@ words:
- compr
- conanfile
- conanrun
- confs
- connectability
- coro
- coros
@@ -90,6 +92,7 @@ words:
- finalizers
- firewalled
- fmtdur
- fsanitize
- funclets
- gcov
- gcovr
@@ -126,6 +129,7 @@ words:
- lseq
- lsmf
- ltype
- mcmodel
- MEMORYSTATUSEX
- Merkle
- Metafuncton
@@ -235,6 +239,8 @@ words:
- txn
- txns
- txs
- UBSAN
- ubsan
- umant
- unacquired
- unambiguity
@@ -270,6 +276,7 @@ words:
- xbridge
- xchain
- ximinez
- EXPECT_STREQ
- XMACRO
- xrpkuwait
- xrpl

View File

@@ -18,6 +18,10 @@ inputs:
description: "The logging verbosity."
required: false
default: "verbose"
sanitizers:
description: "The sanitizers to enable."
required: false
default: ""
runs:
using: composite
@@ -29,9 +33,11 @@ runs:
BUILD_OPTION: ${{ inputs.force_build == 'true' && '*' || 'missing' }}
BUILD_TYPE: ${{ inputs.build_type }}
LOG_VERBOSITY: ${{ inputs.log_verbosity }}
SANITIZERS: ${{ inputs.sanitizers }}
run: |
echo 'Installing dependencies.'
conan install \
--profile ci \
--build="${BUILD_OPTION}" \
--options:host='&:tests=True' \
--options:host='&:xrpld=True' \

View File

@@ -0,0 +1,42 @@
name: Generate build version number
description: "Generate build version number."
outputs:
version:
description: "The generated build version number."
value: ${{ steps.version.outputs.version }}
runs:
using: composite
steps:
# When a tag is pushed, the version is used as-is.
- name: Generate version for tag event
if: ${{ github.event_name == 'tag' }}
shell: bash
env:
VERSION: ${{ github.ref_name }}
run: echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
# When a tag is not pushed, then the version is extracted from the
# BuildInfo.cpp file and the shortened commit hash appended to it.
- name: Generate version for non-tag event
if: ${{ github.event_name != 'tag' }}
shell: bash
run: |
echo 'Extracting version from BuildInfo.cpp.'
VERSION="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')"
if [[ -z "${VERSION}" ]]; then
echo 'Unable to extract version from BuildInfo.cpp.'
exit 1
fi
echo 'Appending shortened commit hash to version.'
SHA='${{ github.sha }}'
VERSION="${VERSION}-${SHA:0:7}"
echo "VERSION=${VERSION}" >> "${GITHUB_ENV}"
- name: Output version
id: version
shell: bash
run: echo "version=${VERSION}" >> "${GITHUB_OUTPUT}"

View File

@@ -2,11 +2,11 @@ name: Setup Conan
description: "Set up Conan configuration, profile, and remote."
inputs:
conan_remote_name:
remote_name:
description: "The name of the Conan remote to use."
required: false
default: xrplf
conan_remote_url:
remote_url:
description: "The URL of the Conan endpoint to use."
required: false
default: https://conan.ripplex.io
@@ -28,7 +28,7 @@ runs:
shell: bash
run: |
echo 'Installing profile.'
conan config install conan/profiles/default -tf $(conan config home)/profiles/
conan config install conan/profiles/ -tf $(conan config home)/profiles/
echo 'Conan profile:'
conan profile show
@@ -36,11 +36,11 @@ runs:
- name: Set up Conan remote
shell: bash
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
CONAN_REMOTE_URL: ${{ inputs.conan_remote_url }}
REMOTE_NAME: ${{ inputs.remote_name }}
REMOTE_URL: ${{ inputs.remote_url }}
run: |
echo "Adding Conan remote '${CONAN_REMOTE_NAME}' at '${CONAN_REMOTE_URL}'."
conan remote add --index 0 --force "${CONAN_REMOTE_NAME}" "${CONAN_REMOTE_URL}"
echo "Adding Conan remote '${REMOTE_NAME}' at '${REMOTE_URL}'."
conan remote add --index 0 --force "${REMOTE_NAME}" "${REMOTE_URL}"
echo 'Listing Conan remotes.'
conan remote list

View File

@@ -19,6 +19,7 @@ libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
libxrpl.resource > xrpl.protocol
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json
@@ -104,6 +105,7 @@ test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpl.nodestore
test.overlay > xrpl.protocol
test.overlay > xrpl.shamap
test.peerfinder > test.beast

View File

@@ -20,8 +20,8 @@ class Config:
Generate a strategy matrix for GitHub Actions CI.
On each PR commit we will build a selection of Debian, RHEL, Ubuntu, MacOS, and
Windows configurations, while upon merge into the develop, release, or master
branches, we will build all configurations, and test most of them.
Windows configurations, while upon merge into the develop or release branches,
we will build all configurations, and test most of them.
We will further set additional CMake arguments as follows:
- All builds will have the `tests`, `werr`, and `xrpld` options.
@@ -44,7 +44,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# We build and test all configurations by default, except for Windows in
# Debug, because it is too slow, as well as when code coverage is
# enabled as that mode already runs the tests.
build_only = False
build_only = True
if os["distro_name"] == "windows" and build_type == "Debug":
build_only = True
@@ -229,7 +229,7 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
if (n := os["compiler_version"]) != "":
config_name += f"-{n}"
config_name += (
f"-{architecture['platform'][architecture['platform'].find('/') + 1 :]}"
f"-{architecture['platform'][architecture['platform'].find('/')+1:]}"
)
config_name += f"-{build_type.lower()}"
if "-Dcoverage=ON" in cmake_args:
@@ -240,17 +240,53 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# Add the configuration to the list, with the most unique fields first,
# so that they are easier to identify in the GitHub Actions UI, as long
# names get truncated.
configurations.append(
{
"config_name": config_name,
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
}
)
# Add Address and Thread (both coupled with UB) sanitizers for specific bookworm distros.
# GCC-Asan rippled-embedded tests are failing because of https://github.com/google/sanitizers/issues/856
if (
os["distro_version"] == "bookworm"
and f"{os['compiler_name']}-{os['compiler_version']}" == "clang-20"
):
# Add ASAN + UBSAN configuration.
configurations.append(
{
"config_name": config_name + "-asan-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "address,undefinedbehavior",
}
)
# TSAN is deactivated due to seg faults with latest compilers.
activate_tsan = False
if activate_tsan:
configurations.append(
{
"config_name": config_name + "-tsan-ubsan",
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "thread,undefinedbehavior",
}
)
else:
configurations.append(
{
"config_name": config_name,
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
"sanitizers": "",
}
)
return configurations

View File

@@ -1,7 +1,8 @@
# This workflow runs all workflows to check, build and test the project on
# various Linux flavors, as well as on MacOS and Windows, on every push to a
# user branch. However, it will not run if the pull request is a draft unless it
# has the 'DraftRunCI' label.
# has the 'DraftRunCI' label. For commits to PRs that target a release branch,
# it also uploads the libxrpl recipe to the Conan remote.
name: PR
on:
@@ -53,7 +54,6 @@ jobs:
.github/scripts/rename/**
.github/workflows/reusable-check-levelization.yml
.github/workflows/reusable-check-rename.yml
.github/workflows/reusable-notify-clio.yml
.github/workflows/on-pr.yml
# Keep the paths below in sync with those in `on-trigger.yml`.
@@ -66,6 +66,7 @@ jobs:
.github/workflows/reusable-build-test.yml
.github/workflows/reusable-strategy-matrix.yml
.github/workflows/reusable-test.yml
.github/workflows/reusable-upload-recipe.yml
.codecov.yml
cmake/**
conan/**
@@ -121,22 +122,42 @@ jobs:
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
notify-clio:
upload-recipe:
needs:
- should-run
- build-test
if: ${{ needs.should-run.outputs.go == 'true' && (startsWith(github.base_ref, 'release') || github.base_ref == 'master') }}
uses: ./.github/workflows/reusable-notify-clio.yml
# Only run when committing to a PR that targets a release branch in the
# XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && needs.should-run.outputs.go == 'true' && startsWith(github.ref, 'refs/heads/release') }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
clio_notify_token: ${{ secrets.CLIO_NOTIFY_TOKEN }}
conan_remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
conan_remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}
notify-clio:
needs: upload-recipe
runs-on: ubuntu-latest
steps:
# Notify the Clio repository about the newly proposed release version, so
# it can be checked for compatibility before the release is actually made.
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.CLIO_NOTIFY_TOKEN }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[ref]=${{ needs.upload-recipe.outputs.recipe_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"
passed:
if: failure() || cancelled()
needs:
- build-test
- check-levelization
- check-rename
- build-test
- upload-recipe
- notify-clio
runs-on: ubuntu-latest
steps:
- name: Fail

.github/workflows/on-tag.yml (new file, 25 lines)
View File

@@ -0,0 +1,25 @@
# This workflow uploads the libxrpl recipe to the Conan remote when a versioned
# tag is pushed.
name: Tag
on:
push:
tags:
- "v*"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload-recipe:
# Only run when a tag is pushed to the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}

View File

@@ -1,9 +1,7 @@
# This workflow runs all workflows to build the dependencies required for the
# project on various Linux flavors, as well as on MacOS and Windows, on a
# scheduled basis, on merge into the 'develop', 'release', or 'master' branches,
# or manually. The missing commits check is only run when the code is merged
# into the 'develop' or 'release' branches, and the documentation is built when
# the code is merged into the 'develop' branch.
# This workflow runs all workflows to build and test the code on various Linux
# flavors, as well as on MacOS and Windows, on a scheduled basis, on merge into
# the 'develop' or 'release*' branches, or when requested manually. Upon pushes
# to the develop branch it also uploads the libxrpl recipe to the Conan remote.
name: Trigger
on:
@@ -11,7 +9,6 @@ on:
branches:
- "develop"
- "release*"
- "master"
paths:
# These paths are unique to `on-trigger.yml`.
- ".github/workflows/on-trigger.yml"
@@ -26,6 +23,7 @@ on:
- ".github/workflows/reusable-build-test.yml"
- ".github/workflows/reusable-strategy-matrix.yml"
- ".github/workflows/reusable-test.yml"
- ".github/workflows/reusable-upload-recipe.yml"
- ".codecov.yml"
- "cmake/**"
- "conan/**"
@@ -70,11 +68,20 @@ jobs:
with:
# Enable ccache only for events targeting the XRPLF repository, since
# other accounts will not have access to our remote cache storage.
# However, we do not enable ccache for events targeting the master or a
# release branch, to protect against the rare case that the output
# produced by ccache is not identical to a regular compilation.
ccache_enabled: ${{ github.repository_owner == 'XRPLF' && !(github.base_ref == 'master' || startsWith(github.base_ref, 'release')) }}
# However, we do not enable ccache for events targeting a release branch,
# to protect against the rare case that the output produced by ccache is
# not identical to a regular compilation.
ccache_enabled: ${{ github.repository_owner == 'XRPLF' && !startsWith(github.ref, 'refs/heads/release') }}
os: ${{ matrix.os }}
strategy_matrix: ${{ github.event_name == 'schedule' && 'all' || 'minimal' }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
upload-recipe:
needs: build-test
# Only run when pushing to the develop branch in the XRPLF repository.
if: ${{ github.repository_owner == 'XRPLF' && github.event_name == 'push' && github.ref == 'refs/heads/develop' }}
uses: ./.github/workflows/reusable-upload-recipe.yml
secrets:
remote_username: ${{ secrets.CONAN_REMOTE_USERNAME }}
remote_password: ${{ secrets.CONAN_REMOTE_PASSWORD }}

View File

@@ -3,7 +3,9 @@ name: Run pre-commit hooks
on:
pull_request:
push:
branches: [develop, release, master]
branches:
- "develop"
- "release*"
workflow_dispatch:
jobs:

View File

@@ -51,6 +51,12 @@ on:
type: number
default: 2
sanitizers:
description: "The sanitizers to enable."
required: false
type: string
default: ""
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
@@ -91,6 +97,7 @@ jobs:
# Determine if coverage and voidstar should be enabled.
COVERAGE_ENABLED: ${{ contains(inputs.cmake_args, '-Dcoverage=ON') }}
VOIDSTAR_ENABLED: ${{ contains(inputs.cmake_args, '-Dvoidstar=ON') }}
SANITIZERS_ENABLED: ${{ inputs.sanitizers != '' }}
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
@@ -128,11 +135,13 @@ jobs:
# Set the verbosity to "quiet" for Windows to avoid an excessive
# amount of logs. For other OSes, the "verbose" logs are more useful.
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
sanitizers: ${{ inputs.sanitizers }}
- name: Configure CMake
working-directory: ${{ env.BUILD_DIR }}
env:
BUILD_TYPE: ${{ inputs.build_type }}
SANITIZERS: ${{ inputs.sanitizers }}
CMAKE_ARGS: ${{ inputs.cmake_args }}
run: |
cmake \
@@ -174,7 +183,7 @@ jobs:
if-no-files-found: error
- name: Check linking (Linux)
if: ${{ runner.os == 'Linux' }}
if: ${{ runner.os == 'Linux' && env.SANITIZERS_ENABLED == 'false' }}
working-directory: ${{ env.BUILD_DIR }}
run: |
ldd ./xrpld
@@ -191,6 +200,14 @@ jobs:
run: |
./xrpld --version | grep libvoidstar
- name: Set sanitizer options
if: ${{ !inputs.build_only && env.SANITIZERS_ENABLED == 'true' }}
run: |
echo "ASAN_OPTIONS=print_stacktrace=1:detect_container_overflow=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=second_deadlock_stack=1:halt_on_error=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
- name: Run the separate tests
if: ${{ !inputs.build_only }}
working-directory: ${{ env.BUILD_DIR }}

View File

@@ -57,5 +57,6 @@ jobs:
runs_on: ${{ toJSON(matrix.architecture.runner) }}
image: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || '' }}
config_name: ${{ matrix.config_name }}
sanitizers: ${{ matrix.sanitizers }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -1,91 +0,0 @@
# This workflow exports the built libxrpl package to the Conan remote on a
# a channel named after the pull request, and notifies the Clio repository about
# the new version so it can check for compatibility.
name: Notify Clio
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
conan_remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
conan_remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
clio_notify_token:
description: "The GitHub token to notify Clio about new versions."
required: true
conan_remote_username:
description: "The username for logging into the Conan remote."
required: true
conan_remote_password:
description: "The password for logging into the Conan remote."
required: true
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-clio
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
if: ${{ github.event.pull_request.head.repo.full_name == github.repository }}
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate outputs
id: generate
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
run: |
echo 'Generating user and channel.'
echo "user=clio" >> "${GITHUB_OUTPUT}"
echo "channel=pr_${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
echo 'Extracting version.'
echo "version=$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" | awk -F '"' '{print $2}')" >> "${GITHUB_OUTPUT}"
- name: Calculate conan reference
id: conan_ref
run: |
echo "conan_ref=${{ steps.generate.outputs.version }}@${{ steps.generate.outputs.user }}/${{ steps.generate.outputs.channel }}" >> "${GITHUB_OUTPUT}"
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
conan_remote_name: ${{ inputs.conan_remote_name }}
conan_remote_url: ${{ inputs.conan_remote_url }}
- name: Log into Conan remote
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: conan remote login "${CONAN_REMOTE_NAME}" "${{ secrets.conan_remote_username }}" --password "${{ secrets.conan_remote_password }}"
- name: Upload package
env:
CONAN_REMOTE_NAME: ${{ inputs.conan_remote_name }}
run: |
conan export --user=${{ steps.generate.outputs.user }} --channel=${{ steps.generate.outputs.channel }} .
conan upload --confirm --check --remote="${CONAN_REMOTE_NAME}" xrpl/${{ steps.conan_ref.outputs.conan_ref }}
outputs:
conan_ref: ${{ steps.conan_ref.outputs.conan_ref }}
notify:
needs: upload
runs-on: ubuntu-latest
steps:
- name: Notify Clio
env:
GH_TOKEN: ${{ secrets.clio_notify_token }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[conan_ref]=${{ needs.upload.outputs.conan_ref }}" \
-F "client_payload[pr_url]=${PR_URL}"

View File

@@ -0,0 +1,78 @@
# This workflow exports the built libxrpl package to the Conan remote.
name: Upload Conan recipe
# This workflow can only be triggered by other workflows.
on:
workflow_call:
inputs:
remote_name:
description: "The name of the Conan remote to use."
required: false
type: string
default: xrplf
remote_url:
description: "The URL of the Conan endpoint to use."
required: false
type: string
default: https://conan.ripplex.io
secrets:
remote_username:
description: "The username for logging into the Conan remote."
required: true
remote_password:
description: "The password for logging into the Conan remote."
required: true
outputs:
recipe_ref:
description: "The Conan recipe reference ('name/version') that was uploaded."
value: ${{ jobs.upload.outputs.ref }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-upload-recipe
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
upload:
runs-on: ubuntu-latest
container: ghcr.io/xrplf/ci/ubuntu-noble:gcc-13-sha-5dd7158
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Generate build version number
id: version
uses: ./.github/actions/generate-version
- name: Determine recipe reference
id: ref
run: echo "ref=xrpl/${{ steps.version.outputs.version }}" >> "${GITHUB_OUTPUT}"
- name: Set up Conan
uses: ./.github/actions/setup-conan
with:
remote_name: ${{ inputs.remote_name }}
remote_url: ${{ inputs.remote_url }}
- name: Log into Conan remote
env:
REMOTE_NAME: ${{ inputs.remote_name }}
REMOTE_USERNAME: ${{ inputs.remote_username }}
REMOTE_PASSWORD: ${{ inputs.remote_password }}
run: conan remote login "${REMOTE_NAME}" "${REMOTE_USERNAME}" --password "${REMOTE_PASSWORD}"
- name: Upload Conan recipe
env:
RECIPE_REF: ${{ steps.ref.outputs.ref }}
REMOTE_NAME: ${{ inputs.remote_name }}
run: |
conan export .
conan upload --confirm --check --remote="${REMOTE_NAME}" ${RECIPE_REF}
outputs:
ref: ${{ steps.ref.outputs.ref }}

View File

@@ -86,8 +86,8 @@ jobs:
- name: Setup Conan
uses: ./.github/actions/setup-conan
with:
conan_remote_name: ${{ env.CONAN_REMOTE_NAME }}
conan_remote_url: ${{ env.CONAN_REMOTE_URL }}
remote_name: ${{ env.CONAN_REMOTE_NAME }}
remote_url: ${{ env.CONAN_REMOTE_URL }}
- name: Build dependencies
uses: ./.github/actions/build-deps

View File

@@ -1,5 +1,5 @@
| :warning: **WARNING** :warning:
|---|
| :warning: **WARNING** :warning: |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
@@ -523,18 +523,32 @@ stored inside the build directory, as either of:
- file named `coverage.`_extension_, with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
## Sanitizers
To build dependencies and xrpld with sanitizer instrumentation, set the
`SANITIZERS` environment variable (only once before running conan and cmake) and use the `sanitizers` profile in conan:
```bash
export SANITIZERS=address,undefinedbehavior
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dxrpld=ON -Dtests=ON ..
```
See [Sanitizers docs](./docs/build/sanitizers.md) for more details.
## Options
| Option | Default Value | Description |
| ---------- | ------------- | ------------------------------------------------------------------ |
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
| Option | Default Value | Description |
| ---------- | ------------- | -------------------------------------------------------------- |
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer

View File

@@ -16,14 +16,16 @@ set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
include(CompilationEnv)
if(is_gcc)
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
elseif(is_clang)
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif(MSVC)
elseif(is_msvc)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif()
@@ -77,6 +79,7 @@ if (packages_only)
return ()
endif ()
include(XrplCompiler)
include(XrplSanitizers)
include(XrplInterface)
option(only_docs "Include only the docs target?" FALSE)

View File

@@ -78,72 +78,61 @@ To report a qualifying bug, please send a detailed report to:
| Email Address | bugs@ripple.com |
| :-----------: | :-------------------------------------------------- |
| Short Key ID | `0xC57929BE` |
| Long Key ID | `0xCD49A0AFC57929BE` |
| Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` |
| Short Key ID | `0xA9F514E0` |
| Long Key ID | `0xD900855AA9F514E0` |
| Fingerprint | `B72C 0654 2F2A E250 2763 A268 D900 855A A9F5 14E0` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
mQINBGkSZAQBEACprU199OhgdsOsygNjiQV4msuN3vDOUooehL+NwfsGfW79Tbqq
Q2u7uQ3NZjW+M2T4nsDwuhkr7pe7xSReR5W8ssaczvtUyxkvbMClilcgZ2OSCAuC
N9tzJsqOqkwBvXoNXkn//T2jnPz0ZU2wSF+NrEibq5FeuyGdoX3yXXBxq9pW9HzK
HkQll63QSl6BzVSGRQq+B6lGgaZGLwf3mzmIND9Z5VGLNK2jKynyz9z091whNG/M
kV+E7/r/bujHk7WIVId07G5/COTXmSr7kFnNEkd2Umw42dkgfiNKvlmJ9M7c1wLK
KbL9Eb4ADuW6rRc5k4s1e6GT8R4/VPliWbCl9SE32hXH8uTkqVIFZP2eyM5WRRHs
aKzitkQG9UK9gcb0kdgUkxOvvgPHAe5IuZlcHFzU4y0dBbU1VEFWVpiLU0q+IuNw
5BRemeHc59YNsngkmAZ+/9zouoShRusZmC8Wzotv75C2qVBcjijPvmjWAUz0Zunm
Lsr+O71vqHE73pERjD07wuD/ISjiYRYYE/bVrXtXLZijC7qAH4RE3nID+2ojcZyO
/2jMQvt7un56RsGH4UBHi3aBHi9bUoDGCXKiQY981cEuNaOxpou7Mh3x/ONzzSvk
sTV6nl1LOZHykN1JyKwaNbTSAiuyoN+7lOBqbV04DNYAHL88PrT21P83aQARAQAB
tB1SaXBwbGUgTGFicyA8YnVnc0ByaXBwbGUuY29tPokCTgQTAQgAOBYhBLcsBlQv
KuJQJ2OiaNkAhVqp9RTgBQJpEmQEAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheA
AAoJENkAhVqp9RTgBzgP/i7y+aDWl1maig1XMdyb+o0UGusumFSW4Hmj278wlKVv
usgLPihYgHE0PKrv6WRyKOMC1tQEcYYN93M+OeQ1vFhS2YyURq6RCMmh4zq/awXG
uZbG36OURB5NH8lGBOHiN/7O+nY0CgenBT2JWm+GW3nEOAVOVm4+r5GlpPlv+Dp1
NPBThcKXFMnH73++NpSQoDzTfRYHPxhDAX3jkLi/moXfSanOLlR6l94XNNN0jBHW
Quao0rzf4WSXq9g6AS224xhAA5JyIcFl8TX7hzj5HaFn3VWo3COoDu4U7H+BM0fl
85yqiMQypp7EhN2gxpMMWaHY5TFM85U/bFXFYfEgihZ4/gt4uoIzsNI9jlX7mYvG
KFdDij+oTlRsuOxdIy60B3dKcwOH9nZZCz0SPsN/zlRWgKzK4gDKdGhFkU9OlvPu
94ZqscanoiWKDoZkF96+sjgfjkuHsDK7Lwc1Xi+T4drHG/3aVpkYabXox+lrKB/S
yxZjeqOIQzWPhnLgCaLyvsKo5hxKzL0w3eURu8F3IS7RgOOlljv4M+Me9sEVcdNV
aN3/tQwbaomSX1X5D5YXqhBwC3rU3wXwamsscRTGEpkV+JCX6KUqGP7nWmxCpAly
FL05XuOd5SVHJjXLeuje0JqLUpN514uL+bThWwDbDTdAdwW3oK/2WbXz7IfJRLBj
uQINBGkSZAQBEADdI3SL2F72qkrgFqXWE6HSRBu9bsAvTE5QrRPWk7ux6at537r4
S4sIw2dOwLvbyIrDgKNq3LQ5wCK88NO/NeCOFm4AiCJSl3pJHXYnTDoUxTrrxx+o
vSRI4I3fHEql/MqzgiAb0YUezjgFdh3vYheMPp/309PFbOLhiFqEcx80Mx5h06UH
gDzu1qNj3Ec+31NLic5zwkrAkvFvD54d6bqYR3SEgMau6aYEewpGHbWBi2pLqSi2
lQcAeOFixqGpTwDmAnYR8YtjBYepy0MojEAdTHcQQlOYSDk4q4elG+io2N8vECfU
rD6ORecN48GXdZINYWTAdslrUeanmBdgQrYkSpce8TSghgT9P01SNaXxmyaehVUO
lqI4pcg5G2oojAE8ncNS3TwDtt7daTaTC3bAdr4PXDVAzNAiewjMNZPB7xidkDGQ
Y4W1LxTMXyJVWxehYOH7tsbBRKninlfRnLgYzmtIbNRAAvNcsxU6ihv3AV0WFknN
YbSzotEv1Xq/5wk309x8zCDe+sP0cQicvbXafXmUzPAZzeqFg+VLFn7F9MP1WGlW
B1u7VIvBF1Mp9Nd3EAGBAoLRdRu+0dVWIjPTQuPIuD9cCatJA0wVaKUrjYbBMl88
a12LixNVGeSFS9N7ADHx0/o7GNT6l88YbaLP6zggUHpUD/bR+cDN7vllIQARAQAB
iQI2BBgBCAAgFiEEtywGVC8q4lAnY6Jo2QCFWqn1FOAFAmkSZAQCGwwACgkQ2QCF
Wqn1FOAfAA/8CYq4p0p4bobY20CKEMsZrkBTFJyPDqzFwMeTjgpzqbD7Y3Qq5QCK
OBbvY02GWdiIsNOzKdBxiuam2xYP9WHZj4y7/uWEvT0qlPVmDFu+HXjoJ43oxwFd
CUp2gMuQ4cSL3X94VRJ3BkVL+tgBm8CNY0vnTLLOO3kum/R69VsGJS1JSGUWjNM+
4qwS3mz+73xJu1HmERyN2RZF/DGIZI2PyONQQ6aH85G1Dd2ohu2/DBAkQAMBrPbj
FrbDaBLyFhODxU3kTWqnfLlaElSm2EGdIU2yx7n4BggEa//NZRMm5kyeo4vzhtlQ
YIVUMLAOLZvnEqDnsLKp+22FzNR/O+htBQC4lPywl53oYSALdhz1IQlcAC1ru5KR
XPzhIXV6IIzkcx9xNkEclZxmsuy5ERXyKEmLbIHAlzFmnrldlt2ZgXDtzaorLmxj
klKibxd5tF50qOpOivz+oPtFo7n+HmFa1nlVAMxlDCUdM0pEVeYDKI5zfVwalyhZ
NnjpakdZSXMwgc7NP/hH9buF35hKDp7EckT2y3JNYwHsDdy1icXN2q40XZw5tSIn
zkPWdu3OUY8PISohN6Pw4h0RH4ZmoX97E8sEfmdKaT58U4Hf2aAv5r9IWCSrAVqY
u5jvac29CzQR9Kal0A+8phHAXHNFD83SwzIC0syaT9ficAguwGH8X6Q=
=nGuD
-----END PGP PUBLIC KEY BLOCK-----
```

View File

@@ -0,0 +1,54 @@
# Shared detection of compiler, operating system, and architecture.
#
# This module centralizes environment detection so that other
# CMake modules can use the same variables instead of repeating
# checks on CMAKE_* and built-in platform variables.
# Only run once per configure step.
include_guard(GLOBAL)
# --------------------------------------------------------------------
# Compiler detection (C++)
# --------------------------------------------------------------------
set(is_clang FALSE)
set(is_gcc FALSE)
set(is_msvc FALSE)
if(CMAKE_CXX_COMPILER_ID MATCHES ".*Clang") # Clang or AppleClang
set(is_clang TRUE)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(is_gcc TRUE)
elseif(MSVC)
set(is_msvc TRUE)
else()
message(FATAL_ERROR "Unsupported C++ compiler: ${CMAKE_CXX_COMPILER_ID}")
endif()
# --------------------------------------------------------------------
# Operating system detection
# --------------------------------------------------------------------
set(is_linux FALSE)
set(is_windows FALSE)
set(is_macos FALSE)
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(is_linux TRUE)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(is_windows TRUE)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
set(is_macos TRUE)
endif()
# --------------------------------------------------------------------
# Architecture
# --------------------------------------------------------------------
set(is_amd64 FALSE)
set(is_arm64 FALSE)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
set(is_amd64 TRUE)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64")
set(is_arm64 TRUE)
else()
message(FATAL_ERROR "Unknown architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()

View File

@@ -2,16 +2,23 @@
setup project-wide compiler settings
#]===================================================================]
include(CompilationEnv)
#[=========================================================[
TODO some/most of these common settings belong in a
toolchain file, especially the ABI-impacting ones
#]=========================================================]
add_library (common INTERFACE)
add_library (Xrpl::common ALIAS common)
include(XrplSanitizers)
# add a single global dependency on this interface lib
link_libraries (Xrpl::common)
# Respect CMAKE_POSITION_INDEPENDENT_CODE setting (may be set by Conan toolchain)
if(NOT DEFINED CMAKE_POSITION_INDEPENDENT_CODE)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif()
set_target_properties (common
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ON)
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE})
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions (common
INTERFACE
@@ -116,8 +123,8 @@ else ()
# link to static libc/c++ iff:
# * static option set and
# * NOT APPLE (AppleClang does not support static libc/c++) and
# * NOT san (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${san}>>>:
# * NOT SANITIZERS (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${SANITIZERS_ENABLED}>>>:
-static-libstdc++
-static-libgcc
>)

View File

@@ -2,6 +2,8 @@
xrpld compile options/settings via an interface library
#]===================================================================]
include(CompilationEnv)
add_library (opts INTERFACE)
add_library (Xrpl::opts ALIAS opts)
target_compile_definitions (opts
@@ -42,22 +44,6 @@ if(jemalloc)
target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif ()
if (san)
target_compile_options (opts
INTERFACE
# sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1>
${SAN_FLAG}
-fno-omit-frame-pointer)
target_compile_definitions (opts
INTERFACE
$<$<STREQUAL:${san},address>:SANITIZER=ASAN>
$<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN>
$<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>)
target_link_libraries (opts INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()
#[===================================================================[
xrpld transitive library deps via an interface library
#]===================================================================]

cmake/XrplSanitizers.cmake (new file, 198 lines)
View File

@@ -0,0 +1,198 @@
#[===================================================================[
Configure sanitizers based on environment variables.
This module reads the following environment variables:
- SANITIZERS: The sanitizers to enable. Possible values:
- "address"
- "address,undefinedbehavior"
- "thread"
- "thread,undefinedbehavior"
- "undefinedbehavior"
The compiler type and platform are detected in CompilationEnv.cmake.
The sanitizer compile options are applied to the 'common' interface library
which is linked to all targets in the project.
Internal flag variables set by this module:
- SANITIZER_TYPES: List of sanitizer types to enable (e.g., "address",
"thread", "undefined"). And two more flags for undefined behavior sanitizer (e.g., "float-divide-by-zero", "unsigned-integer-overflow").
This list is joined with commas and passed to -fsanitize=<list>.
- SANITIZERS_COMPILE_FLAGS: Compiler flags for sanitizer instrumentation.
Includes:
* -fno-omit-frame-pointer: Preserves frame pointers for stack traces
* -O1: Minimum optimization for reasonable performance
* -fsanitize=<types>: Enables sanitizer instrumentation
* -fsanitize-ignorelist=<path>: (Clang only) Compile-time ignorelist
* -mcmodel=large/medium: (GCC only) Code model for large binaries
* -Wno-stringop-overflow: (GCC only) Suppresses false positive warnings
* -Wno-tsan: (For GCC TSAN combination only) Suppresses atomic_thread_fence warnings
- SANITIZERS_LINK_FLAGS: Linker flags for sanitizer runtime libraries.
Includes:
* -fsanitize=<types>: Links sanitizer runtime libraries
* -mcmodel=large/medium: (GCC only) Matches compile-time code model
- SANITIZERS_RELOCATION_FLAGS: (GCC only) Code model flags for linking.
Used to handle large instrumented binaries on x86_64:
* -mcmodel=large: For AddressSanitizer (prevents relocation errors)
* -mcmodel=medium: For ThreadSanitizer (large model is incompatible)
#]===================================================================]
include(CompilationEnv)
# Read environment variable
set(SANITIZERS $ENV{SANITIZERS})
# Set SANITIZERS_ENABLED flag for use in other modules
if(SANITIZERS MATCHES "address|thread|undefinedbehavior")
set(SANITIZERS_ENABLED TRUE)
else()
set(SANITIZERS_ENABLED FALSE)
return()
endif()
# Sanitizers are not supported on Windows/MSVC
if(is_msvc)
message(FATAL_ERROR "Sanitizers are not supported on Windows/MSVC. "
"Please unset the SANITIZERS environment variable.")
endif()
message(STATUS "Configuring sanitizers: ${SANITIZERS}")
# Parse SANITIZERS value to determine which sanitizers to enable
set(enable_asan FALSE)
set(enable_tsan FALSE)
set(enable_ubsan FALSE)
# Normalize SANITIZERS into a list
set(san_list "${SANITIZERS}")
string(REPLACE "," ";" san_list "${san_list}")
separate_arguments(san_list)
foreach(san IN LISTS san_list)
if(san STREQUAL "address")
set(enable_asan TRUE)
elseif(san STREQUAL "thread")
set(enable_tsan TRUE)
elseif(san STREQUAL "undefinedbehavior")
set(enable_ubsan TRUE)
else()
message(FATAL_ERROR "Unsupported sanitizer type: ${san}"
"Supported: address, thread, undefinedbehavior and their combinations.")
endif()
endforeach()
# Validate sanitizer compatibility
if(enable_asan AND enable_tsan)
message(FATAL_ERROR "AddressSanitizer and ThreadSanitizer are incompatible and cannot be enabled simultaneously. "
"Use 'address' or 'thread', optionally with 'undefinedbehavior'.")
endif()
# Frame pointer is required for meaningful stack traces. Sanitizers recommend minimum of -O1 for reasonable performance
set(SANITIZERS_COMPILE_FLAGS "-fno-omit-frame-pointer" "-O1")
# Build the sanitizer flags list
set(SANITIZER_TYPES)
if(enable_asan)
list(APPEND SANITIZER_TYPES "address")
elseif(enable_tsan)
list(APPEND SANITIZER_TYPES "thread")
endif()
if(enable_ubsan)
# UB sanitizer flags
list(APPEND SANITIZER_TYPES "undefined" "float-divide-by-zero")
if(is_clang)
# Clang supports additional UB checks. More info here https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
list(APPEND SANITIZER_TYPES "unsigned-integer-overflow")
endif()
endif()
# Configure code model for GCC on amd64
# Use large code model for ASAN to avoid relocation errors
# Use medium code model for TSAN (large is not compatible with TSAN)
set(SANITIZERS_RELOCATION_FLAGS)
# Compiler-specific configuration
if(is_gcc)
# Disable mold, gold and lld linkers for GCC with sanitizers
# Use default linker (bfd/ld) which is more lenient with mixed code models
# This is needed since the size of instrumented binary exceeds the limits set by mold, lld and gold linkers
set(use_mold OFF CACHE BOOL "Use mold linker" FORCE)
set(use_gold OFF CACHE BOOL "Use gold linker" FORCE)
set(use_lld OFF CACHE BOOL "Use lld linker" FORCE)
message(STATUS " Disabled mold, gold, and lld linkers for GCC with sanitizers")
# Suppress false positive warnings in GCC with stringop-overflow
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-stringop-overflow")
if(is_amd64 AND enable_asan)
message(STATUS " Using large code model (-mcmodel=large)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=large")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=large")
elseif(enable_tsan)
# GCC doesn't support atomic_thread_fence with tsan. Suppress warnings.
list(APPEND SANITIZERS_COMPILE_FLAGS "-Wno-tsan")
message(STATUS " Using medium code model (-mcmodel=medium)")
list(APPEND SANITIZERS_COMPILE_FLAGS "-mcmodel=medium")
list(APPEND SANITIZERS_RELOCATION_FLAGS "-mcmodel=medium")
endif()
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "${SANITIZERS_RELOCATION_FLAGS}" "-fsanitize=${SANITIZER_TYPES_STR}")
elseif(is_clang)
# Add ignorelist for Clang (GCC doesn't support this)
# Use CMAKE_SOURCE_DIR to get the path to the ignorelist
set(IGNORELIST_PATH "${CMAKE_SOURCE_DIR}/sanitizers/suppressions/sanitizer-ignorelist.txt")
if(NOT EXISTS "${IGNORELIST_PATH}")
message(FATAL_ERROR "Sanitizer ignorelist not found: ${IGNORELIST_PATH}")
endif()
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize-ignorelist=${IGNORELIST_PATH}")
message(STATUS " Using sanitizer ignorelist: ${IGNORELIST_PATH}")
# Join sanitizer flags with commas for -fsanitize option
list(JOIN SANITIZER_TYPES "," SANITIZER_TYPES_STR)
# Add sanitizer to compile and link flags
list(APPEND SANITIZERS_COMPILE_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
set(SANITIZERS_LINK_FLAGS "-fsanitize=${SANITIZER_TYPES_STR}")
endif()
message(STATUS " Compile flags: ${SANITIZERS_COMPILE_FLAGS}")
message(STATUS " Link flags: ${SANITIZERS_LINK_FLAGS}")
# Apply the sanitizer flags to the 'common' interface library
# This is the same library used by XrplCompiler.cmake
target_compile_options(common INTERFACE
$<$<COMPILE_LANGUAGE:CXX>:${SANITIZERS_COMPILE_FLAGS}>
$<$<COMPILE_LANGUAGE:C>:${SANITIZERS_COMPILE_FLAGS}>
)
# Apply linker flags
target_link_options(common INTERFACE ${SANITIZERS_LINK_FLAGS})
# Define SANITIZERS macro for BuildInfo.cpp
set(sanitizers_list)
if(enable_asan)
list(APPEND sanitizers_list "ASAN")
endif()
if(enable_tsan)
list(APPEND sanitizers_list "TSAN")
endif()
if(enable_ubsan)
list(APPEND sanitizers_list "UBSAN")
endif()
if(sanitizers_list)
list(JOIN sanitizers_list "." sanitizers_str)
target_compile_definitions(common INTERFACE SANITIZERS=${sanitizers_str})
endif()

View File

@@ -2,6 +2,8 @@
sanity checks
#]===================================================================]
include(CompilationEnv)
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
@@ -16,14 +18,12 @@ if (NOT is_multiconfig)
endif ()
endif ()
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
set (is_clang TRUE)
if (is_clang) # both Clang and AppleClang
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 16.0)
message (FATAL_ERROR "This project requires clang 16 or later")
endif ()
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set (is_gcc TRUE)
elseif (is_gcc)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 12.0)
message (FATAL_ERROR "This project requires GCC 12 or later")
endif ()
@@ -40,11 +40,6 @@ if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message (FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif ()
if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
message (FATAL_ERROR "Xrpld requires a 64 bit target architecture.\n"
"The most likely cause of this warning is trying to build xrpld with a 32-bit OS.")
endif ()
if (APPLE AND NOT HOMEBREW)
find_program (HOMEBREW brew)
endif ()

View File

@@ -2,11 +2,7 @@
declare options and variables
#]===================================================================]
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
set (is_linux TRUE)
else()
set(is_linux FALSE)
endif()
include(CompilationEnv)
if("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set(is_ci TRUE)
@@ -62,7 +58,7 @@ else()
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif()
if(is_linux)
if(is_linux AND NOT SANITIZER)
option(BUILD_SHARED_LIBS "build shared xrpl libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF)
@@ -107,33 +103,6 @@ option(local_protobuf
option(local_grpc
"Force a local build of gRPC instead of looking for an installed version." OFF)
# this one is a string and therefore can't be an option
set(san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property(CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if(san)
string(TOLOWER ${san} san)
set(SAN_FLAG "-fsanitize=${san}")
set(SAN_LIB "")
if(is_gcc)
if(san STREQUAL "address")
set(SAN_LIB "asan")
elseif(san STREQUAL "thread")
set(SAN_LIB "tsan")
elseif(san STREQUAL "memory")
set(SAN_LIB "msan")
elseif(san STREQUAL "undefined")
set(SAN_LIB "ubsan")
endif()
endif()
set(_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set(CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag(${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set(CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if(NOT COMPILER_SUPPORTS_SAN)
message(FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif()
endif()
# the remaining options are obscure and rarely used
option(beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"

View File

@@ -1,3 +1,6 @@
include(CompilationEnv)
include(XrplSanitizers)
find_package(Boost REQUIRED
COMPONENTS
chrono
@@ -32,7 +35,7 @@ target_link_libraries(xrpl_boost
if(Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)
endif()
if(san AND is_clang)
if(SANITIZERS_ENABLED AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else
# for gcc ?
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)

View File

@@ -17,9 +17,9 @@
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1765842973.03",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1765850144.736",
"jemalloc/5.3.0#e951da9cf599e956cebc117880d2d9f8%1729241615.244",
"gtest/1.17.0#5224b3b3ff3b4ce1133cbdd27d53ee7d%1768312129.152",
"grpc/1.72.0#f244a57bff01e708c55a1100b12e1589%1765850193.734",
"ed25519/2015.03#ae761bdc52730a843f0809bdf6c1b1f6%1765850143.772",
"doctest/2.4.12#eb9fb352fb2fdfc8abb17ec270945165%1765850143.95",
"date/3.0.4#862e11e80030356b53c2c38599ceb32b%1765850143.772",
"c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1765850144.336",
"bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1765850143.837",

conan/profiles/ci (new file, 1 line)
View File

@@ -0,0 +1 @@
include(sanitizers)

conan/profiles/sanitizers (new file, 59 lines)
View File

@@ -0,0 +1,59 @@
include(default)
{% set compiler, version, compiler_exe = detect_api.detect_default_compiler() %}
{% set sanitizers = os.getenv("SANITIZERS") %}
[conf]
{% if sanitizers %}
{% if compiler == "gcc" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set model_code = "" %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1", "-Wno-stringop-overflow"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% set model_code = "-mcmodel=large" %}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% set model_code = "-mcmodel=medium" %}
{% set _ = extra_cxxflags.append("-Wno-tsan") %}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
{% set _ = sanitizer_list.append("undefined") %}
{% set _ = sanitizer_list.append("float-divide-by-zero") %}
{% endif %}
{% set sanitizer_flags = "-fsanitize=" ~ ",".join(sanitizer_list) ~ " " ~ model_code %}
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
{% endif %}
{% elif compiler == "apple-clang" or compiler == "clang" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
{% set _ = sanitizer_list.append("undefined") %}
{% set _ = sanitizer_list.append("float-divide-by-zero") %}
{% set _ = sanitizer_list.append("unsigned-integer-overflow") %}
{% endif %}
{% set sanitizer_flags = "-fsanitize=" ~ ",".join(sanitizer_list) %}
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
{% endif %}
{% endif %}
{% endif %}
tools.info.package_id:confs+=["tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags"]

View File

@@ -39,7 +39,7 @@ class Xrpl(ConanFile):
]
test_requires = [
"doctest/2.4.12",
"gtest/1.17.0",
]
tool_requires = [

207
docs/build/sanitizers.md vendored Normal file
View File

@@ -0,0 +1,207 @@
# Sanitizer Configuration for Rippled
This document explains how to configure and run sanitizers (AddressSanitizer, UndefinedBehaviorSanitizer, ThreadSanitizer) with the xrpld project.
Corresponding suppression files are located in the `sanitizers/suppressions` directory.
- [Sanitizer Configuration for Rippled](#sanitizer-configuration-for-rippled)
- [Building with Sanitizers](#building-with-sanitizers)
- [Summary](#summary)
- [Build steps:](#build-steps)
- [Install dependencies](#install-dependencies)
- [Call CMake](#call-cmake)
- [Build](#build)
- [Running Tests with Sanitizers](#running-tests-with-sanitizers)
- [AddressSanitizer (ASAN)](#addresssanitizer-asan)
- [ThreadSanitizer (TSan)](#threadsanitizer-tsan)
- [LeakSanitizer (LSan)](#leaksanitizer-lsan)
- [UndefinedBehaviorSanitizer (UBSan)](#undefinedbehaviorsanitizer-ubsan)
- [Suppression Files](#suppression-files)
- [`asan.supp`](#asansupp)
- [`lsan.supp`](#lsansupp)
- [`ubsan.supp`](#ubsansupp)
- [`tsan.supp`](#tsansupp)
- [`sanitizer-ignorelist.txt`](#sanitizer-ignorelisttxt)
- [Troubleshooting](#troubleshooting)
- ["ASAN is ignoring requested \_\_asan_handle_no_return" warnings](#asan-is-ignoring-requested-__asan_handle_no_return-warnings)
- [Sanitizer Mismatch Errors](#sanitizer-mismatch-errors)
- [References](#references)
## Building with Sanitizers
### Summary
Follow the same instructions as mentioned in [BUILD.md](../../BUILD.md) but with the following changes:
1. Make sure you have a clean build directory.
2. Set the `SANITIZERS` environment variable once, before calling `conan install` and `cmake`, so that both tools read the same values. Example: `export SANITIZERS=address,undefinedbehavior`
3. Optionally pass `--profile:all sanitizers` to Conan to build the dependencies with sanitizer instrumentation as well. This is slower but produces fewer false positives.
4. Set `ASAN_OPTIONS`, `LSAN_OPTIONS`, `UBSAN_OPTIONS` and `TSAN_OPTIONS` environment variables to configure sanitizer behavior when running executables. [More details below](#running-tests-with-sanitizers).
---
### Build steps:
```bash
cd /path/to/rippled
rm -rf .build
mkdir .build
cd .build
```
#### Install dependencies
The `SANITIZERS` environment variable is used by both Conan and CMake.
```bash
export SANITIZERS=address,undefinedbehavior
# Standard build (without instrumenting dependencies)
conan install .. --output-folder . --build missing --settings build_type=Debug
# Or with sanitizer-instrumented dependencies (takes longer but fewer false positives)
conan install .. --output-folder . --profile:all sanitizers --build missing --settings build_type=Debug
```
> [!CAUTION]
> Do not mix Address and Thread sanitizers - they are incompatible.
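For quick reference, here is a sketch of the values this document uses; the token names (`address`, `thread`, `undefinedbehavior`) are the ones recognized by the Conan `sanitizers` profile shown on this page:
```bash
# ASAN and UBSan can be combined:
export SANITIZERS=address,undefinedbehavior
# TSan must not be combined with ASAN; use it on its own instead:
export SANITIZERS=thread
```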
Because the `SANITIZERS` environment variable is still set from the Conan step, CMake will read the same values in the next step.
#### Call CMake
```bash
cmake .. -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=Debug \
-Dtests=ON -Dxrpld=ON
```
#### Build
```bash
cmake --build . --parallel 4
```
## Running Tests with Sanitizers
### AddressSanitizer (ASAN)
**IMPORTANT**: ASAN with Boost produces many false positives. Use these options:
```bash
export ASAN_OPTIONS="print_stacktrace=1:detect_container_overflow=0:suppressions=path/to/asan.supp:halt_on_error=0:log_path=asan.log"
export LSAN_OPTIONS="suppressions=path/to/lsan.supp:halt_on_error=0:log_path=lsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
**Why `detect_container_overflow=0`?**
- Boost intrusive containers (used in `aged_unordered_container`) trigger false positives
- Boost context switching (used in `Workers.cpp`) confuses ASAN's stack tracking
- Because we usually do not build Boost with ASAN (we do not want to instrument Boost and report issues in Boost code) yet use Boost containers from ASAN-instrumented rippled code, container-overflow checks produce false positives.
- Building dependencies with ASAN instrumentation reduces false positives, but instrumenting dependencies such as Boost is slow (both to compile and to run the tests) and usually unnecessary.
- See: https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow
- More such flags are detailed [here](https://github.com/google/sanitizers/wiki/AddressSanitizerFlags)
### ThreadSanitizer (TSan)
```bash
export TSAN_OPTIONS="suppressions=path/to/tsan.supp halt_on_error=0 log_path=tsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
More details [here](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual).
### LeakSanitizer (LSan)
LSan is automatically enabled with ASAN. To disable it:
```bash
export ASAN_OPTIONS="detect_leaks=0"
```
More details [here](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer).
### UndefinedBehaviorSanitizer (UBSan)
```bash
export UBSAN_OPTIONS="suppressions=path/to/ubsan.supp:print_stacktrace=1:halt_on_error=0:log_path=ubsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
```
More details [here](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html).
## Suppression Files
> [!NOTE]
> The linked suppression files contain more details.
### [`asan.supp`](../../sanitizers/suppressions/asan.supp)
- **Purpose**: Suppress AddressSanitizer (ASAN) errors only
- **Format**: `<suppression_type>:<pattern>`, e.g. `interceptor_name:<pattern>`, where the pattern matches function, library, or file names. Supported suppression types are:
- interceptor_name
- interceptor_via_fun
- interceptor_via_lib
- odr_violation
- **More info**: [AddressSanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer)
- **Note**: Cannot suppress stack-buffer-overflow, container-overflow, and similar errors. Example entries are shown below.
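For illustration, the entries below mirror the project's `asan.supp` (reproduced further down this page); the first suppresses reports whose interceptor frame matches a Boost.Asio path, the second one matching an xrpl networking header:
```
interceptor_name:boost/asio
interceptor_name:xrpl/net/HTTPClient.h
```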
### [`lsan.supp`](../../sanitizers/suppressions/lsan.supp)
- **Purpose**: Suppress LeakSanitizer (LSan) errors only
- **Format**: `leak:<pattern>` where pattern matches function/file names
- **More info**: [LeakSanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer)
### [`ubsan.supp`](../../sanitizers/suppressions/ubsan.supp)
- **Purpose**: Suppress UndefinedBehaviorSanitizer (UBSan) errors
- **Format**: `<error_type>:<pattern>` (e.g., `unsigned-integer-overflow:protobuf`)
- **Covers**: Intentional overflows in third-party libraries (protobuf, gRPC, stdlib)
- **More info**: [UBSan suppressions](https://clang.llvm.org/docs/SanitizerSpecialCaseList.html)
### [`tsan.supp`](../../sanitizers/suppressions/tsan.supp)
- **Purpose**: Suppress ThreadSanitizer data race warnings
- **Format**: `race:<pattern>` where pattern matches function/file names
- **More info**: [ThreadSanitizer suppressions](https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions)
### [`sanitizer-ignorelist.txt`](../../sanitizers/suppressions/sanitizer-ignorelist.txt)
- **Purpose**: Compile-time ignorelist for all sanitizers
- **Usage**: Passed via `-fsanitize-ignorelist=absolute/path/to/sanitizer-ignorelist.txt` (see the sketch below)
- **Format**: `<level>:<pattern>` (e.g., `src:Workers.cpp`)
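The project's CMake/Conan setup normally adds this flag for you. The following is only a hypothetical manual invocation illustrating that the flag expects an absolute path; the ignorelist file name is the one from this repository, while `example.cpp` is a placeholder:
```bash
# Hypothetical manual compile; normally the build system passes this flag.
IGNORELIST="$(pwd)/sanitizers/suppressions/sanitizer-ignorelist.txt"
clang++ -fsanitize=address,undefined \
    -fsanitize-ignorelist="$IGNORELIST" \
    -c example.cpp -o example.o
```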
## Troubleshooting
### "ASAN is ignoring requested \_\_asan_handle_no_return" warnings
These warnings appear when Boost context switching is in use and are harmless, but they indicate that ASAN may report false positives around those context switches.
### Sanitizer Mismatch Errors
If you see undefined symbols like `___tsan_atomic_load` when building with ASAN:
**Problem**: Dependencies were built with a different sanitizer than the main project.
**Solution**: Rebuild everything with the same sanitizer:
```bash
rm -rf .build
# Then follow the build instructions above
```
After rebuilding, review the log files (`asan.log.*`, `ubsan.log.*`, `tsan.log.*`) for any remaining reports.
## References
- [AddressSanitizer Wiki](https://github.com/google/sanitizers/wiki/AddressSanitizer)
- [AddressSanitizer Flags](https://github.com/google/sanitizers/wiki/AddressSanitizerFlags)
- [Container Overflow Detection](https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow)
- [UndefinedBehavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html)
- [ThreadSanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)

View File

@@ -1,75 +0,0 @@
#ifndef XRPL_BASICS_MALLOCTRIM_H_INCLUDED
#define XRPL_BASICS_MALLOCTRIM_H_INCLUDED
#include <xrpl/beast/utility/Journal.h>
#include <optional>
#include <string>
namespace xrpl {
// cSpell:ignore ptmalloc
// -----------------------------------------------------------------------------
// Allocator interaction note:
// - This facility invokes glibc's malloc_trim(0) on Linux/glibc to request that
// ptmalloc return free heap pages to the OS.
// - If an alternative allocator (e.g. jemalloc or tcmalloc) is linked or
// preloaded (LD_PRELOAD), calling glibc's malloc_trim typically has no effect
// on the *active* heap. The call is harmless but may not reclaim memory
// because those allocators manage their own arenas.
// - Only glibc sbrk/arena space is eligible for trimming; large mmap-backed
// allocations are usually returned to the OS on free regardless of trimming.
// - Call at known reclamation points (e.g., after cache sweeps / online delete)
// and consider rate limiting to avoid churn.
// -----------------------------------------------------------------------------
struct MallocTrimReport
{
bool supported{false};
int trimResult{-1};
long rssBeforeKB{-1};
long rssAfterKB{-1};
long long durationUs{-1};
long minfltDelta{-1};
long majfltDelta{-1};
[[nodiscard]] long
deltaKB() const noexcept
{
if (rssBeforeKB < 0 || rssAfterKB < 0)
return 0;
return rssAfterKB - rssBeforeKB;
}
};
/**
* @brief Attempt to return freed memory to the operating system.
*
* On Linux with glibc malloc, this issues ::malloc_trim(0), which may release
* free space from ptmalloc arenas back to the kernel. On other platforms, or if
* a different allocator is in use, this function is a no-op and the report will
* indicate that trimming is unsupported or had no effect.
*
* @param tag Optional identifier for logging/debugging purposes.
* @param journal Journal for diagnostic logging.
* @return Report containing before/after metrics and the trim result.
*
* @note If an alternative allocator (jemalloc/tcmalloc) is linked or preloaded,
* calling glibc's malloc_trim may have no effect on the active heap. The
* call is harmless but typically does not reclaim memory under those
* allocators.
*
* @note Only memory served from glibc's sbrk/arena heaps is eligible for trim.
* Large allocations satisfied via mmap are usually returned on free
* independently of trimming.
*
* @note Intended for use after operations that free significant memory (e.g.,
* cache sweeps, ledger cleanup, online delete). Consider rate limiting.
*/
MallocTrimReport
mallocTrim(std::optional<std::string> const& tag, beast::Journal journal);
} // namespace xrpl
#endif

View File

@@ -4,6 +4,7 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <cstdint>
#include <functional>
#include <limits>
#include <optional>
#include <ostream>

View File

@@ -3,16 +3,19 @@
#include <xrpl/basics/CountedObject.h>
#include <xrpl/core/ClosureCounter.h>
#include <xrpl/core/LoadMonitor.h>
#include <functional>
#include <memory>
namespace xrpl {
class LoadEvent;
class LoadMonitor;
// Note that this queue should only be used for CPU-bound jobs
// It is primarily intended for signature checking
enum JobType {
enum JobType : int {
// Special type indicating an invalid job - will go away soon.
jtINVALID = -1,

View File

@@ -14,6 +14,8 @@
namespace xrpl {
class LoadEvent;
namespace perf {
class PerfLog;
}

View File

@@ -4,9 +4,12 @@
#include <xrpl/basics/Log.h>
#include <xrpl/beast/insight/Collector.h>
#include <xrpl/core/JobTypeInfo.h>
#include <xrpl/core/LoadMonitor.h>
namespace xrpl {
enum JobType : int;
struct JobTypeData
{
private:

View File

@@ -1,10 +1,13 @@
#ifndef XRPL_CORE_JOBTYPEINFO_H_INCLUDED
#define XRPL_CORE_JOBTYPEINFO_H_INCLUDED
#include <xrpl/core/Job.h>
#include <chrono>
#include <string>
namespace xrpl {
enum JobType : int;
/** Holds all the 'static' information about a job, which does not change */
class JobTypeInfo
{

View File

@@ -3,13 +3,14 @@
#include <xrpl/basics/UptimeClock.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/core/LoadEvent.h>
#include <chrono>
#include <mutex>
namespace xrpl {
class LoadEvent;
// Monitors load levels and response times
// VFALCO TODO Rename this. Having both LoadManager and LoadMonitor is

View File

@@ -1,9 +1,8 @@
#ifndef XRPL_CORE_PERFLOG_H
#define XRPL_CORE_PERFLOG_H
#include <xrpl/basics/BasicConfig.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/core/JobTypes.h>
#include <xrpl/json/json_value.h>
#include <boost/filesystem.hpp>
@@ -13,11 +12,14 @@
#include <memory>
#include <string>
namespace beast {
class Journal;
namespace Json {
class Value;
}
class Section;
namespace xrpl {
class Application;
namespace perf {

View File

@@ -7,11 +7,13 @@
#include <xrpl/ledger/ApplyView.h>
#include <xrpl/ledger/ReadView.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/STArray.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TER.h>
namespace xrpl {
class STArray;
class STTx;
namespace credentials {
// These function will be used by the code that use DepositPreauth / Credentials

View File

@@ -1,7 +1,6 @@
#ifndef XRPL_LEDGER_PAYMENTSANDBOX_H_INCLUDED
#define XRPL_LEDGER_PAYMENTSANDBOX_H_INCLUDED
#include <xrpl/ledger/RawView.h>
#include <xrpl/ledger/Sandbox.h>
#include <xrpl/ledger/detail/ApplyViewBase.h>
#include <xrpl/protocol/AccountID.h>
@@ -10,6 +9,9 @@
namespace xrpl {
// Forward declarations
class RawView;
namespace detail {
// VFALCO TODO Inline this implementation

View File

@@ -12,14 +12,17 @@
#include <xrpl/protocol/Rules.h>
#include <xrpl/protocol/STAmount.h>
#include <xrpl/protocol/STLedgerEntry.h>
#include <xrpl/protocol/STTx.h>
#include <cstdint>
#include <memory>
#include <optional>
#include <unordered_set>
namespace xrpl {
class STTx;
class STObject;
//------------------------------------------------------------------------------
/** A view into a ledger.

View File

@@ -6,7 +6,6 @@
#include <xrpl/protocol/Keylet.h>
#include <xrpl/protocol/LedgerFormats.h>
#include <xrpl/protocol/Protocol.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/protocol/STXChainBridge.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/UintTypes.h>

View File

@@ -2,13 +2,14 @@
#define XRPL_PROTOCOL_NFTSYNTHETICSERIALIZER_H_INCLUDED
#include <xrpl/json/json_forwards.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TxMeta.h>
#include <memory>
namespace xrpl {
class STTx;
class TxMeta;
namespace RPC {
/**

View File

@@ -3,8 +3,6 @@
#include <xrpl/basics/base_uint.h>
#include <xrpl/json/json_forwards.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TxMeta.h>
#include <memory>
#include <optional>
@@ -12,6 +10,9 @@
namespace xrpl {
class STTx;
class TxMeta;
/**
Add a `nftoken_ids` field to the `meta` output parameter.
The field is only added to successful NFTokenMint, NFTokenAcceptOffer,

View File

@@ -3,14 +3,15 @@
#include <xrpl/basics/base_uint.h>
#include <xrpl/json/json_forwards.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TxMeta.h>
#include <memory>
#include <optional>
namespace xrpl {
class STTx;
class TxMeta;
/**
Add an `offer_id` field to the `meta` output parameter.
The field is only added to successful NFTokenCreateOffer transactions.

View File

@@ -1,15 +1,17 @@
#ifndef XRPL_PROTOCOL_PERMISSION_H_INCLUDED
#define XRPL_PROTOCOL_PERMISSION_H_INCLUDED
#include <xrpl/protocol/Rules.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/TER.h>
#include <xrpl/protocol/TxFormats.h>
#include <optional>
#include <string>
#include <unordered_map>
namespace xrpl {
class Rules;
enum TxType : std::uint16_t;
/**
* We have both transaction type permissions and granular type permissions.
* Since we will reuse the TransactionFormats to parse the Transaction

View File

@@ -11,7 +11,6 @@
#include <xrpl/protocol/MPTAmount.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/STBase.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/XRPAmount.h>
#include <xrpl/protocol/json_get_or_throw.h>

View File

@@ -5,7 +5,6 @@
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/Serializer.h>
#include <ostream>
#include <string>
#include <type_traits>
#include <typeinfo>

View File

@@ -5,8 +5,6 @@
#include <xrpl/json/json_value.h>
#include <optional>
#include <ostream>
#include <string>
#include <unordered_map>
namespace xrpl {

View File

@@ -38,13 +38,16 @@ enum TxType : std::uint16_t
{
#pragma push_macro("TRANSACTION")
#pragma push_macro("TRANSACTION_INCLUDE")
#undef TRANSACTION
#undef TRANSACTION_INCLUDE
#define TRANSACTION(tag, value, ...) tag = value,
#include <xrpl/protocol/detail/transactions.macro>
#undef TRANSACTION
#pragma pop_macro("TRANSACTION_INCLUDE")
#pragma pop_macro("TRANSACTION")
/** This transaction type is deprecated; it is retained for historical purposes. */

View File

@@ -2,11 +2,13 @@
#define XRPL_RESOURCE_CONSUMER_H_INCLUDED
#include <xrpl/basics/Log.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/resource/Charge.h>
#include <xrpl/resource/Disposition.h>
namespace xrpl {
class PublicKey;
namespace Resource {
struct Entry;

View File

@@ -5,6 +5,7 @@
#include <xrpl/beast/clock/abstract_clock.h>
#include <xrpl/beast/core/List.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/resource/detail/Key.h>
#include <xrpl/resource/detail/Tuning.h>

View File

@@ -0,0 +1,29 @@
# The idea is to empty this file gradually by fixing the underlying issues and removing suppressions.
#
# ASAN_OPTIONS="print_stacktrace=1:detect_container_overflow=0:suppressions=sanitizers/suppressions/asan.supp:halt_on_error=0"
#
# The detect_container_overflow=0 option disables false positives from:
# - Boost intrusive containers (slist_iterator.hpp, hashtable.hpp, aged_unordered_container.h)
# - Boost context/coroutine stack switching (Workers.cpp, thread.h)
#
# See: https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow
# Boost
interceptor_name:boost/asio
# Leaks in Doctest tests: xrpl.test.*
interceptor_name:src/libxrpl/net/HTTPClient.cpp
interceptor_name:src/libxrpl/net/RegisterSSLCerts.cpp
interceptor_name:src/tests/libxrpl/net/HTTPClient.cpp
interceptor_name:xrpl/net/AutoSocket.h
interceptor_name:xrpl/net/HTTPClient.h
interceptor_name:xrpl/net/HTTPClientSSLContext.h
interceptor_name:xrpl/net/RegisterSSLCerts.h
# Suppress false positive stack-buffer errors in thread stack allocation
# Related to ASan's __asan_handle_no_return warnings (github.com/google/sanitizers/issues/189)
# These occur during multi-threaded test initialization on macOS
interceptor_name:memcpy
interceptor_name:__bzero
interceptor_name:__asan_memset
interceptor_name:__asan_memcpy

View File

@@ -0,0 +1,16 @@
# The idea is to empty this file gradually by fixing the underlying issues and removing suppressions.
# Suppress leaks detected by asan in rippled code.
leak:src/libxrpl/net/HTTPClient.cpp
leak:src/libxrpl/net/RegisterSSLCerts.cpp
leak:src/tests/libxrpl/net/HTTPClient.cpp
leak:xrpl/net/AutoSocket.h
leak:xrpl/net/HTTPClient.h
leak:xrpl/net/HTTPClientSSLContext.h
leak:xrpl/net/RegisterSSLCerts.h
leak:ripple::HTTPClient
leak:ripple::HTTPClientImp
# Suppress leaks detected by asan in boost code.
leak:boost::asio
leak:boost/asio

View File

@@ -0,0 +1,29 @@
# We were seeing some false positives and some repeated errors (since these are library files) in the following files.
# Clang will skip instrumenting the files added here.
# We should fix the underlying issues (if any) and remove these entries.
deadlock:libxrpl/beast/utility/beast_Journal.cpp
deadlock:libxrpl/beast/utility/beast_PropertyStream.cpp
deadlock:test/beast/beast_PropertyStream_test.cpp
deadlock:xrpld/core/detail/Workers.cpp
deadlock:xrpld/core/JobQueue.cpp
race:libxrpl/beast/utility/beast_Journal.cpp
race:libxrpl/beast/utility/beast_PropertyStream.cpp
race:test/beast/beast_PropertyStream_test.cpp
race:xrpld/core/detail/Workers.cpp
race:xrpld/core/JobQueue.cpp
signal:libxrpl/beast/utility/beast_Journal.cpp
signal:libxrpl/beast/utility/beast_PropertyStream.cpp
signal:test/beast/beast_PropertyStream_test.cpp
signal:xrpld/core/detail/Workers.cpp
signal:xrpld/core/JobQueue.cpp
src:beast/utility/beast_Journal.cpp
src:beast/utility/beast_PropertyStream.cpp
src:core/detail/Workers.cpp
src:core/JobQueue.cpp
src:libxrpl/beast/utility/beast_Journal.cpp
src:test/beast/beast_PropertyStream_test.cpp
src:src/test/app/Invariants_test.cpp

View File

@@ -0,0 +1,102 @@
# The idea is to empty this file gradually by fixing the underlying issues and removing suppressions.
# Suppress race in Boost ASIO scheduler detected by GCC-15
# This is a false positive in Boost's internal pipe() synchronization
race:boost/asio/
race:boost/context/
race:boost/asio/executor.hpp
race:boost::asio
# Suppress tsan related issues in rippled code.
race:src/libxrpl/basics/make_SSLContext.cpp
race:src/libxrpl/basics/Number.cpp
race:src/libxrpl/json/json_value.cpp
race:src/libxrpl/json/to_string.cpp
race:src/libxrpl/ledger/OpenView.cpp
race:src/libxrpl/net/HTTPClient.cpp
race:src/libxrpl/nodestore/backend/NuDBFactory.cpp
race:src/libxrpl/protocol/InnerObjectFormats.cpp
race:src/libxrpl/protocol/STParsedJSON.cpp
race:src/libxrpl/resource/ResourceManager.cpp
race:src/test/app/Flow_test.cpp
race:src/test/app/LedgerReplay_test.cpp
race:src/test/app/NFToken_test.cpp
race:src/test/app/Offer_test.cpp
race:src/test/app/ValidatorSite_test.cpp
race:src/test/consensus/NegativeUNL_test.cpp
race:src/test/jtx/impl/Env.cpp
race:src/test/jtx/impl/JSONRPCClient.cpp
race:src/test/jtx/impl/pay.cpp
race:src/test/jtx/impl/token.cpp
race:src/test/rpc/Book_test.cpp
race:src/xrpld/app/ledger/detail/InboundTransactions.cpp
race:src/xrpld/app/main/Application.cpp
race:src/xrpld/app/main/BasicApp.cpp
race:src/xrpld/app/main/GRPCServer.cpp
race:src/xrpld/app/misc/detail/AmendmentTable.cpp
race:src/xrpld/app/misc/FeeVoteImpl.cpp
race:src/xrpld/app/rdb/detail/Wallet.cpp
race:src/xrpld/overlay/detail/OverlayImpl.cpp
race:src/xrpld/peerfinder/detail/PeerfinderManager.cpp
race:src/xrpld/peerfinder/detail/SourceStrings.cpp
race:src/xrpld/rpc/detail/ServerHandler.cpp
race:xrpl/server/detail/Door.h
race:xrpl/server/detail/Spawn.h
race:xrpl/server/detail/ServerImpl.h
race:xrpl/nodestore/detail/DatabaseNodeImp.h
race:src/libxrpl/beast/utility/beast_Journal.cpp
race:src/test/beast/LexicalCast_test.cpp
race:ripple::ServerHandler
# More suppressions in external library code.
race:crtstuff.c
race:pipe
# Deadlock / lock-order-inversion suppressions
# Note: GCC's TSAN may not fully support all deadlock suppression patterns
deadlock:src/libxrpl/beast/utility/beast_Journal.cpp
deadlock:src/libxrpl/beast/utility/beast_PropertyStream.cpp
deadlock:src/test/beast/beast_PropertyStream_test.cpp
deadlock:src/xrpld/core/detail/Workers.cpp
deadlock:src/xrpld/app/misc/detail/Manifest.cpp
deadlock:src/xrpld/app/misc/detail/ValidatorList.cpp
deadlock:src/xrpld/app/misc/detail/ValidatorSite.cpp
signal:src/libxrpl/beast/utility/beast_Journal.cpp
signal:src/xrpld/core/detail/Workers.cpp
signal:src/xrpld/core/JobQueue.cpp
signal:ripple::Workers::Worker
# Aggressively suppress deadlock TSAN errors
deadlock:pthread_create
deadlock:pthread_rwlock_rdlock
deadlock:boost::asio
# Suppress SEGV crashes in TSAN itself during stringbuf operations
# This appears to be a GCC-15 TSAN instrumentation issue with basic_stringbuf::str()
# Commonly triggered in beast::Journal::ScopedStream destructor
signal:std::__cxx11::basic_stringbuf
signal:basic_stringbuf
signal:basic_ostringstream
called_from_lib:libclang_rt
race:ostreambuf_iterator
race:basic_ostream
# Suppress SEGV in Boost ASIO memory allocation with GCC-15 TSAN
signal:boost::asio::aligned_new
signal:boost::asio::detail::memory
# Suppress SEGV in execute_native_thread_routine
signal:execute_native_thread_routine
# Suppress data race in Boost Context fiber management
# This is a false positive in Boost's exception state management during fiber context switching
race:__cxxabiv1::manage_exception_state
race:boost::context::fiber::resume
race:boost::asio::detail::spawned_fiber_thread
race:boost::asio::detail::spawned_fiber_thread::suspend_with
race:boost::asio::detail::spawned_fiber_thread::destroy
# Suppress data race in __tsan_memcpy called from Boost fiber operations
race:__tsan_memcpy

View File

@@ -0,0 +1,237 @@
# The idea is to empty this file gradually by fixing the underlying issues and removing suppressions.
# Suppress UBSan errors in external code by source file path
# This matches any source file under the external/ directory
alignment:external
bool:external
bounds:external
cfi:external
enum:external
float-cast-overflow:external
float-divide-by-zero:external
function:external
implicit-integer-sign-change:external
implicit-signed-integer-truncation:external
implicit-unsigned-integer-truncation:external
integer-divide-by-zero:external
invalid-builtin-use:external
invalid-objc-cast:external
nonnull-attribute:external
null:external
nullability-arg:external
nullability-assign:external
nullability-return:external
object-size:external
pointer-overflow:external
return:external
returns-nonnull-attribute:external
shift-base:external
shift-exponent:external
signed-integer-overflow:external
undefined:external
unreachable:external
unsigned-integer-overflow:external
vla-bound:external
vptr_check:external
vptr:external
# Suppress all UBSan errors in Boost libraries
# This matches any file containing "boost" in its path or name
alignment:boost
bool:boost
bounds:boost
cfi:boost
enum:boost
float-cast-overflow:boost
float-divide-by-zero:boost
function:boost
implicit-integer-sign-change:boost
implicit-signed-integer-truncation:boost
implicit-unsigned-integer-truncation:boost
integer-divide-by-zero:boost
invalid-builtin-use:boost
invalid-objc-cast:boost
nonnull-attribute:boost
null:boost
nullability-arg:boost
nullability-assign:boost
nullability-return:boost
object-size:boost
pointer-overflow:boost
return:boost
returns-nonnull-attribute:boost
shift-base:boost
shift-exponent:boost
signed-integer-overflow:boost
undefined:boost
unreachable:boost
unsigned-integer-overflow:boost
vla-bound:boost
vptr_check:boost
vptr:boost
# Google protobuf
undefined:protobuf
# Suppress UBSan errors in rippled code by source file path
undefined:src/libxrpl/basics/base64.cpp
undefined:src/libxrpl/basics/Number.cpp
undefined:src/libxrpl/beast/utility/beast_Journal.cpp
undefined:src/libxrpl/crypto/RFC1751.cpp
undefined:src/libxrpl/ledger/ApplyView.cpp
undefined:src/libxrpl/ledger/View.cpp
undefined:src/libxrpl/protocol/Permissions.cpp
undefined:src/libxrpl/protocol/STAmount.cpp
undefined:src/libxrpl/protocol/STPathSet.cpp
undefined:src/libxrpl/protocol/tokens.cpp
undefined:src/libxrpl/shamap/SHAMap.cpp
undefined:src/test/app/Batch_test.cpp
undefined:src/test/app/Invariants_test.cpp
undefined:src/test/app/NFToken_test.cpp
undefined:src/test/app/Offer_test.cpp
undefined:src/test/app/Path_test.cpp
undefined:src/test/basics/XRPAmount_test.cpp
undefined:src/test/beast/LexicalCast_test.cpp
undefined:src/test/jtx/impl/acctdelete.cpp
undefined:src/test/ledger/SkipList_test.cpp
undefined:src/test/rpc/Subscribe_test.cpp
undefined:src/tests/libxrpl/basics/RangeSet.cpp
undefined:src/xrpld/app/main/BasicApp.cpp
undefined:src/xrpld/app/misc/detail/AmendmentTable.cpp
undefined:src/xrpld/app/misc/NetworkOPs.cpp
undefined:src/libxrpl/json/json_value.cpp
undefined:src/xrpld/app/paths/detail/StrandFlow.h
undefined:src/xrpld/app/tx/detail/NFTokenMint.cpp
undefined:src/xrpld/app/tx/detail/SetOracle.cpp
undefined:src/xrpld/core/detail/JobQueue.cpp
undefined:src/xrpld/core/detail/Workers.cpp
undefined:src/xrpld/rpc/detail/Role.cpp
undefined:src/xrpld/rpc/handlers/GetAggregatePrice.cpp
undefined:xrpl/basics/base_uint.h
undefined:xrpl/basics/DecayingSample.h
undefined:xrpl/beast/test/yield_to.h
undefined:xrpl/beast/xor_shift_engine.h
undefined:xrpl/nodestore/detail/varint.h
undefined:xrpl/peerfinder/detail/Counts.h
undefined:xrpl/protocol/nft.h
# basic_string.h:483:51: runtime error: unsigned integer overflow
unsigned-integer-overflow:basic_string.h
unsigned-integer-overflow:bits/chrono.h
unsigned-integer-overflow:bits/random.h
unsigned-integer-overflow:bits/random.tcc
unsigned-integer-overflow:bits/stl_algobase.h
unsigned-integer-overflow:bits/uniform_int_dist.h
unsigned-integer-overflow:string_view
# runtime error: unsigned integer overflow: 0 - 1 cannot be represented in type 'std::size_t' (aka 'unsigned long')
unsigned-integer-overflow:src/libxrpl/basics/base64.cpp
unsigned-integer-overflow:src/libxrpl/basics/Number.cpp
unsigned-integer-overflow:src/libxrpl/crypto/RFC1751.cpp
unsigned-integer-overflow:src/libxrpl/json/json_value.cpp
unsigned-integer-overflow:src/libxrpl/ledger/ApplyView.cpp
unsigned-integer-overflow:src/libxrpl/ledger/View.cpp
unsigned-integer-overflow:src/libxrpl/protocol/Permissions.cpp
unsigned-integer-overflow:src/libxrpl/protocol/STAmount.cpp
unsigned-integer-overflow:src/libxrpl/protocol/STPathSet.cpp
unsigned-integer-overflow:src/libxrpl/protocol/tokens.cpp
unsigned-integer-overflow:src/libxrpl/shamap/SHAMap.cpp
unsigned-integer-overflow:src/test/app/Batch_test.cpp
unsigned-integer-overflow:src/test/app/Invariants_test.cpp
unsigned-integer-overflow:src/test/app/NFToken_test.cpp
unsigned-integer-overflow:src/test/app/Offer_test.cpp
unsigned-integer-overflow:src/test/app/Path_test.cpp
unsigned-integer-overflow:src/test/basics/XRPAmount_test.cpp
unsigned-integer-overflow:src/test/beast/LexicalCast_test.cpp
unsigned-integer-overflow:src/test/jtx/impl/acctdelete.cpp
unsigned-integer-overflow:src/test/ledger/SkipList_test.cpp
unsigned-integer-overflow:src/test/rpc/Subscribe_test.cpp
unsigned-integer-overflow:src/tests/libxrpl/basics/RangeSet.cpp
unsigned-integer-overflow:src/xrpld/app/main/BasicApp.cpp
unsigned-integer-overflow:src/xrpld/app/misc/detail/AmendmentTable.cpp
unsigned-integer-overflow:src/xrpld/app/misc/NetworkOPs.cpp
unsigned-integer-overflow:src/xrpld/app/paths/detail/StrandFlow.h
unsigned-integer-overflow:src/xrpld/app/tx/detail/NFTokenMint.cpp
unsigned-integer-overflow:src/xrpld/app/tx/detail/SetOracle.cpp
unsigned-integer-overflow:src/xrpld/rpc/detail/Role.cpp
unsigned-integer-overflow:src/xrpld/rpc/handlers/GetAggregatePrice.cpp
unsigned-integer-overflow:xrpl/basics/base_uint.h
unsigned-integer-overflow:xrpl/basics/DecayingSample.h
unsigned-integer-overflow:xrpl/beast/test/yield_to.h
unsigned-integer-overflow:xrpl/beast/xor_shift_engine.h
unsigned-integer-overflow:xrpl/nodestore/detail/varint.h
unsigned-integer-overflow:xrpl/peerfinder/detail/Counts.h
unsigned-integer-overflow:xrpl/protocol/nft.h
# Rippled intentional overflows and operations
# STAmount uses intentional negation of INT64_MIN and overflow in arithmetic
signed-integer-overflow:src/libxrpl/protocol/STAmount.cpp
unsigned-integer-overflow:src/libxrpl/protocol/STAmount.cpp
# XRPAmount test intentional overflows
signed-integer-overflow:src/test/basics/XRPAmount_test.cpp
# Peerfinder intentional overflow in counter arithmetic
unsigned-integer-overflow:src/xrpld/peerfinder/detail/Counts.h
# Signed integer overflow suppressions
signed-integer-overflow:src/test/beast/LexicalCast_test.cpp
# External library suppressions
unsigned-integer-overflow:nudb/detail/xxhash.hpp
# Protobuf intentional overflows in hash functions
# Protobuf uses intentional unsigned overflow for hash computation (stringpiece.h:393)
unsigned-integer-overflow:google/protobuf/stubs/stringpiece.h
# gRPC intentional overflows
# gRPC uses intentional overflow in timer calculations
unsigned-integer-overflow:grpc
unsigned-integer-overflow:timer_manager.cc
# Standard library intentional overflows
# These are intentional overflows in random number generation and character conversion
unsigned-integer-overflow:__random/seed_seq.h
unsigned-integer-overflow:__charconv/traits.h
# Suppress errors in RocksDB
# RocksDB uses intentional unsigned integer overflows in hash functions and CRC calculations
unsigned-integer-overflow:rocks*/*/util/xxhash.h
unsigned-integer-overflow:rocks*/*/util/xxph3.h
unsigned-integer-overflow:rocks*/*/util/hash.cc
unsigned-integer-overflow:rocks*/*/util/crc32c.cc
unsigned-integer-overflow:rocks*/*/util/crc32c.h
unsigned-integer-overflow:rocks*/*/include/rocksdb/utilities/options_type.h
unsigned-integer-overflow:rocks*/*/table/format.h
unsigned-integer-overflow:rocks*/*/table/format.cc
unsigned-integer-overflow:rocks*/*/table/block_based/block_based_table_builder.cc
unsigned-integer-overflow:rocks*/*/table/block_based/reader_common.cc
unsigned-integer-overflow:rocks*/*/db/version_set.cc
# RocksDB misaligned loads (intentional for performance on ARM64)
alignment:rocks*/*/util/crc32c_arm64.cc
# nudb intentional overflows in hash functions
unsigned-integer-overflow:nudb/detail/xxhash.hpp
alignment:nudb/detail/xxhash.hpp
# Snappy compression library intentional overflows
unsigned-integer-overflow:snappy.cc
# Abseil intentional overflows
unsigned-integer-overflow:absl/strings/numbers.cc
unsigned-integer-overflow:absl/strings/internal/cord_rep_flat.h
unsigned-integer-overflow:absl/base/internal/low_level_alloc.cc
unsigned-integer-overflow:absl/hash/internal/hash.h
unsigned-integer-overflow:absl/container/internal/raw_hash_set.h
# Standard library intentional overflows in chrono duration arithmetic
unsigned-integer-overflow:__chrono/duration.h
# Suppress undefined errors in RocksDB and nudb
undefined:rocks.*/*/util/crc32c_arm64.cc
undefined:rocks.*/*/util/xxhash.h
undefined:nudb

View File

@@ -1,173 +0,0 @@
#include <xrpl/basics/Log.h>
#include <xrpl/basics/MallocTrim.h>
#include <boost/predef.h>
#include <chrono>
#include <cstdio>
#include <fstream>
// #include <thread>
#if defined(__GLIBC__) && BOOST_OS_LINUX
#include <sys/resource.h>
#include <malloc.h>
#include <unistd.h>
// Require RUSAGE_THREAD for thread-scoped page fault tracking
#ifndef RUSAGE_THREAD
#error "MallocTrim rusage instrumentation requires RUSAGE_THREAD on Linux/glibc"
#endif
namespace {
pid_t const cachedPid = ::getpid();
bool
getRusageThread(struct rusage& ru)
{
return ::getrusage(RUSAGE_THREAD, &ru) == 0;
}
} // namespace
#endif
namespace xrpl {
namespace detail {
#if defined(__GLIBC__) && BOOST_OS_LINUX
long
parseVmRSSkB(std::string const& status)
{
std::istringstream iss(status);
std::string line;
while (std::getline(iss, line))
{
// Allow leading spaces/tabs before the key.
auto const firstNonWs = line.find_first_not_of(" \t");
if (firstNonWs == std::string::npos)
continue;
constexpr char key[] = "VmRSS:";
constexpr auto keyLen = sizeof(key) - 1;
// Require the line (after leading whitespace) to start with "VmRSS:".
// Check if we have enough characters and the substring matches.
if (firstNonWs + keyLen > line.size() ||
line.substr(firstNonWs, keyLen) != key)
continue;
// Move past "VmRSS:" and any following whitespace.
auto pos = firstNonWs + keyLen;
while (pos < line.size() &&
std::isspace(static_cast<unsigned char>(line[pos])))
{
++pos;
}
long value = -1;
if (std::sscanf(line.c_str() + pos, "%ld", &value) == 1)
return value;
// Found the key but couldn't parse a number.
return -1;
}
// No VmRSS line found.
return -1;
}
#endif // __GLIBC__ && BOOST_OS_LINUX
} // namespace detail
MallocTrimReport
mallocTrim(
[[maybe_unused]] std::optional<std::string> const& tag,
beast::Journal journal)
{
MallocTrimReport report;
#if !(defined(__GLIBC__) && BOOST_OS_LINUX)
JLOG(journal.debug()) << "malloc_trim not supported on this platform";
#else
report.supported = true;
// Launch malloc_trim on a separate thread (async, fire-and-forget)
// std::thread trimThread([tag, journal]() {
if (journal.debug())
{
auto readFile = [](std::string const& path) -> std::string {
std::ifstream ifs(path);
if (!ifs.is_open())
return {};
return std::string(
std::istreambuf_iterator<char>(ifs),
std::istreambuf_iterator<char>());
};
std::string const tagStr = tag.value_or("default");
std::string const statusPath =
"/proc/" + std::to_string(cachedPid) + "/status";
auto const statusBefore = readFile(statusPath);
long const rssBeforeKB = detail::parseVmRSSkB(statusBefore);
struct rusage ru0
{
};
bool const have_ru0 = getRusageThread(ru0);
auto const t0 = std::chrono::steady_clock::now();
int const trimResult = ::malloc_trim(0);
auto const t1 = std::chrono::steady_clock::now();
struct rusage ru1
{
};
bool const have_ru1 = getRusageThread(ru1);
auto const statusAfter = readFile(statusPath);
long const rssAfterKB = detail::parseVmRSSkB(statusAfter);
long long const durationUs =
std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0)
.count();
long minfltDelta = -1;
long majfltDelta = -1;
if (have_ru0 && have_ru1)
{
minfltDelta = ru1.ru_minflt - ru0.ru_minflt;
majfltDelta = ru1.ru_majflt - ru0.ru_majflt;
}
long const deltaKB = (rssBeforeKB < 0 || rssAfterKB < 0)
? 0
: (rssAfterKB - rssBeforeKB);
JLOG(journal.debug())
<< "malloc_trim tag=" << tagStr << " result=" << trimResult
<< " rss_before=" << rssBeforeKB << "kB"
<< " rss_after=" << rssAfterKB << "kB"
<< " delta=" << deltaKB << "kB"
<< " duration_us=" << durationUs << " minflt_delta=" << minfltDelta
<< " majflt_delta=" << majfltDelta;
}
else
{
::malloc_trim(0);
}
// });
// trimThread.detach();
#endif
return report;
}
} // namespace xrpl

View File

@@ -1,5 +1,7 @@
#include <xrpl/beast/core/CurrentThreadName.h>
#include <xrpl/core/Job.h>
#include <xrpl/core/LoadEvent.h>
#include <xrpl/core/LoadMonitor.h>
namespace xrpl {

View File

@@ -1,5 +1,6 @@
#include <xrpl/basics/contract.h>
#include <xrpl/core/JobQueue.h>
#include <xrpl/core/LoadEvent.h>
#include <xrpl/core/PerfLog.h>
#include <mutex>

View File

@@ -1,5 +1,6 @@
#include <xrpl/basics/Log.h>
#include <xrpl/basics/UptimeClock.h>
#include <xrpl/core/LoadEvent.h>
#include <xrpl/core/LoadMonitor.h>
namespace xrpl {

View File

@@ -1,6 +1,7 @@
#include <xrpl/basics/contract.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/ledger/ApplyView.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/Protocol.h>
#include <limits>

View File

@@ -1,5 +1,7 @@
#include <xrpl/ledger/CredentialHelpers.h>
#include <xrpl/ledger/View.h>
#include <xrpl/protocol/STArray.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TER.h>
#include <xrpl/protocol/digest.h>

View File

@@ -1,5 +1,6 @@
#include <xrpl/basics/contract.h>
#include <xrpl/ledger/OpenView.h>
#include <xrpl/protocol/STTx.h>
namespace xrpl {

View File

@@ -1,5 +1,6 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/ledger/PaymentSandbox.h>
#include <xrpl/ledger/RawView.h>
#include <xrpl/ledger/View.h>
#include <xrpl/protocol/SField.h>

View File

@@ -2,6 +2,7 @@
#include <xrpl/basics/Log.h>
#include <xrpl/basics/chrono.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/ledger/ApplyView.h>
#include <xrpl/ledger/CredentialHelpers.h>
#include <xrpl/ledger/Credit.h>
#include <xrpl/ledger/ReadView.h>
@@ -451,9 +452,8 @@ getTrustLineBalance(
amount.clear(Issue{currency, issuer});
}
JLOG(j.trace()) << "getTrustLineBalance:"
<< " account=" << to_string(account)
<< " amount=" << amount.getFullText();
JLOG(j.trace()) << "getTrustLineBalance:" << " account="
<< to_string(account) << " amount=" << amount.getFullText();
return view.balanceHook(account, issuer, amount);
}
@@ -700,8 +700,7 @@ xrpLiquid(
STAmount const amount =
(balance < reserve) ? STAmount{0} : balance - reserve;
JLOG(j.trace()) << "accountHolds:"
<< " account=" << to_string(id)
JLOG(j.trace()) << "accountHolds:" << " account=" << to_string(id)
<< " amount=" << amount.getFullText()
<< " fullBalance=" << fullBalance.getFullText()
<< " balance=" << balance.getFullText()
@@ -1107,7 +1106,7 @@ adjustOwnerCount(
std::function<void(SLE::ref)>
describeOwnerDir(AccountID const& account)
{
return [&account](std::shared_ptr<SLE> const& sle) {
return [account](std::shared_ptr<SLE> const& sle) {
(*sle)[sfOwner] = account;
};
}

View File

@@ -1,7 +1,9 @@
#include <xrpl/basics/chrono.h>
#include <xrpl/beast/core/CurrentThreadName.h>
#include <xrpl/json/json_value.h>
#include <xrpl/nodestore/Backend.h>
#include <xrpl/nodestore/Database.h>
#include <xrpl/nodestore/Scheduler.h>
#include <xrpl/protocol/HashPrefix.h>
#include <xrpl/protocol/jss.h>

View File

@@ -3,6 +3,8 @@
#include <xrpl/beast/core/SemanticVersion.h>
#include <xrpl/protocol/BuildInfo.h>
#include <boost/preprocessor/stringize.hpp>
#include <algorithm>
#include <cstdint>
#include <string>
@@ -20,7 +22,7 @@ namespace BuildInfo {
char const* const versionString = "3.2.0-b0"
// clang-format on
#if defined(DEBUG) || defined(SANITIZER)
#if defined(DEBUG) || defined(SANITIZERS)
"+"
#ifdef GIT_COMMIT_HASH
GIT_COMMIT_HASH
@@ -28,13 +30,13 @@ char const* const versionString = "3.2.0-b0"
#endif
#ifdef DEBUG
"DEBUG"
#ifdef SANITIZER
#ifdef SANITIZERS
"."
#endif
#endif
#ifdef SANITIZER
BOOST_PP_STRINGIZE(SANITIZER) // cspell: disable-line
#ifdef SANITIZERS
BOOST_PP_STRINGIZE(SANITIZERS) // cspell: disable-line
#endif
#endif

View File

@@ -1,6 +1,8 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/Permissions.h>
#include <xrpl/protocol/Rules.h>
#include <xrpl/protocol/TxFormats.h>
#include <xrpl/protocol/jss.h>
namespace xrpl {

View File

@@ -1,9 +1,11 @@
#include <xrpl/json/json_value.h>
#include <xrpl/protocol/TER.h>
#include <boost/range/adaptor/transformed.hpp>
#include <boost/range/iterator_range_core.hpp>
#include <optional>
#include <ostream>
#include <string>
#include <unordered_map>
#include <utility>

View File

@@ -1,6 +1,7 @@
#include <xrpl/basics/Log.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/resource/Charge.h>
#include <xrpl/resource/Consumer.h>
#include <xrpl/resource/Disposition.h>

View File

@@ -1,9 +1,13 @@
#include <xrpl/basics/TaggedCache.ipp>
#include <xrpl/basics/contract.h>
#include <xrpl/shamap/Family.h>
#include <xrpl/shamap/SHAMap.h>
#include <xrpl/shamap/SHAMapAccountStateLeafNode.h>
#include <xrpl/shamap/SHAMapInnerNode.h>
#include <xrpl/shamap/SHAMapLeafNode.h>
#include <xrpl/shamap/SHAMapNodeID.h>
#include <xrpl/shamap/SHAMapSyncFilter.h>
#include <xrpl/shamap/SHAMapTreeNode.h>
#include <xrpl/shamap/SHAMapTxLeafNode.h>
#include <xrpl/shamap/SHAMapTxPlusMetaLeafNode.h>

View File

@@ -0,0 +1,211 @@
#include <test/jtx.h>
#include <test/jtx/Env.h>
#include <xrpld/overlay/Message.h>
#include <xrpld/overlay/detail/OverlayImpl.h>
#include <xrpld/overlay/detail/PeerImp.h>
#include <xrpld/overlay/detail/Tuning.h>
#include <xrpld/peerfinder/detail/SlotImp.h>
#include <xrpl/basics/make_SSLContext.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/nodestore/NodeObject.h>
#include <xrpl/protocol/digest.h>
#include <xrpl/protocol/messages.h>
namespace xrpl {
namespace test {
using namespace jtx;
/**
* Test for TMGetObjectByHash reply size limiting.
*
* This verifies the fix that limits TMGetObjectByHash replies to
* Tuning::hardMaxReplyNodes to prevent excessive memory usage and
* potential DoS attacks from peers requesting large numbers of objects.
*/
class TMGetObjectByHash_test : public beast::unit_test::suite
{
using middle_type = boost::beast::tcp_stream;
using stream_type = boost::beast::ssl_stream<middle_type>;
using socket_type = boost::asio::ip::tcp::socket;
using shared_context = std::shared_ptr<boost::asio::ssl::context>;
/**
* Test peer that captures sent messages for verification.
*/
class PeerTest : public PeerImp
{
public:
PeerTest(
Application& app,
std::shared_ptr<PeerFinder::Slot> const& slot,
http_request_type&& request,
PublicKey const& publicKey,
ProtocolVersion protocol,
Resource::Consumer consumer,
std::unique_ptr<TMGetObjectByHash_test::stream_type>&& stream_ptr,
OverlayImpl& overlay)
: PeerImp(
app,
id_++,
slot,
std::move(request),
publicKey,
protocol,
consumer,
std::move(stream_ptr),
overlay)
{
}
~PeerTest() = default;
void
run() override
{
}
void
send(std::shared_ptr<Message> const& m) override
{
lastSentMessage_ = m;
}
std::shared_ptr<Message>
getLastSentMessage() const
{
return lastSentMessage_;
}
static void
resetId()
{
id_ = 0;
}
private:
inline static Peer::id_t id_ = 0;
std::shared_ptr<Message> lastSentMessage_;
};
shared_context context_{make_SSLContext("")};
ProtocolVersion protocolVersion_{1, 7};
std::shared_ptr<PeerTest>
createPeer(jtx::Env& env)
{
auto& overlay = dynamic_cast<OverlayImpl&>(env.app().overlay());
boost::beast::http::request<boost::beast::http::dynamic_body> request;
auto stream_ptr = std::make_unique<stream_type>(
socket_type(env.app().getIOContext()), *context_);
beast::IP::Endpoint local(
boost::asio::ip::make_address("172.1.1.1"), 51235);
beast::IP::Endpoint remote(
boost::asio::ip::make_address("172.1.1.2"), 51235);
PublicKey key(std::get<0>(randomKeyPair(KeyType::ed25519)));
auto consumer = overlay.resourceManager().newInboundEndpoint(remote);
auto [slot, _] = overlay.peerFinder().new_inbound_slot(local, remote);
auto peer = std::make_shared<PeerTest>(
env.app(),
slot,
std::move(request),
key,
protocolVersion_,
consumer,
std::move(stream_ptr),
overlay);
overlay.add_active(peer);
return peer;
}
std::shared_ptr<protocol::TMGetObjectByHash>
createRequest(size_t const numObjects, Env& env)
{
// Store objects in the NodeStore that will be found during the query
auto& nodeStore = env.app().getNodeStore();
// Create and store objects
std::vector<uint256> hashes;
hashes.reserve(numObjects);
for (int i = 0; i < numObjects; ++i)
{
uint256 hash(xrpl::sha512Half(i));
hashes.push_back(hash);
Blob data(100, static_cast<unsigned char>(i % 256));
nodeStore.store(
hotLEDGER,
std::move(data),
hash,
nodeStore.earliestLedgerSeq());
}
// Create a request with more objects than hardMaxReplyNodes
auto request = std::make_shared<protocol::TMGetObjectByHash>();
request->set_type(protocol::TMGetObjectByHash_ObjectType_otLEDGER);
request->set_query(true);
for (int i = 0; i < numObjects; ++i)
{
auto object = request->add_objects();
object->set_hash(hashes[i].data(), hashes[i].size());
object->set_ledgerseq(i);
}
return request;
}
/**
* Test that reply is limited to hardMaxReplyNodes when more objects
* are requested than the limit allows.
*/
void
testReplyLimit(size_t const numObjects, int const expectedReplySize)
{
testcase("Reply Limit");
Env env(*this);
PeerTest::resetId();
auto peer = createPeer(env);
auto request = createRequest(numObjects, env);
// Call the onMessage handler
peer->onMessage(request);
// Verify that a reply was sent
auto sentMessage = peer->getLastSentMessage();
BEAST_EXPECT(sentMessage != nullptr);
// Parse the reply message
auto const& buffer =
sentMessage->getBuffer(compression::Compressed::Off);
BEAST_EXPECT(buffer.size() > 6);
// Skip the message header (6 bytes: 4 for size, 2 for type)
protocol::TMGetObjectByHash reply;
BEAST_EXPECT(
reply.ParseFromArray(buffer.data() + 6, buffer.size() - 6) == true);
// Verify the reply is limited to expectedReplySize
BEAST_EXPECT(reply.objects_size() == expectedReplySize);
}
void
run() override
{
int const limit = static_cast<int>(Tuning::hardMaxReplyNodes);
testReplyLimit(limit + 1, limit);
testReplyLimit(limit, limit);
testReplyLimit(limit - 1, limit - 1);
}
};
BEAST_DEFINE_TESTSUITE(TMGetObjectByHash, overlay, xrpl);
} // namespace test
} // namespace xrpl

View File

@@ -5,6 +5,8 @@
#include <test/jtx/multisign.h>
#include <test/jtx/xchain_bridge.h>
#include <xrpld/app/tx/apply.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/json/json_value.h>
#include <xrpl/protocol/AccountID.h>
@@ -30,6 +32,7 @@ enum class FieldType {
CurrencyField,
HashField,
HashOrObjectField,
FixedHashField,
IssueField,
ObjectField,
StringField,
@@ -86,6 +89,7 @@ getTypeName(FieldType typeID)
case FieldType::CurrencyField:
return "Currency";
case FieldType::HashField:
case FieldType::FixedHashField:
return "hex string";
case FieldType::HashOrObjectField:
return "hex string or object";
@@ -202,6 +206,7 @@ class LedgerEntry_test : public beast::unit_test::suite
static auto const& badBlobValues = remove({3, 7, 8, 16});
static auto const& badCurrencyValues = remove({14});
static auto const& badHashValues = remove({2, 3, 7, 8, 16});
static auto const& badFixedHashValues = remove({1, 2, 3, 4, 7, 8, 16});
static auto const& badIndexValues = remove({12, 16, 18, 19});
static auto const& badUInt32Values = remove({2, 3});
static auto const& badUInt64Values = remove({2, 3});
@@ -222,6 +227,8 @@ class LedgerEntry_test : public beast::unit_test::suite
return badHashValues;
case FieldType::HashOrObjectField:
return badIndexValues;
case FieldType::FixedHashField:
return badFixedHashValues;
case FieldType::IssueField:
return badIssueValues;
case FieldType::UInt32Field:
@@ -717,7 +724,12 @@ class LedgerEntry_test : public beast::unit_test::suite
}
// negative tests
runLedgerEntryTest(env, jss::amendments);
testMalformedField(
env,
Json::Value{},
jss::amendments,
FieldType::FixedHashField,
"malformedRequest");
}
void
@@ -1538,7 +1550,12 @@ class LedgerEntry_test : public beast::unit_test::suite
}
// negative tests
runLedgerEntryTest(env, jss::fee);
testMalformedField(
env,
Json::Value{},
jss::fee,
FieldType::FixedHashField,
"malformedRequest");
}
void
@@ -1561,7 +1578,12 @@ class LedgerEntry_test : public beast::unit_test::suite
}
// negative tests
runLedgerEntryTest(env, jss::hashes);
testMalformedField(
env,
Json::Value{},
jss::hashes,
FieldType::FixedHashField,
"malformedRequest");
}
void
@@ -1686,7 +1708,12 @@ class LedgerEntry_test : public beast::unit_test::suite
}
// negative tests
runLedgerEntryTest(env, jss::nunl);
testMalformedField(
env,
Json::Value{},
jss::nunl,
FieldType::FixedHashField,
"malformedRequest");
}
void
@@ -2343,6 +2370,438 @@ class LedgerEntry_test : public beast::unit_test::suite
}
}
/// Test the ledger entry types that don't take parameters
void
testFixed()
{
using namespace test::jtx;
Account const alice{"alice"};
Account const bob{"bob"};
Env env{*this, envconfig([](auto cfg) {
cfg->START_UP = Config::FRESH;
return cfg;
})};
env.close();
/** Verifies that the RPC result has the expected data
*
* @param good: Indicates that the request should have succeeded
* and returned a ledger object of `expectedType` type.
* @param jv: The RPC result Json value
* @param expectedType: The type that the ledger object should
* have if "good".
* @param expectedError: Optional. The expected error if not
* good. Defaults to "entryNotFound".
*/
auto checkResult =
[&](bool good,
Json::Value const& jv,
Json::StaticString const& expectedType,
std::optional<std::string> const& expectedError = {}) {
if (good)
{
BEAST_EXPECTS(
jv.isObject() && jv.isMember(jss::result) &&
!jv[jss::result].isMember(jss::error) &&
jv[jss::result].isMember(jss::node) &&
jv[jss::result][jss::node].isMember(
sfLedgerEntryType.jsonName) &&
jv[jss::result][jss::node]
[sfLedgerEntryType.jsonName] == expectedType,
to_string(jv));
}
else
{
BEAST_EXPECTS(
jv.isObject() && jv.isMember(jss::result) &&
jv[jss::result].isMember(jss::error) &&
!jv[jss::result].isMember(jss::node) &&
jv[jss::result][jss::error] ==
expectedError.value_or("entryNotFound"),
to_string(jv));
}
};
/** Runs a series of tests for a given fixed-position ledger
* entry.
*
* @param field: The Json request field to use.
* @param expectedType: The type that the ledger object should
* have if "good".
* @param expectedKey: The keylet of the fixed object.
* @param good: Indicates whether the object is expected to
* exist.
*/
auto test = [&](Json::StaticString const& field,
Json::StaticString const& expectedType,
Keylet const& expectedKey,
bool good) {
testcase << expectedType.c_str() << (good ? "" : " not")
<< " found";
auto const hexKey = strHex(expectedKey.key);
{
// Test bad values
// "field":null
Json::Value params;
params[jss::ledger_index] = jss::validated;
params[field] = Json::nullValue;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, expectedType, "malformedRequest");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// "field":"string"
params[jss::ledger_index] = jss::validated;
params[field] = "arbitrary string";
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, expectedType, "malformedRequest");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// "field":false
params[jss::ledger_index] = jss::validated;
params[field] = false;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, expectedType, "invalidParams");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// "field":[incorrect index hash]
auto const badKey = strHex(expectedKey.key + uint256{1});
params[jss::ledger_index] = jss::validated;
params[field] = badKey;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, expectedType, "entryNotFound");
BEAST_EXPECTS(
jv[jss::result][jss::index] == badKey, to_string(jv));
}
{
Json::Value params;
// "index":"field" using API 2
params[jss::ledger_index] = jss::validated;
params[jss::index] = field;
params[jss::api_version] = 2;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, expectedType, "malformedRequest");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
std::string const pdIdx = [&]() {
{
Json::Value params;
// Test good values
// Use the "field":true notation
params[jss::ledger_index] = jss::validated;
params[field] = true;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
// Index will always be returned for valid parameters.
std::string const pdIdx =
jv[jss::result][jss::index].asString();
BEAST_EXPECTS(hexKey == pdIdx, to_string(jv));
checkResult(good, jv, expectedType);
return pdIdx;
}
}();
{
Json::Value params;
// "field":"[index hash]"
params[jss::ledger_index] = jss::validated;
params[field] = hexKey;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(good, jv, expectedType);
BEAST_EXPECT(jv[jss::result][jss::index].asString() == hexKey);
}
{
// Bad value
// Use the "index":"field" notation with API 2
Json::Value params;
params[jss::ledger_index] = jss::validated;
params[jss::index] = field;
params[jss::api_version] = 2;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, expectedType, "malformedRequest");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// Use the "index":"field" notation with API 3
params[jss::ledger_index] = jss::validated;
params[jss::index] = field;
params[jss::api_version] = 3;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
// Index is correct either way
BEAST_EXPECT(jv[jss::result][jss::index].asString() == hexKey);
checkResult(good, jv, expectedType);
}
{
Json::Value params;
// Use the "index":"[index hash]" notation
params[jss::ledger_index] = jss::validated;
params[jss::index] = pdIdx;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
// Index is correct either way
BEAST_EXPECT(jv[jss::result][jss::index].asString() == hexKey);
checkResult(good, jv, expectedType);
}
};
test(jss::amendments, jss::Amendments, keylet::amendments(), true);
test(jss::fee, jss::FeeSettings, keylet::fees(), true);
// There won't be an nunl
test(jss::nunl, jss::NegativeUNL, keylet::negativeUNL(), false);
// Can only get the short skip list this way
test(jss::hashes, jss::LedgerHashes, keylet::skip(), true);
}
void
testHashes()
{
using namespace test::jtx;
Account const alice{"alice"};
Account const bob{"bob"};
Env env{*this, envconfig([](auto cfg) {
cfg->START_UP = Config::FRESH;
return cfg;
})};
env.close();
/** Verifies that the RPC result has the expected data
*
* @param good: Indicates that the request should have succeeded
* and returned a ledger object of `expectedType` type.
* @param jv: The RPC result Json value
* @param expectedCount: The number of Hashes expected in the
* object if "good".
* @param expectedError: Optional. The expected error if not
* good. Defaults to "entryNotFound".
*/
auto checkResult =
[&](bool good,
Json::Value const& jv,
int expectedCount,
std::optional<std::string> const& expectedError = {}) {
if (good)
{
BEAST_EXPECTS(
jv.isObject() && jv.isMember(jss::result) &&
!jv[jss::result].isMember(jss::error) &&
jv[jss::result].isMember(jss::node) &&
jv[jss::result][jss::node].isMember(
sfLedgerEntryType.jsonName) &&
jv[jss::result][jss::node]
[sfLedgerEntryType.jsonName] == jss::LedgerHashes,
to_string(jv));
BEAST_EXPECTS(
jv[jss::result].isMember(jss::node) &&
jv[jss::result][jss::node].isMember("Hashes") &&
jv[jss::result][jss::node]["Hashes"].size() ==
expectedCount,
to_string(jv[jss::result][jss::node]["Hashes"].size()));
}
else
{
BEAST_EXPECTS(
jv.isObject() && jv.isMember(jss::result) &&
jv[jss::result].isMember(jss::error) &&
!jv[jss::result].isMember(jss::node) &&
jv[jss::result][jss::error] ==
expectedError.value_or("entryNotFound"),
to_string(jv));
}
};
/** Runs a series of tests for a given ledger index.
*
* @param ledger: The value to pass as the "hashes" request
* parameter. It is not necessarily a number.
* @param expectedKey: The expected keylet of the object.
* @param good: Indicates whether the object is expected to
* exist.
* @param expectedCount: The number of Hashes expected in the
* object if "good".
*/
auto test = [&](Json::Value ledger,
Keylet const& expectedKey,
bool good,
int expectedCount = 0) {
testcase << "LedgerHashes: seq: " << env.current()->header().seq
<< " \"hashes\":" << to_string(ledger)
<< (good ? "" : " not") << " found";
auto const hexKey = strHex(expectedKey.key);
{
// Test bad values
// "hashes":null
Json::Value params;
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = Json::nullValue;
auto jv = env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, 0, "malformedRequest");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// "hashes":"non-uint string"
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = "arbitrary string";
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, 0, "malformedRequest");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// "hashes":"uint string" is invalid, too
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = "10";
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, 0, "malformedRequest");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// "hashes":false
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = false;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, 0, "invalidParams");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
{
Json::Value params;
// "hashes":-1
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = -1;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, 0, "internal");
BEAST_EXPECT(!jv[jss::result].isMember(jss::index));
}
// "hashes":[incorrect index hash]
{
Json::Value params;
auto const badKey = strHex(expectedKey.key + uint256{1});
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = badKey;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(false, jv, 0, "entryNotFound");
BEAST_EXPECT(jv[jss::result][jss::index] == badKey);
}
{
Json::Value params;
// Test good values
// Use the "hashes":ledger notation
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = ledger;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(good, jv, expectedCount);
// Index will always be returned for valid parameters.
std::string const pdIdx =
jv[jss::result][jss::index].asString();
BEAST_EXPECTS(hexKey == pdIdx, strHex(pdIdx));
}
{
Json::Value params;
// "hashes":"[index hash]"
params[jss::ledger_index] = jss::validated;
params[jss::hashes] = hexKey;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(good, jv, expectedCount);
// Index is correct either way
BEAST_EXPECTS(
hexKey == jv[jss::result][jss::index].asString(),
strHex(jv[jss::result][jss::index].asString()));
}
{
Json::Value params;
// Use the "index":"[index hash]" notation
params[jss::ledger_index] = jss::validated;
params[jss::index] = hexKey;
auto const jv =
env.rpc("json", "ledger_entry", to_string(params));
checkResult(good, jv, expectedCount);
// Index is correct either way
BEAST_EXPECTS(
hexKey == jv[jss::result][jss::index].asString(),
strHex(jv[jss::result][jss::index].asString()));
}
};
// short skip list
test(true, keylet::skip(), true, 2);
// long skip list at index 0
test(1, keylet::skip(1), false);
// long skip list at index 1
test(1 << 17, keylet::skip(1 << 17), false);
// Close more ledgers, but stop short of the flag ledger
for (auto i = env.current()->seq(); i <= 250; ++i)
env.close();
// short skip list
test(true, keylet::skip(), true, 249);
// long skip list at index 0
test(1, keylet::skip(1), false);
// long skip list at index 1
test(1 << 17, keylet::skip(1 << 17), false);
// Close a flag ledger so the first "long" skip list is created
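// (Flag ledgers occur every 256 ledgers, so closing through seq 256 creates it.)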
for (auto i = env.current()->seq(); i <= 260; ++i)
env.close();
// short skip list
test(true, keylet::skip(), true, 256);
// long skip list at index 0
test(1, keylet::skip(1), true, 1);
// long skip list at index 1
test(1 << 17, keylet::skip(1 << 17), false);
}
void
testCLI()
{
@@ -2400,6 +2859,8 @@ public:
testOracleLedgerEntry();
testMPT();
testPermissionedDomain();
testFixed();
testHashes();
testCLI();
}
};

View File

@@ -1,5 +1,5 @@
# Unit tests
This directory contains unit tests for the project. The difference from the existing `src/test` folder
is that we switch to 3rd party testing framework (doctest). We intend to gradually move existing tests
from our own framework to doctest and such tests will be moved to this new folder.
is that we switch to a third-party testing framework (`gtest`). We intend to gradually move existing tests
from our own framework to `gtest`, and such tests will be moved to this new folder.
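For illustration only (the test name below is hypothetical), a test in this directory follows the same pattern used throughout this change: a `TEST(suite, name)` body built on `EXPECT_*` macros against `<gtest/gtest.h>`, linked via the `xrpl.imports.test` target:

#include <xrpl/basics/base64.h>

#include <gtest/gtest.h>

#include <string>

using namespace xrpl;

// Hypothetical sketch: encoding and then decoding returns the original input.
TEST(example, base64_round_trip)
{
    std::string const input = "hello";
    EXPECT_EQ(base64_decode(base64_encode(input)), input);
}

Each test module also supplies its own `main` that calls `::testing::InitGoogleTest` and returns `RUN_ALL_TESTS()`, as shown in the converted module main files below.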

View File

@@ -1,14 +1,14 @@
include(XrplAddTest)
# Test requirements.
find_package(doctest REQUIRED)
find_package(GTest REQUIRED)
# Custom target for all tests defined in this file
add_custom_target(xrpl.tests)
# Common library dependencies for the rest of the tests.
add_library(xrpl.imports.test INTERFACE)
target_link_libraries(xrpl.imports.test INTERFACE doctest::doctest xrpl.libxrpl)
target_link_libraries(xrpl.imports.test INTERFACE gtest::gtest xrpl.libxrpl)
# One test for each module.
xrpl_add_test(basics)

View File

@@ -1,207 +0,0 @@
#include <xrpl/basics/MallocTrim.h>
#include <boost/predef.h>
#include <doctest/doctest.h>
using namespace xrpl;
#if defined(__GLIBC__) && BOOST_OS_LINUX
namespace xrpl::detail {
long
parseVmRSSkB(std::string const& status);
} // namespace xrpl::detail
#endif
TEST_CASE("MallocTrimReport structure")
{
// Test default construction
MallocTrimReport report;
CHECK(report.supported == false);
CHECK(report.trimResult == -1);
CHECK(report.rssBeforeKB == -1);
CHECK(report.rssAfterKB == -1);
CHECK(report.deltaKB() == 0);
// Test deltaKB calculation - memory freed
report.rssBeforeKB = 1000;
report.rssAfterKB = 800;
CHECK(report.deltaKB() == -200);
// Test deltaKB calculation - memory increased
report.rssBeforeKB = 500;
report.rssAfterKB = 600;
CHECK(report.deltaKB() == 100);
// Test deltaKB calculation - no change
report.rssBeforeKB = 1234;
report.rssAfterKB = 1234;
CHECK(report.deltaKB() == 0);
}
#if defined(__GLIBC__) && BOOST_OS_LINUX
TEST_CASE("parseVmRSSkB")
{
using xrpl::detail::parseVmRSSkB;
// Test standard format
{
std::string status = "VmRSS: 123456 kB\n";
long result = parseVmRSSkB(status);
CHECK(result == 123456);
}
// Test with multiple lines
{
std::string status =
"Name: xrpld\n"
"VmPeak: 1234567 kB\n"
"VmSize: 1234567 kB\n"
"VmRSS: 987654 kB\n"
"VmData: 123456 kB\n";
long result = parseVmRSSkB(status);
CHECK(result == 987654);
}
// Test with minimal whitespace
{
std::string status = "VmRSS: 42 kB";
long result = parseVmRSSkB(status);
CHECK(result == 42);
}
// Test with extra whitespace
{
std::string status = "VmRSS: 999999 kB";
long result = parseVmRSSkB(status);
CHECK(result == 999999);
}
// Test with tabs
{
std::string status = "VmRSS:\t\t12345 kB";
long result = parseVmRSSkB(status);
// Note: tabs are not explicitly handled as spaces, this documents
// current behavior
CHECK(result == 12345);
}
// Test zero value
{
std::string status = "VmRSS: 0 kB\n";
long result = parseVmRSSkB(status);
CHECK(result == 0);
}
// Test missing VmRSS
{
std::string status =
"Name: xrpld\n"
"VmPeak: 1234567 kB\n"
"VmSize: 1234567 kB\n";
long result = parseVmRSSkB(status);
CHECK(result == -1);
}
// Test empty string
{
std::string status = "";
long result = parseVmRSSkB(status);
CHECK(result == -1);
}
// Test malformed data (VmRSS but no number)
{
std::string status = "VmRSS: \n";
long result = parseVmRSSkB(status);
// sscanf should fail to parse and return -1 unchanged
CHECK(result == -1);
}
// Test malformed data (VmRSS but invalid number)
{
std::string status = "VmRSS: abc kB\n";
long result = parseVmRSSkB(status);
// sscanf should fail and return -1 unchanged
CHECK(result == -1);
}
// Test partial match (should not match "NotVmRSS:")
{
std::string status = "NotVmRSS: 123456 kB\n";
long result = parseVmRSSkB(status);
CHECK(result == -1);
}
}
#endif
TEST_CASE("mallocTrim basic functionality")
{
beast::Journal journal{beast::Journal::getNullSink()};
// Test with no tag
{
MallocTrimReport report = mallocTrim(std::nullopt, journal);
#if defined(__GLIBC__) && BOOST_OS_LINUX
// On Linux with glibc, should be supported
CHECK(report.supported == true);
// trimResult should be 0 or 1 (success indicators)
CHECK(report.trimResult >= 0);
#else
// On other platforms, should be unsupported
CHECK(report.supported == false);
CHECK(report.trimResult == -1);
CHECK(report.rssBeforeKB == -1);
CHECK(report.rssAfterKB == -1);
#endif
}
// Test with tag
{
MallocTrimReport report =
mallocTrim(std::optional<std::string>("test_tag"), journal);
#if defined(__GLIBC__) && BOOST_OS_LINUX
CHECK(report.supported == true);
CHECK(report.trimResult >= 0);
#else
CHECK(report.supported == false);
#endif
}
}
TEST_CASE("mallocTrim with debug logging")
{
beast::Journal journal{beast::Journal::getNullSink()};
MallocTrimReport report =
mallocTrim(std::optional<std::string>("debug_test"), journal);
#if defined(__GLIBC__) && BOOST_OS_LINUX
CHECK(report.supported == true);
// The function should complete without crashing
#else
CHECK(report.supported == false);
#endif
}
TEST_CASE("mallocTrim repeated calls")
{
beast::Journal journal{beast::Journal::getNullSink()};
// Call malloc_trim multiple times to ensure it's safe
for (int i = 0; i < 5; ++i)
{
MallocTrimReport report = mallocTrim(
std::optional<std::string>("iteration_" + std::to_string(i)),
journal);
#if defined(__GLIBC__) && BOOST_OS_LINUX
CHECK(report.supported == true);
CHECK(report.trimResult >= 0);
#else
CHECK(report.supported == false);
#endif
}
}

View File

@@ -1,15 +1,13 @@
#include <xrpl/basics/RangeSet.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <cstdint>
#include <optional>
using namespace xrpl;
TEST_SUITE_BEGIN("RangeSet");
TEST_CASE("prevMissing")
TEST(RangeSet, prevMissing)
{
// Set will include:
// [ 0, 5]
@@ -31,80 +29,78 @@ TEST_CASE("prevMissing")
expected = ((i % 10) > 6) ? (i - 1) : oneBelowRange;
}
CHECK(prevMissing(set, i) == expected);
EXPECT_EQ(prevMissing(set, i), expected);
}
}
TEST_CASE("toString")
TEST(RangeSet, toString)
{
RangeSet<std::uint32_t> set;
CHECK(to_string(set) == "empty");
EXPECT_EQ(to_string(set), "empty");
set.insert(1);
CHECK(to_string(set) == "1");
EXPECT_EQ(to_string(set), "1");
set.insert(range(4u, 6u));
CHECK(to_string(set) == "1,4-6");
EXPECT_EQ(to_string(set), "1,4-6");
set.insert(2);
CHECK(to_string(set) == "1-2,4-6");
EXPECT_EQ(to_string(set), "1-2,4-6");
set.erase(range(4u, 5u));
CHECK(to_string(set) == "1-2,6");
EXPECT_EQ(to_string(set), "1-2,6");
}
TEST_CASE("fromString")
TEST(RangeSet, fromString)
{
RangeSet<std::uint32_t> set;
CHECK(!from_string(set, ""));
CHECK(boost::icl::length(set) == 0);
EXPECT_FALSE(from_string(set, ""));
EXPECT_EQ(boost::icl::length(set), 0);
CHECK(!from_string(set, "#"));
CHECK(boost::icl::length(set) == 0);
EXPECT_FALSE(from_string(set, "#"));
EXPECT_EQ(boost::icl::length(set), 0);
CHECK(!from_string(set, ","));
CHECK(boost::icl::length(set) == 0);
EXPECT_FALSE(from_string(set, ","));
EXPECT_EQ(boost::icl::length(set), 0);
CHECK(!from_string(set, ",-"));
CHECK(boost::icl::length(set) == 0);
EXPECT_FALSE(from_string(set, ",-"));
EXPECT_EQ(boost::icl::length(set), 0);
CHECK(!from_string(set, "1,,2"));
CHECK(boost::icl::length(set) == 0);
EXPECT_FALSE(from_string(set, "1,,2"));
EXPECT_EQ(boost::icl::length(set), 0);
CHECK(from_string(set, "1"));
CHECK(boost::icl::length(set) == 1);
CHECK(boost::icl::first(set) == 1);
EXPECT_TRUE(from_string(set, "1"));
EXPECT_EQ(boost::icl::length(set), 1);
EXPECT_EQ(boost::icl::first(set), 1);
CHECK(from_string(set, "1,1"));
CHECK(boost::icl::length(set) == 1);
CHECK(boost::icl::first(set) == 1);
EXPECT_TRUE(from_string(set, "1,1"));
EXPECT_EQ(boost::icl::length(set), 1);
EXPECT_EQ(boost::icl::first(set), 1);
CHECK(from_string(set, "1-1"));
CHECK(boost::icl::length(set) == 1);
CHECK(boost::icl::first(set) == 1);
EXPECT_TRUE(from_string(set, "1-1"));
EXPECT_EQ(boost::icl::length(set), 1);
EXPECT_EQ(boost::icl::first(set), 1);
CHECK(from_string(set, "1,4-6"));
CHECK(boost::icl::length(set) == 4);
CHECK(boost::icl::first(set) == 1);
CHECK(!boost::icl::contains(set, 2));
CHECK(!boost::icl::contains(set, 3));
CHECK(boost::icl::contains(set, 4));
CHECK(boost::icl::contains(set, 5));
CHECK(boost::icl::last(set) == 6);
EXPECT_TRUE(from_string(set, "1,4-6"));
EXPECT_EQ(boost::icl::length(set), 4);
EXPECT_EQ(boost::icl::first(set), 1);
EXPECT_FALSE(boost::icl::contains(set, 2));
EXPECT_FALSE(boost::icl::contains(set, 3));
EXPECT_TRUE(boost::icl::contains(set, 4));
EXPECT_TRUE(boost::icl::contains(set, 5));
EXPECT_EQ(boost::icl::last(set), 6);
CHECK(from_string(set, "1-2,4-6"));
CHECK(boost::icl::length(set) == 5);
CHECK(boost::icl::first(set) == 1);
CHECK(boost::icl::contains(set, 2));
CHECK(boost::icl::contains(set, 4));
CHECK(boost::icl::last(set) == 6);
EXPECT_TRUE(from_string(set, "1-2,4-6"));
EXPECT_EQ(boost::icl::length(set), 5);
EXPECT_EQ(boost::icl::first(set), 1);
EXPECT_TRUE(boost::icl::contains(set, 2));
EXPECT_TRUE(boost::icl::contains(set, 4));
EXPECT_EQ(boost::icl::last(set), 6);
CHECK(from_string(set, "1-2,6"));
CHECK(boost::icl::length(set) == 3);
CHECK(boost::icl::first(set) == 1);
CHECK(boost::icl::contains(set, 2));
CHECK(boost::icl::last(set) == 6);
EXPECT_TRUE(from_string(set, "1-2,6"));
EXPECT_EQ(boost::icl::length(set), 3);
EXPECT_EQ(boost::icl::first(set), 1);
EXPECT_TRUE(boost::icl::contains(set, 2));
EXPECT_EQ(boost::icl::last(set), 6);
}
TEST_SUITE_END();

View File

@@ -1,6 +1,6 @@
#include <xrpl/basics/Slice.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <array>
#include <cstdint>
@@ -12,37 +12,35 @@ static std::uint8_t const data[] = {
0x18, 0xb4, 0x70, 0xcb, 0xf5, 0xac, 0x2d, 0x89, 0x4d, 0x19, 0x9c,
0xf0, 0x2c, 0x15, 0xd1, 0xf9, 0x9b, 0x66, 0xd2, 0x30, 0xd3};
TEST_SUITE_BEGIN("Slice");
TEST_CASE("equality & inequality")
TEST(Slice, equality_and_inequality)
{
Slice const s0{};
CHECK(s0.size() == 0);
CHECK(s0.data() == nullptr);
CHECK(s0 == s0);
EXPECT_EQ(s0.size(), 0);
EXPECT_EQ(s0.data(), nullptr);
EXPECT_EQ(s0, s0);
// Test slices of equal and unequal size pointing to same data:
for (std::size_t i = 0; i != sizeof(data); ++i)
{
Slice const s1{data, i};
CHECK(s1.size() == i);
CHECK(s1.data() != nullptr);
EXPECT_EQ(s1.size(), i);
EXPECT_NE(s1.data(), nullptr);
if (i == 0)
CHECK(s1 == s0);
EXPECT_EQ(s1, s0);
else
CHECK(s1 != s0);
EXPECT_NE(s1, s0);
for (std::size_t j = 0; j != sizeof(data); ++j)
{
Slice const s2{data, j};
if (i == j)
CHECK(s1 == s2);
EXPECT_EQ(s1, s2);
else
CHECK(s1 != s2);
EXPECT_NE(s1, s2);
}
}
@@ -53,22 +51,22 @@ TEST_CASE("equality & inequality")
for (std::size_t i = 0; i != sizeof(data); ++i)
a[i] = b[i] = data[i];
CHECK(makeSlice(a) == makeSlice(b));
EXPECT_EQ(makeSlice(a), makeSlice(b));
b[7]++;
CHECK(makeSlice(a) != makeSlice(b));
EXPECT_NE(makeSlice(a), makeSlice(b));
a[7]++;
CHECK(makeSlice(a) == makeSlice(b));
EXPECT_EQ(makeSlice(a), makeSlice(b));
}
TEST_CASE("indexing")
TEST(Slice, indexing)
{
Slice const s{data, sizeof(data)};
for (std::size_t i = 0; i != sizeof(data); ++i)
CHECK(s[i] == data[i]);
EXPECT_EQ(s[i], data[i]);
}
TEST_CASE("advancing")
TEST(Slice, advancing)
{
for (std::size_t i = 0; i < sizeof(data); ++i)
{
@@ -77,10 +75,8 @@ TEST_CASE("advancing")
Slice s(data + i, sizeof(data) - i);
s += j;
CHECK(s.data() == data + i + j);
CHECK(s.size() == sizeof(data) - i - j);
EXPECT_EQ(s.data(), data + i + j);
EXPECT_EQ(s.size(), sizeof(data) - i - j);
}
}
}
TEST_SUITE_END();

View File

@@ -1,6 +1,6 @@
#include <xrpl/basics/base64.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <string>
@@ -10,11 +10,11 @@ static void
check(std::string const& in, std::string const& out)
{
auto const encoded = base64_encode(in);
CHECK(encoded == out);
CHECK(base64_decode(encoded) == in);
EXPECT_EQ(encoded, out);
EXPECT_EQ(base64_decode(encoded), in);
}
TEST_CASE("base64")
TEST(base64, base64)
{
// cspell: disable
check("", "");
@@ -46,5 +46,5 @@ TEST_CASE("base64")
std::string const notBase64 = "not_base64!!";
std::string const truncated = "not";
CHECK(base64_decode(notBase64) == base64_decode(truncated));
EXPECT_EQ(base64_decode(notBase64), base64_decode(truncated));
}

View File

@@ -1,13 +1,13 @@
#include <xrpl/basics/contract.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <stdexcept>
#include <string>
using namespace xrpl;
TEST_CASE("contract")
TEST(contract, contract)
{
try
{
@@ -15,7 +15,7 @@ TEST_CASE("contract")
}
catch (std::runtime_error const& e1)
{
CHECK(std::string(e1.what()) == "Throw test");
EXPECT_STREQ(e1.what(), "Throw test");
try
{
@@ -23,15 +23,15 @@ TEST_CASE("contract")
}
catch (std::runtime_error const& e2)
{
CHECK(std::string(e2.what()) == "Throw test");
EXPECT_STREQ(e2.what(), "Throw test");
}
catch (...)
{
CHECK(false);
FAIL() << "std::runtime_error should have been re-caught";
}
}
catch (...)
{
CHECK(false);
FAIL() << "std::runtime_error should have been caught the first time";
}
}

View File

@@ -1,2 +1,8 @@
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include <doctest/doctest.h>
#include <gtest/gtest.h>
int
main(int argc, char** argv)
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}

View File

@@ -1,45 +1,45 @@
#include <xrpl/basics/mulDiv.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <cstdint>
#include <limits>
using namespace xrpl;
TEST_CASE("mulDiv")
TEST(mulDiv, mulDiv)
{
auto const max = std::numeric_limits<std::uint64_t>::max();
std::uint64_t const max32 = std::numeric_limits<std::uint32_t>::max();
auto result = mulDiv(85, 20, 5);
REQUIRE(result);
CHECK(*result == 340);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 340);
result = mulDiv(20, 85, 5);
REQUIRE(result);
CHECK(*result == 340);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 340);
result = mulDiv(0, max - 1, max - 3);
REQUIRE(result);
CHECK(*result == 0);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 0);
result = mulDiv(max - 1, 0, max - 3);
REQUIRE(result);
CHECK(*result == 0);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 0);
result = mulDiv(max, 2, max / 2);
REQUIRE(result);
CHECK(*result == 4);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 4);
result = mulDiv(max, 1000, max / 1000);
REQUIRE(result);
CHECK(*result == 1000000);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 1000000);
result = mulDiv(max, 1000, max / 1001);
REQUIRE(result);
CHECK(*result == 1001000);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 1001000);
result = mulDiv(max32 + 1, max32 + 1, 5);
REQUIRE(result);
CHECK(*result == 3689348814741910323);
ASSERT_TRUE(result.has_value());
EXPECT_EQ(*result, 3689348814741910323);
// Overflow
result = mulDiv(max - 1, max - 2, 5);
CHECK(!result);
EXPECT_FALSE(result.has_value());
}

View File

@@ -1,10 +1,10 @@
#include <xrpl/basics/scope.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
using namespace xrpl;
TEST_CASE("scope_exit")
TEST(scope, scope_exit)
{
// scope_exit always executes the functor on destruction,
// unless release() is called
@@ -12,23 +12,23 @@ TEST_CASE("scope_exit")
{
scope_exit x{[&i]() { i = 1; }};
}
CHECK(i == 1);
EXPECT_EQ(i, 1);
{
scope_exit x{[&i]() { i = 2; }};
x.release();
}
CHECK(i == 1);
EXPECT_EQ(i, 1);
{
scope_exit x{[&i]() { i += 2; }};
auto x2 = std::move(x);
}
CHECK(i == 3);
EXPECT_EQ(i, 3);
{
scope_exit x{[&i]() { i = 4; }};
x.release();
auto x2 = std::move(x);
}
CHECK(i == 3);
EXPECT_EQ(i, 3);
{
try
{
@@ -39,7 +39,7 @@ TEST_CASE("scope_exit")
{
}
}
CHECK(i == 5);
EXPECT_EQ(i, 5);
{
try
{
@@ -51,10 +51,10 @@ TEST_CASE("scope_exit")
{
}
}
CHECK(i == 5);
EXPECT_EQ(i, 5);
}
TEST_CASE("scope_fail")
TEST(scope, scope_fail)
{
// scope_fail executes the functor on destruction only
// if an exception is unwinding, unless release() is called
@@ -62,23 +62,23 @@ TEST_CASE("scope_fail")
{
scope_fail x{[&i]() { i = 1; }};
}
CHECK(i == 0);
EXPECT_EQ(i, 0);
{
scope_fail x{[&i]() { i = 2; }};
x.release();
}
CHECK(i == 0);
EXPECT_EQ(i, 0);
{
scope_fail x{[&i]() { i = 3; }};
auto x2 = std::move(x);
}
CHECK(i == 0);
EXPECT_EQ(i, 0);
{
scope_fail x{[&i]() { i = 4; }};
x.release();
auto x2 = std::move(x);
}
CHECK(i == 0);
EXPECT_EQ(i, 0);
{
try
{
@@ -89,7 +89,7 @@ TEST_CASE("scope_fail")
{
}
}
CHECK(i == 5);
EXPECT_EQ(i, 5);
{
try
{
@@ -101,10 +101,10 @@ TEST_CASE("scope_fail")
{
}
}
CHECK(i == 5);
EXPECT_EQ(i, 5);
}
TEST_CASE("scope_success")
TEST(scope, scope_success)
{
// scope_success executes the functor on destruction only
// if an exception is not unwinding, unless release() is called
@@ -112,23 +112,23 @@ TEST_CASE("scope_success")
{
scope_success x{[&i]() { i = 1; }};
}
CHECK(i == 1);
EXPECT_EQ(i, 1);
{
scope_success x{[&i]() { i = 2; }};
x.release();
}
CHECK(i == 1);
EXPECT_EQ(i, 1);
{
scope_success x{[&i]() { i += 2; }};
auto x2 = std::move(x);
}
CHECK(i == 3);
EXPECT_EQ(i, 3);
{
scope_success x{[&i]() { i = 4; }};
x.release();
auto x2 = std::move(x);
}
CHECK(i == 3);
EXPECT_EQ(i, 3);
{
try
{
@@ -139,7 +139,7 @@ TEST_CASE("scope_success")
{
}
}
CHECK(i == 3);
EXPECT_EQ(i, 3);
{
try
{
@@ -151,5 +151,5 @@ TEST_CASE("scope_success")
{
}
}
CHECK(i == 3);
EXPECT_EQ(i, 3);
}

View File

@@ -1,6 +1,6 @@
#include <xrpl/basics/tagged_integer.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <type_traits>
@@ -102,127 +102,123 @@ static_assert(
!std::is_convertible<TagUInt2, TagUInt3>::value,
"TagUInt2 should not be convertible to a TagUInt3");
TEST_SUITE_BEGIN("tagged_integer");
using TagInt = tagged_integer<std::int32_t, Tag1>;
TEST_CASE("comparison operators")
TEST(tagged_integer, comparison_operators)
{
TagInt const zero(0);
TagInt const one(1);
CHECK(one == one);
CHECK(!(one == zero));
EXPECT_TRUE(one == one);
EXPECT_FALSE(one == zero);
CHECK(one != zero);
CHECK(!(one != one));
EXPECT_TRUE(one != zero);
EXPECT_FALSE(one != one);
CHECK(zero < one);
CHECK(!(one < zero));
EXPECT_TRUE(zero < one);
EXPECT_FALSE(one < zero);
CHECK(one > zero);
CHECK(!(zero > one));
EXPECT_TRUE(one > zero);
EXPECT_FALSE(zero > one);
CHECK(one >= one);
CHECK(one >= zero);
CHECK(!(zero >= one));
EXPECT_TRUE(one >= one);
EXPECT_TRUE(one >= zero);
EXPECT_FALSE(zero >= one);
CHECK(zero <= one);
CHECK(zero <= zero);
CHECK(!(one <= zero));
EXPECT_TRUE(zero <= one);
EXPECT_TRUE(zero <= zero);
EXPECT_FALSE(one <= zero);
}
TEST_CASE("increment / decrement operators")
TEST(tagged_integer, increment_decrement_operators)
{
TagInt const zero(0);
TagInt const one(1);
TagInt a{0};
++a;
CHECK(a == one);
EXPECT_EQ(a, one);
--a;
CHECK(a == zero);
EXPECT_EQ(a, zero);
a++;
CHECK(a == one);
EXPECT_EQ(a, one);
a--;
CHECK(a == zero);
EXPECT_EQ(a, zero);
}
TEST_CASE("arithmetic operators")
TEST(tagged_integer, arithmetic_operators)
{
TagInt a{-2};
CHECK(+a == TagInt{-2});
CHECK(-a == TagInt{2});
CHECK(TagInt{-3} + TagInt{4} == TagInt{1});
CHECK(TagInt{-3} - TagInt{4} == TagInt{-7});
CHECK(TagInt{-3} * TagInt{4} == TagInt{-12});
CHECK(TagInt{8} / TagInt{4} == TagInt{2});
CHECK(TagInt{7} % TagInt{4} == TagInt{3});
EXPECT_EQ(+a, TagInt{-2});
EXPECT_EQ(-a, TagInt{2});
EXPECT_EQ(TagInt{-3} + TagInt{4}, TagInt{1});
EXPECT_EQ(TagInt{-3} - TagInt{4}, TagInt{-7});
EXPECT_EQ(TagInt{-3} * TagInt{4}, TagInt{-12});
EXPECT_EQ(TagInt{8} / TagInt{4}, TagInt{2});
EXPECT_EQ(TagInt{7} % TagInt{4}, TagInt{3});
CHECK(~TagInt{8} == TagInt{~TagInt::value_type{8}});
CHECK((TagInt{6} & TagInt{3}) == TagInt{2});
CHECK((TagInt{6} | TagInt{3}) == TagInt{7});
CHECK((TagInt{6} ^ TagInt{3}) == TagInt{5});
EXPECT_EQ(~TagInt{8}, TagInt{~TagInt::value_type{8}});
EXPECT_EQ((TagInt{6} & TagInt{3}), TagInt{2});
EXPECT_EQ((TagInt{6} | TagInt{3}), TagInt{7});
EXPECT_EQ((TagInt{6} ^ TagInt{3}), TagInt{5});
CHECK((TagInt{4} << TagInt{2}) == TagInt{16});
CHECK((TagInt{16} >> TagInt{2}) == TagInt{4});
EXPECT_EQ((TagInt{4} << TagInt{2}), TagInt{16});
EXPECT_EQ((TagInt{16} >> TagInt{2}), TagInt{4});
}
TEST_CASE("assignment operators")
TEST(tagged_integer, assignment_operators)
{
TagInt a{-2};
TagInt b{0};
b = a;
CHECK(b == TagInt{-2});
EXPECT_EQ(b, TagInt{-2});
// -3 + 4 == 1
a = TagInt{-3};
a += TagInt{4};
CHECK(a == TagInt{1});
EXPECT_EQ(a, TagInt{1});
// -3 - 4 == -7
a = TagInt{-3};
a -= TagInt{4};
CHECK(a == TagInt{-7});
EXPECT_EQ(a, TagInt{-7});
// -3 * 4 == -12
a = TagInt{-3};
a *= TagInt{4};
CHECK(a == TagInt{-12});
EXPECT_EQ(a, TagInt{-12});
// 8/4 == 2
a = TagInt{8};
a /= TagInt{4};
CHECK(a == TagInt{2});
EXPECT_EQ(a, TagInt{2});
// 7 % 4 == 3
a = TagInt{7};
a %= TagInt{4};
CHECK(a == TagInt{3});
EXPECT_EQ(a, TagInt{3});
// 6 & 3 == 2
a = TagInt{6};
a &= TagInt{3};
CHECK(a == TagInt{2});
EXPECT_EQ(a, TagInt{2});
// 6 | 3 == 7
a = TagInt{6};
a |= TagInt{3};
CHECK(a == TagInt{7});
EXPECT_EQ(a, TagInt{7});
// 6 ^ 3 == 5
a = TagInt{6};
a ^= TagInt{3};
CHECK(a == TagInt{5});
EXPECT_EQ(a, TagInt{5});
// 4 << 2 == 16
a = TagInt{4};
a <<= TagInt{2};
CHECK(a == TagInt{16});
EXPECT_EQ(a, TagInt{16});
// 16 >> 2 == 4
a = TagInt{16};
a >>= TagInt{2};
CHECK(a == TagInt{4});
EXPECT_EQ(a, TagInt{4});
}
TEST_SUITE_END();

View File

@@ -1,15 +1,15 @@
#include <xrpl/crypto/csprng.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
using namespace xrpl;
TEST_CASE("get values")
TEST(csprng, get_values)
{
auto& engine = crypto_prng();
auto rand_val = engine();
CHECK(rand_val >= engine.min());
CHECK(rand_val <= engine.max());
EXPECT_GE(rand_val, engine.min());
EXPECT_LE(rand_val, engine.max());
uint16_t twoByte{0};
engine(&twoByte, sizeof(uint16_t));
}

View File

@@ -1,2 +1,8 @@
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include <doctest/doctest.h>
#include <gtest/gtest.h>
int
main(int argc, char** argv)
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}

View File

@@ -2,31 +2,29 @@
#include <xrpl/json/json_reader.h>
#include <xrpl/json/json_writer.h>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <string>
using namespace xrpl;
using namespace Json;
TEST_SUITE_BEGIN("JsonOutput");
static void
checkOutput(std::string const& valueDesc)
{
std::string output;
Json::Value value;
REQUIRE(Json::Reader().parse(valueDesc, value));
ASSERT_TRUE(Json::Reader().parse(valueDesc, value));
auto out = stringOutput(output);
outputJson(value, out);
auto expected = Json::FastWriter().write(value);
CHECK(output == expected);
CHECK(output == valueDesc);
CHECK(output == jsonAsString(value));
EXPECT_EQ(output, expected);
EXPECT_EQ(output, valueDesc);
EXPECT_EQ(output, jsonAsString(value));
}
TEST_CASE("output cases")
TEST(JsonOutput, output_cases)
{
checkOutput("{}");
checkOutput("[]");
@@ -36,5 +34,3 @@ TEST_CASE("output cases")
checkOutput("[[]]");
checkOutput(R"({"array":[{"12":23},{},null,false,0.5]})");
}
TEST_SUITE_END();

File diff suppressed because it is too large.

View File

@@ -1,7 +1,7 @@
#include <xrpl/json/Writer.h>
#include <doctest/doctest.h>
#include <google/protobuf/stubs/port.h>
#include <gtest/gtest.h>
#include <memory>
#include <string>
@@ -9,14 +9,14 @@
using namespace xrpl;
using namespace Json;
TEST_SUITE_BEGIN("JsonWriter");
struct WriterFixture
class WriterFixture : public ::testing::Test
{
protected:
std::string output;
std::unique_ptr<Writer> writer;
WriterFixture()
void
SetUp() override
{
writer = std::make_unique<Writer>(stringOutput(output));
}
@@ -31,7 +31,7 @@ struct WriterFixture
void
expectOutput(std::string const& expected) const
{
CHECK(output == expected);
EXPECT_EQ(output, expected);
}
void
@@ -42,20 +42,20 @@ struct WriterFixture
}
};
TEST_CASE_FIXTURE(WriterFixture, "trivial")
TEST_F(WriterFixture, trivial)
{
CHECK(output.empty());
EXPECT_TRUE(output.empty());
checkOutputAndReset("");
}
TEST_CASE_FIXTURE(WriterFixture, "near trivial")
TEST_F(WriterFixture, near_trivial)
{
CHECK(output.empty());
EXPECT_TRUE(output.empty());
writer->output(0);
checkOutputAndReset("0");
}
TEST_CASE_FIXTURE(WriterFixture, "primitives")
TEST_F(WriterFixture, primitives)
{
writer->output(true);
checkOutputAndReset("true");
@@ -79,7 +79,7 @@ TEST_CASE_FIXTURE(WriterFixture, "primitives")
checkOutputAndReset("null");
}
TEST_CASE_FIXTURE(WriterFixture, "empty")
TEST_F(WriterFixture, empty)
{
writer->startRoot(Writer::array);
writer->finish();
@@ -90,7 +90,7 @@ TEST_CASE_FIXTURE(WriterFixture, "empty")
checkOutputAndReset("{}");
}
TEST_CASE_FIXTURE(WriterFixture, "escaping")
TEST_F(WriterFixture, escaping)
{
writer->output("\\");
checkOutputAndReset(R"("\\")");
@@ -108,7 +108,7 @@ TEST_CASE_FIXTURE(WriterFixture, "escaping")
checkOutputAndReset(R"("\b\f\n\r\t")");
}
TEST_CASE_FIXTURE(WriterFixture, "array")
TEST_F(WriterFixture, array)
{
writer->startRoot(Writer::array);
writer->append(12);
@@ -116,7 +116,7 @@ TEST_CASE_FIXTURE(WriterFixture, "array")
checkOutputAndReset("[12]");
}
TEST_CASE_FIXTURE(WriterFixture, "long array")
TEST_F(WriterFixture, long_array)
{
writer->startRoot(Writer::array);
writer->append(12);
@@ -126,7 +126,7 @@ TEST_CASE_FIXTURE(WriterFixture, "long array")
checkOutputAndReset(R"([12,true,"hello"])");
}
TEST_CASE_FIXTURE(WriterFixture, "embedded array simple")
TEST_F(WriterFixture, embedded_array_simple)
{
writer->startRoot(Writer::array);
writer->startAppend(Writer::array);
@@ -135,7 +135,7 @@ TEST_CASE_FIXTURE(WriterFixture, "embedded array simple")
checkOutputAndReset("[[]]");
}
TEST_CASE_FIXTURE(WriterFixture, "object")
TEST_F(WriterFixture, object)
{
writer->startRoot(Writer::object);
writer->set("hello", "world");
@@ -143,7 +143,7 @@ TEST_CASE_FIXTURE(WriterFixture, "object")
checkOutputAndReset(R"({"hello":"world"})");
}
TEST_CASE_FIXTURE(WriterFixture, "complex object")
TEST_F(WriterFixture, complex_object)
{
writer->startRoot(Writer::object);
writer->set("hello", "world");
@@ -160,7 +160,7 @@ TEST_CASE_FIXTURE(WriterFixture, "complex object")
R"({"hello":"world","array":[true,12,[{"goodbye":"cruel world.","subarray":[23.5]}]]})");
}
TEST_CASE_FIXTURE(WriterFixture, "json value")
TEST_F(WriterFixture, json_value)
{
Json::Value value(Json::objectValue);
value["foo"] = 23;
@@ -169,5 +169,3 @@ TEST_CASE_FIXTURE(WriterFixture, "json value")
writer->finish();
checkOutputAndReset(R"({"hello":{"foo":23}})");
}
TEST_SUITE_END();

View File

@@ -1,2 +1,8 @@
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include <doctest/doctest.h>
#include <gtest/gtest.h>
int
main(int argc, char** argv)
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}

View File

@@ -7,7 +7,7 @@
#include <boost/beast/http.hpp>
#include <boost/beast/version.hpp>
#include <doctest/doctest.h>
#include <gtest/gtest.h>
#include <atomic>
#include <map>
@@ -217,7 +217,7 @@ runHTTPTest(
} // anonymous namespace
TEST_CASE("HTTPClient case insensitive Content-Length")
TEST(HTTPClient, case_insensitive_content_length)
{
// Test different cases of Content-Length header
std::vector<std::string> header_cases = {
@@ -249,14 +249,14 @@ TEST_CASE("HTTPClient case insensitive Content-Length")
result_error);
// Verify results
CHECK(test_completed);
CHECK(!result_error);
CHECK(result_status == 200);
CHECK(result_data == test_body);
EXPECT_TRUE(test_completed);
EXPECT_FALSE(result_error);
EXPECT_EQ(result_status, 200);
EXPECT_EQ(result_data, test_body);
}
}
TEST_CASE("HTTPClient basic HTTP request")
TEST(HTTPClient, basic_http_request)
{
TestHTTPServer server;
std::string test_body = "Test response body";
@@ -271,13 +271,13 @@ TEST_CASE("HTTPClient basic HTTP request")
bool test_completed = runHTTPTest(
server, "/basic", completed, result_status, result_data, result_error);
CHECK(test_completed);
CHECK(!result_error);
CHECK(result_status == 200);
CHECK(result_data == test_body);
EXPECT_TRUE(test_completed);
EXPECT_FALSE(result_error);
EXPECT_EQ(result_status, 200);
EXPECT_EQ(result_data, test_body);
}
TEST_CASE("HTTPClient empty response")
TEST(HTTPClient, empty_response)
{
TestHTTPServer server;
server.setResponseBody(""); // Empty body
@@ -291,13 +291,13 @@ TEST_CASE("HTTPClient empty response")
bool test_completed = runHTTPTest(
server, "/empty", completed, result_status, result_data, result_error);
CHECK(test_completed);
CHECK(!result_error);
CHECK(result_status == 200);
CHECK(result_data.empty());
EXPECT_TRUE(test_completed);
EXPECT_FALSE(result_error);
EXPECT_EQ(result_status, 200);
EXPECT_TRUE(result_data.empty());
}
TEST_CASE("HTTPClient different status codes")
TEST(HTTPClient, different_status_codes)
{
std::vector<unsigned int> status_codes = {200, 404, 500};
@@ -320,8 +320,8 @@ TEST_CASE("HTTPClient different status codes")
result_data,
result_error);
CHECK(test_completed);
CHECK(!result_error);
CHECK(result_status == static_cast<int>(status));
EXPECT_TRUE(test_completed);
EXPECT_FALSE(result_error);
EXPECT_EQ(result_status, static_cast<int>(status));
}
}

View File

@@ -1,2 +1,8 @@
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include <doctest/doctest.h>
#include <gtest/gtest.h>
int
main(int argc, char** argv)
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}

View File

@@ -2,6 +2,8 @@
#include <xrpl/basics/Log.h>
#include <xrpl/basics/StringUtilities.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TxFormats.h>
#include <xrpl/protocol/UintTypes.h>
#include <xrpl/protocol/jss.h>

View File

@@ -4,6 +4,8 @@
#include <xrpld/app/ledger/Ledger.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TxFormats.h>
#include <boost/container/flat_set.hpp>

View File

@@ -7,6 +7,8 @@
#include <xrpld/overlay/PeerSet.h>
#include <xrpl/basics/CountedObject.h>
#include <xrpl/shamap/SHAMapAddNode.h>
#include <xrpl/shamap/SHAMapSyncFilter.h>
#include <mutex>
#include <set>

View File

@@ -23,6 +23,7 @@
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/digest.h>
#include <xrpl/protocol/jss.h>
#include <xrpl/shamap/SHAMap.h>
#include <utility>
#include <vector>

View File

@@ -9,6 +9,7 @@
#include <xrpl/ledger/CachedView.h>
#include <xrpl/ledger/View.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/protocol/STLedgerEntry.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/TxMeta.h>
@@ -16,6 +17,7 @@
namespace xrpl {
// Forward declarations
class Application;
class Job;
class TransactionMaster;

View File

@@ -28,6 +28,7 @@
#include <xrpld/app/rdb/RelationalDatabase.h>
#include <xrpld/app/rdb/Wallet.h>
#include <xrpld/app/tx/apply.h>
#include <xrpld/core/Config.h>
#include <xrpld/core/DatabaseCon.h>
#include <xrpld/overlay/Cluster.h>
#include <xrpld/overlay/PeerReservationTable.h>
@@ -36,7 +37,6 @@
#include <xrpld/shamap/NodeFamily.h>
#include <xrpl/basics/ByteUtilities.h>
#include <xrpl/basics/MallocTrim.h>
#include <xrpl/basics/ResolverAsio.h>
#include <xrpl/basics/random.h>
#include <xrpl/beast/asio/io_latency_probe.h>
@@ -1106,8 +1106,6 @@ public:
<< "; size after: " << cachedSLEs_.size();
}
mallocTrim(std::optional<std::string>("doSweep"), m_journal);
// Set timer to do another sweep later.
setSweepTimer();
}

View File

@@ -1,9 +1,6 @@
#ifndef XRPL_APP_MAIN_APPLICATION_H_INCLUDED
#define XRPL_APP_MAIN_APPLICATION_H_INCLUDED
#include <xrpld/core/Config.h>
#include <xrpld/overlay/PeerReservationTable.h>
#include <xrpl/basics/TaggedCache.h>
#include <xrpl/beast/utility/PropertyStream.h>
#include <xrpl/protocol/Protocol.h>
@@ -16,6 +13,10 @@
namespace xrpl {
// Forward declarations
class Config;
class PeerReservationTable;
namespace unl {
class Manager;
}

Some files were not shown because too many files have changed in this diff.