Compare commits

..

53 Commits

Author SHA1 Message Date
Ed Hennis
30716646d7 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-03-12 15:05:10 -04:00
Ed Hennis
49b69a59eb Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-03-10 13:38:46 -04:00
Ed Hennis
5e37aa63c2 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-03-06 13:04:22 -04:00
Ed Hennis
979beeb162 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-03-04 17:11:47 -04:00
Ed Hennis
75e134a1fd Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-03-03 20:46:56 -04:00
Ed Hennis
828bcb3a7d Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-03-03 15:54:51 -04:00
Ed Hennis
6fc972746d Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-02-24 17:35:00 -04:00
Ed Hennis
930afbdea8 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-02-20 18:50:00 -04:00
Ed Hennis
95c5bef48b Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-02-20 18:26:19 -04:00
Ed Hennis
b489b6c3ce Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-02-20 17:32:02 -04:00
Ed Hennis
70765acef4 Merge remote-tracking branch 'upstream/develop' into ximinez/acquireAsyncDispatch
* upstream/develop:
  ci: [DEPENDABOT] bump tj-actions/changed-files from 46.0.5 to 47.0.4 (6394)
  ci: [DEPENDABOT] bump codecov/codecov-action from 5.4.3 to 5.5.2 (6398)
  ci: Build docs in PRs and in private repos (6400)
  ci: Add dependabot config (6379)
  Fix tautological assertion (6393)
  chore: Apply clang-format width 100 (6387)
2026-02-20 16:30:42 -05:00
Ed Hennis
77aa90bd0e Update formatting 2026-02-20 16:27:59 -05:00
Ed Hennis
4758bb6dc9 Merge commit '25cca465538a56cce501477f9e5e2c1c7ea2d84c' into ximinez/acquireAsyncDispatch
* commit '25cca465538a56cce501477f9e5e2c1c7ea2d84c':
  chore: Set clang-format width to 100 in config file (6387)
2026-02-20 16:27:39 -05:00
Ed Hennis
46e3dcb5fb Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-02-19 16:24:54 -05:00
Ed Hennis
d57579f10b Merge remote-tracking branch 'XRPLF/develop' into ximinez/acquireAsyncDispatch
* XRPLF/develop:
  refactor: Modularize app/tx (6228)
  refactor: Decouple app/tx from `Application` and `Config` (6227)
  chore: Update clang-format to 21.1.8 (6352)
  refactor: Modularize `HashRouter`, `Conditions`, and `OrderBookDB` (6226)
  chore: Fix minor issues in comments (6346)
  refactor: Modularize the NetworkOPs interface (6225)
  chore: Fix `gcov` lib coverage build failure on macOS (6350)
  refactor: Modularize RelationalDB (6224)
  refactor: Modularize WalletDB and Manifest (6223)
  fix: Update invariant checks for Permissioned Domains (6134)
  refactor: Change main thread name to `xrpld-main` (6336)
  refactor: Fix spelling issues in tests (6199)
  test: Add file and line location to Env (6276)
  chore: Remove CODEOWNERS (6337)
  perf: Remove unnecessary caches (5439)
  chore: Restore unity builds (6328)
  refactor: Update secp256k1 to 0.7.1 (6331)
  fix: Increment sequence when accepting new manifests (6059)
  fix typo in LendingHelpers unit-test (6215)
2026-02-18 19:52:18 -05:00
Ed Hennis
d8dd376d1c Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-02-04 16:30:10 -04:00
Ed Hennis
a8c03e2e6c Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-02-03 16:08:06 -04:00
Ed Hennis
2167a66bc7 Fix formatting 2026-01-28 19:39:15 -05:00
Ed Hennis
ed948a858c Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-28 18:49:15 -04:00
Ed Hennis
608c102743 Merge commit '5f638f55536def0d88b970d1018a465a238e55f4' into ximinez/acquireAsyncDispatch
* commit '5f638f55536def0d88b970d1018a465a238e55f4':
  chore: Set ColumnLimit to 120 in clang-format (6288)
2026-01-28 17:47:53 -05:00
Ed Hennis
36d1607a4e Merge commit '92046785d1fea5f9efe5a770d636792ea6cab78b' into ximinez/acquireAsyncDispatch
* commit '92046785d1fea5f9efe5a770d636792ea6cab78b':
  test: Fix the `xrpl.net` unit test using async read (6241)
  ci: Upload Conan recipes for develop, release candidates, and releases (6286)
  fix: Stop embedded tests from hanging on ARM by using `atomic_flag` (6248)
  fix:  Remove DEFAULT fields that change to the default in associateAsset (6259) (6273)
  refactor: Update Boost to 1.90 (6280)
  refactor: clean up uses of `std::source_location` (6272)
  ci: Pass missing sanitizers input to actions (6266)
  ci: Properly propagate Conan credentials (6265)
  ci: Explicitly set version when exporting the Conan recipe (6264)
  ci: Use plus instead of hyphen for Conan recipe version suffix (6261)
  chore: Detect uninitialized variables in CMake files (6247)
  ci: Run on-trigger and on-pr when generate-version is modified (6257)
  refactor: Enforce 15-char limit and simplify labels for thread naming (6212)
  docs: Update Ripple Bug Bounty public key (6258)
  ci: Add missing commit hash to Conan recipe version (6256)
  fix: Include `<functional>` header in `Number.h` (6254)
  ci: Upload Conan recipe for merges into develop and commits to release (6235)
  Limit reply size on `TMGetObjectByHash` queries (6110)
  ci: remove 'master' branch as a trigger (6234)
  Improve ledger_entry lookups for fee, amendments, NUNL, and hashes (5644)
2026-01-28 17:47:47 -05:00
Ed Hennis
53ebb86d60 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-15 13:03:36 -04:00
Ed Hennis
1d989bc6de Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-15 12:06:00 -04:00
Ed Hennis
64c0cb8c7e Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-13 18:19:11 -04:00
Ed Hennis
c77cfef41c Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-13 15:28:01 -04:00
Ed Hennis
08aa8c06d1 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-12 14:52:16 -04:00
Ed Hennis
9498672f8e Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-11 00:50:43 -04:00
Ed Hennis
e91d55a0e0 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-08 17:06:11 -04:00
Ed Hennis
afdc452cfc Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-08 13:04:20 -04:00
Ed Hennis
a0d4ef1a54 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2026-01-06 14:02:15 -05:00
Ed Hennis
8bc384f8bf Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-12-22 17:39:59 -05:00
Ed Hennis
bd961c484b Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-12-18 19:59:52 -05:00
Ed Hennis
aee242a8d4 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-12-12 20:34:59 -05:00
Ed Hennis
fcae74de58 Merge remote-tracking branch 'XRPLF/develop' into ximinez/acquireAsyncDispatch
* XRPLF/develop:
  refactor: Rename `ripple` namespace to `xrpl` (5982)
  refactor: Move JobQueue and related classes into xrpl.core module (6121)
  refactor: Rename `rippled` binary to `xrpld` (5983)
  refactor: rename info() to header() (6138)
  refactor: rename `LedgerInfo` to `LedgerHeader` (6136)
  refactor: clean up `RPCHelpers` (5684)
  chore: Fix docs readme and cmake (6122)
  chore: Clean up .gitignore and .gitattributes (6001)
  chore: Use updated secp256k1 recipe (6118)
2025-12-11 15:33:12 -05:00
Ed Hennis
a56effcb00 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-12-05 21:13:10 -05:00
Ed Hennis
64c2eca465 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-12-02 17:37:29 -05:00
Ed Hennis
e56f750e1d Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-12-01 14:40:45 -05:00
Ed Hennis
fde000f3eb Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-28 15:46:44 -05:00
Ed Hennis
d0a62229da Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-27 01:48:56 -05:00
Ed Hennis
d5932cc7d4 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-26 00:25:17 -05:00
Ed Hennis
0b534da781 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-25 14:55:06 -05:00
Ed Hennis
71a70d343b Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-24 21:49:11 -05:00
Ed Hennis
0899e65030 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-24 21:30:22 -05:00
Ed Hennis
31ba529761 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-21 12:47:58 -05:00
Ed Hennis
e2c6e5ebb6 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-18 22:39:29 -05:00
Ed Hennis
9d807fce48 Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-15 03:08:41 -05:00
Ed Hennis
9ef160765c Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-13 12:19:21 -05:00
Ed Hennis
d6c0eb243b Merge branch 'develop' into ximinez/acquireAsyncDispatch 2025-11-12 14:12:55 -05:00
Ed Hennis
84c9fc123c Fix formatting 2025-11-10 19:53:05 -05:00
Ed Hennis
00a2a58cfa Add missing header 2025-11-10 19:53:05 -05:00
Ed Hennis
bb2098d873 Add a unit test for CanProcess
- Delete the copy ctor & operator
2025-11-10 19:53:05 -05:00
Ed Hennis
46a5bc74db refactor: acquireAsync will dispatch the job, not the other way around 2025-11-10 19:53:05 -05:00
Ed Hennis
7b72b9cc82 Improve job queue collision checks and logging
- Improve logging related to ledger acquisition and operating mode
  changes
- Class "CanProcess" to keep track of processing of distinct items
2025-11-10 19:53:05 -05:00
482 changed files with 3046 additions and 6829 deletions

View File

@@ -8,7 +8,6 @@ Checks: "-*,
bugprone-chained-comparison,
bugprone-compare-pointer-to-member-virtual-function,
bugprone-copy-constructor-init,
bugprone-crtp-constructor-accessibility,
bugprone-dangling-handle,
bugprone-dynamic-static-initializers,
bugprone-empty-catch,
@@ -60,29 +59,19 @@ Checks: "-*,
bugprone-suspicious-string-compare,
bugprone-suspicious-stringview-data-usage,
bugprone-swapped-arguments,
bugprone-switch-missing-default-case,
bugprone-terminating-continue,
bugprone-throw-keyword-missing,
bugprone-too-small-loop-variable,
# bugprone-unchecked-optional-access, # see https://github.com/XRPLF/rippled/pull/6502
bugprone-undefined-memory-manipulation,
bugprone-undelegated-constructor,
bugprone-unhandled-exception-at-new,
bugprone-unhandled-self-assignment,
bugprone-unique-ptr-array-mismatch,
bugprone-unsafe-functions,
bugprone-use-after-move,
bugprone-unused-raii,
bugprone-unused-return-value,
bugprone-unused-local-non-trivial-variable,
bugprone-virtual-near-miss,
cppcoreguidelines-init-variables,
cppcoreguidelines-misleading-capture-default-by-value,
cppcoreguidelines-no-suspend-with-lock,
cppcoreguidelines-pro-type-member-init,
cppcoreguidelines-pro-type-static-cast-downcast,
cppcoreguidelines-rvalue-reference-param-not-moved,
cppcoreguidelines-use-default-member-init,
cppcoreguidelines-virtual-class-destructor,
hicpp-ignored-remove-result,
misc-definitions-in-headers,
@@ -92,48 +81,62 @@ Checks: "-*,
misc-throw-by-value-catch-by-reference,
misc-unused-alias-decls,
misc-unused-using-decls,
readability-duplicate-include,
readability-enum-initial-value,
readability-misleading-indentation,
readability-non-const-parameter,
readability-redundant-declaration,
readability-reference-to-constructed-temporary,
modernize-deprecated-headers,
modernize-make-shared,
modernize-make-unique,
performance-implicit-conversion-in-loop,
performance-move-constructor-init,
performance-trivially-destructible,
readability-avoid-nested-conditional-operator,
readability-avoid-return-with-void-value,
readability-braces-around-statements,
readability-const-return-type,
readability-container-contains,
readability-container-size-empty,
readability-duplicate-include,
readability-else-after-return,
readability-enum-initial-value,
readability-make-member-function-const,
readability-misleading-indentation,
readability-non-const-parameter,
readability-redundant-casting,
readability-redundant-declaration,
readability-redundant-inline-specifier,
readability-redundant-member-init,
readability-redundant-string-init,
readability-reference-to-constructed-temporary,
readability-static-definition-in-anonymous-namespace,
readability-use-std-min-max
performance-trivially-destructible
"
# ---
# checks that have some issues that need to be resolved:
#
# bugprone-crtp-constructor-accessibility,
# bugprone-move-forwarding-reference,
# bugprone-switch-missing-default-case,
# bugprone-unused-raii,
# bugprone-unused-return-value,
# bugprone-use-after-move,
#
# cppcoreguidelines-misleading-capture-default-by-value,
# cppcoreguidelines-init-variables,
# cppcoreguidelines-pro-type-member-init,
# cppcoreguidelines-pro-type-static-cast-downcast,
# cppcoreguidelines-use-default-member-init,
# cppcoreguidelines-rvalue-reference-param-not-moved,
#
# llvm-namespace-comment,
# misc-const-correctness,
# misc-include-cleaner,
# misc-redundant-expression,
#
# readability-avoid-nested-conditional-operator,
# readability-avoid-return-with-void-value,
# readability-braces-around-statements,
# readability-container-contains,
# readability-container-size-empty,
# readability-convert-member-functions-to-static,
# readability-const-return-type,
# readability-else-after-return,
# readability-implicit-bool-conversion,
# readability-inconsistent-declaration-parameter-name,
# readability-identifier-naming,
# readability-make-member-function-const,
# readability-math-missing-parentheses,
# readability-redundant-inline-specifier,
# readability-redundant-member-init,
# readability-redundant-casting,
# readability-redundant-string-init,
# readability-simplify-boolean-expr,
# readability-static-definition-in-anonymous-namespace,
# readability-suspicious-call-argument,
# readability-use-std-min-max,
# readability-static-accessed-through-instance,
#
# modernize-concat-nested-namespaces,
@@ -157,7 +160,7 @@ Checks: "-*,
# ---
#
CheckOptions:
readability-braces-around-statements.ShortStatementLines: 2
# readability-braces-around-statements.ShortStatementLines: 2
# readability-identifier-naming.MacroDefinitionCase: UPPER_CASE
# readability-identifier-naming.ClassCase: CamelCase
# readability-identifier-naming.StructCase: CamelCase
@@ -192,7 +195,7 @@ CheckOptions:
# readability-identifier-naming.PublicMemberSuffix: ""
# readability-identifier-naming.FunctionIgnoredRegexp: ".*tag_invoke.*"
bugprone-unsafe-functions.ReportMoreUnsafeFunctions: true
bugprone-unused-return-value.CheckedReturnTypes: ::std::error_code;::std::error_condition;::std::errc
# bugprone-unused-return-value.CheckedReturnTypes: ::std::error_code;::std::error_condition;::std::errc
# misc-include-cleaner.IgnoreHeaders: '.*/(detail|impl)/.*;.*(expected|unexpected).*;.*ranges_lower_bound\.h;time.h;stdlib.h;__chrono/.*;fmt/chrono.h;boost/uuid/uuid_hash.hpp'
#
# HeaderFilterRegex: '^.*/(src|tests)/.*\.(h|hpp)$'

View File

@@ -134,7 +134,6 @@ test.peerfinder > xrpld.core
test.peerfinder > xrpld.peerfinder
test.peerfinder > xrpl.protocol
test.protocol > test.toplevel
test.protocol > test.unit_test
test.protocol > xrpl.basics
test.protocol > xrpl.json
test.protocol > xrpl.protocol
@@ -172,7 +171,6 @@ test.shamap > xrpl.shamap
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
test.unit_test > xrpl.protocol
tests.libxrpl > xrpl.basics
tests.libxrpl > xrpl.json
tests.libxrpl > xrpl.net

View File

@@ -1,13 +1,10 @@
name: Check PR title
on:
merge_group:
types:
- checks_requested
pull_request:
types: [opened, edited, reopened, synchronize]
branches: [develop]
jobs:
check_title:
uses: XRPLF/actions/.github/workflows/check-pr-title.yml@c6311685db43aa07971c4a6764320fecbc2acdcd
uses: XRPLF/actions/.github/workflows/check-pr-title.yml@943eb8277e8f4b010fde0c826ce4154c36c39509

View File

@@ -1,9 +1,6 @@
name: Run pre-commit hooks
on:
merge_group:
types:
- checks_requested
pull_request:
push:
branches:

View File

@@ -76,7 +76,7 @@ jobs:
name: ${{ inputs.config_name }}
runs-on: ${{ fromJSON(inputs.runs_on) }}
container: ${{ inputs.image != '' && inputs.image || null }}
timeout-minutes: ${{ inputs.sanitizers != '' && 360 || 60 }}
timeout-minutes: 60
env:
# Use a namespace to keep the objects separate for each configuration.
CCACHE_NAMESPACE: ${{ inputs.config_name }}
@@ -204,17 +204,11 @@ jobs:
- name: Set sanitizer options
if: ${{ !inputs.build_only && env.SANITIZERS_ENABLED == 'true' }}
env:
CONFIG_NAME: ${{ inputs.config_name }}
run: |
ASAN_OPTS="include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-asan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp"
if [[ "${CONFIG_NAME}" == *gcc* ]]; then
ASAN_OPTS="${ASAN_OPTS}:alloc_dealloc_mismatch=0"
fi
echo "ASAN_OPTIONS=${ASAN_OPTS}" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-tsan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-ubsan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=include=${GITHUB_WORKSPACE}/sanitizers/suppressions/runtime-lsan-options.txt:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
echo "ASAN_OPTIONS=print_stacktrace=1:detect_container_overflow=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/asan.supp" >> ${GITHUB_ENV}
echo "TSAN_OPTIONS=second_deadlock_stack=1:halt_on_error=0:suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/tsan.supp" >> ${GITHUB_ENV}
echo "UBSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/ubsan.supp" >> ${GITHUB_ENV}
echo "LSAN_OPTIONS=suppressions=${GITHUB_WORKSPACE}/sanitizers/suppressions/lsan.supp" >> ${GITHUB_ENV}
- name: Run the separate tests
if: ${{ !inputs.build_only }}

View File

@@ -35,10 +35,6 @@ This section contains changes targeting a future version.
- `LEDGER_ENTRY_FLAGS`: Maps ledger entry type names to their flags and flag values.
- `ACCOUNT_SET_FLAGS`: Maps AccountSet flag names (asf flags) to their numeric values.
### Bugfixes
- Peer Crawler: The `port` field in `overlay.active[]` now consistently returns an integer instead of a string for outbound peers. [#6318](https://github.com/XRPLF/rippled/pull/6318)
## XRP Ledger server version 3.1.0
[Version 3.1.0](https://github.com/XRPLF/rippled/releases/tag/3.1.0) was released on Jan 27, 2026.

View File

@@ -118,7 +118,7 @@ if(MSVC)
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>,$<NOT:$<BOOL:${is_ci}>>>:_CRTDBG_MAP_ALLOC>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>
)
target_link_libraries(common INTERFACE -errorreport:none -machine:X64)
else()

View File

@@ -23,7 +23,7 @@ target_compile_definitions(
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES2_NO_DEPRECATION_WARNING
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>

View File

@@ -7,7 +7,7 @@ find_package(
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
json
@@ -26,7 +26,7 @@ target_link_libraries(
Boost::headers
Boost::chrono
Boost::container
Boost::context
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::json
@@ -38,26 +38,23 @@ target_link_libraries(
if(Boost_COMPILER)
target_link_libraries(xrpl_boost INTERFACE Boost::disable_autolinking)
endif()
# GCC 14+ has a false positive -Wuninitialized warning in Boost.Coroutine2's
# state.hpp when compiled with -O3. This is due to GCC's intentional behavior
# change (Bug #98871, #119388) where warnings from inlined system header code
# are no longer suppressed by -isystem. The warning occurs in operator|= in
# boost/coroutine2/detail/state.hpp when inlined from push_control_block::destroy().
# See: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119388
if(is_gcc AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 14)
target_compile_options(xrpl_boost INTERFACE -Wno-uninitialized)
endif()
# Boost.Context's ucontext backend has ASAN fiber-switching annotations
# (start/finish_switch_fiber) that are compiled in when BOOST_USE_ASAN is defined.
# This tells ASAN about coroutine stack switches, preventing false positive
# stack-use-after-scope errors. BOOST_USE_UCONTEXT ensures the ucontext backend
# is selected (fcontext does not support ASAN annotations).
# These defines must match what Boost was compiled with (see conan/profiles/sanitizers).
if(enable_asan)
target_compile_definitions(
xrpl_boost
INTERFACE BOOST_USE_ASAN BOOST_USE_UCONTEXT
if(SANITIZERS_ENABLED AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else for gcc ?
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(
Boost_INCLUDE_DIRS
Boost::headers
INTERFACE_INCLUDE_DIRECTORIES
)
endif()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(
WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt
"src:${Boost_INCLUDE_DIRS}/*"
)
target_compile_options(
opts
INTERFACE # ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt
)
endif()

View File

@@ -7,21 +7,16 @@ include(default)
{% if compiler == "gcc" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set defines = [] %}
{% set model_code = "" %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1", "-Wno-stringop-overflow"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% set model_code = "-mcmodel=large" %}
{% set _ = defines.append("BOOST_USE_ASAN")%}
{% set _ = defines.append("BOOST_USE_UCONTEXT")%}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% set model_code = "-mcmodel=medium" %}
{% set _ = extra_cxxflags.append("-Wno-tsan") %}
{% set _ = defines.append("BOOST_USE_TSAN")%}
{% set _ = defines.append("BOOST_USE_UCONTEXT")%}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
@@ -34,22 +29,16 @@ include(default)
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
tools.build:defines+={{defines}}
{% endif %}
{% elif compiler == "apple-clang" or compiler == "clang" %}
{% if "address" in sanitizers or "thread" in sanitizers or "undefinedbehavior" in sanitizers %}
{% set sanitizer_list = [] %}
{% set defines = [] %}
{% set extra_cxxflags = ["-fno-omit-frame-pointer", "-O1"] %}
{% if "address" in sanitizers %}
{% set _ = sanitizer_list.append("address") %}
{% set _ = defines.append("BOOST_USE_ASAN")%}
{% set _ = defines.append("BOOST_USE_UCONTEXT")%}
{% elif "thread" in sanitizers %}
{% set _ = sanitizer_list.append("thread") %}
{% set _ = defines.append("BOOST_USE_TSAN")%}
{% set _ = defines.append("BOOST_USE_UCONTEXT")%}
{% endif %}
{% if "undefinedbehavior" in sanitizers %}
@@ -63,24 +52,8 @@ include(default)
tools.build:cxxflags+=['{{sanitizer_flags}} {{" ".join(extra_cxxflags)}}']
tools.build:sharedlinkflags+=['{{sanitizer_flags}}']
tools.build:exelinkflags+=['{{sanitizer_flags}}']
tools.build:defines+={{defines}}
{% endif %}
{% endif %}
{% endif %}
tools.info.package_id:confs+=["tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags", "tools.build:defines"]
[options]
{% if sanitizers %}
{% if "address" in sanitizers %}
# Build Boost.Context with ucontext backend (not fcontext) so that
# ASAN fiber-switching annotations (__sanitizer_start/finish_switch_fiber)
# are compiled into the library. fcontext (assembly) has no ASAN support.
# define=BOOST_USE_ASAN=1 is critical: it must be defined when building
# Boost.Context itself so the ucontext backend compiles in the ASAN annotations.
boost/*:extra_b2_flags=context-impl=ucontext address-sanitizer=on define=BOOST_USE_ASAN=1
boost/*:without_context=False
# Boost stacktrace fails to build with some sanitizers
boost/*:without_stacktrace=True
{% endif %}
{% endif %}
tools.info.package_id:confs+=["tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags"]

View File

@@ -1,5 +1,4 @@
import re
import os
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
@@ -58,9 +57,6 @@ class Xrpl(ConanFile):
"tests": False,
"unity": False,
"xrpld": False,
"boost/*:without_context": False,
"boost/*:without_coroutine": True,
"boost/*:without_coroutine2": False,
"date/*:header_only": True,
"ed25519/*:shared": False,
"grpc/*:shared": False,
@@ -130,12 +126,6 @@ class Xrpl(ConanFile):
if self.settings.compiler in ["clang", "gcc"]:
self.options["boost"].without_cobalt = True
# Check if environment variable exists
if "SANITIZERS" in os.environ:
sanitizers = os.environ["SANITIZERS"]
if "address" in sanitizers.lower():
self.default_options["fPIC"] = False
def requirements(self):
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = (
@@ -206,7 +196,7 @@ class Xrpl(ConanFile):
"boost::headers",
"boost::chrono",
"boost::container",
"boost::context",
"boost::coroutine",
"boost::date_time",
"boost::filesystem",
"boost::json",

View File

@@ -99,7 +99,6 @@ words:
- endmacro
- exceptioned
- Falco
- fcontext
- finalizers
- firewalled
- fmtdur
@@ -112,7 +111,6 @@ words:
- gpgcheck
- gpgkey
- hotwallet
- hwaddress
- hwrap
- ifndef
- inequation

View File

@@ -89,8 +89,8 @@ cmake --build . --parallel 4
**IMPORTANT**: ASAN with Boost produces many false positives. Use these options:
```bash
export ASAN_OPTIONS="include=sanitizers/suppressions/runtime-asan-options.txt:suppressions=sanitizers/suppressions/asan.supp"
export LSAN_OPTIONS="include=sanitizers/suppressions/runtime-lsan-options.txt:suppressions=sanitizers/suppressions/lsan.supp"
export ASAN_OPTIONS="print_stacktrace=1:detect_container_overflow=0:suppressions=path/to/asan.supp:halt_on_error=0:log_path=asan.log"
export LSAN_OPTIONS="suppressions=path/to/lsan.supp:halt_on_error=0:log_path=lsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
@@ -108,7 +108,7 @@ export LSAN_OPTIONS="include=sanitizers/suppressions/runtime-lsan-options.txt:su
### ThreadSanitizer (TSan)
```bash
export TSAN_OPTIONS="include=sanitizers/suppressions/runtime-tsan-options.txt:suppressions=sanitizers/suppressions/tsan.supp"
export TSAN_OPTIONS="suppressions=path/to/tsan.supp halt_on_error=0 log_path=tsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5
@@ -129,7 +129,7 @@ More details [here](https://github.com/google/sanitizers/wiki/AddressSanitizerLe
### UndefinedBehaviorSanitizer (UBSan)
```bash
export UBSAN_OPTIONS="include=sanitizers/suppressions/runtime-ubsan-options.txt:suppressions=sanitizers/suppressions/ubsan.supp"
export UBSAN_OPTIONS="suppressions=path/to/ubsan.supp:print_stacktrace=1:halt_on_error=0:log_path=ubsan.log"
# Run tests
./xrpld --unittest --unittest-jobs=5

View File

@@ -0,0 +1,139 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2024 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_CANPROCESS_H_INCLUDED
#define RIPPLE_BASICS_CANPROCESS_H_INCLUDED
#include <functional>
#include <mutex>
#include <set>
/** RAII class to check if an Item is already being processed on another thread,
 * as indicated by its presence in a Collection.
*
* If the Item is not in the Collection, it will be added under lock in the
* ctor, and removed under lock in the dtor. The object will be considered
* "usable" and evaluate to `true`.
*
* If the Item is in the Collection, no changes will be made to the collection,
* and the CanProcess object will be considered "unusable".
*
* It's up to the caller to decide what "usable" and "unusable" mean. (e.g.
* Process or skip a block of code, or set a flag.)
*
* The current use is to avoid lock contention that would be involved in
* processing something associated with the Item.
*
* Examples:
*
* void IncomingLedgers::acquireAsync(LedgerHash const& hash, ...)
* {
* if (CanProcess check{acquiresMutex_, pendingAcquires_, hash})
* {
* acquire(hash, ...);
* }
* }
*
* bool
* NetworkOPsImp::recvValidation(
* std::shared_ptr<STValidation> const& val,
* std::string const& source)
* {
* CanProcess check(
* validationsMutex_, pendingValidations_, val->getLedgerHash());
* BypassAccept bypassAccept =
* check ? BypassAccept::no : BypassAccept::yes;
* handleNewValidation(app_, val, source, bypassAccept, m_journal);
* }
*
*/
class CanProcess
{
public:
template <class Mutex, class Collection, class Item>
CanProcess(Mutex& mtx, Collection& collection, Item const& item)
: cleanup_(insert(mtx, collection, item))
{
}
~CanProcess()
{
if (cleanup_)
cleanup_();
}
CanProcess(CanProcess const&) = delete;
CanProcess&
operator=(CanProcess const&) = delete;
explicit
operator bool() const
{
return static_cast<bool>(cleanup_);
}
private:
template <bool useIterator, class Mutex, class Collection, class Item>
std::function<void()>
doInsert(Mutex& mtx, Collection& collection, Item const& item)
{
std::unique_lock<Mutex> lock(mtx);
// TODO: Use structured binding once LLVM 16 is the minimum supported
// version. See also: https://github.com/llvm/llvm-project/issues/48582
// https://github.com/llvm/llvm-project/commit/127bf44385424891eb04cff8e52d3f157fc2cb7c
auto const insertResult = collection.insert(item);
auto const it = insertResult.first;
if (!insertResult.second)
return {};
if constexpr (useIterator)
return [&, it]() {
std::unique_lock<Mutex> lock(mtx);
collection.erase(it);
};
else
return [&]() {
std::unique_lock<Mutex> lock(mtx);
collection.erase(item);
};
}
// Generic insert() function doesn't use iterators because they may get
// invalidated
template <class Mutex, class Collection, class Item>
std::function<void()>
insert(Mutex& mtx, Collection& collection, Item const& item)
{
return doInsert<false>(mtx, collection, item);
}
// Specialize insert() for std::set, which does not invalidate iterators for
// insert and erase
template <class Mutex, class Item>
std::function<void()>
insert(Mutex& mtx, std::set<Item>& collection, Item const& item)
{
return doInsert<true>(mtx, collection, item);
}
// If set, then the item is "usable"
std::function<void()> cleanup_;
};
#endif

View File

@@ -1,6 +1,5 @@
#pragma once
#include <xrpl/basics/sanitizers.h>
#include <xrpl/beast/type_name.h>
#include <exception>
@@ -24,28 +23,16 @@ LogThrow(std::string const& title);
When called from within a catch block, it will pass
control to the next matching exception handler, if any.
Otherwise, std::terminate will be called.
ASAN can't handle sudden jumps in control flow very well. This
function is marked as XRPL_NO_SANITIZE_ADDRESS to prevent it from
triggering false positives, since it throws.
*/
[[noreturn]] XRPL_NO_SANITIZE_ADDRESS inline void
[[noreturn]] inline void
Rethrow()
{
LogThrow("Re-throwing exception");
throw;
}
/*
Logs and throws an exception of type E.
ASAN can't handle sudden jumps in control flow very well. This
function is marked as XRPL_NO_SANITIZE_ADDRESS to prevent it from
triggering false positives, since it throws.
*/
template <class E, class... Args>
[[noreturn]] XRPL_NO_SANITIZE_ADDRESS inline void
[[noreturn]] inline void
Throw(Args&&... args)
{
static_assert(

View File

@@ -1,7 +1,5 @@
#pragma once
#include <xrpl/beast/utility/instrumentation.h>
#include <type_traits>
namespace xrpl {
@@ -72,31 +70,4 @@ unsafe_cast(Src s) noexcept
return unsafe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
}
template <class Dest, class Src>
requires std::is_pointer_v<Dest>
inline Dest
safe_downcast(Src* s) noexcept
{
#ifdef NDEBUG
return static_cast<Dest>(s); // NOLINT(cppcoreguidelines-pro-type-static-cast-downcast)
#else
auto* result = dynamic_cast<Dest>(s);
XRPL_ASSERT(result != nullptr, "xrpl::safe_downcast : pointer downcast is valid");
return result;
#endif
}
template <class Dest, class Src>
requires std::is_lvalue_reference_v<Dest>
inline Dest
safe_downcast(Src& s) noexcept
{
#ifndef NDEBUG
XRPL_ASSERT(
dynamic_cast<std::add_pointer_t<std::remove_reference_t<Dest>>>(&s) != nullptr,
"xrpl::safe_downcast : reference downcast is valid");
#endif
return static_cast<Dest>(s); // NOLINT(cppcoreguidelines-pro-type-static-cast-downcast)
}
} // namespace xrpl

View File

@@ -1,13 +0,0 @@
#pragma once
// Helper to disable ASan/HwASan for specific functions
/*
ASAN flags some false positives on sudden jumps in control flow, such as
exceptions or coroutine stack switches. This macro can be used to disable ASAN
instrumentation for specific functions.
*/
#if defined(__GNUC__) || defined(__clang__)
#define XRPL_NO_SANITIZE_ADDRESS __attribute__((no_sanitize("address", "hwaddress")))
#else
#define XRPL_NO_SANITIZE_ADDRESS
#endif

View File

@@ -43,8 +43,8 @@ private:
template <typename>
friend class ListIterator;
ListNode* m_next = nullptr;
ListNode* m_prev = nullptr;
ListNode* m_next;
ListNode* m_prev;
};
//------------------------------------------------------------------------------
@@ -567,7 +567,7 @@ private:
}
private:
size_type m_size = 0u;
size_type m_size;
Node m_head;
Node m_tail;
};

View File

@@ -1,5 +1,7 @@
#pragma once
#include <xrpl/basics/ByteUtilities.h>
namespace xrpl {
template <class F>
@@ -9,18 +11,16 @@ JobQueue::Coro::Coro(Coro_create_t, JobQueue& jq, JobType type, std::string cons
, name_(name)
, running_(false)
, coro_(
// A stack size of 1MB wasn't sufficient for deep calls (ASAN tests flagged the issue),
// hence the increase to 1.5MB.
boost::context::protected_fixedsize_stack(1536 * 1024),
[this, fn = std::forward<F>(f)](
boost::coroutines2::asymmetric_coroutine<void>::push_type& do_yield) {
boost::coroutines::asymmetric_coroutine<void>::push_type& do_yield) {
yield_ = &do_yield;
yield();
fn(shared_from_this());
#ifndef NDEBUG
finished_ = true;
#endif
})
},
boost::coroutines::attributes(megabytes(1)))
{
}

View File

@@ -199,7 +199,7 @@ public:
/** Add a suppression peer and get message's relay status.
* Return pair:
* element 1: true if the peer is added.
* element 1: true if the key is added.
* element 2: optional is seated to the relay time point or
* is unseated if it has not relayed yet. */
std::pair<bool, std::optional<Stopwatch::time_point>>

View File

@@ -7,8 +7,7 @@
#include <xrpl/core/detail/Workers.h>
#include <xrpl/json/json_value.h>
#include <boost/context/protected_fixedsize_stack.hpp>
#include <boost/coroutine2/all.hpp>
#include <boost/coroutine/all.hpp>
#include <set>
@@ -49,8 +48,8 @@ public:
std::mutex mutex_;
std::mutex mutex_run_;
std::condition_variable cv_;
boost::coroutines2::coroutine<void>::pull_type coro_;
boost::coroutines2::coroutine<void>::push_type* yield_;
boost::coroutines::asymmetric_coroutine<void>::pull_type coro_;
boost::coroutines::asymmetric_coroutine<void>::push_type* yield_;
#ifndef NDEBUG
bool finished_ = false;
#endif

View File

@@ -421,7 +421,7 @@ private:
ObjectValues* map_{nullptr};
} value_;
ValueType type_ : 8;
int allocated_ : 1 {}; // Notes: if declared as bool, bitfield is useless.
int allocated_ : 1; // Notes: if declared as bool, bitfield is useless.
};
inline Value

View File

@@ -108,7 +108,7 @@ private:
std::string indentString_;
int rightMargin_;
int indentSize_;
bool addChildValues_{};
bool addChildValues_;
};
/** \brief Writes a Value in <a HREF="http://www.json.org">JSON</a> format in a
@@ -175,7 +175,7 @@ private:
std::string indentString_;
int rightMargin_;
std::string indentation_;
bool addChildValues_{};
bool addChildValues_;
};
std::string

View File

@@ -49,7 +49,7 @@ validDomain(ReadView const& view, uint256 domainID, AccountID const& subject);
// This function is only called when we are about to return tecNO_PERMISSION
// because all the checks for the DepositPreauth authorization failed.
TER
authorizedDepositPreauth(ReadView const& view, STVector256 const& ctx, AccountID const& dst);
authorizedDepositPreauth(ApplyView const& view, STVector256 const& ctx, AccountID const& dst);
// Sort credentials array, return empty set if there are duplicates
std::set<std::pair<AccountID, Slice>>
@@ -74,7 +74,7 @@ verifyDepositPreauth(
ApplyView& view,
AccountID const& src,
AccountID const& dst,
std::shared_ptr<SLE const> const& sleDst,
std::shared_ptr<SLE> const& sleDst,
beast::Journal j);
} // namespace xrpl

View File

@@ -29,18 +29,6 @@ public:
bool sslVerify,
beast::Journal j);
/** Destroys the global SSL context created by initializeSSLContext().
*
* This releases the underlying boost::asio::ssl::context and any
* associated OpenSSL resources. Must not be called while any
* HTTPClient requests are in flight.
*
* @note Currently only called from tests during teardown. In production,
* the SSL context lives for the lifetime of the process.
*/
static void
cleanupSSLContext();
static void
get(bool bSSL,
boost::asio::io_context& io_context,

View File

@@ -138,9 +138,6 @@ public:
/** Returns the number of file descriptors the backend expects to need. */
virtual int
fdRequired() const = 0;
/** The number of hardware threads to use for compression of a batch. */
static unsigned int const numHardwareThreads;
};
} // namespace NodeStore

View File

@@ -64,49 +64,6 @@
namespace xrpl {
// Feature names must not exceed this length (in characters, excluding the null terminator).
static constexpr std::size_t maxFeatureNameSize = 63;
// Reserve this exact feature-name length (in characters/bytes, excluding the null terminator)
// so that a 32-byte uint256 (for example, in WASM or other interop contexts) can be used
// as a compact, fixed-size feature selector without conflicting with human-readable names.
static constexpr std::size_t reservedFeatureNameSize = 32;
// Both validFeatureNameSize and validFeatureName are consteval functions that can be used in
// static_asserts to validate feature names at compile time. They are only used inside
// enforceValidFeatureName in Feature.cpp, but are exposed here for testing. The expected
// parameter `auto fn` is a constexpr lambda which returns a const char*, making it available
// for compile-time evaluation. Read more in https://accu.org/journals/overload/30/172/wu/
consteval auto
validFeatureNameSize(auto fn) -> bool
{
constexpr char const* n = fn();
// Note: std::strlen is not constexpr, so we need to implement our own here.
constexpr std::size_t N = [](auto n) {
std::size_t ret = 0;
for (auto ptr = n; *ptr != '\0'; ret++, ++ptr)
;
return ret;
}(n);
return N != reservedFeatureNameSize && //
N <= maxFeatureNameSize;
}
consteval auto
validFeatureName(auto fn) -> bool
{
constexpr char const* n = fn();
// Prevent the use of visually confusable characters and enforce that feature names
// are always valid ASCII. This is needed because C++ allows Unicode identifiers.
// Characters below 0x20 are nonprintable control characters, and characters with the 0x80 bit
// set are non-ASCII (e.g. UTF-8 encoding of Unicode), so both are disallowed.
for (auto ptr = n; *ptr != '\0'; ++ptr)
{
if (*ptr & 0x80 || *ptr < 0x20)
return false;
}
return true;
}
enum class VoteBehavior : int { Obsolete = -1, DefaultNo = 0, DefaultYes };
enum class AmendmentSupport : int { Retired = -1, Supported = 0, Unsupported };

View File

@@ -35,6 +35,8 @@ struct LedgerHeader
// If validated is false, it means "not yet validated."
// Once validated is true, it will never be set false at a later time.
// NOTE: If you are accessing this directly, you are probably doing it
// wrong. Use LedgerMaster::isValidated().
// VFALCO TODO Make this not mutable
bool mutable validated = false;
bool accepted = false;

View File

@@ -65,7 +65,7 @@ public:
std::optional<TxType>
getGranularTxType(GranularPermissionType const& gpType) const;
std::optional<std::reference_wrapper<uint256 const>>
std::optional<std::reference_wrapper<uint256 const>> const
getTxFeature(TxType txType) const;
bool

View File

@@ -209,7 +209,7 @@ std::size_t constexpr maxDIDDocumentLength = 256;
std::size_t constexpr maxDIDURILength = 256;
/** The maximum length of an Attestation inside a DID */
std::size_t constexpr maxDIDDataLength = 256;
std::size_t constexpr maxDIDAttestationLength = 256;
/** The maximum length of a domain */
std::size_t constexpr maxDomainLength = 256;

View File

@@ -44,7 +44,7 @@ protected:
// All the constructed public keys are valid, non-empty and contain 33
// bytes of data.
static constexpr std::size_t size_ = 33;
std::uint8_t buf_[size_]{}; // should be large enough
std::uint8_t buf_[size_]; // should be large enough
public:
using const_iterator = std::uint8_t const*;

View File

@@ -24,7 +24,7 @@ public:
STAccount();
STAccount(SField const& n);
STAccount(SField const& n, Buffer const& v);
STAccount(SField const& n, Buffer&& v);
STAccount(SerialIter& sit, SField const& name);
STAccount(SField const& n, AccountID const& v);

View File

@@ -57,7 +57,7 @@ class STObject : public STBase, public CountedObject<STObject>
using list_type = std::vector<detail::STVar>;
list_type v_;
SOTemplate const* mType{};
SOTemplate const* mType;
public:
using iterator = boost::transform_iterator<Transform, STObject::list_type::const_iterator>;
@@ -401,7 +401,7 @@ public:
getStyle(SField const& field) const;
bool
hasMatchingEntry(STBase const&) const;
hasMatchingEntry(STBase const&);
bool
operator==(STObject const& o) const;

View File

@@ -15,7 +15,7 @@
namespace xrpl {
enum class TxnSql : char {
enum TxnSql : char {
txnSqlNew = 'N',
txnSqlConflict = 'C',
txnSqlHeld = 'H',
@@ -83,9 +83,6 @@ public:
std::uint32_t
getSeqValue() const;
AccountID
getFeePayer() const;
boost::container::flat_set<AccountID>
getMentionedAccounts() const;
@@ -125,7 +122,7 @@ public:
getMetaSQL(
Serializer rawTxn,
std::uint32_t inLedger,
TxnSql status,
char status,
std::string const& escapedMetaData) const;
std::vector<uint256> const&

View File

@@ -16,11 +16,8 @@ namespace xrpl {
/** A secret key. */
class SecretKey
{
public:
static constexpr std::size_t size_ = 32;
private:
std::uint8_t buf_[size_]{};
std::uint8_t buf_[32];
public:
using const_iterator = std::uint8_t const*;
@@ -30,14 +27,9 @@ public:
SecretKey&
operator=(SecretKey const&) = default;
bool
operator==(SecretKey const&) = delete;
bool
operator!=(SecretKey const&) = delete;
~SecretKey();
SecretKey(std::array<std::uint8_t, size_> const& data);
SecretKey(std::array<std::uint8_t, 32> const& data);
SecretKey(Slice const& slice);
std::uint8_t const*
@@ -86,10 +78,16 @@ public:
};
inline bool
operator==(SecretKey const& lhs, SecretKey const& rhs) = delete;
operator==(SecretKey const& lhs, SecretKey const& rhs)
{
return lhs.size() == rhs.size() && std::memcmp(lhs.data(), rhs.data(), rhs.size()) == 0;
}
inline bool
operator!=(SecretKey const& lhs, SecretKey const& rhs) = delete;
operator!=(SecretKey const& lhs, SecretKey const& rhs)
{
return !(lhs == rhs);
}
//------------------------------------------------------------------------------

View File

@@ -13,7 +13,7 @@ namespace xrpl {
class Seed
{
private:
std::array<uint8_t, 16> buf_{};
std::array<uint8_t, 16> buf_;
public:
using const_iterator = std::array<uint8_t, 16>::const_iterator;

View File

@@ -14,7 +14,7 @@ namespace xrpl {
static inline std::string const&
systemName()
{
static std::string const name = "xrpld";
static std::string const name = "ripple";
return name;
}

View File

@@ -44,7 +44,7 @@ private:
// The largest "small object" we can accommodate
static std::size_t constexpr max_size = 72;
std::aligned_storage<max_size>::type d_ = {};
std::aligned_storage<max_size>::type d_;
STBase* p_ = nullptr;
public:

View File

@@ -40,7 +40,7 @@ public:
operator result_type() noexcept;
private:
char ctx_[96]{};
char ctx_[96];
};
/** SHA-512 digest
@@ -63,7 +63,7 @@ public:
operator result_type() noexcept;
private:
char ctx_[216]{};
char ctx_[216];
};
/** SHA-256 digest
@@ -86,7 +86,7 @@ public:
operator result_type() noexcept;
private:
char ctx_[112]{};
char ctx_[112];
};
//------------------------------------------------------------------------------

View File

@@ -112,7 +112,6 @@ JSS(accounts); // in: LedgerEntry, Subscribe,
// handlers/Ledger, Unsubscribe
JSS(accounts_proposed); // in: Subscribe, Unsubscribe
JSS(action);
JSS(active); // out: OverlayImpl
JSS(acquiring); // out: LedgerRequest
JSS(address); // out: PeerImp
JSS(affected); // out: AcceptedLedgerTx
@@ -301,7 +300,6 @@ JSS(id); // websocket.
JSS(ident); // in: AccountCurrencies, AccountInfo,
// OwnerInfo
JSS(ignore_default); // in: AccountLines
JSS(in); // out: OverlayImpl
JSS(inLedger); // out: tx/Transaction
JSS(inbound); // out: PeerImp
JSS(index); // in: LedgerEntry
@@ -466,7 +464,6 @@ JSS(open_ledger_level); // out: TxQ
JSS(optionality); // out: server_definitions
JSS(oracles); // in: get_aggregate_price
JSS(oracle_document_id); // in: get_aggregate_price
JSS(out); // out: OverlayImpl
JSS(owner); // in: LedgerEntry, out: NetworkOPs
JSS(owner_funds); // in/out: Ledger, NetworkOPs, AcceptedLedgerTx
JSS(page_index);

View File

@@ -173,7 +173,7 @@ public:
send(Json::Value const& jvObj, bool broadcast) = 0;
std::uint64_t
getSeq() const;
getSeq();
void
onSendEmpty();

View File

@@ -185,7 +185,7 @@ public:
virtual bool
isFull() = 0;
virtual void
setMode(OperatingMode om) = 0;
setMode(OperatingMode om, char const* reason) = 0;
virtual bool
isBlocked() = 0;
virtual bool

View File

@@ -114,7 +114,8 @@ protected:
beast::Journal const j_;
AccountID const account_;
XRPAmount preFeeBalance_{}; // Balance before fees.
XRPAmount mPriorBalance; // Balance before fees.
XRPAmount mSourceBalance; // Balance after fees.
virtual ~Transactor() = default;
Transactor(Transactor const&) = delete;

View File

@@ -213,7 +213,7 @@ public:
, flags(ctx_.flags)
, j(ctx_.j)
, ter(ter_)
, likelyToClaimFee(isTesSuccess(ter) || isTecClaimHardFail(ter, flags))
, likelyToClaimFee(ter == tesSUCCESS || isTecClaimHardFail(ter, flags))
{
}

View File

@@ -27,33 +27,6 @@ namespace xrpl {
* communicate the interface required of any invariant checker. Any invariant
* check implementation should implement the public methods documented here.
*
* ## Rules for implementing `finalize`
*
* ### Invariants must run regardless of transaction result
*
* An invariant's `finalize` method MUST perform meaningful checks even when
* the transaction has failed (i.e., `!isTesSuccess(tec)`). The following
* pattern is almost certainly wrong and must never be used:
*
* @code
* // WRONG: skipping all checks on failure defeats the purpose of invariants
* if (!isTesSuccess(tec))
* return true;
* @endcode
*
* The entire purpose of invariants is to detect and prevent the impossible.
* A bug or exploit could cause a failed transaction to mutate ledger state in
* unexpected ways. Invariants are the last line of defense against such
* scenarios.
*
* In general: an invariant that expects a domain-specific state change to
* occur (e.g., a new object being created) should only expect that change
* when the transaction succeeded. A failed VaultCreate must not have created
* a Vault. A failed LoanSet must not have created a Loan.
*
* Also be aware that failed transactions, regardless of type, carry no
* Privileges. Any privilege-gated checks must therefore also be applied to
* failed transactions.
*/
class InvariantChecker_PROTOTYPE
{
@@ -75,11 +48,7 @@ public:
/**
* @brief called after all ledger entries have been visited to determine
* the final status of the check.
*
* This method MUST perform meaningful checks even when `tec` indicates a
* failed transaction. See the class-level documentation for the rules
* governing how failed transactions must be handled.
* the final status of the check
*
* @param tx the transaction being applied
* @param tec the current TER result of the transaction
@@ -132,7 +101,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -152,7 +121,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -198,7 +167,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -215,7 +184,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -233,7 +202,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -252,7 +221,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -271,7 +240,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -287,7 +256,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -307,7 +276,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -328,7 +297,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**

View File

@@ -25,7 +25,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
} // namespace xrpl

View File

@@ -36,7 +36,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
/**
@@ -64,7 +64,7 @@ public:
visitEntry(bool, std::shared_ptr<SLE const> const&, std::shared_ptr<SLE const> const&);
bool
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&) const;
finalize(STTx const&, TER const, XRPAmount const, ReadView const&, beast::Journal const&);
};
} // namespace xrpl

View File

@@ -22,7 +22,7 @@ private:
uint256 m_dir;
uint256 m_index;
std::shared_ptr<SLE> m_entry;
Quality m_quality{};
Quality m_quality;
public:
/** Create the iterator. */

View File

@@ -370,7 +370,7 @@ changeSpotPriceQuality(
if (!amounts)
{
JLOG(j.trace()) << "changeSpotPrice calc failed: " << to_string(pool.in) << " "
<< to_string(pool.out) << " " << quality << " " << tfee;
<< to_string(pool.out) << " " << quality << " " << tfee << std::endl;
return std::nullopt;
}
@@ -639,9 +639,9 @@ getRoundedAsset(Rules const& rules, STAmount const& balance, A const& frac, IsDe
STAmount
getRoundedAsset(
Rules const& rules,
std::function<Number()> const& noRoundCb,
std::function<Number()>&& noRoundCb,
STAmount const& balance,
std::function<Number()> const& productCb,
std::function<Number()>&& productCb,
IsDeposit isDeposit);
/** Round AMM deposit/withdrawal LPToken amount. Deposit/withdrawal formulas
@@ -672,9 +672,9 @@ getRoundedLPTokens(
STAmount
getRoundedLPTokens(
Rules const& rules,
std::function<Number()> const& noRoundCb,
std::function<Number()>&& noRoundCb,
STAmount const& lptAMMBalance,
std::function<Number()> const& productCb,
std::function<Number()>&& productCb,
IsDeposit isDeposit);
/* Next two functions adjust asset in/out amount to factor in the adjusted

View File

@@ -4,7 +4,6 @@
#include <xrpl/ledger/ApplyView.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/TER.h>
#include <xrpl/protocol/TxFlags.h>
#include <xrpl/protocol/nft.h>
#include <xrpl/tx/Transactor.h>
@@ -55,7 +54,7 @@ removeToken(
ApplyView& view,
AccountID const& owner,
uint256 const& nftokenID,
std::shared_ptr<SLE> const& page);
std::shared_ptr<SLE>&& page);
/** Deletes the given token offer.
@@ -96,7 +95,7 @@ tokenOfferCreatePreflight(
std::uint16_t nftFlags,
Rules const& rules,
std::optional<AccountID> const& owner = std::nullopt,
std::uint32_t txFlags = tfSellNFToken);
std::uint32_t txFlags = lsfSellNFToken);
/** Preclaim checks shared by NFTokenCreateOffer and NFTokenMint */
TER
@@ -110,7 +109,7 @@ tokenOfferCreatePreclaim(
std::uint16_t xferFee,
beast::Journal j,
std::optional<AccountID> const& owner = std::nullopt,
std::uint32_t txFlags = tfSellNFToken);
std::uint32_t txFlags = lsfSellNFToken);
/** doApply implementation shared by NFTokenCreateOffer and NFTokenMint */
TER
@@ -124,7 +123,7 @@ tokenOfferCreateApply(
uint256 const& nftokenID,
XRPAmount const& priorBalance,
beast::Journal j,
std::uint32_t txFlags = tfSellNFToken);
std::uint32_t txFlags = lsfSellNFToken);
TER
checkTrustlineAuthorized(

View File

@@ -1,6 +1,24 @@
# The idea is to empty this file gradually by fixing the underlying issues and removing suppressions.
#
# ASAN_OPTIONS="print_stacktrace=1:detect_container_overflow=0:suppressions=sanitizers/suppressions/asan.supp:halt_on_error=0"
#
# The detect_container_overflow=0 option disables false positives from:
# - Boost intrusive containers (slist_iterator.hpp, hashtable.hpp, aged_unordered_container.h)
# - Boost context/coroutine stack switching (Workers.cpp, thread.h)
#
# See: https://github.com/google/sanitizers/wiki/AddressSanitizerContainerOverflow
# Boost
interceptor_name:boost/asio
# Leaks in Doctest tests: xrpl.test.*
interceptor_name:src/libxrpl/net/HTTPClient.cpp
interceptor_name:src/libxrpl/net/RegisterSSLCerts.cpp
interceptor_name:src/tests/libxrpl/net/HTTPClient.cpp
interceptor_name:xrpl/net/AutoSocket.h
interceptor_name:xrpl/net/HTTPClient.h
interceptor_name:xrpl/net/HTTPClientSSLContext.h
interceptor_name:xrpl/net/RegisterSSLCerts.h
# Suppress false positive stack-buffer errors in thread stack allocation
# Related to ASan's __asan_handle_no_return warnings (github.com/google/sanitizers/issues/189)

View File

@@ -1,5 +1,16 @@
# The idea is to empty this file gradually by fixing the underlying issues and removing suppressions.
# Suppress leaks detected by asan in rippled code.
leak:src/libxrpl/net/HTTPClient.cpp
leak:src/libxrpl/net/RegisterSSLCerts.cpp
leak:src/tests/libxrpl/net/HTTPClient.cpp
leak:xrpl/net/AutoSocket.h
leak:xrpl/net/HTTPClient.h
leak:xrpl/net/HTTPClientSSLContext.h
leak:xrpl/net/RegisterSSLCerts.h
leak:ripple::HTTPClient
leak:ripple::HTTPClientImp
# Suppress leaks detected by asan in boost code.
leak:boost::asio
leak:boost/asio

View File

@@ -1,8 +0,0 @@
detect_container_overflow=false
detect_stack_use_after_return=false
debug=true
halt_on_error=false
print_stats=true
print_cmdline=true
use_sigaltstack=0
print_stacktrace=1

View File

@@ -1 +0,0 @@
halt_on_error=false

View File

@@ -1,3 +0,0 @@
halt_on_error=false
verbosity=1
second_deadlock_stack=1

View File

@@ -1 +0,0 @@
halt_on_error=false

View File

@@ -27,11 +27,3 @@ src:core/JobQueue.cpp
src:libxrpl/beast/utility/beast_Journal.cpp
src:test/beast/beast_PropertyStream_test.cpp
src:src/test/app/Invariants_test.cpp
# ASan false positive: stack-use-after-scope in ErrorCodes.h inline functions.
# When Clang inlines the StaticString overloads (e.g. invalid_field_error(StaticString)),
# ASan scope-poisons the temporary std::string before the inlined callee finishes reading
# through the const ref. This corrupts the coroutine stack and crashes the Simulate test.
# See asan.supp comments for full explanation and planned fix.
[address]
src:*ErrorCodes.h

View File

@@ -182,17 +182,6 @@ signed-integer-overflow:src/test/beast/LexicalCast_test.cpp
# External library suppressions
unsigned-integer-overflow:nudb/detail/xxhash.hpp
# Loan_test.cpp intentional underflow in test arithmetic
unsigned-integer-overflow:src/test/app/Loan_test.cpp
undefined:src/test/app/Loan_test.cpp
# Source tree restructured paths (libxrpl/tx/transactors/)
# These duplicate the xrpld/app/tx/detail entries above for the new layout
unsigned-integer-overflow:src/libxrpl/tx/transactors/oracle/SetOracle.cpp
undefined:src/libxrpl/tx/transactors/oracle/SetOracle.cpp
unsigned-integer-overflow:src/libxrpl/tx/transactors/nft/NFTokenMint.cpp
undefined:src/libxrpl/tx/transactors/nft/NFTokenMint.cpp
# Protobuf intentional overflows in hash functions
# Protobuf uses intentional unsigned overflow for hash computation (stringpiece.h:393)
unsigned-integer-overflow:google/protobuf/stubs/stringpiece.h

View File

@@ -51,8 +51,8 @@ extractTarLz4(boost::filesystem::path const& src, boost::filesystem::path const&
if (archive_write_disk_set_standard_lookup(aw.get()) < ARCHIVE_OK)
Throw<std::runtime_error>(archive_error_string(aw.get()));
int result = 0;
struct archive_entry* entry = nullptr;
int result;
struct archive_entry* entry;
while (true)
{
result = archive_read_next_header(ar.get(), &entry);
@@ -67,9 +67,9 @@ extractTarLz4(boost::filesystem::path const& src, boost::filesystem::path const&
if (archive_entry_size(entry) > 0)
{
void const* buf = nullptr;
size_t sz = 0;
la_int64_t offset = 0;
void const* buf;
size_t sz;
la_int64_t offset;
while (true)
{
result = archive_read_data_block(ar.get(), &buf, &sz, &offset);

View File

@@ -55,7 +55,7 @@ Section::append(std::vector<std::string> const& lines)
val = "";
break;
}
if (val.at(comment - 1) == '\\')
else if (val.at(comment - 1) == '\\')
{
// we have an escaped comment char. Erase the escape char
// and keep looking
@@ -83,13 +83,9 @@ Section::append(std::vector<std::string> const& lines)
boost::smatch match;
if (boost::regex_match(line, match, re1))
{
set(match[1], match[2]);
}
else
{
values_.push_back(line);
}
lines_.push_back(std::move(line));
}

View File

@@ -417,13 +417,9 @@ public:
swap(holder_, sink);
if (holder_)
{
sink_ = *holder_;
}
else
{
sink_ = beast::Journal::getNullSink();
}
return sink;
}

View File

@@ -51,7 +51,7 @@ parseStatmRSSkB(std::string const& statm)
// /proc/self/statm format: size resident shared text lib data dt
// We want the second field (resident) which is in pages
std::istringstream iss(statm);
long size = 0, resident = 0;
long size, resident;
if (!(iss >> size >> resident))
return -1;

View File

@@ -66,12 +66,14 @@ concept UnsignedMantissa = std::is_unsigned_v<T> || std::is_same_v<T, uint128_t>
class Number::Guard
{
std::uint64_t digits_{0}; // 16 decimal guard digits
std::uint8_t xbit_ : 1 {0}; // has a non-zero digit been shifted off the end
std::uint8_t sbit_ : 1 {0}; // the sign of the guard digits
std::uint64_t digits_; // 16 decimal guard digits
std::uint8_t xbit_ : 1; // has a non-zero digit been shifted off the end
std::uint8_t sbit_ : 1; // the sign of the guard digits
public:
explicit Guard() = default;
explicit Guard() : digits_{0}, xbit_{0}, sbit_{0}
{
}
// set & test the sign bit
void
@@ -94,7 +96,7 @@ public:
// This enables the client to round towards nearest, and on
// tie, round towards even.
int
round() const noexcept;
round() noexcept;
// Modify the result to the correctly rounded value
template <UnsignedMantissa T>
@@ -114,7 +116,7 @@ public:
// Modify the result to the correctly rounded value
void
doRound(rep& drops, std::string location) const;
doRound(rep& drops, std::string location);
private:
void
@@ -171,7 +173,7 @@ Number::Guard::pop() noexcept
// 0 if Guard is exactly half
// 1 if Guard is greater than half
int
Number::Guard::round() const noexcept
Number::Guard::round() noexcept
{
auto mode = Number::getround();
@@ -256,7 +258,7 @@ Number::Guard::doRoundUp(
}
bringIntoRange(negative, mantissa, exponent, minMantissa);
if (exponent > maxExponent)
Throw<std::overflow_error>(std::string(location));
throw std::overflow_error(location);
}
template <UnsignedMantissa T>
@@ -282,7 +284,7 @@ Number::Guard::doRoundDown(
// Modify the result to the correctly rounded value
void
Number::Guard::doRound(rep& drops, std::string location) const
Number::Guard::doRound(rep& drops, std::string location)
{
auto r = round();
if (r == 1 || (r == 0 && (drops & 1) == 1))
@@ -296,7 +298,7 @@ Number::Guard::doRound(rep& drops, std::string location) const
// or "(maxRep + 1) / 10", neither of which will round up when
// converting to rep, though the latter might overflow _before_
// rounding.
Throw<std::overflow_error>(std::string(location)); // LCOV_EXCL_LINE
throw std::overflow_error(location); // LCOV_EXCL_LINE
}
++drops;
}
@@ -911,13 +913,9 @@ to_string(Number const& amount)
// Assemble the output:
if (pre_from == pre_to)
{
ret.append(1, '0');
}
else
{
ret.append(pre_from, pre_to);
}
if (post_to != post_from)
{

View File

@@ -35,6 +35,7 @@ namespace xrpl {
template <class Derived>
class AsyncObject
{
protected:
AsyncObject() : m_pending(0)
{
}
@@ -92,8 +93,6 @@ public:
private:
// The number of handlers pending.
std::atomic<int> m_pending;
friend Derived;
};
class ResolverAsioImpl : public ResolverAsio, public AsyncObject<ResolverAsioImpl>
@@ -109,7 +108,7 @@ public:
std::condition_variable m_cv;
std::mutex m_mut;
bool m_asyncHandlersCompleted{true};
bool m_asyncHandlersCompleted;
std::atomic<bool> m_stop_called;
std::atomic<bool> m_stopped;
@@ -136,6 +135,7 @@ public:
, m_io_context(io_context)
, m_strand(boost::asio::make_strand(io_context))
, m_resolver(io_context)
, m_asyncHandlersCompleted(true)
, m_stop_called(false)
, m_stopped(true)
{
@@ -374,7 +374,7 @@ public:
JLOG(m_journal.debug()) << "Queued new job with " << names.size() << " tasks. "
<< m_work.size() << " jobs outstanding.";
if (!m_work.empty())
if (m_work.size() > 0)
{
boost::asio::post(
m_io_context,

View File

@@ -103,7 +103,7 @@ trim_whitespace(std::string str)
std::optional<std::uint64_t>
to_uint64(std::string const& s)
{
std::uint64_t result = 0;
std::uint64_t result;
if (beast::lexicalCastChecked(result, s))
return result;
return std::nullopt;

View File

@@ -77,13 +77,13 @@ get_inverse()
}
/// Returns max chars needed to encode a base64 string
std::size_t constexpr encoded_size(std::size_t n)
inline std::size_t constexpr encoded_size(std::size_t n)
{
return 4 * ((n + 2) / 3);
}
/// Returns max bytes needed to decode a base64 string
-std::size_t constexpr decoded_size(std::size_t n)
+inline std::size_t constexpr decoded_size(std::size_t n)
{
return ((n / 4) * 3) + 2;
}
@@ -116,7 +116,6 @@ encode(void* dest, void const* src, std::size_t len)
in += 3;
}
-// NOLINTNEXTLINE(bugprone-switch-missing-default-case)
switch (len % 3)
{
case 2:

View File

@@ -239,7 +239,6 @@ initAuthenticated(
{
boost::system::error_code ec;
-// NOLINTNEXTLINE(bugprone-unused-return-value)
context.use_certificate_file(cert_file, boost::asio::ssl::context::pem, ec);
if (ec)
@@ -272,11 +271,9 @@ initAuthenticated(
if (!cert_set)
{
if (SSL_CTX_use_certificate(ssl, x) != 1)
-{
LogicError(
"Problem retrieving SSL certificate from chain "
"file.");
-}
cert_set = true;
}
@@ -301,7 +298,6 @@ initAuthenticated(
{
boost::system::error_code ec;
-// NOLINTNEXTLINE(bugprone-unused-return-value)
context.use_private_key_file(key_file, boost::asio::ssl::context::pem, ec);
if (ec)

View File

@@ -16,7 +16,7 @@ class seconds_clock_thread
{
using Clock = basic_seconds_clock::Clock;
-bool stop_{false};
+bool stop_;
std::mutex mut_;
std::condition_variable cv_;
std::thread thread_;
@@ -48,7 +48,8 @@ seconds_clock_thread::~seconds_clock_thread()
thread_.join();
}
-seconds_clock_thread::seconds_clock_thread() : tp_{Clock::now().time_since_epoch().count()}
+seconds_clock_thread::seconds_clock_thread()
+: stop_{false}, tp_{Clock::now().time_since_epoch().count()}
{
thread_ = std::thread(&seconds_clock_thread::run, this);
}

View File

@@ -29,7 +29,7 @@ print_identifiers(SemanticVersion::identifier_list const& list)
bool
isNumeric(std::string const& s)
{
-int n = 0;
+int n;
// Must be convertible to an integer
if (!lexicalCastChecked(n, s))
@@ -68,7 +68,7 @@ chopUInt(int& value, int limit, std::string& input)
if (item.empty())
return false;
-int n = 0;
+int n;
// Must be convertible to an integer
if (!lexicalCastChecked(n, item))
@@ -234,43 +234,27 @@ int
compare(SemanticVersion const& lhs, SemanticVersion const& rhs)
{
if (lhs.majorVersion > rhs.majorVersion)
-{
return 1;
-}
-if (lhs.majorVersion < rhs.majorVersion)
-{
+else if (lhs.majorVersion < rhs.majorVersion)
return -1;
-}
if (lhs.minorVersion > rhs.minorVersion)
-{
return 1;
-}
-if (lhs.minorVersion < rhs.minorVersion)
-{
+else if (lhs.minorVersion < rhs.minorVersion)
return -1;
-}
if (lhs.patchVersion > rhs.patchVersion)
-{
return 1;
-}
-if (lhs.patchVersion < rhs.patchVersion)
-{
+else if (lhs.patchVersion < rhs.patchVersion)
return -1;
-}
if (lhs.isPreRelease() || rhs.isPreRelease())
{
// Pre-releases have a lower precedence
if (lhs.isRelease() && rhs.isPreRelease())
-{
return 1;
-}
-if (lhs.isPreRelease() && rhs.isRelease())
-{
+else if (lhs.isPreRelease() && rhs.isRelease())
return -1;
-}
// Compare pre-release identifiers
for (int i = 0;
@@ -279,26 +263,18 @@ compare(SemanticVersion const& lhs, SemanticVersion const& rhs)
{
// A larger list of identifiers has a higher precedence
if (i >= rhs.preReleaseIdentifiers.size())
-{
return 1;
-}
-if (i >= lhs.preReleaseIdentifiers.size())
-{
+else if (i >= lhs.preReleaseIdentifiers.size())
return -1;
-}
std::string const& left(lhs.preReleaseIdentifiers[i]);
std::string const& right(rhs.preReleaseIdentifiers[i]);
// Numeric identifiers have lower precedence
if (!isNumeric(left) && isNumeric(right))
-{
return 1;
-}
-if (isNumeric(left) && !isNumeric(right))
-{
+else if (isNumeric(left) && !isNumeric(right))
return -1;
-}
if (isNumeric(left))
{
@@ -308,13 +284,9 @@ compare(SemanticVersion const& lhs, SemanticVersion const& rhs)
int const iRight(lexicalCastThrow<int>(right));
if (iLeft > iRight)
-{
return 1;
-}
-if (iLeft < iRight)
-{
+else if (iLeft < iRight)
return -1;
-}
}
else
{

View File

@@ -104,8 +104,8 @@ private:
std::shared_ptr<StatsDCollectorImp> m_impl;
std::string m_name;
-CounterImpl::value_type m_value{0};
-bool m_dirty{false};
+CounterImpl::value_type m_value;
+bool m_dirty;
};
//------------------------------------------------------------------------------
@@ -162,9 +162,9 @@ private:
std::shared_ptr<StatsDCollectorImp> m_impl;
std::string m_name;
-GaugeImpl::value_type m_last_value{0};
-GaugeImpl::value_type m_value{0};
-bool m_dirty{false};
+GaugeImpl::value_type m_last_value;
+GaugeImpl::value_type m_value;
+bool m_dirty;
};
//------------------------------------------------------------------------------
@@ -194,8 +194,8 @@ private:
std::shared_ptr<StatsDCollectorImp> m_impl;
std::string m_name;
-MeterImpl::value_type m_value{0};
-bool m_dirty{false};
+MeterImpl::value_type m_value;
+bool m_dirty;
};
//------------------------------------------------------------------------------
@@ -470,7 +470,6 @@ public:
m_io_context.run();
-// NOLINTNEXTLINE(bugprone-unused-return-value)
m_socket.shutdown(boost::asio::ip::udp::socket::shutdown_send, ec);
m_socket.close();
@@ -505,7 +504,7 @@ StatsDHookImpl::do_process()
StatsDCounterImpl::StatsDCounterImpl(
std::string const& name,
std::shared_ptr<StatsDCollectorImp> const& impl)
-: m_impl(impl), m_name(name)
+: m_impl(impl), m_name(name), m_value(0), m_dirty(false)
{
m_impl->add(*this);
}
@@ -587,7 +586,7 @@ StatsDEventImpl::do_notify(EventImpl::value_type const& value)
StatsDGaugeImpl::StatsDGaugeImpl(
std::string const& name,
std::shared_ptr<StatsDCollectorImp> const& impl)
-: m_impl(impl), m_name(name)
+: m_impl(impl), m_name(name), m_last_value(0), m_value(0), m_dirty(false)
{
m_impl->add(*this);
}
@@ -676,7 +675,7 @@ StatsDGaugeImpl::do_process()
StatsDMeterImpl::StatsDMeterImpl(
std::string const& name,
std::shared_ptr<StatsDCollectorImp> const& impl)
-: m_impl(impl), m_name(name)
+: m_impl(impl), m_name(name), m_value(0), m_dirty(false)
{
m_impl->add(*this);
}

View File

@@ -95,14 +95,10 @@ operator>>(std::istream& is, Endpoint& endpoint)
char i{0};
char readTo{0};
is.get(i);
-if (i == '[')
-{ // we are an IPv6 endpoint
+if (i == '[') // we are an IPv6 endpoint
readTo = ']';
-}
else
-{
addrStr += i;
-}
while (is && is.rdbuf()->in_avail() > 0 && is.get(i))
{
@@ -163,16 +159,14 @@ operator>>(std::istream& is, Endpoint& endpoint)
if (is.rdbuf()->in_avail() > 0)
{
-Port port = 0;
+Port port;
is >> port;
if (is.fail())
return is;
endpoint = Endpoint(addr, port);
}
else
-{
endpoint = Endpoint(addr);
-}
return is;
}

View File

@@ -124,13 +124,9 @@ Journal::ScopedStream::~ScopedStream()
if (!s.empty())
{
if (s == "\n")
-{
m_sink.write(m_level, "");
-}
else
-{
m_sink.write(m_level, s);
-}
}
}

View File

@@ -15,7 +15,7 @@ namespace beast {
//
//------------------------------------------------------------------------------
-PropertyStream::Item::Item(Source* source) : ListNode(), m_source(source)
+PropertyStream::Item::Item(Source* source) : m_source(source)
{
}
@@ -237,13 +237,9 @@ PropertyStream::Source::write(PropertyStream& stream, std::string const& path)
return;
if (result.second)
-{
result.first->write(stream);
-}
else
-{
result.first->write_one(stream);
-}
}
std::pair<PropertyStream::Source*, bool>
@@ -305,13 +301,9 @@ PropertyStream::Source::peel_name(std::string* path)
std::string s(first, pos);
if (pos != last)
-{
*path = std::string(pos + 1, last);
-}
else
-{
*path = std::string();
-}
return s;
}
@@ -379,13 +371,9 @@ void
PropertyStream::add(std::string const& key, bool value)
{
if (value)
-{
add(key, "true");
-}
else
-{
add(key, "false");
-}
}
void
@@ -476,13 +464,9 @@ void
PropertyStream::add(bool value)
{
if (value)
-{
add("true");
-}
else
-{
add("false");
-}
}
void

View File

@@ -331,7 +331,7 @@ JobQueue::finishJob(JobType type)
void
JobQueue::processTask(int instance)
{
-JobType type = jtINVALID;
+JobType type;
{
using namespace std::chrono;

View File

@@ -205,9 +205,11 @@ Workers::Worker::run()
// We got paused
break;
}
-// Undo our decrement
-++m_workers.m_pauseCount;
+else
+{
+// Undo our decrement
+++m_workers.m_pauseCount;
+}
}
// We couldn't pause so we must have gotten

View File

@@ -198,10 +198,10 @@ char const* RFC1751::s_dictionary[2048] = {
unsigned long
RFC1751::extract(char const* s, int start, int length)
{
-unsigned char cl = 0;
-unsigned char cc = 0;
-unsigned char cr = 0;
-unsigned long x = 0;
+unsigned char cl;
+unsigned char cc;
+unsigned char cr;
+unsigned long x;
XRPL_ASSERT(length <= 11, "xrpl::RFC1751::extract : maximum length");
XRPL_ASSERT(start >= 0, "xrpl::RFC1751::extract : minimum start");
@@ -226,7 +226,7 @@ void
RFC1751::btoe(std::string& strHuman, std::string const& strData)
{
char caBuffer[9]; /* add in room for the parity 2 bits*/
-int p = 0, i = 0;
+int p, i;
memcpy(caBuffer, strData.c_str(), 8);
@@ -245,11 +245,11 @@ RFC1751::btoe(std::string& strHuman, std::string const& strData)
void
RFC1751::insert(char* s, int x, int start, int length)
{
-unsigned char cl = 0;
-unsigned char cc = 0;
-unsigned char cr = 0;
-unsigned long y = 0;
-int shift = 0;
+unsigned char cl;
+unsigned char cc;
+unsigned char cr;
+unsigned long y;
+int shift;
XRPL_ASSERT(length <= 11, "xrpl::RFC1751::insert : maximum length");
XRPL_ASSERT(start >= 0, "xrpl::RFC1751::insert : minimum start");
@@ -285,21 +285,13 @@ RFC1751::standard(std::string& strWord)
for (auto& letter : strWord)
{
if (islower(static_cast<unsigned char>(letter)))
-{
letter = toupper(static_cast<unsigned char>(letter));
-}
else if (letter == '1')
-{
letter = 'L';
-}
else if (letter == '0')
-{
letter = 'O';
-}
else if (letter == '5')
-{
letter = 'S';
-}
}
}
@@ -344,7 +336,7 @@ RFC1751::etob(std::string& strData, std::vector<std::string> vsHuman)
if (6 != vsHuman.size())
return -1;
-int i = 0, p = 0;
+int i, p = 0;
char b[9] = {0};
for (auto& strWord : vsHuman)

View File

@@ -30,7 +30,7 @@ csprng_engine::~csprng_engine()
void
csprng_engine::mix_entropy(void* buffer, std::size_t count)
{
-std::array<std::random_device::result_type, 128> entropy{};
+std::array<std::random_device::result_type, 128> entropy;
{
// On every platform we support, std::random_device
@@ -71,7 +71,7 @@ csprng_engine::operator()(void* ptr, std::size_t count)
csprng_engine::result_type
csprng_engine::operator()()
{
-result_type ret = 0;
+result_type ret;
(*this)(&ret, sizeof(result_type));
return ret;
}

View File

@@ -25,7 +25,7 @@ std::map<char, char const*> jsonSpecialCharacterEscape = {
{'\r', "\\r"},
{'\t', "\\t"}};
-size_t const jsonEscapeLength = 2;
+static size_t const jsonEscapeLength = 2;
// All other JSON punctuation.
char const closeBrace = '}';
@@ -36,7 +36,7 @@ char const openBrace = '{';
char const openBracket = '[';
char const quote = '"';
-auto const integralFloatsBecomeInts = false;
+static auto const integralFloatsBecomeInts = false;
size_t
lengthWithoutTrailingZeros(std::string const& s)
@@ -137,13 +137,9 @@ public:
check(false, "Not an " + ((type == array ? "array: " : "object: ") + message));
}
if (stack_.top().isFirst)
-{
stack_.top().isFirst = false;
-}
else
-{
output_({&comma, 1});
-}
}
void
@@ -200,7 +196,7 @@ private:
explicit Collection() = default;
/** What type of collection are we in? */
-Writer::CollectionType type = Writer::CollectionType::array;
+Writer::CollectionType type;
/** Is this the first entry in a collection?
* If false, we have to emit a , before we write the next entry. */

View File

@@ -94,7 +94,7 @@ Reader::parse(char const* beginDoc, char const* endDoc, Value& root)
nodes_.push(&root);
bool successful = readValue(0);
-Token token{};
+Token token;
skipCommentTokens(token);
if (!root.isNull() && !root.isArray() && !root.isObject())
@@ -114,7 +114,7 @@ Reader::parse(char const* beginDoc, char const* endDoc, Value& root)
bool
Reader::readValue(unsigned depth)
{
-Token token{};
+Token token;
skipCommentTokens(token);
if (depth > nest_limit)
return addError("Syntax error: maximum nesting depth exceeded", token);
@@ -278,13 +278,9 @@ Reader::skipSpaces()
Char c = *current_;
if (c == ' ' || c == '\t' || c == '\r' || c == '\n')
-{
++current_;
-}
else
-{
break;
-}
}
}
@@ -297,10 +293,8 @@ Reader::match(Location pattern, int patternLength)
int index = patternLength;
while (index--)
-{
if (current_[index] != pattern[index])
return false;
-}
current_ += patternLength;
return true;
@@ -390,13 +384,9 @@ Reader::readString()
c = getNextChar();
if (c == '\\')
-{
getNextChar();
-}
else if (c == '"')
-{
break;
-}
}
return c == '"';
@@ -405,7 +395,7 @@ Reader::readString()
bool
Reader::readObject(Token& tokenStart, unsigned depth)
{
-Token tokenName{};
+Token tokenName;
std::string name;
currentValue() = Value(objectValue);
@@ -430,7 +420,7 @@ Reader::readObject(Token& tokenStart, unsigned depth)
if (!decodeString(tokenName, name))
return recoverFromError(tokenObjectEnd);
-Token colon{};
+Token colon;
if (!readToken(colon) || colon.type_ != tokenMemberSeparator)
{
@@ -450,7 +440,7 @@ Reader::readObject(Token& tokenStart, unsigned depth)
if (!ok) // error already set
return recoverFromError(tokenObjectEnd);
-Token comma{};
+Token comma;
if (!readToken(comma) ||
(comma.type_ != tokenObjectEnd && comma.type_ != tokenArraySeparator &&
@@ -480,7 +470,7 @@ Reader::readArray(Token& tokenStart, unsigned depth)
if (*current_ == ']') // empty array
{
-Token endArray{};
+Token endArray;
readToken(endArray);
return true;
}
@@ -497,7 +487,7 @@ Reader::readArray(Token& tokenStart, unsigned depth)
if (!ok) // error already set
return recoverFromError(tokenArrayEnd);
-Token token{};
+Token token;
// Accept Comment after last item in the array.
ok = readToken(token);
@@ -588,13 +578,9 @@ Reader::decodeNumber(Token& token)
// If it's representable as a signed integer, construct it as one.
if (value <= Value::maxInt)
-{
currentValue() = static_cast<Value::Int>(value);
-}
else
-{
currentValue() = static_cast<Value::UInt>(value);
-}
}
return true;
@@ -605,7 +591,7 @@ Reader::decodeDouble(Token& token)
{
double value = 0;
int const bufferSize = 32;
-int count = 0;
+int count;
int length = int(token.end_ - token.start_);
// Sanity check to avoid buffer overflow exploits.
if (length < 0)
@@ -660,10 +646,8 @@ Reader::decodeString(Token& token, std::string& decoded)
Char c = *current++;
if (c == '"')
-{
break;
-}
-if (c == '\\')
+else if (c == '\\')
{
if (current == end)
return addError("Empty escape sequence in string", token, current);
@@ -705,7 +689,7 @@ Reader::decodeString(Token& token, std::string& decoded)
break;
case 'u': {
-unsigned int unicode = 0;
+unsigned int unicode;
if (!decodeUnicodeCodePoint(token, current, end, unicode))
return false;
@@ -737,23 +721,19 @@ Reader::decodeUnicodeCodePoint(Token& token, Location& current, Location end, un
{
// surrogate pairs
if (end - current < 6)
-{
return addError(
"additional six characters expected to parse unicode surrogate "
"pair.",
token,
current);
-}
-unsigned int surrogatePair = 0;
+unsigned int surrogatePair;
if (*current != '\\' || *(current + 1) != 'u')
-{
return addError(
"expecting another \\u token to begin the second half of a unicode surrogate pair",
token,
current);
-}
current += 2; // skip two characters checked above
@@ -774,10 +754,8 @@ Reader::decodeUnicodeEscapeSequence(
unsigned int& unicode)
{
if (end - current < 4)
-{
return addError(
"Bad unicode escape sequence in string: four digits expected.", token, current);
-}
unicode = 0;
@@ -787,25 +765,17 @@ Reader::decodeUnicodeEscapeSequence(
unicode *= 16;
if (c >= '0' && c <= '9')
-{
unicode += c - '0';
-}
else if (c >= 'a' && c <= 'f')
-{
unicode += c - 'a' + 10;
-}
else if (c >= 'A' && c <= 'F')
-{
unicode += c - 'A' + 10;
-}
else
-{
return addError(
"Bad unicode escape sequence in string: hexadecimal digit "
"expected.",
token,
current);
-}
}
return true;
@@ -826,7 +796,7 @@ bool
Reader::recoverFromError(TokenType skipUntilToken)
{
int errorCount = int(errors_.size());
-Token skip{};
+Token skip;
while (true)
{
@@ -897,7 +867,7 @@ Reader::getLocationLineAndColumn(Location location, int& line, int& column) cons
std::string
Reader::getLocationLineAndColumn(Location location) const
{
-int line = 0, column = 0;
+int line, column;
getLocationLineAndColumn(location, line, column);
return "Line " + std::to_string(line) + ", Column " + std::to_string(column);
}

View File

@@ -98,11 +98,8 @@ Value::CZString::CZString(CZString const& other)
other.index_ != noDuplication && other.cstr_ != 0
? valueAllocator()->makeMemberName(other.cstr_)
: other.cstr_)
-, index_([&]() -> int {
-if (!other.cstr_)
-return other.index_;
-return other.index_ == noDuplication ? noDuplication : duplicate;
-}())
+, index_(
+other.cstr_ ? (other.index_ == noDuplication ? noDuplication : duplicate) : other.index_)
{
}
@@ -160,7 +157,7 @@ Value::CZString::isStaticString() const
* memset( this, 0, sizeof(Value) )
* This optimization is used in ValueInternalMap fast allocator.
*/
-Value::Value(ValueType type) : type_(type)
+Value::Value(ValueType type) : type_(type), allocated_(0)
{
switch (type)
{
@@ -228,7 +225,7 @@ Value::Value(std::string const& value) : type_(stringValue), allocated_(true)
valueAllocator()->duplicateStringValue(value.c_str(), (unsigned int)value.length());
}
-Value::Value(StaticString const& value) : type_(stringValue)
+Value::Value(StaticString const& value) : type_(stringValue), allocated_(false)
{
value_.string_ = const_cast<char*>(value.c_str());
}
@@ -257,9 +254,7 @@ Value::Value(Value const& other) : type_(other.type_)
allocated_ = true;
}
else
-{
value_.string_ = 0;
-}
break;
@@ -356,9 +351,7 @@ integerCmp(Int i, UInt ui)
return -1;
// Now we can safely compare.
-if (i < ui)
-return -1;
-return (i == ui) ? 0 : 1;
+return (i < ui) ? -1 : (i == ui) ? 0 : 1;
}
bool
@@ -367,13 +360,9 @@ operator<(Value const& x, Value const& y)
if (auto signum = x.type_ - y.type_)
{
if (x.type_ == intValue && y.type_ == uintValue)
-{
signum = integerCmp(x.value_.int_, y.value_.uint_);
-}
else if (x.type_ == uintValue && y.type_ == intValue)
-{
signum = -integerCmp(y.value_.int_, x.value_.uint_);
-}
return signum < 0;
}
@@ -707,7 +696,7 @@ Value::asBool() const
case arrayValue:
case objectValue:
-return !value_.map_->empty();
+return value_.map_->size() != 0;
// LCOV_EXCL_START
default:
@@ -754,10 +743,10 @@ Value::isConvertibleTo(ValueType other) const
(other == nullValue && (!value_.string_ || value_.string_[0] == 0));
case arrayValue:
-return other == arrayValue || (other == nullValue && value_.map_->empty());
+return other == arrayValue || (other == nullValue && value_.map_->size() == 0);
case objectValue:
-return other == objectValue || (other == nullValue && value_.map_->empty());
+return other == objectValue || (other == nullValue && value_.map_->size() == 0);
// LCOV_EXCL_START
default:

View File

@@ -13,7 +13,7 @@ namespace Json {
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
-ValueIteratorBase::ValueIteratorBase() : isNull_(true)
+ValueIteratorBase::ValueIteratorBase() : current_(), isNull_(true)
{
}

View File

@@ -304,9 +304,7 @@ StyledWriter::writeValue(Value const& value)
Value::Members members(value.getMemberNames());
if (members.empty())
-{
pushValue("{}");
-}
else
{
writeWithIndent("{");
@@ -341,9 +339,7 @@ StyledWriter::writeArrayValue(Value const& value)
unsigned size = value.size();
if (size == 0)
-{
pushValue("[]");
-}
else
{
bool isArrayMultiLine = isMultilineArray(value);
@@ -360,9 +356,7 @@ StyledWriter::writeArrayValue(Value const& value)
Value const& childValue = value[index];
if (hasChildValue)
-{
writeWithIndent(childValues_[index]);
-}
else
{
writeIndent();
@@ -435,13 +429,9 @@ void
StyledWriter::pushValue(std::string const& value)
{
if (addChildValues_)
-{
childValues_.push_back(value);
-}
else
-{
document_ += value;
-}
}
void
@@ -539,9 +529,7 @@ StyledStreamWriter::writeValue(Value const& value)
Value::Members members(value.getMemberNames());
if (members.empty())
-{
pushValue("{}");
-}
else
{
writeWithIndent("{");
@@ -576,9 +564,7 @@ StyledStreamWriter::writeArrayValue(Value const& value)
unsigned size = value.size();
if (size == 0)
-{
pushValue("[]");
-}
else
{
bool isArrayMultiLine = isMultilineArray(value);
@@ -595,9 +581,7 @@ StyledStreamWriter::writeArrayValue(Value const& value)
Value const& childValue = value[index];
if (hasChildValue)
-{
writeWithIndent(childValues_[index]);
-}
else
{
writeIndent();
@@ -670,13 +654,9 @@ void
StyledStreamWriter::pushValue(std::string const& value)
{
if (addChildValues_)
-{
childValues_.push_back(value);
-}
else
-{
*document_ << value;
-}
}
void

View File

@@ -107,7 +107,7 @@ ApplyStateTable::apply(
Mods newMod;
for (auto& item : items_)
{
-SField const* type = nullptr;
+SField const* type;
switch (item.second.first)
{
default:
@@ -169,11 +169,9 @@ ApplyStateTable::apply(
"xrpl::detail::ApplyStateTable::apply : valid nodes for "
"modification");
-if (curNode->isThreadedType(to.rules()))
-{ // thread transaction to node
-// item modified
+if (curNode->isThreadedType(to.rules())) // thread transaction to node
+// item modified
threadItem(meta, curNode);
-}
STObject prevs(sfPreviousFields);
for (auto const& obj : *origNode)
@@ -515,7 +513,7 @@ void
ApplyStateTable::threadItem(TxMeta& meta, std::shared_ptr<SLE> const& sle)
{
key_type prevTxID;
-LedgerIndex prevLgrID = 0;
+LedgerIndex prevLgrID;
if (!sle->thread(meta.getTxID(), meta.getLgrSeq(), prevTxID, prevLgrID))
return;

View File

@@ -38,9 +38,7 @@ BookListeners::publish(MultiApiJson const& jvObj, hash_set<std::uint64_t>& haveP
++it;
}
else
-{
it = mListeners.erase(it);
-}
}
}

View File

@@ -40,17 +40,11 @@ CachedViewImpl::read(Keylet const& k) const
// If the sle is null, then a failure must have occurred in base_.read()
XRPL_ASSERT(sle || baseRead, "xrpl::CachedView::read : null SLE result from base");
if (cacheHit && baseRead)
-{
hitsexpired.increment();
-}
else if (cacheHit)
-{
hits.increment();
-}
else
-{
misses.increment();
-}
if (!cacheHit)
{

View File

@@ -186,12 +186,10 @@ validDomain(ReadView const& view, uint256 domainID, AccountID const& subject)
foundExpired = true;
continue;
}
-if (sleCredential->getFlags() & lsfAccepted)
-{
+else if (sleCredential->getFlags() & lsfAccepted)
return tesSUCCESS;
-}
-continue;
+else
+continue;
}
}
@@ -199,7 +197,7 @@ validDomain(ReadView const& view, uint256 domainID, AccountID const& subject)
}
TER
-authorizedDepositPreauth(ReadView const& view, STVector256 const& credIDs, AccountID const& dst)
+authorizedDepositPreauth(ApplyView const& view, STVector256 const& credIDs, AccountID const& dst)
{
std::set<std::pair<AccountID, Slice>> sorted;
std::vector<std::shared_ptr<SLE const>> lifeExtender;
@@ -320,7 +318,7 @@ verifyDepositPreauth(
ApplyView& view,
AccountID const& src,
AccountID const& dst,
-std::shared_ptr<SLE const> const& sleDst,
+std::shared_ptr<SLE> const& sleDst,
beast::Journal j)
{
// If depositPreauth is enabled, then an account that requires
@@ -339,11 +337,9 @@ verifyDepositPreauth(
if (src != dst)
{
if (!view.exists(keylet::depositPreauth(dst, src)))
-{
return !credentialsPresent ? tecNO_PERMISSION
: credentials::authorizedDepositPreauth(
view, tx.getFieldV256(sfCredentialIDs), dst);
-}
}
}

View File

@@ -185,7 +185,7 @@ OpenView::txsEnd() const -> std::unique_ptr<txs_type::iter_base>
bool
OpenView::txExists(key_type const& key) const
{
-return txs_.contains(key);
+return txs_.find(key) != txs_.end();
}
auto
@@ -198,13 +198,9 @@ OpenView::txRead(key_type const& key) const -> tx_type
auto stx = std::make_shared<STTx const>(SerialIter{item.txn->slice()});
decltype(tx_type::second) sto;
if (item.meta)
-{
sto = std::make_shared<STObject const>(SerialIter{item.meta->slice()}, sfMetadata);
-}
else
-{
sto = nullptr;
-}
return {std::move(stx), std::move(sto)};
}

View File

@@ -11,11 +11,9 @@ auto
DeferredCredits::makeKey(AccountID const& a1, AccountID const& a2, Currency const& c) -> Key
{
if (a1 < a2)
-{
return std::make_tuple(a1, a2, c);
-}
-return std::make_tuple(a2, a1, c);
+else
+return std::make_tuple(a2, a1, c);
}
void
@@ -55,13 +53,9 @@ DeferredCredits::credit(
// only record the balance the first time, do not record it here
auto& v = i->second;
if (sender < receiver)
-{
v.highAcctCredits += amount;
-}
else
-{
v.lowAcctCredits += amount;
-}
}
}
@@ -107,9 +101,11 @@ DeferredCredits::adjustments(
result.emplace(v.highAcctCredits, v.lowAcctCredits, v.lowAcctOrigBalance);
return result;
}
-result.emplace(v.lowAcctCredits, v.highAcctCredits, -v.lowAcctOrigBalance);
-return result;
+else
+{
+result.emplace(v.lowAcctCredits, v.highAcctCredits, -v.lowAcctOrigBalance);
+return result;
+}
}
void
@@ -183,13 +179,11 @@ PaymentSandbox::balanceHook(
adjustedAmt.setIssuer(amount.getIssuer());
if (isXRP(issuer) && adjustedAmt < beast::zero)
-{
// A calculated negative XRP balance is not an error case. Consider a
// payment snippet that credits a large XRP amount and then debits the
// same amount. The credit can't be used but we subtract the debit and
// calculate a negative value. It's not an error case.
adjustedAmt.clear();
-}
return adjustedAmt;
}

View File

@@ -94,13 +94,9 @@ public:
dereference() const override
{
if (!sle1_)
-{
return sle0_;
-}
-if (!sle0_)
-{
+else if (!sle0_)
return sle1_;
-}
if (sle1_->key() <= sle0_->key())
return sle1_;
return sle0_;
@@ -112,13 +108,9 @@ private:
{
++iter0_;
if (iter0_ == end0_)
-{
sle0_ = nullptr;
-}
else
-{
sle0_ = *iter0_;
-}
}
void
@@ -126,13 +118,9 @@ private:
{
++iter1_;
if (iter1_ == end1_)
-{
sle1_ = nullptr;
-}
else
-{
sle1_ = iter1_->second.sle;
-}
}
void

View File

@@ -51,13 +51,9 @@ internalDirNext(
}
if constexpr (std::is_const_v<N>)
-{
page = view.read(keylet::page(root, next));
-}
else
-{
page = view.peek(keylet::page(root, next));
-}
XRPL_ASSERT(page, "xrpl::detail::internalDirNext : non-null root");
@@ -87,13 +83,9 @@ internalDirFirst(
uint256& entry)
{
if constexpr (std::is_const_v<N>)
-{
page = view.read(keylet::page(root));
-}
else
-{
page = view.peek(keylet::page(root));
-}
if (!page)
return false;
@@ -188,13 +180,9 @@ isGlobalFrozen(ReadView const& view, Asset const& asset)
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
-{
return isGlobalFrozen(view, issue.getIssuer());
-}
else
-{
return isGlobalFrozen(view, issue);
-}
},
asset.value());
}
@@ -393,7 +381,7 @@ getLineIfUsable(
{
return nullptr; // LCOV_EXCL_LINE
}
-if (sleIssuer->isFieldPresent(sfAMMID))
+else if (sleIssuer->isFieldPresent(sfAMMID))
{
auto const sleAmm = view.read(keylet::amm((*sleIssuer)[sfAMMID]));
@@ -469,11 +457,9 @@ accountHolds(
bool const returnSpendable = (includeFullBalance == shFULL_BALANCE);
if (returnSpendable && account == issuer)
-{
// If the account is the issuer, then their limit is effectively
// infinite
return STAmount{Issue{currency, issuer}, STAmount::cMaxValue, STAmount::cMaxOffset};
-}
// IOU: Return balance on trust line modulo freeze
SLE::const_pointer const sle =
@@ -528,13 +514,9 @@ accountHolds(
auto const sleMpt = view.read(keylet::mptoken(mptIssue.getMptID(), account));
if (!sleMpt)
{
amount.clear(mptIssue);
}
else if (zeroIfFrozen == fhZERO_IF_FROZEN && isFrozen(view, account, mptIssue))
{
amount.clear(mptIssue);
}
else
{
amount = STAmount{mptIssue, sleMpt->getFieldU64(sfMPTAmount)};
@@ -769,10 +751,8 @@ forEachItemAfter(
if (!ownerDir)
return true;
for (auto const& key : ownerDir->getFieldV256(sfIndexes))
-{
if (f(view.read(keylet::child(key))) && limit-- <= 1)
return true;
-}
auto const uNodeNext = ownerDir->getFieldU64(sfIndexNext);
if (uNodeNext == 0)
return true;
@@ -811,13 +791,9 @@ transferRate(ReadView const& view, STAmount const& amount)
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
-{
return transferRate(view, issue.getIssuer());
-}
else
-{
return transferRate(view, issue.getMptID());
-}
},
amount.asset().value());
}
@@ -1192,13 +1168,9 @@ canAddHolding(ReadView const& view, Issue const& issue)
auto const issuer = view.read(keylet::account(issue.getIssuer()));
if (!issuer)
-{
return terNO_ACCOUNT;
-}
-if (!issuer->isFlag(lsfDefaultRipple))
-{
+else if (!issuer->isFlag(lsfDefaultRipple))
return terNO_RIPPLE;
-}
return tesSUCCESS;
}
@@ -1349,7 +1321,7 @@ doWithdraw(
}
else
{
-auto dstSle = view.read(keylet::account(dstAcct));
+auto dstSle = view.peek(keylet::account(dstAcct));
if (auto err = verifyDepositPreauth(tx, view, senderAcct, dstAcct, dstSle, j))
return err;
}
@@ -1554,15 +1526,11 @@ authorizeMPToken(
// Issuer wants to unauthorize the holder, unset lsfMPTAuthorized on
// their MPToken
if (flags & tfMPTUnauthorize)
-{
flagsOut &= ~lsfMPTAuthorized;
-}
// Issuer wants to authorize a holder, set lsfMPTAuthorized on their
// MPToken
else
-{
flagsOut |= lsfMPTAuthorized;
-}
if (flagsIn != flagsOut)
sleMpt->setFieldU32(sfFlags, flagsOut);
@@ -2027,7 +1995,7 @@ rippleSendIOU(
{
// Direct send: redeeming IOUs and/or sending own IOUs.
auto const ter = rippleCreditIOU(view, uSenderID, uReceiverID, saAmount, false, j);
-if (!isTesSuccess(ter))
+if (ter != tesSUCCESS)
return ter;
saActual = saAmount;
return tesSUCCESS;
@@ -2351,13 +2319,15 @@ accountSendMultiIOU(
{
return TER{tecFAILED_PROCESSING};
}
+else
+{
-auto const sndBal = sender->getFieldAmount(sfBalance);
-view.creditHook(senderID, xrpAccount(), takeFromSender, sndBal);
+auto const sndBal = sender->getFieldAmount(sfBalance);
+view.creditHook(senderID, xrpAccount(), takeFromSender, sndBal);
-// Decrement XRP balance.
-sender->setFieldAmount(sfBalance, sndBal - takeFromSender);
-view.update(sender);
+// Decrement XRP balance.
+sender->setFieldAmount(sfBalance, sndBal - takeFromSender);
+view.update(sender);
+}
}
if (auto stream = j.trace())
@@ -2405,9 +2375,7 @@ rippleCreditMPT(
view.update(sle);
}
else
-{
return tecNO_AUTH;
-}
}
if (uReceiverID == issuer)
@@ -2420,9 +2388,7 @@ rippleCreditMPT(
view.update(sleIssuance);
}
else
-{
return tecINTERNAL; // LCOV_EXCL_LINE
-}
}
else
{
@@ -2433,9 +2399,7 @@ rippleCreditMPT(
view.update(sle);
}
else
-{
return tecNO_AUTH;
-}
}
return tesSUCCESS;
@@ -2475,7 +2439,7 @@ rippleSendMPT(
// Direct send: redeeming MPTs and/or sending own MPTs.
auto const ter = rippleCreditMPT(view, uSenderID, uReceiverID, saAmount, j);
-if (!isTesSuccess(ter))
+if (ter != tesSUCCESS)
return ter;
saActual = saAmount;
return tesSUCCESS;
@@ -2491,7 +2455,7 @@ rippleSendMPT(
<< " cost=" << saActual.getFullText();
if (auto const terResult = rippleCreditMPT(view, issuer, uReceiverID, saAmount, j);
-!isTesSuccess(terResult))
+terResult != tesSUCCESS)
return terResult;
return rippleCreditMPT(view, uSenderID, issuer, saActual, j);
@@ -2635,13 +2599,9 @@ accountSend(
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
-{
return accountSendIOU(view, uSenderID, uReceiverID, saAmount, j, waiveFee);
-}
else
-{
return accountSendMPT(view, uSenderID, uReceiverID, saAmount, j, waiveFee);
-}
},
saAmount.asset().value());
}
@@ -2660,13 +2620,9 @@ accountSendMulti(
return std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
-{
return accountSendMultiIOU(view, senderID, issue, receivers, j, waiveFee);
-}
else
-{
return accountSendMultiMPT(view, senderID, issue, receivers, j, waiveFee);
-}
},
asset.value());
}
@@ -2769,14 +2725,12 @@ issueIOU(
// time of deletion.
state->setFieldAmount(sfBalance, final_balance);
if (must_delete)
-{
return trustDelete(
view,
state,
bSenderHigh ? account : issue.account,
bSenderHigh ? issue.account : account,
j);
-}
view.update(state);
@@ -2944,11 +2898,9 @@ requireAuth(ReadView const& view, Issue const& issue, AccountID const& account,
issuerAccount && (*issuerAccount)[sfFlags] & lsfRequireAuth)
{
if (trustLine)
-{
return ((*trustLine)[sfFlags] & ((account > issue.account) ? lsfLowAuth : lsfHighAuth))
? tesSUCCESS
: TER{tecNO_AUTH};
-}
return TER{tecNO_LINE};
}
@@ -2996,13 +2948,9 @@ requireAuth(
if (auto const err = std::visit(
[&]<ValidIssueType TIss>(TIss const& issue) {
if constexpr (std::is_same_v<TIss, Issue>)
-{
return requireAuth(view, issue, account, authType);
-}
else
-{
return requireAuth(view, issue, account, authType, depth + 1);
-}
},
asset.value());
!isTesSuccess(err))
@@ -3026,15 +2974,11 @@ requireAuth(
sleIssuance->getFieldU32(sfFlags) & lsfMPTRequireAuth,
"xrpl::requireAuth : issuance requires authorization");
// ter = tefINTERNAL | tecOBJECT_NOT_FOUND | tecNO_AUTH | tecEXPIRED
-auto const ter = credentials::validDomain(view, *maybeDomainID, account);
-if (isTesSuccess(ter))
-{
+if (auto const ter = credentials::validDomain(view, *maybeDomainID, account);
+isTesSuccess(ter))
return ter; // Note: sleToken might be null
-}
-if (!sleToken)
-{
+else if (!sleToken)
return ter;
-}
// We ignore error from validDomain if we found sleToken, as it could
// belong to someone who is explicitly authorized e.g. a vault owner.
}
@@ -3101,14 +3045,14 @@ enforceMPTokenAuthorization(
// Either way, return tecNO_AUTH and there is nothing else to do
return expired ? tecEXPIRED : tecNO_AUTH;
}
if (!authorizedByDomain && maybeDomainID.has_value())
else if (!authorizedByDomain && maybeDomainID.has_value())
{
// Found an MPToken but the account is not authorized and we expect
// it to have been authorized by the domain. This could be because the
// credentials used to create the MPToken have expired or been deleted.
return expired ? tecEXPIRED : tecNO_AUTH;
}
if (!authorizedByDomain)
else if (!authorizedByDomain)
{
// We found an MPToken, but sfDomainID is not set, so this is a classic
// MPToken which requires authorization by the token issuer.
@@ -3120,7 +3064,7 @@ enforceMPTokenAuthorization(
return tecNO_AUTH;
}
if (authorizedByDomain && sleToken != nullptr)
else if (authorizedByDomain && sleToken != nullptr)
{
// Found an MPToken, authorized by the domain. Ignore authorization flag
// lsfMPTAuthorized because it is meaningless. Return tesSUCCESS
@@ -3129,7 +3073,7 @@ enforceMPTokenAuthorization(
"xrpl::enforceMPTokenAuthorization : found MPToken for domain");
return tesSUCCESS;
}
if (authorizedByDomain)
else if (authorizedByDomain)
{
// Could not find MPToken but there should be one because we are
// authorized by domain. Proceed to create it, then return tesSUCCESS
@@ -3246,7 +3190,7 @@ cleanupOnAccountDelete(
// Deleter handles the details of specific account-owned object
// deletion
auto const [ter, skipEntry] = deleter(nodeType, dirEntry, sleItem);
if (!isTesSuccess(ter))
if (ter != tesSUCCESS)
return ter;
// dirFirst() and dirNext() are like iterators with exposed
@@ -3315,7 +3259,7 @@ deleteAMMTrustLine(
if (ammAccountID && (low != *ammAccountID && high != *ammAccountID))
return terNO_AMM;
if (auto const ter = trustDelete(view, sleState, low, high, j); !isTesSuccess(ter))
if (auto const ter = trustDelete(view, sleState, low, high, j); ter != tesSUCCESS)
{
JLOG(j.error()) << "deleteAMMTrustLine: failed to delete the trustline.";
return ter;
@@ -3370,11 +3314,9 @@ assetsToSharesDeposit(
Number const assetTotal = vault->at(sfAssetsTotal);
STAmount shares{vault->at(sfShareMPTID)};
if (assetTotal == 0)
{
return STAmount{
shares.asset(),
Number(assets.mantissa(), assets.exponent() + vault->at(sfScale)).truncate()};
}
Number const shareTotal = issuance->at(sfOutstandingAmount);
shares = ((shareTotal * assets) / assetTotal).truncate();
@@ -3397,10 +3339,8 @@ sharesToAssetsDeposit(
Number const assetTotal = vault->at(sfAssetsTotal);
STAmount assets{vault->at(sfAsset)};
if (assetTotal == 0)
{
return STAmount{
assets.asset(), shares.mantissa(), shares.exponent() - vault->at(sfScale), false};
}
Number const shareTotal = issuance->at(sfOutstandingAmount);
assets = (assetTotal * shares) / shareTotal;
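The two deposit hunks above compute conversions proportionally: shares are `(shareTotal * assets) / assetTotal` truncated, and assets are `(assetTotal * shares) / shareTotal`. A minimal sketch of the shares direction, using plain `double` arithmetic instead of the codebase's `Number` type (the function name here is hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Illustrative only: the real code uses the xrpl Number type and STAmount.
// Proportional deposit: shares received for a given amount of assets,
// truncated toward zero as Number::truncate() does.
inline double
sharesForAssets(double assets, double assetTotal, double shareTotal)
{
    return std::trunc((shareTotal * assets) / assetTotal);
}
```

Truncation matters because partial shares are never issued; the depositor receives the floor of the exact proportional amount.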
@@ -3515,13 +3455,9 @@ rippleLockEscrowMPT(
} // LCOV_EXCL_STOP
if (sle->isFieldPresent(sfLockedAmount))
{
(*sle)[sfLockedAmount] += pay;
}
else
{
sle->setFieldU64(sfLockedAmount, pay);
}
view.update(sle);
}
@@ -3542,13 +3478,9 @@ rippleLockEscrowMPT(
} // LCOV_EXCL_STOP
if (sleIssuance->isFieldPresent(sfLockedAmount))
{
(*sleIssuance)[sfLockedAmount] += pay;
}
else
{
sleIssuance->setFieldU64(sfLockedAmount, pay);
}
view.update(sleIssuance);
}
@@ -3565,10 +3497,8 @@ rippleUnlockEscrowMPT(
beast::Journal j)
{
if (!view.rules().enabled(fixTokenEscrowV1))
{
XRPL_ASSERT(
netAmount == grossAmount, "xrpl::rippleUnlockEscrowMPT : netAmount == grossAmount");
}
auto const& issuer = netAmount.getIssuer();
auto const& mptIssue = netAmount.get<MPTIssue>();
@@ -3603,13 +3533,9 @@ rippleUnlockEscrowMPT(
auto const newLocked = locked - redeem;
if (newLocked == 0)
{
sleIssuance->makeFieldAbsent(sfLockedAmount);
}
else
{
sleIssuance->setFieldU64(sfLockedAmount, newLocked);
}
view.update(sleIssuance);
}
@@ -3662,44 +3588,42 @@ rippleUnlockEscrowMPT(
"cannot unlock MPTs.";
return tecINTERNAL;
} // LCOV_EXCL_STOP
// Decrease the MPT Holder EscrowedAmount
auto const mptokenID = keylet::mptoken(mptID.key, sender);
auto sle = view.peek(mptokenID);
if (!sle)
{ // LCOV_EXCL_START
JLOG(j.error()) << "rippleUnlockEscrowMPT: MPToken not found for " << sender;
return tecOBJECT_NOT_FOUND;
} // LCOV_EXCL_STOP
if (!sle->isFieldPresent(sfLockedAmount))
{ // LCOV_EXCL_START
JLOG(j.error()) << "rippleUnlockEscrowMPT: no locked amount in MPToken for "
<< to_string(sender);
return tecINTERNAL;
} // LCOV_EXCL_STOP
auto const locked = sle->getFieldU64(sfLockedAmount);
auto const delta = grossAmount.mpt().value();
// Underflow check for subtraction
if (!canSubtract(STAmount(mptIssue, locked), STAmount(mptIssue, delta)))
{ // LCOV_EXCL_START
JLOG(j.error()) << "rippleUnlockEscrowMPT: insufficient locked amount for "
<< to_string(sender) << ": " << locked << " < " << delta;
return tecINTERNAL;
} // LCOV_EXCL_STOP
auto const newLocked = locked - delta;
if (newLocked == 0)
{
sle->makeFieldAbsent(sfLockedAmount);
}
else
{
sle->setFieldU64(sfLockedAmount, newLocked);
// Decrease the MPT Holder EscrowedAmount
auto const mptokenID = keylet::mptoken(mptID.key, sender);
auto sle = view.peek(mptokenID);
if (!sle)
{ // LCOV_EXCL_START
JLOG(j.error()) << "rippleUnlockEscrowMPT: MPToken not found for " << sender;
return tecOBJECT_NOT_FOUND;
} // LCOV_EXCL_STOP
if (!sle->isFieldPresent(sfLockedAmount))
{ // LCOV_EXCL_START
JLOG(j.error()) << "rippleUnlockEscrowMPT: no locked amount in MPToken for "
<< to_string(sender);
return tecINTERNAL;
} // LCOV_EXCL_STOP
auto const locked = sle->getFieldU64(sfLockedAmount);
auto const delta = grossAmount.mpt().value();
// Underflow check for subtraction
if (!canSubtract(STAmount(mptIssue, locked), STAmount(mptIssue, delta)))
{ // LCOV_EXCL_START
JLOG(j.error()) << "rippleUnlockEscrowMPT: insufficient locked amount for "
<< to_string(sender) << ": " << locked << " < " << delta;
return tecINTERNAL;
} // LCOV_EXCL_STOP
auto const newLocked = locked - delta;
if (newLocked == 0)
sle->makeFieldAbsent(sfLockedAmount);
else
sle->setFieldU64(sfLockedAmount, newLocked);
view.update(sle);
}
view.update(sle);
// Note: The gross amount is the amount that was locked, the net
// amount is the amount that is being unlocked. The difference is the fee


@@ -26,12 +26,6 @@ HTTPClient::initializeSSLContext(
httpClientSSLContext.emplace(sslVerifyDir, sslVerifyFile, sslVerify, j);
}
void
HTTPClient::cleanupSSLContext()
{
httpClientSSLContext.reset();
}
//------------------------------------------------------------------------------
//
// Fetch a web page via http or https.
@@ -478,7 +472,7 @@ public:
private:
using pointer = std::shared_ptr<HTTPClient>;
bool mSSL{};
bool mSSL;
AutoSocket mSocket;
boost::asio::ip::tcp::resolver mResolver;
@@ -496,7 +490,7 @@ private:
std::string mBody;
unsigned short const mPort;
std::size_t const maxResponseSize_;
int mStatus{};
int mStatus;
std::function<void(boost::asio::streambuf& sb, std::string const& strHost)> mBuild;
std::function<
bool(boost::system::error_code const& ecResult, int iStatus, std::string const& strData)>
@@ -508,7 +502,7 @@ private:
boost::system::error_code mShutdown;
std::deque<std::string> mDeqSites;
std::chrono::seconds mTimeout{};
std::chrono::seconds mTimeout;
beast::Journal j_;
};
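The hunk above toggles members such as `bool mSSL{};` and `int mStatus{};` between brace-initialized and uninitialized forms. A short sketch of what the braces buy (type names here are illustrative, not from the codebase): an in-class brace initializer value-initializes the member, so it holds a deterministic zero even when no constructor assigns it.

```cpp
#include <cassert>

// Illustrative only: in-class brace initializers value-initialize members,
// so they are zeroed even when the constructor never mentions them.
struct WithBraces
{
    bool ssl{};   // guaranteed false
    int status{}; // guaranteed 0
};
```

Without the `{}`, default-constructed members of fundamental type hold indeterminate values until first assignment, which is why dropping the initializers is only safe if every constructor path assigns them.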


@@ -79,7 +79,6 @@ registerSSLCerts(boost::asio::ssl::context& ctx, boost::system::error_code& ec,
SSL_CTX_set_cert_store(ctx.native_handle(), store.release());
#else
// NOLINTNEXTLINE(bugprone-unused-return-value)
ctx.set_default_verify_paths(ec);
#endif
}


@@ -75,7 +75,7 @@ BatchWriter::writeBatch()
}
}
BatchWriteReport report{};
BatchWriteReport report;
report.writeCount = set.size();
auto const before = std::chrono::steady_clock::now();


@@ -29,7 +29,7 @@ DatabaseNodeImp::fetchNodeObject(
bool duplicate)
{
std::shared_ptr<NodeObject> nodeObject = nullptr;
Status status = ok;
Status status;
try
{


@@ -101,7 +101,7 @@ DatabaseRotatingImp::fetchNodeObject(
bool duplicate)
{
auto fetch = [&](std::shared_ptr<Backend> const& backend) {
Status status = ok;
Status status;
std::shared_ptr<NodeObject> nodeObject;
try
{


@@ -1,18 +0,0 @@
#include <xrpl/nodestore/Backend.h>
#include <algorithm>
#include <thread>
namespace xrpl {
namespace NodeStore {
// Initialize the static constant for hardware thread count. The `hardware_concurrency` function can
// return 0 on some platforms, in which case we default to 1. We limit the total number of threads
// to 8 to avoid contention.
unsigned int const Backend::numHardwareThreads = []() {
auto const hw = std::thread::hardware_concurrency();
return std::min(std::max(hw, 1u), 8u);
}();
} // namespace NodeStore
} // namespace xrpl
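The removed file above clamps `hardware_concurrency()` (which may legally return 0) to the range [1, 8]. A minimal standalone sketch of that clamping logic, with the input passed as a parameter so it can be exercised directly:

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the clamping shown above: hardware_concurrency() can return 0 on
// some platforms, so fall back to 1, and cap at 8 to limit contention.
inline unsigned int
clampThreads(unsigned int hw)
{
    return std::min(std::max(hw, 1u), 8u);
}
```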


@@ -142,13 +142,9 @@ public:
std::shared_ptr<NodeObject> nObj;
Status status = fetch(h, &nObj);
if (status != ok)
{
results.push_back({});
}
else
{
results.push_back(nObj);
}
}
return {results, ok};


@@ -7,21 +7,15 @@
#include <xrpl/nodestore/detail/EncodedBlob.h>
#include <xrpl/nodestore/detail/codec.h>
#include <boost/asio/post.hpp>
#include <boost/asio/thread_pool.hpp>
#include <boost/filesystem.hpp>
#include <nudb/nudb.hpp>
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <exception>
#include <latch>
#include <memory>
#include <thread>
#include <vector>
namespace xrpl {
namespace NodeStore {
@@ -43,7 +37,6 @@ public:
nudb::store db_;
std::atomic<bool> deletePath_;
Scheduler& scheduler_;
boost::asio::thread_pool threadPool_;
NuDBBackend(
size_t keyBytes,
@@ -58,7 +51,6 @@ public:
, blockSize_(parseBlockSize(name_, keyValues, journal))
, deletePath_(false)
, scheduler_(scheduler)
, threadPool_(numHardwareThreads)
{
if (name_.empty())
Throw<std::runtime_error>("nodestore: Missing path in NuDB backend");
@@ -79,7 +71,6 @@ public:
, db_(context)
, deletePath_(false)
, scheduler_(scheduler)
, threadPool_(numHardwareThreads)
{
if (name_.empty())
Throw<std::runtime_error>("nodestore: Missing path in NuDB backend");
@@ -190,10 +181,9 @@ public:
Status
fetch(uint256 const& hash, std::shared_ptr<NodeObject>* pno) override
{
Status status = ok;
Status status;
pno->reset();
nudb::error_code ec;
db_.fetch(
hash.data(),
[&hash, pno, &status](void const* data, std::size_t size) {
@@ -209,7 +199,6 @@ public:
status = ok;
},
ec);
if (ec == nudb::error::key_not_found)
return notFound;
if (ec)
@@ -220,66 +209,18 @@ public:
std::pair<std::vector<std::shared_ptr<NodeObject>>, Status>
fetchBatch(std::vector<uint256> const& hashes) override
{
std::vector<std::shared_ptr<NodeObject>> results(hashes.size());
// Determine the number of threads to use for data compression from the number of available
// cores and the size of the batch. We would like each thread to at least process 4 items,
// except for the last thread that might process fewer items.
auto const numThreads = std::min(
std::max(static_cast<unsigned int>(hashes.size()) / 4u, 1u), numHardwareThreads);
// If we need only one thread, just do it sequentially.
if (numThreads == 1u)
std::vector<std::shared_ptr<NodeObject>> results;
results.reserve(hashes.size());
for (auto const& h : hashes)
{
for (size_t i = 0; i < hashes.size(); ++i)
{
std::shared_ptr<NodeObject> nObj;
if (fetch(hashes[i], &nObj) == ok)
{
results[i] = nObj;
}
}
return {results, ok};
std::shared_ptr<NodeObject> nObj;
Status status = fetch(h, &nObj);
if (status != ok)
results.push_back({});
else
results.push_back(nObj);
}
// Use a latch to synchronize task completion.
std::latch taskCompletion(numThreads);
// Submit fetch tasks to the thread pool.
auto const itemsPerThread = (hashes.size() + numThreads - 1) / numThreads;
for (auto t = 0u; t < numThreads; ++t)
{
auto const startIdx = t * itemsPerThread;
XRPL_ASSERT(
startIdx < hashes.size(),
"xrpl::NuDBFactory::fetchBatch : startIdx < hashes.size()");
if (startIdx >= hashes.size())
{
taskCompletion.count_down();
continue;
}
auto const endIdx = std::min(startIdx + itemsPerThread, hashes.size());
auto task = [this, &hashes, &results, &taskCompletion, startIdx, endIdx]() {
// Fetch the items assigned to this task.
for (size_t i = startIdx; i < endIdx; ++i)
{
std::shared_ptr<NodeObject> nObj;
if (fetch(hashes[i], &nObj) == ok)
{
results[i] = nObj;
}
}
// Signal task completion.
taskCompletion.count_down();
};
boost::asio::post(threadPool_, std::move(task));
}
// Wait for all fetch tasks to complete.
taskCompletion.wait();
return {results, ok};
}
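The removed `fetchBatch` sizes its thread count so each thread handles at least 4 items, then partitions the batch with ceiling division so only the last range may come up short. A sketch of that sizing and partitioning (helper names are hypothetical, not from the codebase):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>

// Hypothetical helpers mirroring the removed partitioning logic.
// At least 4 items per thread, at least 1 thread, capped at the hardware limit.
inline unsigned int
threadsFor(std::size_t items, unsigned int hwThreads)
{
    return std::min(std::max(static_cast<unsigned int>(items) / 4u, 1u), hwThreads);
}

// Half-open [start, end) range for thread t; ceiling division means the last
// range may hold fewer items than the others.
inline std::pair<std::size_t, std::size_t>
rangeFor(unsigned int t, std::size_t items, unsigned int numThreads)
{
    std::size_t const perThread = (items + numThreads - 1) / numThreads;
    std::size_t const start = t * perThread;
    return {start, std::min(start + perThread, items)};
}
```

Note the removed code also asserts `startIdx < hashes.size()` and counts the latch down for any empty range, since `std::latch::wait()` blocks until every expected `count_down()` arrives.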
@@ -287,11 +228,9 @@ public:
do_insert(std::shared_ptr<NodeObject> const& no)
{
EncodedBlob e(no);
nudb::error_code ec;
nudb::detail::buffer bf;
auto const result = nodeobject_compress(e.getData(), e.getSize(), bf);
nudb::error_code ec;
db_.insert(e.getKey(), result.first, result.second, ec);
if (ec && ec != nudb::error::key_exists)
Throw<nudb::system_error>(ec);
@@ -300,22 +239,10 @@ public:
void
store(std::shared_ptr<NodeObject> const& no) override
{
BatchWriteReport report{};
BatchWriteReport report;
report.writeCount = 1;
auto const start = std::chrono::steady_clock::now();
++pendingWrites_;
try
{
do_insert(no);
}
catch (...)
{
--pendingWrites_;
throw;
}
--pendingWrites_;
do_insert(no);
report.elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start);
scheduler_.onBatchWrite(report);
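The removed `store()` bumps `pendingWrites_` and must decrement it on both the success and the exception path, hence the duplicated `--pendingWrites_` in the `catch`. The same invariant can be expressed with an RAII guard, sketched below (the class name is hypothetical; this is not how the codebase does it):

```cpp
#include <atomic>
#include <cassert>

// Hypothetical RAII guard: the decrement runs on every exit path, including
// stack unwinding after an exception, with no duplicated catch blocks.
class PendingGuard
{
    std::atomic<int>& counter_;
    int const n_;

public:
    PendingGuard(std::atomic<int>& counter, int n) : counter_(counter), n_(n)
    {
        counter_ += n_;
    }
    ~PendingGuard()
    {
        counter_ -= n_;
    }
    PendingGuard(PendingGuard const&) = delete;
    PendingGuard&
    operator=(PendingGuard const&) = delete;
};
```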
@@ -324,126 +251,11 @@ public:
void
storeBatch(Batch const& batch) override
{
BatchWriteReport report{};
BatchWriteReport report;
report.writeCount = batch.size();
auto const start = std::chrono::steady_clock::now();
pendingWrites_ += static_cast<int>(batch.size());
// Determine the number of threads to use for data compression from the number of available
// cores and the size of the batch. We would like each thread to at least process 4 items,
// except for the last thread that might process fewer items.
auto const numThreads = std::min(
std::max(static_cast<unsigned int>(batch.size()) / 4u, 1u), numHardwareThreads);
// If we need only one thread, just do it sequentially.
if (numThreads == 1u)
{
for (auto const& e : batch)
{
try
{
do_insert(e);
}
catch (...)
{
pendingWrites_ -= static_cast<int>(batch.size());
throw;
}
}
pendingWrites_ -= static_cast<int>(batch.size());
report.elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start);
scheduler_.onBatchWrite(report);
return;
}
// Helper struct that stores actual item data, not pointers, to avoid dangling references
// after EncodedBlob and buffer go out of scope in the thread.
struct CompressedData
{
std::vector<std::uint8_t> key;
std::vector<std::uint8_t> data;
std::exception_ptr eptr;
};
std::vector<CompressedData> compressed(batch.size());
// Use a latch to synchronize task completion.
std::latch taskCompletion(numThreads);
// Submit compression tasks to the thread pool.
auto const itemsPerThread = (batch.size() + numThreads - 1) / numThreads;
for (auto t = 0u; t < numThreads; ++t)
{
auto const startIdx = t * itemsPerThread;
XRPL_ASSERT(
startIdx < batch.size(), "xrpl::NuDBFactory::storeBatch : startIdx < batch.size()");
if (startIdx >= batch.size())
{
taskCompletion.count_down();
continue;
}
auto const endIdx = std::min(startIdx + itemsPerThread, batch.size());
auto task =
[&batch, &compressed, &taskCompletion, startIdx, endIdx, keyBytes = keyBytes_]() {
// Compress the items assigned to this task.
for (size_t i = startIdx; i < endIdx; ++i)
{
auto& item = compressed[i];
try
{
EncodedBlob e(batch[i]);
// Copy the key data to avoid dangling pointer.
auto const* keyPtr = static_cast<std::uint8_t const*>(e.getKey());
item.key.assign(keyPtr, keyPtr + keyBytes);
// Compress and copy the data to avoid dangling pointer.
nudb::detail::buffer bf;
auto const comp = nodeobject_compress(e.getData(), e.getSize(), bf);
auto const* dataPtr = static_cast<std::uint8_t const*>(comp.first);
item.data.assign(dataPtr, dataPtr + comp.second);
}
catch (...)
{
// Store the exception so it can be rethrown in the sequential phase
// below.
item.eptr = std::current_exception();
}
}
// Signal task completion.
taskCompletion.count_down();
};
boost::asio::post(threadPool_, std::move(task));
}
// Wait for all compression tasks to complete.
taskCompletion.wait();
// Insert the compressed data sequentially, since NuDB is designed as an append-only data
// store that only supports one writer.
for (auto const& item : compressed)
{
if (item.eptr)
{
pendingWrites_ -= static_cast<int>(batch.size());
std::rethrow_exception(item.eptr);
}
nudb::error_code ec;
db_.insert(item.key.data(), item.data.data(), item.data.size(), ec);
if (ec && ec != nudb::error::key_exists)
{
pendingWrites_ -= static_cast<int>(batch.size());
Throw<nudb::system_error>(ec);
}
}
pendingWrites_ -= static_cast<int>(batch.size());
for (auto const& e : batch)
do_insert(e);
report.elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start);
scheduler_.onBatchWrite(report);
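The removed `storeBatch` cannot let a compression task throw across the thread-pool boundary, so each task stashes failures in `item.eptr` and the sequential insert phase rethrows them on the caller's thread. A minimal sketch of that `std::exception_ptr` transport (the function is illustrative and runs synchronously; in the removed code the capture happened on a pool thread, but the mechanism is identical):

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>

// Sketch: capture an exception where it is thrown, carry it as an
// exception_ptr, and rethrow it later in a context that can handle it.
inline std::string
captureAndRethrow()
{
    std::exception_ptr eptr;
    try
    {
        throw std::runtime_error("compress failed");
    }
    catch (...)
    {
        eptr = std::current_exception(); // transportable across threads
    }

    try
    {
        if (eptr)
            std::rethrow_exception(eptr);
    }
    catch (std::runtime_error const& e)
    {
        return e.what();
    }
    return "ok";
}
```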
@@ -460,7 +272,7 @@ public:
auto const dp = db_.dat_path();
auto const kp = db_.key_path();
auto const lp = db_.log_path();
// auto const appnum = db_.appnum();
nudb::error_code ec;
db_.close(ec);
if (ec)
@@ -494,7 +306,7 @@ public:
int
getWriteLoad() override
{
return pendingWrites_.load();
return 0;
}
void
@@ -529,8 +341,6 @@ public:
}
private:
std::atomic<int> pendingWrites_{0};
static std::size_t
parseBlockSize(std::string const& name, Section const& keyValues, beast::Journal journal)
{


@@ -13,7 +13,6 @@
#include <atomic>
#include <memory>
#include <thread>
namespace xrpl {
namespace NodeStore {
@@ -167,10 +166,8 @@ public:
auto const s = rocksdb::GetBlockBasedTableOptionsFromString(
config_options, table_options, get(keyValues, "bbt_options"), &table_options);
if (!s.ok())
{
Throw<std::runtime_error>(
std::string("Unable to set RocksDB bbt_options: ") + s.ToString());
}
}
m_options.table_factory.reset(NewBlockBasedTableFactory(table_options));
@@ -180,47 +177,10 @@ public:
auto const s =
rocksdb::GetOptionsFromString(m_options, get(keyValues, "options"), &m_options);
if (!s.ok())
{
Throw<std::runtime_error>(
std::string("Unable to set RocksDB options: ") + s.ToString());
}
}
// Enable pipelined writes for better write concurrency.
m_options.enable_pipelined_write = true;
// Set background job parallelism for better compaction/flush performance to the number of
// hardware threads, unless the value is explicitly provided in the config. The default is
// 2 (see include/rocksdb/options.h in the Conan dependency directory), so don't use fewer
// than that if no value is explicitly provided.
if (keyValues.exists("max_background_jobs"))
{
m_options.max_background_jobs = get<unsigned int>(keyValues, "max_background_jobs");
}
else if (auto v = numHardwareThreads; v > 2)
{
m_options.max_background_jobs = v;
}
// Set subcompactions for parallel compaction within a job to the number of hardware
// threads, unless the value is explicitly provided in the config. The default is 1 (see
// include/rocksdb/options.h in the Conan dependency directory), so don't use fewer
// than that if no value is explicitly provided.
if (keyValues.exists("max_subcompactions"))
{
m_options.max_subcompactions = get<unsigned int>(keyValues, "max_subcompactions");
}
else if (auto v = numHardwareThreads / 2; v > 1)
{
m_options.max_subcompactions = v;
}
// Enable direct I/O by default unless explicitly disabled in the config. This bypasses the
// OS page cache for better predictable performance on SSDs.
m_options.use_direct_reads = get<bool>(keyValues, "use_direct_io", true);
m_options.use_direct_io_for_flush_and_compaction =
get<bool>(keyValues, "use_direct_io", true);
std::string s1, s2;
rocksdb::GetStringFromDBOptions(&s1, m_options, "; ");
rocksdb::GetStringFromColumnFamilyOptions(&s2, m_options, "; ");
@@ -250,10 +210,8 @@ public:
m_options.create_if_missing = createIfMissing;
rocksdb::Status status = rocksdb::DB::Open(m_options, m_name, &db);
if (!status.ok() || !db)
{
Throw<std::runtime_error>(
std::string("Unable to open/create RocksDB: ") + status.ToString());
}
m_db.reset(db);
}
@@ -295,19 +253,23 @@ public:
rocksdb::ReadOptions const options;
rocksdb::Slice const slice(std::bit_cast<char const*>(hash.data()), m_keyBytes);
std::string string;
rocksdb::Status getStatus = m_db->Get(options, slice, &string);
if (getStatus.ok())
{
DecodedBlob decoded(hash.data(), string.data(), string.size());
if (decoded.wasOk())
{
*pObject = decoded.createObject();
}
else
{
// Decoding failed, probably corrupted.
// Decoding failed, probably corrupted!
//
status = dataCorrupt;
}
}
@@ -324,6 +286,7 @@ public:
else
{
status = Status(customCode + unsafe_cast<int>(getStatus.code()));
JLOG(m_journal.error()) << getStatus.ToString();
}
}
@@ -334,43 +297,16 @@ public:
std::pair<std::vector<std::shared_ptr<NodeObject>>, Status>
fetchBatch(std::vector<uint256> const& hashes) override
{
XRPL_ASSERT(m_db, "xrpl::NodeStore::RocksDBBackend::fetchBatch : non-null database");
if (hashes.empty())
return {{}, ok};
// Use MultiGet for parallel reads to allow RocksDB to fetch multiple keys concurrently,
// significantly improving throughput compared to sequential fetch() calls.
std::vector<rocksdb::Slice> keys;
keys.reserve(hashes.size());
std::vector<std::shared_ptr<NodeObject>> results;
results.reserve(hashes.size());
for (auto const& h : hashes)
{
keys.emplace_back(std::bit_cast<char const*>(h.data()), m_keyBytes);
}
rocksdb::ReadOptions options;
options.async_io = true; // Enable for better concurrency on supported platforms.
std::vector<std::string> values(hashes.size());
auto statuses = m_db->MultiGet(options, keys, &values);
std::vector<std::shared_ptr<NodeObject>> results(hashes.size());
for (size_t i = 0; i < hashes.size(); ++i)
{
if (statuses[i].ok())
{
DecodedBlob decoded(hashes[i].data(), values[i].data(), values[i].size());
if (decoded.wasOk())
{
results[i] = decoded.createObject();
}
}
else if (!statuses[i].IsNotFound())
{
// Log other errors but continue processing.
JLOG(m_journal.warn()) << "fetchBatch: MultiGet error for key "
<< keys[i].ToString() << ": " << statuses[i].ToString();
}
std::shared_ptr<NodeObject> nObj;
Status status = fetch(h, &nObj);
if (status != ok)
results.push_back({});
else
results.push_back(nObj);
}
return {results, ok};
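Both `fetchBatch` variants above keep results positionally aligned with the input keys: the MultiGet path pre-sizes `results(hashes.size())` and assigns by index, leaving misses as null. A toy sketch of that alignment, with a `std::map` standing in for the backend (everything here is illustrative, not the RocksDB API):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Toy stand-in for a backend batch fetch: results line up index-for-index
// with the requested keys, and missing keys stay nullptr at their slot.
inline std::vector<std::shared_ptr<std::string>>
fetchBatchAligned(std::map<int, std::string> const& db, std::vector<int> const& keys)
{
    std::vector<std::shared_ptr<std::string>> results(keys.size());
    for (std::size_t i = 0; i < keys.size(); ++i)
    {
        if (auto const it = db.find(keys[i]); it != db.end())
            results[i] = std::make_shared<std::string>(it->second);
        // miss: results[i] remains nullptr
    }
    return results;
}
```

Pre-sizing by index (rather than `push_back`) is what allows a parallel or batched backend call to write each slot independently without reordering the output.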
@@ -385,7 +321,10 @@ public:
void
storeBatch(Batch const& batch) override
{
XRPL_ASSERT(m_db, "xrpl::NodeStore::RocksDBBackend::storeBatch : non-null database");
XRPL_ASSERT(
m_db,
"xrpl::NodeStore::RocksDBBackend::storeBatch : non-null "
"database");
rocksdb::WriteBatch wb;
for (auto const& e : batch)
@@ -397,27 +336,7 @@ public:
rocksdb::Slice(std::bit_cast<char const*>(encoded.getData()), encoded.getSize()));
}
// Configure WriteOptions for high throughput.
// Note: no_slowdown is intentionally NOT set here. When set to true, RocksDB returns an
// error instead of stalling when write buffers are full, which could cause write
// failures during high load. We prefer to accept brief stalls over dropped writes.
rocksdb::WriteOptions options;
// Setting `sync = false` improves write throughput significantly by allowing the OS to
// batch fsync operations, rather than forcing immediate disk synchronization on every
// write. The Write-Ahead Log (WAL) is still written and flushed, so database consistency is
// maintained across clean restarts and crashes.
//
// Note: On hard shutdown up to a few seconds of recent writes (since the last OS-initiated
// flush) may be lost from this node. However, since ledger data is replicated across
// the network, lost writes can be re-synced from peers during startup.
options.sync = false;
// Keep WAL enabled for crash recovery consistency.
options.disableWAL = false;
// Ensure RocksDB will not aggressively throttle the writes.
options.low_pri = false;
rocksdb::WriteOptions const options;
auto ret = m_db->Write(options, &wb);


@@ -3,7 +3,6 @@
#include <xrpl/beast/core/SemanticVersion.h>
#include <xrpl/git/Git.h>
#include <xrpl/protocol/BuildInfo.h>
#include <xrpl/protocol/SystemParameters.h>
#include <boost/preprocessor/stringize.hpp>
@@ -81,7 +80,7 @@ getVersionString()
std::string const&
getFullVersionString()
{
static std::string const value = systemName() + "-" + getVersionString();
static std::string const value = "rippled-" + getVersionString();
return value;
}

Some files were not shown because too many files have changed in this diff.