Compare commits

...

11 Commits

Author SHA1 Message Date
Jingchen
d5a3923228 Merge branch 'develop' into dangell/relay 2025-07-23 14:16:25 +01:00
Valentin Balaschenko
c233df720a refactor: Makes HashRouter flags more type-safe (#5371)
This change addresses issue #5336: Refactor HashRouter flags to be more type-safe.

* Switched the numeric flags to an enum type.
* Updated the unit tests.
2025-07-23 12:03:12 +00:00
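The pattern behind this change can be sketched as follows. This is an illustrative reconstruction, not the commit's actual code: an `enum class` replaces raw integers, so flag values can no longer be mixed with arbitrary ints, and the bitwise operators and the `any()` helper (both visible in the updated unit tests) are defined explicitly.

```cpp
#include <cassert>
#include <cstdint>
#include <type_traits>

// Illustrative sketch of a type-safe flag enum; names mirror those seen
// in the updated tests, but the values here are assumptions.
enum class HashRouterFlags : std::uint32_t {
    UNDEFINED = 0x00,
    BAD = 0x01,
    SAVED = 0x02,
    TRUSTED = 0x04,
};

constexpr HashRouterFlags
operator|(HashRouterFlags a, HashRouterFlags b)
{
    using U = std::underlying_type_t<HashRouterFlags>;
    return static_cast<HashRouterFlags>(static_cast<U>(a) | static_cast<U>(b));
}

constexpr HashRouterFlags
operator&(HashRouterFlags a, HashRouterFlags b)
{
    using U = std::underlying_type_t<HashRouterFlags>;
    return static_cast<HashRouterFlags>(static_cast<U>(a) & static_cast<U>(b));
}

// any(): true if any flag bit is set.
constexpr bool
any(HashRouterFlags f)
{
    return f != HashRouterFlags::UNDEFINED;
}
```

Because `enum class` values do not implicitly convert to integers, calls like `setFlags(key, 11111)` become compile errors, which is the type-safety the refactor is after.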
Bronek Kozicki
7ff4f79d30 Fix clang-format CI job (#5598)
For jobs running in containers, $GITHUB_WORKSPACE and ${{ github.workspace }} might not be the same directory. The actions/checkout step is supposed to check out into `$GITHUB_WORKSPACE` and then add it to safe.directory (see the instructions at https://github.com/actions/checkout), but that apparently does not happen for some container images. We can't be sure what is actually happening, so we preemptively add both directories to `safe.directory`. See also the GitHub issue opened in 2022 that has still not been resolved: https://github.com/actions/runner/issues/2058.
2025-07-23 10:44:18 +00:00
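The workaround amounts to the following two commands, shown here as a sketch (the second path is a stand-in for whatever `${{ github.workspace }}` expands to inside the container):

```shell
# Mark both candidate workspace paths as safe so git trusts the checkout
# regardless of which path the container image actually mounts.
git config --global --add safe.directory "$GITHUB_WORKSPACE"
git config --global --add safe.directory "/__w/rippled/rippled"  # stand-in for ${{ github.workspace }}
```

Adding a directory twice is harmless, so registering both paths is safe even when they turn out to be identical.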
Luc des Trois Maisons
60909655d3 Restructure beast::rngfill (#5563)
The current implementation of rngfill is prone to false warnings from GCC about array bounds violations. Looking at the code, the implementation naively manipulates both the bytes count and the buffer pointer directly to ensure the trailing memcpy doesn't overrun the buffer. As expressed, there is a data dependency on both fields between loop iterations.

Ideally, an optimizing compiler would realize that these dependencies are unnecessary and restructure its intermediate representation into a functionally equivalent form without them. However, that restructuring may happen at a different point than the warning analyses, potentially making those analyses less precise.

In addition, it may consume part of the budget the optimizer has allocated to improving a translation unit's performance. Since this is a function template that requires context-sensitive instantiation, the code is more prone than most to being inlined, with the remaining optimization budget reduced by the effort the optimizer has already expended on one or more calling functions. Thus, the scope for impacting the ultimate quality of the generated code is elevated.

For this change, we rearrange things so that the location and contents of each memcpy can be computed independently, relying on a simple loop iteration counter as the only changing input between iterations.
2025-07-22 11:42:43 -04:00
Bronek Kozicki
03e46cd026 Remove include(default) from libxrpl profile (#5587)
Remove `include(default)` from `conan/profiles/libxrpl`. This means we will now rely on compiler workarounds stored elsewhere, e.g. in global.conf.
2025-07-21 14:03:53 +00:00
Vito Tumas
e95683a0fb refactor: Change boost::shared_mutex to std::shared_mutex (#5576)
This change reverts the usage of boost::shared_mutex back to std::shared_mutex. The change was originally introduced as a workaround for a bug in glibc 2.28 and older versions, which could cause threads using std::shared_mutex to stall. This issue primarily affected Ubuntu 18.04 and earlier distributions, which we no longer support.
2025-07-21 13:14:22 +00:00
Jingchen
13353ae36d Fix macos runner (#5585)
This change fixes the macOS pipeline issue by restricting GitHub to the existing runners, ensuring the new experimental runners are excluded until they are ready.
2025-07-21 12:22:32 +00:00
Denis Angell
d4bfb4feec [temp] Optional Open Ledger 2025-07-19 21:58:59 +02:00
Denis Angell
7cffafb8ce relay transactions at earliest moment
- update TxQ::apply to minimize preflight/preclaim calls
- add read only open view for NetworkOPs::apply preflight & preclaim
- NetworkOPs::apply call preflight and preclaim, then relay, then apply
2025-07-19 10:12:25 +02:00
Chenna Keshava B S
1a40f18bdd Remove the type filter from "ledger" RPC command (#4934)
This issue was reported on the Javascript client library: XRPLF/xrpl.js#2611

The type filter does not work as expected (note: as of the latest version of rippled, the type parameter is deprecated). This PR removes the type filter from the ledger command.
2025-07-18 17:58:46 +00:00
Bart
90e6380383 refactor: Update date, libarchive, nudb, openssl, sqlite3, xxhash packages (#5567)
This PR updates several dependencies to their latest versions. Not all dependencies have been updated, as some need to be patched and some require additional code changes due to backward incompatibilities introduced by the version bump.
2025-07-18 16:55:15 +00:00
36 changed files with 662 additions and 354 deletions

View File

@@ -12,7 +12,6 @@ runs:
conan export --version 1.1.10 external/snappy
conan export --version 9.7.3 external/rocksdb
conan export --version 4.0.3 external/soci
conan export --version 2.0.8 external/nudb
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash

View File

@@ -11,6 +11,15 @@ jobs:
runs-on: ubuntu-24.04
container: ghcr.io/xrplf/ci/tools-rippled-clang-format
steps:
# For jobs running in containers, $GITHUB_WORKSPACE and ${{ github.workspace }} might not be the
# same directory. The actions/checkout step is *supposed* to checkout into $GITHUB_WORKSPACE and
# then add it to safe.directory (see instructions at https://github.com/actions/checkout)
# but that's apparently not happening for some container images. We can't be sure what is actually
# happening, so let's pre-emptively add both directories to safe.directory. There's a
# Github issue opened in 2022 and not resolved in 2025 https://github.com/actions/runner/issues/2058 ¯\_(ツ)_/¯
- run: |
git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory ${{ github.workspace }}
- uses: actions/checkout@v4
- name: Format first-party sources
run: |

View File

@@ -24,6 +24,8 @@ env:
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
core:default_build_profile=libxrpl
core:default_profile=libxrpl
tools.build:jobs={{ (os.cpu_count() * 4/5) | int }}
tools.build:verbosity=verbose
tools.compilation:verbosity=verbose
@@ -40,7 +42,7 @@ jobs:
- Ninja
configuration:
- Release
runs-on: [self-hosted, macOS]
runs-on: [self-hosted, macOS, mac-runner-m1]
env:
# The `build` action requires these variables.
build_dir: .build
@@ -87,7 +89,7 @@ jobs:
clang --version
- name: configure Conan
run : |
echo "${CONAN_GLOBAL_CONF}" > global.conf
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: export custom recipes
@@ -96,7 +98,6 @@ jobs:
conan export --version 1.1.10 external/snappy
conan export --version 9.7.3 external/rocksdb
conan export --version 4.0.3 external/soci
conan export --version 2.0.8 external/nudb
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash

View File

@@ -25,6 +25,8 @@ env:
CONAN_GLOBAL_CONF: |
core.download:parallel={{ os.cpu_count() }}
core.upload:parallel={{ os.cpu_count() }}
core:default_build_profile=libxrpl
core:default_profile=libxrpl
tools.build:jobs={{ (os.cpu_count() * 4/5) | int }}
tools.build:verbosity=verbose
tools.compilation:verbosity=verbose
@@ -91,7 +93,8 @@ jobs:
env | sort
- name: configure Conan
run: |
echo "${CONAN_GLOBAL_CONF}" >> ${CONAN_HOME}/global.conf
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: archive profile
# Create this archive before dependencies are added to the local cache.
@@ -164,6 +167,16 @@ jobs:
generator: Ninja
configuration: ${{ matrix.configuration }}
cmake-args: "-Dassert=TRUE -Dwerr=TRUE ${{ matrix.cmake-args }}"
- name: check linking
run: |
cd ${build_dir}
ldd ./rippled
if [ "$(ldd ./rippled | grep -E '(libstdc\+\+|libgcc)' | wc -l)" -eq 0 ]; then
echo 'The binary is statically linked.'
else
echo 'The binary is dynamically linked.'
exit 1
fi
- name: test
run: |
cd ${build_dir}
@@ -220,6 +233,7 @@ jobs:
cd ${build_dir}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
coverage:
strategy:
fail-fast: false
@@ -296,7 +310,6 @@ jobs:
attempt_limit: 5
attempt_delay: 210000 # in milliseconds
conan:
needs: dependencies
runs-on: [self-hosted, heavy]
@@ -313,7 +326,6 @@ jobs:
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
with:
name: ${{ env.platform }}-${{ env.compiler }}-${{ env.configuration }}
- name: extract cache
run: |
mkdir -p ${CONAN_HOME}
@@ -370,7 +382,8 @@ jobs:
- name: configure Conan
run: |
echo "${CONAN_GLOBAL_CONF}" >> ${CONAN_HOME}/global.conf
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: build dependencies
run: |

View File

@@ -27,6 +27,8 @@ env:
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
core:default_build_profile=libxrpl
core:default_profile=libxrpl
tools.build:jobs=24
tools.build:verbosity=verbose
tools.compilation:verbosity=verbose
@@ -82,8 +84,7 @@ jobs:
- name: configure Conan
shell: bash
run: |
echo "${CONAN_GLOBAL_CONF}" > global.conf
mv conan/profiles/libxrpl conan/profiles/default
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
- name: export custom recipes
@@ -92,7 +93,6 @@ jobs:
conan export --version 1.1.10 external/snappy
conan export --version 9.7.3 external/rocksdb
conan export --version 4.0.3 external/soci
conan export --version 2.0.8 external/nudb
- name: add Ripple Conan remote
if: env.CONAN_URL != ''
shell: bash

View File

@@ -167,8 +167,6 @@ It does not explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
```
# Conan 1.x
conan export external/snappy snappy/1.1.10@
# Conan 2.x
conan export --version 1.1.10 external/snappy
```
@@ -177,8 +175,6 @@ Export our [Conan recipe for RocksDB](./external/rocksdb).
It does not override paths to dependencies when building with Visual Studio.
```
# Conan 1.x
conan export external/rocksdb rocksdb/9.7.3@
# Conan 2.x
conan export --version 9.7.3 external/rocksdb
```
@@ -187,23 +183,10 @@ Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
```
# Conan 1.x
conan export external/soci soci/4.0.3@
# Conan 2.x
conan export --version 4.0.3 external/soci
```
Export our [Conan recipe for NuDB](./external/nudb).
It fixes some source files to add missing `#include`s.
```
# Conan 1.x
conan export external/nudb nudb/2.0.8@
# Conan 2.x
conan export --version 2.0.8 external/nudb
```
### Build and Test
1. Create a build directory and move into it.

View File

@@ -6,10 +6,6 @@
{% set compiler_version = detect_api.default_compiler_version(compiler, version) %}
{% endif %}
{% if os == "Linux" %}
include(default)
{% endif %}
[settings]
os={{ os }}
arch={{ arch }}

View File

@@ -25,9 +25,9 @@ class Xrpl(ConanFile):
requires = [
'grpc/1.50.1',
'libarchive/3.7.6',
'nudb/2.0.8',
'openssl/1.1.1v',
'libarchive/3.8.1',
'nudb/2.0.9',
'openssl/1.1.1w',
'soci/4.0.3',
'zlib/1.3.1',
]
@@ -37,7 +37,7 @@ class Xrpl(ConanFile):
]
tool_requires = [
'protobuf/3.21.9',
'protobuf/3.21.12',
]
default_options = {
@@ -105,15 +105,15 @@ class Xrpl(ConanFile):
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = {'transitive_headers': True} if conan_version.split('.')[0] == '2' else {}
self.requires('boost/1.83.0', force=True, **transitive_headers_opt)
self.requires('date/3.0.3', **transitive_headers_opt)
self.requires('date/3.0.4', **transitive_headers_opt)
self.requires('lz4/1.10.0', force=True)
self.requires('protobuf/3.21.9', force=True)
self.requires('sqlite3/3.47.0', force=True)
self.requires('protobuf/3.21.12', force=True)
self.requires('sqlite3/3.49.1', force=True)
if self.options.jemalloc:
self.requires('jemalloc/5.3.0')
if self.options.rocksdb:
self.requires('rocksdb/9.7.3')
self.requires('xxhash/0.8.2', **transitive_headers_opt)
self.requires('xxhash/0.8.3', **transitive_headers_opt)
exports_sources = (
'CMakeLists.txt',

View File

@@ -1,10 +0,0 @@
sources:
"2.0.8":
url: "https://github.com/CPPAlliance/NuDB/archive/2.0.8.tar.gz"
sha256: "9b71903d8ba111cd893ab064b9a8b6ac4124ed8bd6b4f67250205bc43c7f13a8"
patches:
"2.0.8":
- patch_file: "patches/2.0.8-0001-add-include-stdexcept-for-msvc.patch"
patch_description: "Fix build for MSVC by including stdexcept"
patch_type: "portability"
patch_source: "https://github.com/cppalliance/NuDB/pull/100/files"

View File

@@ -1,72 +0,0 @@
import os
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get
from conan.tools.layout import basic_layout
required_conan_version = ">=1.52.0"
class NudbConan(ConanFile):
name = "nudb"
description = "A fast key/value insert-only database for SSD drives in C++11"
license = "BSL-1.0"
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/CPPAlliance/NuDB"
topics = ("header-only", "KVS", "insert-only")
package_type = "header-library"
settings = "os", "arch", "compiler", "build_type"
no_copy_source = True
@property
def _min_cppstd(self):
return 11
def export_sources(self):
export_conandata_patches(self)
def layout(self):
basic_layout(self, src_folder="src")
def requirements(self):
self.requires("boost/1.83.0")
def package_id(self):
self.info.clear()
def validate(self):
if self.settings.compiler.cppstd:
check_min_cppstd(self, self._min_cppstd)
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def build(self):
apply_conandata_patches(self)
def package(self):
copy(self, "LICENSE*",
dst=os.path.join(self.package_folder, "licenses"),
src=self.source_folder)
copy(self, "*",
dst=os.path.join(self.package_folder, "include"),
src=os.path.join(self.source_folder, "include"))
def package_info(self):
self.cpp_info.bindirs = []
self.cpp_info.libdirs = []
self.cpp_info.set_property("cmake_target_name", "NuDB")
self.cpp_info.set_property("cmake_target_aliases", ["NuDB::nudb"])
self.cpp_info.set_property("cmake_find_mode", "both")
self.cpp_info.components["core"].set_property("cmake_target_name", "nudb")
self.cpp_info.components["core"].names["cmake_find_package"] = "nudb"
self.cpp_info.components["core"].names["cmake_find_package_multi"] = "nudb"
self.cpp_info.components["core"].requires = ["boost::thread", "boost::system"]
# TODO: to remove in conan v2 once cmake_find_package_* generators removed
self.cpp_info.names["cmake_find_package"] = "NuDB"
self.cpp_info.names["cmake_find_package_multi"] = "NuDB"

View File

@@ -1,24 +0,0 @@
diff --git a/include/nudb/detail/stream.hpp b/include/nudb/detail/stream.hpp
index 6c07bf1..e0ce8ed 100644
--- a/include/nudb/detail/stream.hpp
+++ b/include/nudb/detail/stream.hpp
@@ -14,6 +14,7 @@
#include <cstdint>
#include <cstring>
#include <memory>
+#include <stdexcept>
namespace nudb {
namespace detail {
diff --git a/include/nudb/impl/context.ipp b/include/nudb/impl/context.ipp
index beb7058..ffde0b3 100644
--- a/include/nudb/impl/context.ipp
+++ b/include/nudb/impl/context.ipp
@@ -9,6 +9,7 @@
#define NUDB_IMPL_CONTEXT_IPP
#include <nudb/detail/store_base.hpp>
+#include <stdexcept>
namespace nudb {

View File

@@ -31,38 +31,28 @@ namespace beast {
template <class Generator>
void
rngfill(void* buffer, std::size_t bytes, Generator& g)
rngfill(void* const buffer, std::size_t const bytes, Generator& g)
{
using result_type = typename Generator::result_type;
constexpr std::size_t result_size = sizeof(result_type);
while (bytes >= sizeof(result_type))
std::uint8_t* const buffer_start = static_cast<std::uint8_t*>(buffer);
std::size_t const complete_iterations = bytes / result_size;
std::size_t const bytes_remaining = bytes % result_size;
for (std::size_t count = 0; count < complete_iterations; ++count)
{
auto const v = g();
std::memcpy(buffer, &v, sizeof(v));
buffer = reinterpret_cast<std::uint8_t*>(buffer) + sizeof(v);
bytes -= sizeof(v);
result_type const v = g();
std::size_t const offset = count * result_size;
std::memcpy(buffer_start + offset, &v, result_size);
}
XRPL_ASSERT(
bytes < sizeof(result_type), "beast::rngfill(void*) : maximum bytes");
#ifdef __GNUC__
// gcc 11.1 (falsely) warns about an array-bounds overflow in release mode.
// gcc 12.1 (also falsely) warns about an string overflow in release mode.
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Warray-bounds"
#pragma GCC diagnostic ignored "-Wstringop-overflow"
#endif
if (bytes > 0)
if (bytes_remaining > 0)
{
auto const v = g();
std::memcpy(buffer, &v, bytes);
result_type const v = g();
std::size_t const offset = complete_iterations * result_size;
std::memcpy(buffer_start + offset, &v, bytes_remaining);
}
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif
}
template <

View File

@@ -169,6 +169,8 @@ enum warning_code_i {
warnRPC_AMENDMENT_BLOCKED = 1002,
warnRPC_EXPIRED_VALIDATOR_LIST = 1003,
// unused = 1004
warnRPC_FIELDS_DEPRECATED = 2004, // rippled needs to maintain
// compatibility with Clio on this code.
};
//------------------------------------------------------------------------------

View File

@@ -3652,14 +3652,18 @@ class Batch_test : public beast::unit_test::suite
{
// Submit a tx with tfInnerBatchTxn
uint256 const txBad = submitTx(tfInnerBatchTxn);
BEAST_EXPECT(env.app().getHashRouter().getFlags(txBad) == 0);
BEAST_EXPECT(
env.app().getHashRouter().getFlags(txBad) ==
HashRouterFlags::UNDEFINED);
}
// Validate: NetworkOPs::processTransaction()
{
uint256 const txid = processTxn(tfInnerBatchTxn);
// HashRouter::getFlags() should return SF_BAD
BEAST_EXPECT(env.app().getHashRouter().getFlags(txid) == SF_BAD);
// HashRouter::getFlags() should return LedgerFlags::BAD
BEAST_EXPECT(
env.app().getHashRouter().getFlags(txid) ==
HashRouterFlags::BAD);
}
}

View File

@@ -45,15 +45,19 @@ class HashRouter_test : public beast::unit_test::suite
TestStopwatch stopwatch;
HashRouter router(getSetup(2s, 1s), stopwatch);
uint256 const key1(1);
uint256 const key2(2);
uint256 const key3(3);
HashRouterFlags key1(HashRouterFlags::PRIVATE1);
HashRouterFlags key2(HashRouterFlags::PRIVATE2);
HashRouterFlags key3(HashRouterFlags::PRIVATE3);
auto const ukey1 = uint256{static_cast<std::uint64_t>(key1)};
auto const ukey2 = uint256{static_cast<std::uint64_t>(key2)};
auto const ukey3 = uint256{static_cast<std::uint64_t>(key3)};
// t=0
router.setFlags(key1, 11111);
BEAST_EXPECT(router.getFlags(key1) == 11111);
router.setFlags(key2, 22222);
BEAST_EXPECT(router.getFlags(key2) == 22222);
router.setFlags(ukey1, HashRouterFlags::PRIVATE1);
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::PRIVATE1);
router.setFlags(ukey2, HashRouterFlags::PRIVATE2);
BEAST_EXPECT(router.getFlags(ukey2) == HashRouterFlags::PRIVATE2);
// key1 : 0
// key2 : 0
// key3: null
@@ -62,7 +66,7 @@ class HashRouter_test : public beast::unit_test::suite
// Because we are accessing key1 here, it
// will NOT be expired for another two ticks
BEAST_EXPECT(router.getFlags(key1) == 11111);
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::PRIVATE1);
// key1 : 1
// key2 : 0
// key3 null
@@ -70,9 +74,9 @@ class HashRouter_test : public beast::unit_test::suite
++stopwatch;
// t=3
router.setFlags(key3, 33333); // force expiration
BEAST_EXPECT(router.getFlags(key1) == 11111);
BEAST_EXPECT(router.getFlags(key2) == 0);
router.setFlags(ukey3, HashRouterFlags::PRIVATE3); // force expiration
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::PRIVATE1);
BEAST_EXPECT(router.getFlags(ukey2) == HashRouterFlags::UNDEFINED);
}
void
@@ -83,15 +87,21 @@ class HashRouter_test : public beast::unit_test::suite
TestStopwatch stopwatch;
HashRouter router(getSetup(2s, 1s), stopwatch);
uint256 const key1(1);
uint256 const key2(2);
uint256 const key3(3);
uint256 const key4(4);
HashRouterFlags key1(HashRouterFlags::PRIVATE1);
HashRouterFlags key2(HashRouterFlags::PRIVATE2);
HashRouterFlags key3(HashRouterFlags::PRIVATE3);
HashRouterFlags key4(HashRouterFlags::PRIVATE4);
auto const ukey1 = uint256{static_cast<std::uint64_t>(key1)};
auto const ukey2 = uint256{static_cast<std::uint64_t>(key2)};
auto const ukey3 = uint256{static_cast<std::uint64_t>(key3)};
auto const ukey4 = uint256{static_cast<std::uint64_t>(key4)};
BEAST_EXPECT(key1 != key2 && key2 != key3 && key3 != key4);
// t=0
router.setFlags(key1, 12345);
BEAST_EXPECT(router.getFlags(key1) == 12345);
router.setFlags(ukey1, HashRouterFlags::BAD);
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::BAD);
// key1 : 0
// key2 : null
// key3 : null
@@ -103,26 +113,27 @@ class HashRouter_test : public beast::unit_test::suite
// so key1 will be expired after the second
// call to setFlags.
// t=1
router.setFlags(key2, 9999);
BEAST_EXPECT(router.getFlags(key1) == 12345);
BEAST_EXPECT(router.getFlags(key2) == 9999);
router.setFlags(ukey2, HashRouterFlags::PRIVATE5);
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::BAD);
BEAST_EXPECT(router.getFlags(ukey2) == HashRouterFlags::PRIVATE5);
// key1 : 1
// key2 : 1
// key3 : null
++stopwatch;
// t=2
BEAST_EXPECT(router.getFlags(key2) == 9999);
BEAST_EXPECT(router.getFlags(ukey2) == HashRouterFlags::PRIVATE5);
// key1 : 1
// key2 : 2
// key3 : null
++stopwatch;
// t=3
router.setFlags(key3, 2222);
BEAST_EXPECT(router.getFlags(key1) == 0);
BEAST_EXPECT(router.getFlags(key2) == 9999);
BEAST_EXPECT(router.getFlags(key3) == 2222);
router.setFlags(ukey3, HashRouterFlags::BAD);
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::UNDEFINED);
BEAST_EXPECT(router.getFlags(ukey2) == HashRouterFlags::PRIVATE5);
BEAST_EXPECT(router.getFlags(ukey3) == HashRouterFlags::BAD);
// key1 : 3
// key2 : 3
// key3 : 3
@@ -130,10 +141,10 @@ class HashRouter_test : public beast::unit_test::suite
++stopwatch;
// t=4
// No insertion, no expiration
router.setFlags(key1, 7654);
BEAST_EXPECT(router.getFlags(key1) == 7654);
BEAST_EXPECT(router.getFlags(key2) == 9999);
BEAST_EXPECT(router.getFlags(key3) == 2222);
router.setFlags(ukey1, HashRouterFlags::SAVED);
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::SAVED);
BEAST_EXPECT(router.getFlags(ukey2) == HashRouterFlags::PRIVATE5);
BEAST_EXPECT(router.getFlags(ukey3) == HashRouterFlags::BAD);
// key1 : 4
// key2 : 4
// key3 : 4
@@ -142,11 +153,11 @@ class HashRouter_test : public beast::unit_test::suite
++stopwatch;
// t=6
router.setFlags(key4, 7890);
BEAST_EXPECT(router.getFlags(key1) == 0);
BEAST_EXPECT(router.getFlags(key2) == 0);
BEAST_EXPECT(router.getFlags(key3) == 0);
BEAST_EXPECT(router.getFlags(key4) == 7890);
router.setFlags(ukey4, HashRouterFlags::TRUSTED);
BEAST_EXPECT(router.getFlags(ukey1) == HashRouterFlags::UNDEFINED);
BEAST_EXPECT(router.getFlags(ukey2) == HashRouterFlags::UNDEFINED);
BEAST_EXPECT(router.getFlags(ukey3) == HashRouterFlags::UNDEFINED);
BEAST_EXPECT(router.getFlags(ukey4) == HashRouterFlags::TRUSTED);
// key1 : 6
// key2 : 6
// key3 : 6
@@ -168,18 +179,18 @@ class HashRouter_test : public beast::unit_test::suite
uint256 const key4(4);
BEAST_EXPECT(key1 != key2 && key2 != key3 && key3 != key4);
int flags = 12345; // This value is ignored
HashRouterFlags flags(HashRouterFlags::BAD); // This value is ignored
router.addSuppression(key1);
BEAST_EXPECT(router.addSuppressionPeer(key2, 15));
BEAST_EXPECT(router.addSuppressionPeer(key3, 20, flags));
BEAST_EXPECT(flags == 0);
BEAST_EXPECT(flags == HashRouterFlags::UNDEFINED);
++stopwatch;
BEAST_EXPECT(!router.addSuppressionPeer(key1, 2));
BEAST_EXPECT(!router.addSuppressionPeer(key2, 3));
BEAST_EXPECT(!router.addSuppressionPeer(key3, 4, flags));
BEAST_EXPECT(flags == 0);
BEAST_EXPECT(flags == HashRouterFlags::UNDEFINED);
BEAST_EXPECT(router.addSuppressionPeer(key4, 5));
}
@@ -192,9 +203,9 @@ class HashRouter_test : public beast::unit_test::suite
HashRouter router(getSetup(2s, 1s), stopwatch);
uint256 const key1(1);
BEAST_EXPECT(router.setFlags(key1, 10));
BEAST_EXPECT(!router.setFlags(key1, 10));
BEAST_EXPECT(router.setFlags(key1, 20));
BEAST_EXPECT(router.setFlags(key1, HashRouterFlags::PRIVATE1));
BEAST_EXPECT(!router.setFlags(key1, HashRouterFlags::PRIVATE1));
BEAST_EXPECT(router.setFlags(key1, HashRouterFlags::PRIVATE2));
}
void
@@ -250,7 +261,7 @@ class HashRouter_test : public beast::unit_test::suite
HashRouter router(getSetup(5s, 1s), stopwatch);
uint256 const key(1);
HashRouter::PeerShortID peer = 1;
int flags;
HashRouterFlags flags;
BEAST_EXPECT(router.shouldProcess(key, peer, flags, 1s));
BEAST_EXPECT(!router.shouldProcess(key, peer, flags, 1s));
@@ -364,6 +375,39 @@ class HashRouter_test : public beast::unit_test::suite
}
}
void
testFlagsOps()
{
testcase("Bitwise Operations");
using HF = HashRouterFlags;
using UHF = std::underlying_type_t<HF>;
HF f1 = HF::BAD;
HF f2 = HF::SAVED;
HF combined = f1 | f2;
BEAST_EXPECT(
static_cast<UHF>(combined) ==
(static_cast<UHF>(f1) | static_cast<UHF>(f2)));
HF temp = f1;
temp |= f2;
BEAST_EXPECT(temp == combined);
HF intersect = combined & f1;
BEAST_EXPECT(intersect == f1);
HF temp2 = combined;
temp2 &= f1;
BEAST_EXPECT(temp2 == f1);
BEAST_EXPECT(any(f1));
BEAST_EXPECT(any(f2));
BEAST_EXPECT(any(combined));
BEAST_EXPECT(!any(HF::UNDEFINED));
}
public:
void
run() override
@@ -375,6 +419,7 @@ public:
testRelay();
testProcess();
testSetup();
testFlagsOps();
}
};

View File

@@ -0,0 +1,70 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2024 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <test/jtx.h>
namespace ripple {
namespace test {
struct Simple_test : public beast::unit_test::suite
{
void
testSimple(FeatureBitset features)
{
testcase("Simple");
using namespace test::jtx;
using namespace std::literals;
Env env{*this, features};
auto const alice = Account("alice");
auto const bob = Account("bob");
// env.fund(XRP(100'000), alice, bob);
env(pay(env.master, alice, XRP(1000)));
env.close();
// create open ledger with 1000 transactions
// for (int i = 0; i < 2500; ++i)
// env(pay(alice, bob, XRP(1)), fee(XRP(1)));
// env.close();
// {
// Json::Value params;
// params[jss::ledger_index] = env.current()->seq() - 1;
// params[jss::transactions] = true;
// params[jss::expand] = true;
// auto const jrr = env.rpc("json", "ledger", to_string(params));
// std::cout << jrr << std::endl;
// }
}
public:
void
run() override
{
using namespace test::jtx;
FeatureBitset const all{testable_amendments()};
testSimple(all);
}
};
BEAST_DEFINE_TESTSUITE(Simple, app, ripple);
} // namespace test
} // namespace ripple

View File

@@ -711,6 +711,7 @@ class LedgerRPC_test : public beast::unit_test::suite
env.close();
std::string index;
int hashesLedgerEntryIndex = -1;
{
Json::Value jvParams;
jvParams[jss::ledger_index] = 3u;
@@ -721,11 +722,27 @@ class LedgerRPC_test : public beast::unit_test::suite
env.rpc("json", "ledger", to_string(jvParams))[jss::result];
BEAST_EXPECT(jrr[jss::ledger].isMember(jss::accountState));
BEAST_EXPECT(jrr[jss::ledger][jss::accountState].isArray());
BEAST_EXPECT(jrr[jss::ledger][jss::accountState].size() == 1u);
for (auto i = 0; i < jrr[jss::ledger][jss::accountState].size();
i++)
if (jrr[jss::ledger][jss::accountState][i]["LedgerEntryType"] ==
jss::LedgerHashes)
{
index = jrr[jss::ledger][jss::accountState][i]["index"]
.asString();
hashesLedgerEntryIndex = i;
}
for (auto const& object : jrr[jss::ledger][jss::accountState])
if (object["LedgerEntryType"] == jss::LedgerHashes)
index = object["index"].asString();
// jss::type is a deprecated field
BEAST_EXPECT(
jrr[jss::ledger][jss::accountState][0u]["LedgerEntryType"] ==
jss::LedgerHashes);
index = jrr[jss::ledger][jss::accountState][0u]["index"].asString();
jrr.isMember(jss::warnings) && jrr[jss::warnings].isArray() &&
jrr[jss::warnings].size() == 1 &&
jrr[jss::warnings][0u][jss::id].asInt() ==
warnRPC_FIELDS_DEPRECATED);
}
{
Json::Value jvParams;
@@ -737,8 +754,17 @@ class LedgerRPC_test : public beast::unit_test::suite
env.rpc("json", "ledger", to_string(jvParams))[jss::result];
BEAST_EXPECT(jrr[jss::ledger].isMember(jss::accountState));
BEAST_EXPECT(jrr[jss::ledger][jss::accountState].isArray());
BEAST_EXPECT(jrr[jss::ledger][jss::accountState].size() == 1u);
BEAST_EXPECT(jrr[jss::ledger][jss::accountState][0u] == index);
BEAST_EXPECT(
hashesLedgerEntryIndex > 0 &&
jrr[jss::ledger][jss::accountState][hashesLedgerEntryIndex] ==
index);
// jss::type is a deprecated field
BEAST_EXPECT(
jrr.isMember(jss::warnings) && jrr[jss::warnings].isArray() &&
jrr[jss::warnings].size() == 1 &&
jrr[jss::warnings][0u][jss::id].asInt() ==
warnRPC_FIELDS_DEPRECATED);
}
}

View File

@@ -996,7 +996,8 @@ pendSaveValidated(
bool isSynchronous,
bool isCurrent)
{
if (!app.getHashRouter().setFlags(ledger->info().hash, SF_SAVED))
if (!app.getHashRouter().setFlags(
ledger->info().hash, HashRouterFlags::SAVED))
{
// We have tried to save this ledger recently
auto stream = app.journal("Ledger").debug();

View File

@@ -37,9 +37,8 @@ struct LedgerFill
ReadView const& l,
RPC::Context* ctx,
int o = 0,
std::vector<TxQ::TxDetails> q = {},
LedgerEntryType t = ltANY)
: ledger(l), options(o), txQueue(std::move(q)), type(t), context(ctx)
std::vector<TxQ::TxDetails> q = {})
: ledger(l), options(o), txQueue(std::move(q)), context(ctx)
{
if (context)
closeTime = context->ledgerMaster.getCloseTimeBySeq(ledger.seq());
@@ -58,7 +57,6 @@ struct LedgerFill
ReadView const& ledger;
int options;
std::vector<TxQ::TxDetails> txQueue;
LedgerEntryType type;
RPC::Context* context;
std::optional<NetClock::time_point> closeTime;
};

View File

@@ -114,6 +114,9 @@ public:
std::shared_ptr<OpenView const>
current() const;
std::shared_ptr<OpenView const>
read() const;
/** Modify the open ledger
Thread safety:
@@ -217,6 +220,9 @@ OpenLedger::apply(
ApplyFlags flags,
beast::Journal j)
{
if (view.isMock())
return;
for (auto iter = txs.begin(); iter != txs.end(); ++iter)
{
try

View File

@@ -268,19 +268,16 @@ fillJsonState(Object& json, LedgerFill const& fill)
for (auto const& sle : ledger.sles)
{
if (fill.type == ltANY || sle->getType() == fill.type)
if (binary)
{
if (binary)
{
auto&& obj = appendObject(array);
obj[jss::hash] = to_string(sle->key());
obj[jss::tx_blob] = serializeHex(*sle);
}
else if (expanded)
array.append(sle->getJson(JsonOptions::none));
else
array.append(to_string(sle->key()));
auto&& obj = appendObject(array);
obj[jss::hash] = to_string(sle->key());
obj[jss::tx_blob] = serializeHex(*sle);
}
else if (expanded)
array.append(sle->getJson(JsonOptions::none));
else
array.append(to_string(sle->key()));
}
}

View File

@@ -54,6 +54,15 @@ OpenLedger::current() const
return current_;
}
std::shared_ptr<OpenView const>
OpenLedger::read() const
{
std::lock_guard lock(current_mutex_);
// Create a copy of the current view for read-only access
// This snapshot won't change even if current_ is updated
return std::make_shared<OpenView const>(*current_);
}
bool
OpenLedger::modify(modify_type const& f)
{
@@ -88,10 +97,60 @@ OpenLedger::accept(
using empty = std::vector<std::shared_ptr<STTx const>>;
apply(app, *next, *ledger, empty{}, retries, flags, j_);
}
// Pre-apply local transactions and broadcast early if beneficial
std::vector<PreApplyResult> localPreApplyResults;
// Track which transactions we've already relayed
std::set<uint256> earlyRelayedTxs;
if (!locals.empty())
{
localPreApplyResults.reserve(locals.size());
// Use the next view as read-only for preApply (it's not being modified
// yet)
for (auto const& item : locals)
{
auto const result =
app.getTxQ().preApply(app, *next, item.second, flags, j_);
localPreApplyResults.push_back(result);
// Skip transactions that are not likely to claim fees
if (!result.pcresult.likelyToClaimFee)
continue;
auto const txId = item.second->getTransactionID();
// Skip batch transactions from relaying
if (!(item.second->isFlag(tfInnerBatchTxn) &&
rules.enabled(featureBatch)))
{
if (auto const toSkip = app.getHashRouter().shouldRelay(txId))
{
JLOG(j_.debug()) << "Early relaying local tx " << txId;
protocol::TMTransaction msg;
Serializer s;
item.second->add(s);
msg.set_rawtransaction(s.data(), s.size());
msg.set_status(protocol::tsCURRENT);
msg.set_receivetimestamp(
app.timeKeeper().now().time_since_epoch().count());
msg.set_deferred(result.pcresult.ter == terQUEUED);
app.overlay().relay(txId, msg, *toSkip);
// Track that we've already relayed this transaction
earlyRelayedTxs.insert(txId);
}
}
}
}
// Block calls to modify, otherwise
// new tx going into the open ledger
// would get lost.
std::lock_guard lock1(modify_mutex_);
// Apply tx from the current open view
if (!current_->txs.empty())
{
@@ -110,19 +169,36 @@ OpenLedger::accept(
flags,
j_);
}
// Call the modifier
if (f)
f(*next, j_);
- // Apply local tx
- for (auto const& item : locals)
- app.getTxQ().apply(app, *next, item.second, flags, j_);
- // If we didn't relay this transaction recently, relay it to all peers
+ // Apply local tx using pre-computed results
+ auto localIter = locals.begin();
+ for (size_t i = 0; i < localPreApplyResults.size(); ++i, ++localIter)
+ {
+ app.getTxQ().queueApply(
+ app,
+ *next,
+ localIter->second,
+ flags,
+ localPreApplyResults[i].pfresult,
+ j_);
+ }
+ // Relay transactions that weren't already broadcast early
+ // (This handles transactions that weren't likely to claim fees initially
+ // but succeeded, plus any transactions from current_->txs and retries)
for (auto const& txpair : next->txs)
{
auto const& tx = txpair.first;
auto const txId = tx->getTransactionID();
+ // Skip if we already relayed this transaction early
+ if (earlyRelayedTxs.find(txId) != earlyRelayedTxs.end())
+ continue;
// skip batch txns
// LCOV_EXCL_START
if (tx->isFlag(tfInnerBatchTxn) && rules.enabled(featureBatch))


@@ -65,7 +65,10 @@ HashRouter::addSuppressionPeerWithStatus(uint256 const& key, PeerShortID peer)
}
bool
- HashRouter::addSuppressionPeer(uint256 const& key, PeerShortID peer, int& flags)
+ HashRouter::addSuppressionPeer(
+ uint256 const& key,
+ PeerShortID peer,
+ HashRouterFlags& flags)
{
std::lock_guard lock(mutex_);
@@ -79,7 +82,7 @@ bool
HashRouter::shouldProcess(
uint256 const& key,
PeerShortID peer,
- int& flags,
+ HashRouterFlags& flags,
std::chrono::seconds tx_interval)
{
std::lock_guard lock(mutex_);
@@ -91,7 +94,7 @@ HashRouter::shouldProcess(
return s.shouldProcess(suppressionMap_.clock().now(), tx_interval);
}
- int
+ HashRouterFlags
HashRouter::getFlags(uint256 const& key)
{
std::lock_guard lock(mutex_);
@@ -100,9 +103,10 @@ HashRouter::getFlags(uint256 const& key)
}
bool
- HashRouter::setFlags(uint256 const& key, int flags)
+ HashRouter::setFlags(uint256 const& key, HashRouterFlags flags)
{
- XRPL_ASSERT(flags, "ripple::HashRouter::setFlags : valid input");
+ XRPL_ASSERT(
+ static_cast<bool>(flags), "ripple::HashRouter::setFlags : valid input");
std::lock_guard lock(mutex_);
std::lock_guard lock(mutex_);


@@ -31,20 +31,59 @@
namespace ripple {
- // TODO convert these macros to int constants or an enum
- #define SF_BAD 0x02 // Temporarily bad
- #define SF_SAVED 0x04
- #define SF_HELD 0x08 // Held by LedgerMaster after potential processing failure
- #define SF_TRUSTED 0x10 // comes from trusted source
- // Private flags, used internally in apply.cpp.
- // Do not attempt to read, set, or reuse.
- #define SF_PRIVATE1 0x0100
- #define SF_PRIVATE2 0x0200
- #define SF_PRIVATE3 0x0400
- #define SF_PRIVATE4 0x0800
- #define SF_PRIVATE5 0x1000
- #define SF_PRIVATE6 0x2000
+ enum class HashRouterFlags : std::uint16_t {
+ // Public flags
+ UNDEFINED = 0x00,
+ BAD = 0x02, // Temporarily bad
+ SAVED = 0x04,
+ HELD = 0x08, // Held by LedgerMaster after potential processing failure
+ TRUSTED = 0x10, // Comes from a trusted source
+ // Private flags (used internally in apply.cpp)
+ // Do not attempt to read, set, or reuse.
+ PRIVATE1 = 0x0100,
+ PRIVATE2 = 0x0200,
+ PRIVATE3 = 0x0400,
+ PRIVATE4 = 0x0800,
+ PRIVATE5 = 0x1000,
+ PRIVATE6 = 0x2000
+ };
constexpr HashRouterFlags
operator|(HashRouterFlags lhs, HashRouterFlags rhs)
{
return static_cast<HashRouterFlags>(
static_cast<std::underlying_type_t<HashRouterFlags>>(lhs) |
static_cast<std::underlying_type_t<HashRouterFlags>>(rhs));
}
constexpr HashRouterFlags&
operator|=(HashRouterFlags& lhs, HashRouterFlags rhs)
{
lhs = lhs | rhs;
return lhs;
}
constexpr HashRouterFlags
operator&(HashRouterFlags lhs, HashRouterFlags rhs)
{
return static_cast<HashRouterFlags>(
static_cast<std::underlying_type_t<HashRouterFlags>>(lhs) &
static_cast<std::underlying_type_t<HashRouterFlags>>(rhs));
}
constexpr HashRouterFlags&
operator&=(HashRouterFlags& lhs, HashRouterFlags rhs)
{
lhs = lhs & rhs;
return lhs;
}
constexpr bool
any(HashRouterFlags flags)
{
return static_cast<std::underlying_type_t<HashRouterFlags>>(flags) != 0;
}
class Config;
@@ -101,14 +140,14 @@ private:
peers_.insert(peer);
}
- int
+ HashRouterFlags
getFlags(void) const
{
return flags_;
}
void
- setFlags(int flagsToSet)
+ setFlags(HashRouterFlags flagsToSet)
{
flags_ |= flagsToSet;
}
@@ -154,7 +193,7 @@ private:
}
private:
- int flags_ = 0;
+ HashRouterFlags flags_ = HashRouterFlags::UNDEFINED;
std::set<PeerShortID> peers_;
// This could be generalized to a map, if more
// than one flag needs to expire independently.
@@ -190,14 +229,17 @@ public:
addSuppressionPeerWithStatus(uint256 const& key, PeerShortID peer);
bool
- addSuppressionPeer(uint256 const& key, PeerShortID peer, int& flags);
+ addSuppressionPeer(
+ uint256 const& key,
+ PeerShortID peer,
+ HashRouterFlags& flags);
// Add a peer suppression and return whether the entry should be processed
bool
shouldProcess(
uint256 const& key,
PeerShortID peer,
- int& flags,
+ HashRouterFlags& flags,
std::chrono::seconds tx_interval);
/** Set the flags on a hash.
@@ -205,9 +247,9 @@ public:
@return `true` if the flags were changed. `false` if unchanged.
*/
bool
- setFlags(uint256 const& key, int flags);
+ setFlags(uint256 const& key, HashRouterFlags flags);
- int
+ HashRouterFlags
getFlags(uint256 const& key);
/** Determines whether the hashed item should be relayed.


@@ -1207,7 +1207,7 @@ NetworkOPsImp::submitTransaction(std::shared_ptr<STTx const> const& iTrans)
auto const txid = trans->getTransactionID();
auto const flags = app_.getHashRouter().getFlags(txid);
- if ((flags & SF_BAD) != 0)
+ if ((flags & HashRouterFlags::BAD) != HashRouterFlags::UNDEFINED)
{
JLOG(m_journal.warn()) << "Submitted transaction cached bad";
return;
@@ -1251,7 +1251,7 @@ NetworkOPsImp::preProcessTransaction(std::shared_ptr<Transaction>& transaction)
{
auto const newFlags = app_.getHashRouter().getFlags(transaction->getID());
- if ((newFlags & SF_BAD) != 0)
+ if ((newFlags & HashRouterFlags::BAD) != HashRouterFlags::UNDEFINED)
{
// cached bad
JLOG(m_journal.warn()) << transaction->getID() << ": cached bad!\n";
@@ -1270,7 +1270,8 @@ NetworkOPsImp::preProcessTransaction(std::shared_ptr<Transaction>& transaction)
{
transaction->setStatus(INVALID);
transaction->setResult(temINVALID_FLAG);
- app_.getHashRouter().setFlags(transaction->getID(), SF_BAD);
+ app_.getHashRouter().setFlags(
+ transaction->getID(), HashRouterFlags::BAD);
return false;
}
@@ -1289,7 +1290,8 @@ NetworkOPsImp::preProcessTransaction(std::shared_ptr<Transaction>& transaction)
JLOG(m_journal.info()) << "Transaction has bad signature: " << reason;
transaction->setStatus(INVALID);
transaction->setResult(temBAD_SIGNATURE);
- app_.getHashRouter().setFlags(transaction->getID(), SF_BAD);
+ app_.getHashRouter().setFlags(
+ transaction->getID(), HashRouterFlags::BAD);
return false;
}
@@ -1412,7 +1414,8 @@ NetworkOPsImp::processTransactionSet(CanonicalTXSet const& set)
JLOG(m_journal.trace())
<< "Exception checking transaction: " << reason;
}
- app_.getHashRouter().setFlags(tx->getTransactionID(), SF_BAD);
+ app_.getHashRouter().setFlags(
+ tx->getTransactionID(), HashRouterFlags::BAD);
continue;
}
}
@@ -1491,15 +1494,75 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
{
std::unique_lock masterLock{app_.getMasterMutex(), std::defer_lock};
bool changed = false;
// Structure to hold preclaim results
std::vector<PreApplyResult> preapplyResults;
preapplyResults.reserve(transactions.size());
{
std::unique_lock ledgerLock{
m_ledgerMaster.peekMutex(), std::defer_lock};
std::lock(masterLock, ledgerLock);
- app_.openLedger().modify([&](OpenView& view, beast::Journal j) {
- for (TransactionStatus& e : transactions)
+ // Stage 1: Pre-apply and broadcast in single loop
+ auto newOL = app_.openLedger().read();
+ for (TransactionStatus& e : transactions)
{
// Use read-only view for preApply
auto const result = app_.getTxQ().preApply(
app_,
*newOL,
e.transaction->getSTransaction(),
e.failType == FailHard::yes ? tapFAIL_HARD : tapNONE,
m_journal);
preapplyResults.push_back(result);
// Immediately broadcast if transaction is likely to claim a fee
bool shouldBroadcast = result.pcresult.likelyToClaimFee;
// Check for hard failure
bool enforceFailHard =
(e.failType == FailHard::yes &&
!isTesSuccess(result.pcresult.ter));
if (shouldBroadcast && !enforceFailHard)
{
// we check before adding to the batch
auto const toSkip = app_.getHashRouter().shouldRelay(
e.transaction->getID());
if (auto const sttx = *(e.transaction->getSTransaction());
toSkip &&
// Skip relaying if it's an inner batch txn and batch
// feature is enabled
!(sttx.isFlag(tfInnerBatchTxn) &&
newOL->rules().enabled(featureBatch)))
{
protocol::TMTransaction tx;
Serializer s;
sttx.add(s);
tx.set_rawtransaction(s.data(), s.size());
tx.set_status(protocol::tsCURRENT);
tx.set_receivetimestamp(
app_.timeKeeper().now().time_since_epoch().count());
tx.set_deferred(result.pcresult.ter == terQUEUED);
app_.overlay().relay(
e.transaction->getID(), tx, *toSkip);
e.transaction->setBroadcast();
}
}
}
// Stage 2: Actually apply the transactions using pre-computed
// results
app_.openLedger().modify([&](OpenView& view, beast::Journal j) {
for (size_t i = 0; i < transactions.size(); ++i)
{
auto& e = transactions[i];
auto const& preResult = preapplyResults[i];
ApplyFlags flags = tapNONE;
if (e.admin)
flags |= tapUNLIMITED;
@@ -1507,8 +1570,15 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
if (e.failType == FailHard::yes)
flags |= tapFAIL_HARD;
- auto const result = app_.getTxQ().apply(
- app_, view, e.transaction->getSTransaction(), flags, j);
+ // Use the pre-computed results from Stage 1
+ auto const result = app_.getTxQ().queueApply(
+ app_,
+ view,
+ e.transaction->getSTransaction(),
+ flags,
+ preResult.pfresult,
+ j);
e.result = result.ter;
e.applied = result.applied;
changed = changed || result.applied;
@@ -1516,6 +1586,7 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
return changed;
});
}
if (changed)
reportFeeChange();
@@ -1524,6 +1595,8 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
validatedLedgerIndex = l->info().seq;
auto newOL = app_.openLedger().current();
// Process results (rest of the method remains the same)
for (TransactionStatus& e : transactions)
{
e.transaction->clearSubmitResult();
@@ -1538,7 +1611,8 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
e.transaction->setResult(e.result);
if (isTemMalformed(e.result))
- app_.getHashRouter().setFlags(e.transaction->getID(), SF_BAD);
+ app_.getHashRouter().setFlags(
+ e.transaction->getID(), HashRouterFlags::BAD);
#ifdef DEBUG
if (e.result != tesSUCCESS)
@@ -1626,7 +1700,8 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
// (5) ledgers into the future. (Remember that an
// unseated optional compares as less than all seated
// values, so it has to be checked explicitly first.)
- // 3. The SF_HELD flag is not set on the txID. (setFlags
+ // 3. The HashRouterFlags::HELD flag is not set on the txID.
+ // (setFlags
// checks before setting. If the flag is set, it returns
// false, which means it's been held once without one of
// the other conditions, so don't hold it again. Time's
@@ -1635,7 +1710,7 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
if (e.local ||
(ledgersLeft && ledgersLeft <= LocalTxs::holdLedgers) ||
app_.getHashRouter().setFlags(
- e.transaction->getID(), SF_HELD))
+ e.transaction->getID(), HashRouterFlags::HELD))
{
// transaction should be held
JLOG(m_journal.debug())
@@ -1672,36 +1747,6 @@ NetworkOPsImp::apply(std::unique_lock<std::mutex>& batchLock)
e.transaction->setKept();
}
- if ((e.applied ||
- ((mMode != OperatingMode::FULL) &&
- (e.failType != FailHard::yes) && e.local) ||
- (e.result == terQUEUED)) &&
- !enforceFailHard)
- {
- auto const toSkip =
- app_.getHashRouter().shouldRelay(e.transaction->getID());
- if (auto const sttx = *(e.transaction->getSTransaction());
- toSkip &&
- // Skip relaying if it's an inner batch txn and batch
- // feature is enabled
- !(sttx.isFlag(tfInnerBatchTxn) &&
- newOL->rules().enabled(featureBatch)))
- {
- protocol::TMTransaction tx;
- Serializer s;
- sttx.add(s);
- tx.set_rawtransaction(s.data(), s.size());
- tx.set_status(protocol::tsCURRENT);
- tx.set_receivetimestamp(
- app_.timeKeeper().now().time_since_epoch().count());
- tx.set_deferred(e.result == terQUEUED);
- // FIXME: This should be when we received it
- app_.overlay().relay(e.transaction->getID(), tx, *toSkip);
- e.transaction->setBroadcast();
- }
- }
if (validatedLedgerIndex)
{
auto [fee, accountSeq, availableSeq] =


@@ -36,6 +36,12 @@
namespace ripple {
struct PreApplyResult
{
PreflightResult pfresult;
PreclaimResult pcresult;
};
class Application;
class Config;
@@ -261,6 +267,30 @@ public:
/// Destructor
virtual ~TxQ();
/**
Prepares the transaction for application to the open ledger.
This is a preflight step that checks the transaction and
prepares it for application.
@return A `PreApplyResult` with the result of the preflight.
*/
PreApplyResult
preApply(
Application& app,
OpenView const& view,
std::shared_ptr<STTx const> const& tx,
ApplyFlags flags,
beast::Journal j);
ApplyResult
queueApply(
Application& app,
OpenView& view,
std::shared_ptr<STTx const> const& tx,
ApplyFlags flags,
PreflightResult const& pfresult,
beast::Journal j);
/**
Add a new transaction to the open ledger, hold it in the queue,
or reject it.


@@ -226,7 +226,7 @@ class ValidatorList
TimeKeeper& timeKeeper_;
boost::filesystem::path const dataPath_;
beast::Journal const j_;
- boost::shared_mutex mutable mutex_;
+ std::shared_mutex mutable mutex_;
using lock_guard = std::lock_guard<decltype(mutex_)>;
using shared_lock = std::shared_lock<decltype(mutex_)>;


@@ -726,12 +726,27 @@ TxQ::tryClearAccountQueueUpThruTx(
// b. The entire queue also has a (dynamic) maximum size. Transactions
// beyond that limit are rejected.
//
PreApplyResult
TxQ::preApply(
Application& app,
OpenView const& view,
std::shared_ptr<STTx const> const& tx,
ApplyFlags flags,
beast::Journal j)
{
PreflightResult const pfresult =
preflight(app, view.rules(), *tx, flags, j);
PreclaimResult const& pcresult = preclaim(pfresult, app, view);
return {pfresult, pcresult};
}
ApplyResult
- TxQ::apply(
+ TxQ::queueApply(
Application& app,
OpenView& view,
std::shared_ptr<STTx const> const& tx,
ApplyFlags flags,
+ PreflightResult const& pfresult,
beast::Journal j)
{
STAmountSO stAmountSO{view.rules().enabled(fixSTAmountCanonicalize)};
@@ -740,9 +755,6 @@ TxQ::apply(
// See if the transaction is valid, properly formed,
// etc. before doing potentially expensive queue
// replace and multi-transaction operations.
- auto const pfresult = preflight(app, view.rules(), *tx, flags, j);
- if (pfresult.ter != tesSUCCESS)
- return {pfresult.ter, false};
// See if the transaction paid a high enough fee that it can go straight
// into the ledger.
@@ -1350,6 +1362,24 @@ TxQ::apply(
return {terQUEUED, false};
}
ApplyResult
TxQ::apply(
Application& app,
OpenView& view,
std::shared_ptr<STTx const> const& tx,
ApplyFlags flags,
beast::Journal j)
{
// See if the transaction is valid, properly formed,
// etc. before doing potentially expensive queue
// replace and multi-transaction operations.
auto const pfresult = preflight(app, view.rules(), *tx, flags, j);
if (pfresult.ter != tesSUCCESS)
return {pfresult.ter, false};
return queueApply(app, view, tx, flags, pfresult, j);
}
/*
1. Update the fee metrics based on the fee levels of the
txs in the validated ledger and whether consensus is


@@ -34,13 +34,13 @@
#include <xrpl/protocol/TxFlags.h>
#include <xrpl/protocol/XRPAmount.h>
+ namespace ripple {
// During an EscrowFinish, the transaction must specify both
// a condition and a fulfillment. We track whether that
// fulfillment matches and validates the condition.
- #define SF_CF_INVALID SF_PRIVATE5
- #define SF_CF_VALID SF_PRIVATE6
- namespace ripple {
+ constexpr HashRouterFlags SF_CF_INVALID = HashRouterFlags::PRIVATE5;
+ constexpr HashRouterFlags SF_CF_VALID = HashRouterFlags::PRIVATE6;
/*
Escrow
@@ -663,7 +663,7 @@ EscrowFinish::preflight(PreflightContext const& ctx)
// If we haven't checked the condition, check it
// now. Whether it passes or not isn't important
// in preflight.
- if (!(flags & (SF_CF_INVALID | SF_CF_VALID)))
+ if (!any(flags & (SF_CF_INVALID | SF_CF_VALID)))
{
if (checkCondition(*fb, *cb))
router.setFlags(id, SF_CF_VALID);
@@ -1064,7 +1064,7 @@ EscrowFinish::doApply()
// It's unlikely that the results of the check will
// expire from the hash router, but if it happens,
// simply re-run the check.
- if (cb && !(flags & (SF_CF_INVALID | SF_CF_VALID)))
+ if (cb && !any(flags & (SF_CF_INVALID | SF_CF_VALID)))
{
auto const fb = ctx_.tx[~sfFulfillment];
@@ -1081,7 +1081,7 @@ EscrowFinish::doApply()
// If the check failed, then simply return an error
// and don't look at anything else.
- if (flags & SF_CF_INVALID)
+ if (any(flags & SF_CF_INVALID))
return tecCRYPTOCONDITION_ERROR;
// Check against condition in the ledger entry:


@@ -27,11 +27,16 @@
namespace ripple {
- // These are the same flags defined as SF_PRIVATE1-4 in HashRouter.h
- #define SF_SIGBAD SF_PRIVATE1 // Signature is bad
- #define SF_SIGGOOD SF_PRIVATE2 // Signature is good
- #define SF_LOCALBAD SF_PRIVATE3 // Local checks failed
- #define SF_LOCALGOOD SF_PRIVATE4 // Local checks passed
+ // These are the same flags defined as HashRouterFlags::PRIVATE1-4 in
+ // HashRouter.h
+ constexpr HashRouterFlags SF_SIGBAD =
+ HashRouterFlags::PRIVATE1; // Signature is bad
+ constexpr HashRouterFlags SF_SIGGOOD =
+ HashRouterFlags::PRIVATE2; // Signature is good
+ constexpr HashRouterFlags SF_LOCALBAD =
+ HashRouterFlags::PRIVATE3; // Local checks failed
+ constexpr HashRouterFlags SF_LOCALGOOD =
+ HashRouterFlags::PRIVATE4; // Local checks passed
//------------------------------------------------------------------------------
@@ -66,11 +71,11 @@ checkValidity(
return {Validity::Valid, ""};
}
- if (flags & SF_SIGBAD)
+ if (any(flags & SF_SIGBAD))
// Signature is known bad
return {Validity::SigBad, "Transaction has bad signature."};
- if (!(flags & SF_SIGGOOD))
+ if (!any(flags & SF_SIGGOOD))
{
// Don't know signature state. Check it.
auto const requireCanonicalSig =
@@ -88,12 +93,12 @@ checkValidity(
}
// Signature is now known good
- if (flags & SF_LOCALBAD)
+ if (any(flags & SF_LOCALBAD))
// ...but the local checks
// are known bad.
return {Validity::SigGoodOnly, "Local checks failed."};
- if (flags & SF_LOCALGOOD)
+ if (any(flags & SF_LOCALGOOD))
// ...and the local checks
// are known good.
return {Validity::Valid, ""};
@@ -112,7 +117,7 @@ checkValidity(
void
forceValidity(HashRouter& router, uint256 const& txid, Validity validity)
{
- int flags = 0;
+ HashRouterFlags flags = HashRouterFlags::UNDEFINED;
switch (validity)
{
case Validity::Valid:
@@ -125,7 +130,7 @@ forceValidity(HashRouter& router, uint256 const& txid, Validity validity)
// would be silly to call directly
break;
}
- if (flags)
+ if (any(flags))
router.setFlags(txid, flags);
}


@@ -111,6 +111,7 @@ private:
std::size_t baseTxCount_ = 0;
bool open_ = true;
bool mock_ = true;
public:
OpenView() = delete;
@@ -187,6 +188,12 @@ public:
*/
OpenView(ReadView const* base, std::shared_ptr<void const> hold = nullptr);
bool
isMock() const
{
return mock_;
}
/** Returns true if this reflects an open ledger. */
bool
open() const override


@@ -21,6 +21,8 @@
#include <xrpl/basics/contract.h>
#include <iostream>
namespace ripple {
class OpenView::txs_iter_impl : public txs_type::iter_base
@@ -77,15 +79,32 @@ public:
OpenView::OpenView(OpenView const& rhs)
: ReadView(rhs)
, TxsRawView(rhs)
- , monotonic_resource_{std::make_unique<
- boost::container::pmr::monotonic_buffer_resource>(initialBufferSize)}
- , txs_{rhs.txs_, monotonic_resource_.get()}
, rules_{rhs.rules_}
, info_{rhs.info_}
, base_{rhs.base_}
, items_{rhs.items_}
, hold_{rhs.hold_}
- , open_{rhs.open_} {};
+ , baseTxCount_{rhs.baseTxCount_}
+ , open_{rhs.open_}
+ , mock_{rhs.mock_}
+ {
+ // Calculate optimal buffer size based on source data
+ size_t estimatedNeeds =
+ rhs.txs_.size() * 300;  // rough estimate: 300 bytes per tx entry
+ size_t bufferSize =
+ std::max(initialBufferSize, estimatedNeeds * 3 / 2);  // 50% headroom
+ // std::cout << "[OpenView Memory] Copy constructor - Source has "
+ //           << rhs.txs_.size() << " txs"
+ //           << ", estimated needs: " << estimatedNeeds / 1024 << "KB"
+ //           << ", allocating: " << bufferSize / 1024 << "KB" << std::endl;
+ monotonic_resource_ =
+ std::make_unique<boost::container::pmr::monotonic_buffer_resource>(
+ bufferSize);
+ txs_ = txs_map{rhs.txs_, monotonic_resource_.get()};
+ }
OpenView::OpenView(
open_ledger_t,


@@ -1296,13 +1296,13 @@ PeerImp::handleTransaction(
}
// LCOV_EXCL_STOP
- int flags;
+ HashRouterFlags flags;
constexpr std::chrono::seconds tx_interval = 10s;
if (!app_.getHashRouter().shouldProcess(txID, id_, flags, tx_interval))
{
// we have seen this transaction recently
- if (flags & SF_BAD)
+ if (any(flags & HashRouterFlags::BAD))
{
fee_.update(Resource::feeUselessData, "known bad");
JLOG(p_journal_.debug()) << "Ignoring known bad tx " << txID;
@@ -1329,7 +1329,7 @@ PeerImp::handleTransaction(
{
// Skip local checks if a server we trust
// put the transaction in its open ledger
- flags |= SF_TRUSTED;
+ flags |= HashRouterFlags::TRUSTED;
}
// for non-validator nodes only -- localPublicKey is set for
@@ -2841,7 +2841,7 @@ PeerImp::doTransactions(
void
PeerImp::checkTransaction(
- int flags,
+ HashRouterFlags flags,
bool checkSignature,
std::shared_ptr<STTx const> const& stx,
bool batch)
@@ -2866,7 +2866,8 @@ PeerImp::checkTransaction(
(stx->getFieldU32(sfLastLedgerSequence) <
app_.getLedgerMaster().getValidLedgerIndex()))
{
- app_.getHashRouter().setFlags(stx->getTransactionID(), SF_BAD);
+ app_.getHashRouter().setFlags(
+ stx->getTransactionID(), HashRouterFlags::BAD);
charge(Resource::feeUselessData, "expired tx");
return;
}
@@ -2925,8 +2926,10 @@ PeerImp::checkTransaction(
<< "Exception checking transaction: " << validReason;
}
- // Probably not necessary to set SF_BAD, but doesn't hurt.
- app_.getHashRouter().setFlags(stx->getTransactionID(), SF_BAD);
+ // Probably not necessary to set HashRouterFlags::BAD, but
+ // doesn't hurt.
+ app_.getHashRouter().setFlags(
+ stx->getTransactionID(), HashRouterFlags::BAD);
charge(
Resource::feeInvalidSignature,
"check transaction signature failure");
@@ -2949,12 +2952,13 @@ PeerImp::checkTransaction(
JLOG(p_journal_.trace())
<< "Exception checking transaction: " << reason;
}
- app_.getHashRouter().setFlags(stx->getTransactionID(), SF_BAD);
+ app_.getHashRouter().setFlags(
+ stx->getTransactionID(), HashRouterFlags::BAD);
charge(Resource::feeInvalidSignature, "tx (impossible)");
return;
}
- bool const trusted(flags & SF_TRUSTED);
+ bool const trusted = any(flags & HashRouterFlags::TRUSTED);
app_.getOPs().processTransaction(
tx, trusted, false, NetworkOPs::FailHard::no);
}
@@ -2962,7 +2966,8 @@ PeerImp::checkTransaction(
{
JLOG(p_journal_.warn())
<< "Exception in " << __func__ << ": " << ex.what();
- app_.getHashRouter().setFlags(stx->getTransactionID(), SF_BAD);
+ app_.getHashRouter().setFlags(
+ stx->getTransactionID(), HashRouterFlags::BAD);
using namespace std::string_literals;
charge(Resource::feeInvalidData, "tx "s + ex.what());
}


@@ -22,6 +22,7 @@
#include <xrpld/app/consensus/RCLCxPeerPos.h>
#include <xrpld/app/ledger/detail/LedgerReplayMsgHandler.h>
#include <xrpld/app/misc/HashRouter.h>
#include <xrpld/overlay/Squelch.h>
#include <xrpld/overlay/detail/OverlayImpl.h>
#include <xrpld/overlay/detail/ProtocolVersion.h>
@@ -98,7 +99,7 @@ private:
// Node public key of peer.
PublicKey const publicKey_;
std::string name_;
- boost::shared_mutex mutable nameMutex_;
+ std::shared_mutex mutable nameMutex_;
// The indices of the smallest and largest ledgers this peer has available
//
@@ -214,7 +215,7 @@ private:
total_bytes() const;
private:
- boost::shared_mutex mutable mutex_;
+ std::shared_mutex mutable mutex_;
boost::circular_buffer<std::uint64_t> rollingAvg_{30, 0ull};
clock_type::time_point intervalStart_{clock_type::now()};
std::uint64_t totalBytes_{0};
@@ -612,7 +613,7 @@ private:
void
checkTransaction(
- int flags,
+ HashRouterFlags flags,
bool checkSignature,
std::shared_ptr<STTx const> const& stx,
bool batch);


@@ -54,10 +54,6 @@ LedgerHandler::check()
bool const binary = params[jss::binary].asBool();
bool const owner_funds = params[jss::owner_funds].asBool();
bool const queue = params[jss::queue].asBool();
- auto type = chooseLedgerEntryType(params);
- if (type.first)
- return type.first;
- type_ = type.second;
options_ = (full ? LedgerFill::full : 0) |
(expand ? LedgerFill::expand : 0) |


@@ -76,7 +76,6 @@ private:
std::vector<TxQ::TxDetails> queueTxs_;
Json::Value result_;
int options_ = 0;
- LedgerEntryType type_;
};
////////////////////////////////////////////////////////////////////////////////
@@ -91,7 +90,7 @@ LedgerHandler::writeResult(Object& value)
if (ledger_)
{
Json::copyFrom(value, result_);
- addJson(value, {*ledger_, &context_, options_, queueTxs_, type_});
+ addJson(value, {*ledger_, &context_, options_, queueTxs_});
}
else
{
@@ -105,6 +104,21 @@ LedgerHandler::writeResult(Object& value)
addJson(open, {*master.getCurrentLedger(), &context_, 0});
}
}
Json::Value warnings{Json::arrayValue};
if (context_.params.isMember(jss::type))
{
Json::Value& w = warnings.append(Json::objectValue);
w[jss::id] = warnRPC_FIELDS_DEPRECATED;
w[jss::message] =
"Some fields from your request are deprecated. Please check the "
"documentation at "
"https://xrpl.org/docs/references/http-websocket-apis/ "
"and update your request. Field `type` is deprecated.";
}
if (warnings.size())
value[jss::warnings] = std::move(warnings);
}
} // namespace RPC