Compare commits


46 Commits

Author SHA1 Message Date
Ed Hennis
9b349510b8 Merge branch 'develop' into ximinez/fix/validator-cache 2026-03-03 20:46:52 -04:00
Ayaz Salikhov
fcec31ed20 chore: Update pre-commit hooks (#6460) 2026-03-03 20:23:22 +00:00
Ed Hennis
a495b41179 Merge branch 'develop' into ximinez/fix/validator-cache 2026-03-03 15:54:47 -04:00
dependabot[bot]
0abd762781 ci: [DEPENDABOT] bump actions/upload-artifact from 6.0.0 to 7.0.0 (#6450) 2026-03-03 17:17:08 +00:00
Sergey Kuznetsov
5300e65686 tests: Improve stability of Subscribe tests (#6420)
The `Subscribe` tests were flaky because each test performs some operations (e.g. sends transactions) and then waits, with a 100ms timeout, for messages to appear on the subscription. If the tests run slowly (e.g. compiled in debug mode or running on a slow machine), some of them could fail. This change attempts to synchronize the background Env thread with the test thread by ensuring that all scheduled operations have started before the test thread begins waiting for a websocket message. This is done by limiting the app inside Env to a single I/O thread and adding a synchronization barrier after closing the ledger.
2026-03-03 08:46:55 -05:00
Ed Hennis
ae41a712b3 Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-24 17:34:55 -04:00
Ed Hennis
7684f9dd58 Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-20 18:49:57 -04:00
Ed Hennis
7c34be898d Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-20 18:26:15 -04:00
Ed Hennis
5e282f49da Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-20 17:31:58 -04:00
Ed Hennis
86722689ac Merge remote-tracking branch 'upstream/develop' into ximinez/fix/validator-cache
* upstream/develop:
  ci: [DEPENDABOT] bump tj-actions/changed-files from 46.0.5 to 47.0.4 (6394)
  ci: [DEPENDABOT] bump codecov/codecov-action from 5.4.3 to 5.5.2 (6398)
  ci: Build docs in PRs and in private repos (6400)
  ci: Add dependabot config (6379)
  Fix tautological assertion (6393)
  chore: Apply clang-format width 100 (6387)
2026-02-20 16:25:05 -05:00
Ed Hennis
3759144bba Update formatting 2026-02-20 16:23:53 -05:00
Ed Hennis
16c41c2143 Merge commit '25cca465538a56cce501477f9e5e2c1c7ea2d84c' into ximinez/fix/validator-cache
* commit '25cca465538a56cce501477f9e5e2c1c7ea2d84c':
  chore: Set clang-format width to 100 in config file (6387)
2026-02-20 16:22:13 -05:00
Ed Hennis
af28042946 Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-19 16:24:50 -05:00
Ed Hennis
734426554d Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-18 20:49:50 -04:00
Ed Hennis
7f17daa95f Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-04 16:30:05 -04:00
Ed Hennis
f359cd8dad Merge branch 'develop' into ximinez/fix/validator-cache 2026-02-03 16:08:02 -04:00
Ed Hennis
bf0b10404d Fix formatting 2026-01-28 19:40:27 -05:00
Ed Hennis
d019ebaf36 Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-28 18:44:52 -04:00
Ed Hennis
b6e4620349 Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-15 13:03:28 -04:00
Ed Hennis
db0ef6a370 Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-15 12:05:56 -04:00
Ed Hennis
11a45a0ac2 Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-13 18:19:08 -04:00
Ed Hennis
aa035f4cfd Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-13 15:27:57 -04:00
Ed Hennis
8988f9117f Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-12 14:52:12 -04:00
Ed Hennis
ae4f379845 Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-11 00:50:40 -04:00
Ed Hennis
671aa11649 Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-08 17:06:06 -04:00
Ed Hennis
53d35fd8ea Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-08 13:04:16 -04:00
Ed Hennis
0c7ea2e333 Merge branch 'develop' into ximinez/fix/validator-cache 2026-01-06 14:02:10 -05:00
Ed Hennis
5f54be25e9 Merge branch 'develop' into ximinez/fix/validator-cache 2025-12-22 17:39:55 -05:00
Ed Hennis
d82756519c Merge branch 'develop' into ximinez/fix/validator-cache 2025-12-18 19:59:49 -05:00
Ed Hennis
1f23832659 Merge branch 'develop' into ximinez/fix/validator-cache 2025-12-12 20:34:55 -05:00
Ed Hennis
4c50969bde Merge branch 'develop' into ximinez/fix/validator-cache 2025-12-11 15:31:29 -05:00
Ed Hennis
aabdf372dd Merge branch 'develop' into ximinez/fix/validator-cache 2025-12-05 21:13:06 -05:00
Ed Hennis
c6d63a4b90 Merge branch 'develop' into ximinez/fix/validator-cache 2025-12-02 17:37:25 -05:00
Ed Hennis
1e6c3208db Merge branch 'develop' into ximinez/fix/validator-cache 2025-12-01 14:40:41 -05:00
Ed Hennis
a74f223efb Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-28 15:46:40 -05:00
Ed Hennis
1eb3a3ea5a Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-27 01:48:53 -05:00
Ed Hennis
630e428929 Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-26 00:25:12 -05:00
Ed Hennis
3f93edc5e0 Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-25 14:55:02 -05:00
Ed Hennis
baf62689ff Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-24 21:49:07 -05:00
Ed Hennis
ddf7d6cac4 Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-24 21:30:18 -05:00
Ed Hennis
fcd2ea2d6e Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-21 12:47:54 -05:00
Ed Hennis
a16aa5b12f Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-18 22:39:25 -05:00
Ed Hennis
ef2de81870 Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-15 03:08:38 -05:00
Ed Hennis
fce6757260 Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-13 12:19:10 -05:00
Ed Hennis
d759a0a2b0 Merge branch 'develop' into ximinez/fix/validator-cache 2025-11-12 14:12:51 -05:00
Ed Hennis
d2dda416e8 Use Validator List (VL) cache files in more scenarios
- If any [validator_list_keys] are not available after all
  [validator_list_sites] have had a chance to be queried, fall back to
  loading cache files. Currently, cache files are only used if no sites
  are defined, or if the request to one of them fails. That does not
  cover cases where too few sites are defined, or where a site returns
  an invalid VL (or something else entirely).
- Resolves #5320
2025-11-10 19:53:02 -05:00
17 changed files with 175 additions and 81 deletions

View File

@@ -177,7 +177,7 @@ jobs:
- name: Upload the binary (Linux)
if: ${{ github.repository_owner == 'XRPLF' && runner.os == 'Linux' }}
- uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: xrpld-${{ inputs.config_name }}
path: ${{ env.BUILD_DIR }}/xrpld

View File

@@ -84,7 +84,7 @@ jobs:
- name: Upload clang-tidy output
if: steps.run_clang_tidy.outcome != 'success'
- uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: clang-tidy-results
path: clang-tidy-output.txt

View File

@@ -20,7 +20,7 @@ repos:
args: [--assume-in-merge]
- repo: https://github.com/pre-commit/mirrors-clang-format
- rev: 75ca4ad908dc4a99f57921f29b7e6c1521e10b26 # frozen: v21.1.8
+ rev: cd481d7b0bfb5c7b3090c21846317f9a8262e891 # frozen: v22.1.0
hooks:
- id: clang-format
args: [--style=file]
@@ -33,17 +33,17 @@ repos:
additional_dependencies: [PyYAML]
- repo: https://github.com/rbubley/mirrors-prettier
- rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
+ rev: c2bc67fe8f8f549cc489e00ba8b45aa18ee713b1 # frozen: v3.8.1
hooks:
- id: prettier
- repo: https://github.com/psf/black-pre-commit-mirror
- rev: 831207fd435b47aeffdf6af853097e64322b4d44 # frozen: v25.12.0
+ rev: ea488cebbfd88a5f50b8bd95d5c829d0bb76feb8 # frozen: 26.1.0
hooks:
- id: black
- repo: https://github.com/streetsidesoftware/cspell-cli
- rev: 1cfa010f078c354f3ffb8413616280cc28f5ba21 # frozen: v9.4.0
+ rev: a42085ade523f591dca134379a595e7859986445 # frozen: v9.7.0
hooks:
- id: cspell # Spell check changed files
exclude: .config/cspell.config.yaml

View File

@@ -23,13 +23,13 @@ public:
static constexpr size_t initialBufferSize = kilobytes(256);
RawStateTable()
- : monotonic_resource_{std::make_unique<boost::container::pmr::monotonic_buffer_resource>(
- initialBufferSize)}
+ : monotonic_resource_{
+ std::make_unique<boost::container::pmr::monotonic_buffer_resource>(initialBufferSize)}
, items_{monotonic_resource_.get()} {};
RawStateTable(RawStateTable const& rhs)
- : monotonic_resource_{std::make_unique<boost::container::pmr::monotonic_buffer_resource>(
- initialBufferSize)}
+ : monotonic_resource_{
+ std::make_unique<boost::container::pmr::monotonic_buffer_resource>(initialBufferSize)}
, items_{rhs.items_, monotonic_resource_.get()}
, dropsDestroyed_{rhs.dropsDestroyed_} {};

View File

@@ -72,8 +72,8 @@ OpenView::OpenView(
ReadView const* base,
Rules const& rules,
std::shared_ptr<void const> hold)
- : monotonic_resource_{std::make_unique<boost::container::pmr::monotonic_buffer_resource>(
- initialBufferSize)}
+ : monotonic_resource_{
+ std::make_unique<boost::container::pmr::monotonic_buffer_resource>(initialBufferSize)}
, txs_{monotonic_resource_.get()}
, rules_(rules)
, header_(base->header())
@@ -88,8 +88,8 @@ OpenView::OpenView(
}
OpenView::OpenView(ReadView const* base, std::shared_ptr<void const> hold)
- : monotonic_resource_{std::make_unique<boost::container::pmr::monotonic_buffer_resource>(
- initialBufferSize)}
+ : monotonic_resource_{
+ std::make_unique<boost::container::pmr::monotonic_buffer_resource>(initialBufferSize)}
, txs_{monotonic_resource_.get()}
, rules_(base->rules())
, header_(base->header())

View File

@@ -133,9 +133,9 @@ STVar::constructST(SerializedTypeID id, int depth, Args&&... args)
{
construct<T>(std::forward<Args>(args)...);
}
- else if constexpr (std::is_same_v<
- std::tuple<std::remove_cvref_t<Args>...>,
- std::tuple<SerialIter, SField>>)
+ else if constexpr (
+ std::
+ is_same_v<std::tuple<std::remove_cvref_t<Args>...>, std::tuple<SerialIter, SField>>)
{
construct<T>(std::forward<Args>(args)..., depth);
}

View File

@@ -180,8 +180,9 @@ ammAccountHolds(ReadView const& view, AccountID const& ammAccountID, Issue const
if (auto const sle = view.read(keylet::account(ammAccountID)))
return (*sle)[sfBalance];
}
- else if (auto const sle = view.read(keylet::line(ammAccountID, issue.account, issue.currency));
- sle && !isFrozen(view, ammAccountID, issue.currency, issue.account))
+ else if (
+ auto const sle = view.read(keylet::line(ammAccountID, issue.account, issue.currency));
+ sle && !isFrozen(view, ammAccountID, issue.currency, issue.account))
{
auto amount = (*sle)[sfBalance];
if (ammAccountID > issue.account)

View File

@@ -42,8 +42,9 @@ AMMVote::preclaim(PreclaimContext const& ctx)
}
else if (ammSle->getFieldAmount(sfLPTokenBalance) == beast::zero)
return tecAMM_EMPTY;
- else if (auto const lpTokensNew = ammLPHolds(ctx.view, *ammSle, ctx.tx[sfAccount], ctx.j);
- lpTokensNew == beast::zero)
+ else if (
+ auto const lpTokensNew = ammLPHolds(ctx.view, *ammSle, ctx.tx[sfAccount], ctx.j);
+ lpTokensNew == beast::zero)
{
JLOG(ctx.j.debug()) << "AMM Vote: account is not LP.";
return tecAMM_INVALID_TOKENS;

View File

@@ -84,11 +84,12 @@ LoanSet::preflight(PreflightContext const& ctx)
!validNumericMinimum(paymentInterval, LoanSet::minPaymentInterval))
return temINVALID;
// Grace period is between min default value and payment interval
- else if (auto const gracePeriod = tx[~sfGracePeriod]; //
- !validNumericRange(
- gracePeriod,
- paymentInterval.value_or(LoanSet::defaultPaymentInterval),
- defaultGracePeriod))
+ else if (
+ auto const gracePeriod = tx[~sfGracePeriod]; //
+ !validNumericRange(
+ gracePeriod,
+ paymentInterval.value_or(LoanSet::defaultPaymentInterval),
+ defaultGracePeriod))
return temINVALID;
// Copied from preflight2

View File

@@ -31,6 +31,7 @@
#include <xrpl/protocol/STTx.h>
#include <functional>
#include <future>
#include <source_location>
#include <string>
#include <tuple>
@@ -393,6 +394,48 @@ public:
return close(std::chrono::seconds(5));
}
/** Close and advance the ledger, then synchronize with the server's
io_context to ensure all async operations initiated by the close have
been started.
This function performs the same ledger close as close(), but additionally
ensures that all tasks posted to the server's io_context (such as
WebSocket subscription message sends) have been initiated before returning.
What it guarantees:
- All async operations posted before syncClose() have been STARTED
- For WebSocket sends: async_write_some() has been called
- The actual I/O completion may still be pending (async)
What it does NOT guarantee:
- Async operations have COMPLETED
- WebSocket messages have been received by clients
- However, for localhost connections, the remaining latency is typically
microseconds, making tests reliable
Use this instead of close() when:
- Test code immediately checks for subscription messages
- Race conditions between test and worker threads must be avoided
- Deterministic test behavior is required
@param timeout Maximum time to wait for the barrier task to execute
@return true if close succeeded and barrier executed within timeout,
false otherwise
*/
[[nodiscard]] bool
syncClose(std::chrono::steady_clock::duration timeout = std::chrono::seconds{1})
{
XRPL_ASSERT(
app().getNumberOfThreads() == 1,
"syncClose() is only useful on an application with a single thread");
auto const result = close();
auto serverBarrier = std::make_shared<std::promise<void>>();
auto future = serverBarrier->get_future();
boost::asio::post(app().getIOContext(), [serverBarrier]() { serverBarrier->set_value(); });
auto const status = future.wait_for(timeout);
return result && status == std::future_status::ready;
}
/** Turn on JSON tracing.
With no arguments, trace all
*/

View File

@@ -73,6 +73,8 @@ std::unique_ptr<Config> admin_localnet(std::unique_ptr<Config>);
std::unique_ptr<Config> secure_gateway_localnet(std::unique_ptr<Config>);
std::unique_ptr<Config> single_thread_io(std::unique_ptr<Config>);
/// @brief adjust configuration with params needed to be a validator
///
/// this is intended for use with envconfig, as in

View File

@@ -87,6 +87,12 @@ secure_gateway_localnet(std::unique_ptr<Config> cfg)
(*cfg)[PORT_WS].set("secure_gateway", "127.0.0.0/8");
return cfg;
}
std::unique_ptr<Config>
single_thread_io(std::unique_ptr<Config> cfg)
{
cfg->IO_WORKERS = 1;
return cfg;
}
auto constexpr defaultseed = "shUwVw52ofnCUX5m7kPTKzJdr4HEH";

View File

@@ -26,7 +26,7 @@ public:
{
using namespace std::chrono_literals;
using namespace jtx;
- Env env(*this);
+ Env env{*this, single_thread_io(envconfig())};
auto wsc = makeWSClient(env.app().config());
Json::Value stream;
@@ -92,7 +92,7 @@ public:
{
using namespace std::chrono_literals;
using namespace jtx;
- Env env(*this);
+ Env env{*this, single_thread_io(envconfig())};
auto wsc = makeWSClient(env.app().config());
Json::Value stream;
@@ -114,7 +114,7 @@ public:
{
// Accept a ledger
- env.close();
+ BEAST_EXPECT(env.syncClose());
// Check stream update
BEAST_EXPECT(wsc->findMsg(5s, [&](auto const& jv) {
@@ -125,7 +125,7 @@ public:
{
// Accept another ledger
- env.close();
+ BEAST_EXPECT(env.syncClose());
// Check stream update
BEAST_EXPECT(wsc->findMsg(5s, [&](auto const& jv) {
@@ -150,7 +150,7 @@ public:
{
using namespace std::chrono_literals;
using namespace jtx;
- Env env(*this);
+ Env env(*this, single_thread_io(envconfig()));
auto baseFee = env.current()->fees().base.drops();
auto wsc = makeWSClient(env.app().config());
Json::Value stream;
@@ -171,7 +171,7 @@ public:
{
env.fund(XRP(10000), "alice");
- env.close();
+ BEAST_EXPECT(env.syncClose());
// Check stream update for payment transaction
BEAST_EXPECT(wsc->findMsg(5s, [&](auto const& jv) {
@@ -195,7 +195,7 @@ public:
}));
env.fund(XRP(10000), "bob");
- env.close();
+ BEAST_EXPECT(env.syncClose());
// Check stream update for payment transaction
BEAST_EXPECT(wsc->findMsg(5s, [&](auto const& jv) {
@@ -249,12 +249,12 @@ public:
{
// Transaction that does not affect stream
env.fund(XRP(10000), "carol");
- env.close();
+ BEAST_EXPECT(env.syncClose());
BEAST_EXPECT(!wsc->getMsg(10ms));
// Transactions concerning alice
env.trust(Account("bob")["USD"](100), "alice");
- env.close();
+ BEAST_EXPECT(env.syncClose());
// Check stream updates
BEAST_EXPECT(wsc->findMsg(5s, [&](auto const& jv) {
@@ -288,6 +288,7 @@ public:
using namespace jtx;
Env env(*this, envconfig([](std::unique_ptr<Config> cfg) {
cfg->FEES.reference_fee = 10;
cfg = single_thread_io(std::move(cfg));
return cfg;
}));
auto wsc = makeWSClient(env.app().config());
@@ -310,7 +311,7 @@ public:
{
env.fund(XRP(10000), "alice");
- env.close();
+ BEAST_EXPECT(env.syncClose());
// Check stream update for payment transaction
BEAST_EXPECT(wsc->findMsg(5s, [&](auto const& jv) {
@@ -360,7 +361,7 @@ public:
testManifests()
{
using namespace jtx;
- Env env(*this);
+ Env env(*this, single_thread_io(envconfig()));
auto wsc = makeWSClient(env.app().config());
Json::Value stream;
@@ -394,7 +395,7 @@ public:
{
using namespace jtx;
- Env env{*this, envconfig(validator, ""), features};
+ Env env{*this, single_thread_io(envconfig(validator, "")), features};
auto& cfg = env.app().config();
if (!BEAST_EXPECT(cfg.section(SECTION_VALIDATION_SEED).empty()))
return;
@@ -483,7 +484,7 @@ public:
// at least one flag ledger.
while (env.closed()->header().seq < 300)
{
- env.close();
+ BEAST_EXPECT(env.syncClose());
using namespace std::chrono_literals;
BEAST_EXPECT(wsc->findMsg(5s, validValidationFields));
}
@@ -505,7 +506,7 @@ public:
{
using namespace jtx;
testcase("Subscribe by url");
- Env env{*this};
+ Env env{*this, single_thread_io(envconfig())};
Json::Value jv;
jv[jss::url] = "http://localhost/events";
@@ -536,7 +537,7 @@ public:
auto const method = subscribe ? "subscribe" : "unsubscribe";
testcase << "Error cases for " << method;
- Env env{*this};
+ Env env{*this, single_thread_io(envconfig())};
auto wsc = makeWSClient(env.app().config());
{
@@ -572,7 +573,7 @@ public:
}
{
- Env env_nonadmin{*this, no_admin(envconfig())};
+ Env env_nonadmin{*this, single_thread_io(no_admin(envconfig()))};
Json::Value jv;
jv[jss::url] = "no-url";
auto jr = env_nonadmin.rpc("json", method, to_string(jv))[jss::result];
@@ -834,12 +835,13 @@ public:
* send payments between the two accounts a and b,
* and close ledgersToClose ledgers
*/
- auto sendPayments = [](Env& env,
- Account const& a,
- Account const& b,
- int newTxns,
- std::uint32_t ledgersToClose,
- int numXRP = 10) {
+ auto sendPayments = [this](
+ Env& env,
+ Account const& a,
+ Account const& b,
+ int newTxns,
+ std::uint32_t ledgersToClose,
+ int numXRP = 10) {
env.memoize(a);
env.memoize(b);
for (int i = 0; i < newTxns; ++i)
@@ -852,7 +854,7 @@ public:
jtx::sig(jtx::autofill));
}
for (int i = 0; i < ledgersToClose; ++i)
- env.close();
+ BEAST_EXPECT(env.syncClose());
return newTxns;
};
@@ -945,7 +947,7 @@ public:
*
* also test subscribe to the account before it is created
*/
- Env env(*this);
+ Env env(*this, single_thread_io(envconfig()));
auto wscTxHistory = makeWSClient(env.app().config());
Json::Value request;
request[jss::account_history_tx_stream] = Json::objectValue;
@@ -988,7 +990,7 @@ public:
* subscribe genesis account tx history without txns
* subscribe to bob's account after it is created
*/
- Env env(*this);
+ Env env(*this, single_thread_io(envconfig()));
auto wscTxHistory = makeWSClient(env.app().config());
Json::Value request;
request[jss::account_history_tx_stream] = Json::objectValue;
@@ -998,6 +1000,7 @@ public:
if (!BEAST_EXPECT(goodSubRPC(jv)))
return;
IdxHashVec genesisFullHistoryVec;
BEAST_EXPECT(env.syncClose());
if (!BEAST_EXPECT(!getTxHash(*wscTxHistory, genesisFullHistoryVec, 1).first))
return;
@@ -1016,6 +1019,7 @@ public:
if (!BEAST_EXPECT(goodSubRPC(jv)))
return;
IdxHashVec bobFullHistoryVec;
BEAST_EXPECT(env.syncClose());
r = getTxHash(*wscTxHistory, bobFullHistoryVec, 1);
if (!BEAST_EXPECT(r.first && r.second))
return;
@@ -1050,6 +1054,7 @@ public:
"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh";
jv = wscTxHistory->invoke("subscribe", request);
genesisFullHistoryVec.clear();
BEAST_EXPECT(env.syncClose());
BEAST_EXPECT(getTxHash(*wscTxHistory, genesisFullHistoryVec, 31).second);
jv = wscTxHistory->invoke("unsubscribe", request);
@@ -1062,13 +1067,13 @@ public:
* subscribe account and subscribe account tx history
* and compare txns streamed
*/
- Env env(*this);
+ Env env(*this, single_thread_io(envconfig()));
auto wscAccount = makeWSClient(env.app().config());
auto wscTxHistory = makeWSClient(env.app().config());
std::array<Account, 2> accounts = {alice, bob};
env.fund(XRP(222222), accounts);
- env.close();
+ BEAST_EXPECT(env.syncClose());
// subscribe account
Json::Value stream = Json::objectValue;
@@ -1131,18 +1136,18 @@ public:
* alice issues USD to carol
* mix USD and XRP payments
*/
- Env env(*this);
+ Env env(*this, single_thread_io(envconfig()));
auto const USD_a = alice["USD"];
std::array<Account, 2> accounts = {alice, carol};
env.fund(XRP(333333), accounts);
env.trust(USD_a(20000), carol);
- env.close();
+ BEAST_EXPECT(env.syncClose());
auto mixedPayments = [&]() -> int {
sendPayments(env, alice, carol, 1, 0);
env(pay(alice, carol, USD_a(100)));
- env.close();
+ BEAST_EXPECT(env.syncClose());
return 2;
};
@@ -1152,6 +1157,7 @@ public:
request[jss::account_history_tx_stream][jss::account] = carol.human();
auto ws = makeWSClient(env.app().config());
auto jv = ws->invoke("subscribe", request);
BEAST_EXPECT(env.syncClose());
{
// take out existing txns from the stream
IdxHashVec tempVec;
@@ -1169,10 +1175,10 @@ public:
/*
* long transaction history
*/
- Env env(*this);
+ Env env(*this, single_thread_io(envconfig()));
std::array<Account, 2> accounts = {alice, carol};
env.fund(XRP(444444), accounts);
- env.close();
+ BEAST_EXPECT(env.syncClose());
// many payments, and close lots of ledgers
auto oneRound = [&](int numPayments) {
@@ -1185,6 +1191,7 @@ public:
request[jss::account_history_tx_stream][jss::account] = carol.human();
auto wscLong = makeWSClient(env.app().config());
auto jv = wscLong->invoke("subscribe", request);
BEAST_EXPECT(env.syncClose());
{
// take out existing txns from the stream
IdxHashVec tempVec;
@@ -1222,7 +1229,7 @@ public:
jtx::testable_amendments() | featurePermissionedDomains | featureCredentials |
featurePermissionedDEX};
- Env env(*this, all);
+ Env env(*this, single_thread_io(envconfig()), all);
PermissionedDEX permDex(env);
auto const alice = permDex.alice;
auto const bob = permDex.bob;
@@ -1241,10 +1248,10 @@ public:
if (!BEAST_EXPECT(jv[jss::status] == "success"))
return;
env(offer(alice, XRP(10), USD(10)), domain(domainID), txflags(tfHybrid));
- env.close();
+ BEAST_EXPECT(env.syncClose());
env(pay(bob, carol, USD(5)), path(~USD), sendmax(XRP(5)), domain(domainID));
- env.close();
+ BEAST_EXPECT(env.syncClose());
BEAST_EXPECT(wsc->findMsg(5s, [&](auto const& jv) {
if (jv[jss::changes].size() != 1)
@@ -1284,9 +1291,9 @@ public:
Account const bob{"bob"};
Account const broker{"broker"};
- Env env{*this, features};
+ Env env{*this, single_thread_io(envconfig()), features};
env.fund(XRP(10000), alice, bob, broker);
- env.close();
+ BEAST_EXPECT(env.syncClose());
auto wsc = test::makeWSClient(env.app().config());
Json::Value stream;
@@ -1350,12 +1357,12 @@ public:
// Verify the NFTokenIDs are correct in the NFTokenMint tx meta
uint256 const nftId1{token::getNextID(env, alice, 0u, tfTransferable)};
env(token::mint(alice, 0u), txflags(tfTransferable));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenID(nftId1);
uint256 const nftId2{token::getNextID(env, alice, 0u, tfTransferable)};
env(token::mint(alice, 0u), txflags(tfTransferable));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenID(nftId2);
// Alice creates one sell offer for each NFT
@@ -1363,32 +1370,32 @@ public:
// meta
uint256 const aliceOfferIndex1 = keylet::nftoffer(alice, env.seq(alice)).key;
env(token::createOffer(alice, nftId1, drops(1)), txflags(tfSellNFToken));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(aliceOfferIndex1);
uint256 const aliceOfferIndex2 = keylet::nftoffer(alice, env.seq(alice)).key;
env(token::createOffer(alice, nftId2, drops(1)), txflags(tfSellNFToken));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(aliceOfferIndex2);
// Alice cancels two offers she created
// Verify the NFTokenIDs are correct in the NFTokenCancelOffer tx
// meta
env(token::cancelOffer(alice, {aliceOfferIndex1, aliceOfferIndex2}));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenIDsInCancelOffer({nftId1, nftId2});
// Bobs creates a buy offer for nftId1
// Verify the offer id is correct in the NFTokenCreateOffer tx meta
auto const bobBuyOfferIndex = keylet::nftoffer(bob, env.seq(bob)).key;
env(token::createOffer(bob, nftId1, drops(1)), token::owner(alice));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(bobBuyOfferIndex);
// Alice accepts bob's buy offer
// Verify the NFTokenID is correct in the NFTokenAcceptOffer tx meta
env(token::acceptBuyOffer(alice, bobBuyOfferIndex));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenID(nftId1);
}
@@ -1397,7 +1404,7 @@ public:
// Alice mints a NFT
uint256 const nftId{token::getNextID(env, alice, 0u, tfTransferable)};
env(token::mint(alice, 0u), txflags(tfTransferable));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenID(nftId);
// Alice creates sell offer and set broker as destination
@@ -1405,18 +1412,18 @@ public:
env(token::createOffer(alice, nftId, drops(1)),
token::destination(broker),
txflags(tfSellNFToken));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(offerAliceToBroker);
// Bob creates buy offer
uint256 const offerBobToBroker = keylet::nftoffer(bob, env.seq(bob)).key;
env(token::createOffer(bob, nftId, drops(1)), token::owner(alice));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(offerBobToBroker);
// Check NFTokenID meta for NFTokenAcceptOffer in brokered mode
env(token::brokerOffers(broker, offerBobToBroker, offerAliceToBroker));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenID(nftId);
}
@@ -1426,24 +1433,24 @@ public:
// Alice mints a NFT
uint256 const nftId{token::getNextID(env, alice, 0u, tfTransferable)};
env(token::mint(alice, 0u), txflags(tfTransferable));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenID(nftId);
// Alice creates 2 sell offers for the same NFT
uint256 const aliceOfferIndex1 = keylet::nftoffer(alice, env.seq(alice)).key;
env(token::createOffer(alice, nftId, drops(1)), txflags(tfSellNFToken));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(aliceOfferIndex1);
uint256 const aliceOfferIndex2 = keylet::nftoffer(alice, env.seq(alice)).key;
env(token::createOffer(alice, nftId, drops(1)), txflags(tfSellNFToken));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(aliceOfferIndex2);
// Make sure the metadata only has 1 nft id, since both offers are
// for the same nft
env(token::cancelOffer(alice, {aliceOfferIndex1, aliceOfferIndex2}));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenIDsInCancelOffer({nftId});
}
@@ -1451,7 +1458,7 @@ public:
{
uint256 const aliceMintWithOfferIndex1 = keylet::nftoffer(alice, env.seq(alice)).key;
env(token::mint(alice), token::amount(XRP(0)));
- env.close();
+ BEAST_EXPECT(env.syncClose());
verifyNFTokenOfferID(aliceMintWithOfferIndex1);
}
}

View File

@@ -1072,6 +1072,12 @@ public:
return trapTxID_;
}
size_t
getNumberOfThreads() const override
{
return get_number_of_threads();
}
private:
// For a newly-started validator, this is the greatest persisted ledger
// and new validations must be greater than this.

View File

@@ -157,6 +157,10 @@ public:
* than the last ledger it persisted. */
virtual LedgerIndex
getMaxDisallowedLedger() = 0;
/** Returns the number of io_context (I/O worker) threads used by the application. */
virtual size_t
getNumberOfThreads() const = 0;
};
std::unique_ptr<Application>

View File

@@ -23,4 +23,10 @@ public:
{
return io_context_;
}
size_t
get_number_of_threads() const
{
return threads_.size();
}
};

View File

@@ -129,7 +129,11 @@ ValidatorSite::load(
{
try
{
- sites_.emplace_back(uri);
+ // This is not super efficient, but it doesn't happen often.
+ bool found = std::ranges::any_of(
+ sites_, [&uri](auto const& site) { return site.loadedResource->uri == uri; });
+ if (!found)
+ sites_.emplace_back(uri);
}
catch (std::exception const& e)
{
@@ -190,6 +194,16 @@ ValidatorSite::setTimer(
std::lock_guard<std::mutex> const& site_lock,
std::lock_guard<std::mutex> const& state_lock)
{
if (!sites_.empty() && //
std::ranges::all_of(
sites_, [](auto const& site) { return site.lastRefreshStatus.has_value(); }))
{
// If all of the sites have been handled at least once (including
// errors and timeouts), call missingSite, which will load the cache
// files for any lists that are still unavailable.
missingSite(site_lock);
}
auto next = std::min_element(sites_.begin(), sites_.end(), [](Site const& a, Site const& b) {
return a.nextRefresh < b.nextRefresh;
});
@@ -299,12 +313,15 @@ ValidatorSite::onRequestTimeout(std::size_t siteIdx, error_code const& ec)
// processes a network error. Usually, this function runs first,
// but on extremely rare occasions, the response handler can run
// first, which will leave activeResource empty.
- auto const& site = sites_[siteIdx];
+ auto& site = sites_[siteIdx];
if (site.activeResource)
JLOG(j_.warn()) << "Request for " << site.activeResource->uri << " took too long";
else
JLOG(j_.error()) << "Request took too long, but a response has "
"already been processed";
if (!site.lastRefreshStatus)
site.lastRefreshStatus.emplace(
Site::Status{clock_type::now(), ListDisposition::invalid, "timeout"});
}
std::lock_guard lock_state{state_mutex_};