Compare commits

..

30 Commits

Author SHA1 Message Date
Denis Angell
5fe5ef727c [fold] fix build error 2024-11-19 12:27:41 +01:00
RichardAH
d8515b5afe Merge branch 'dev' into sync-rippled 2024-11-19 14:58:51 +10:00
RichardAH
f6d789464e Merge branch 'dev' into sync-rippled 2024-11-12 08:57:53 +10:00
Denis Angell
c34ca594a4 Merge branch 'dev' into sync-rippled 2024-10-31 17:09:44 +01:00
Denis Angell
e47d6891cc [fold] fix bad merge
- add back filter for ripple state on account_channels
- add back network id test (env auto adds network id in xahau)
2024-10-31 13:18:40 +01:00
Denis Angell
bfe1463c37 [fold] bad merge 2024-10-31 11:20:11 +01:00
Richard Holland
8d04a1a434 clang 2024-10-25 12:59:46 +11:00
RichardAH
d688644727 Merge branch 'dev' into sync-rippled 2024-10-25 11:34:19 +10:00
Denis Angell
f0b6e57408 Merge branch 'dev' into sync-rippled 2024-10-16 10:21:16 +02:00
RichardAH
43ae851238 Merge branch 'dev' into sync-rippled 2024-03-25 08:46:34 +11:00
Denis Angell
eefc0f1150 Revert "Fix typo (#4508)"
This reverts commit 2956f14de8.
2024-03-18 12:22:46 +01:00
Denis Angell
87097576f4 Revert "Fix the fix for std::result_of (#4496)"
This reverts commit cee8409d60.
2024-03-18 12:22:13 +01:00
Chenna Keshava B S
0c73050e6f fix: remove redundant moves (#4565)
- Resolve gcc compiler warning:
      AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
  - The std::move() operation on trivially copyable types may generate a
    compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.
2024-03-18 12:16:20 +01:00
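As a minimal illustration of the warning fixed in 0c73050e6f (hypothetical types, not the actual code in AccountObjects.cpp):

#include <utility>

struct Entry { unsigned seq = 0; };        // trivially copyable

Entry makeEntry() { return Entry{}; }

Entry grab()
{
    // Newer gcc flags both statements with -Wredundant-move: the first moves
    // a temporary that would be moved (or elided) anyway, and for a trivially
    // copyable type a move is just a copy in any case.
    Entry e = std::move(makeEntry());      // warning: redundant move in initialization
    return std::move(e);                   // warning: redundant move in return statement
    // The fix is simply to drop both std::move() calls.
}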
Denis Angell
ae1c00e339 fix node size estimation (#4536)
Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation for the total amount
of RAM available, which was previously returned in bytes, but is now
being returned in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.
2024-03-18 12:16:10 +01:00
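A rough sketch of the kind of auto-detection being corrected (bucket thresholds and names are illustrative, not the actual values in Config.cpp):

#include <cstdint>
#include <thread>

// Pick a node-size bucket from available RAM and hardware threads. The bug
// being fixed was that the RAM figure arrived in bytes while the calling
// code expected a larger unit.
int
guessNodeSize(std::uint64_t ramBytes)
{
    std::uint64_t const ramGB = ramBytes / (1024ull * 1024 * 1024);

    int size = 0;                      // 0 = tiny ... 4 = huge (illustrative)
    if (ramGB >= 32)      size = 4;
    else if (ramGB >= 16) size = 3;
    else if (ramGB >= 8)  size = 2;
    else if (ramGB >= 4)  size = 1;

    // Also cap the size when only a few execution threads are available.
    auto const threads = std::thread::hardware_concurrency();
    if (threads != 0 && threads < 4 && size > 1)
        size = 1;

    return size;
}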
Scott Schurr
44bc7f6109 Trivial: add comments for NFToken-related invariants (#4558) 2024-03-18 12:16:00 +01:00
Scott Determan
be7bb83a05 Add missing includes for gcc 13.1: (#4555)
The code failed to compile with gcc 13.1 due to missing headers. This
patch adds the needed headers.
2024-03-18 12:15:49 +01:00
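The typical symptom, for illustration: libstdc++ 13 no longer pulls in some headers (such as <cstdint>) transitively, so code that names fixed-width integer types without including them directly stops building:

#include <cstdint>           // the kind of explicit include the patch adds

std::uint64_t counter = 0;   // fails under gcc 13.1 without the include above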
Scott Determan
289c1ebc68 Fix unaligned load and stores: (#4528) (#4531)
Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.

For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).
2024-03-18 12:15:34 +01:00
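A minimal sketch of the substitution described above (illustrative helper names):

#include <cstdint>
#include <cstring>

// Undefined behavior: the pointer may not satisfy uint64_t alignment.
std::uint64_t
loadUnalignedBad(unsigned char const* p)
{
    return *reinterpret_cast<std::uint64_t const*>(p);
}

// Well defined: memcpy into a properly aligned local. Optimizing compilers
// emit the same single load instruction on x86-64 and ARM, so there is no
// performance penalty.
std::uint64_t
loadUnalignedGood(unsigned char const* p)
{
    std::uint64_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}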
Ed Hennis
997b487bbb Move faulty assert (#4533)
This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.

The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.

Note that asserts are normally checked only in debug builds, so release
packages should not be affected.

Introduced in: #4319 (66627b26cf)
2024-03-18 12:09:31 +01:00
Scott Determan
2157440cda Ensure that switchover vars are initialized before use: (#4527)
Global variables in different TUs are initialized in an undefined order.
At least one global variable was accessing a global switchover variable.
This caused the switchover variable to be accessed in an uninitialized
state.

Since the switchover is always explicitly set before transaction
processing, this bug cannot affect transaction processing, but could
affect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.
2024-03-18 12:08:56 +01:00
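The hazard in general form, with illustrative names rather than the actual rippled globals: a namespace-scope object in one translation unit may be initialized from a switchover flag defined in another TU before that flag itself is initialized. Wrapping the flag in a function-local static is one common remedy, because it is constructed on first use:

// TU 1: the flag lives behind an accessor instead of at namespace scope.
bool&
switchoverFlag()
{
    static bool flag = false;   // constructed the first time anyone asks
    return flag;
}

// TU 2: safe even if this global is initialized before anything in TU 1,
// because the accessor guarantees the flag exists when it is read.
static bool const capturedAtStartup = switchoverFlag();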
Shawn Xie
3f76ff5afe Add nftoken_id, nftoken_ids, offer_id fields for NFTokens (#4447)
Three new fields are added to the `Tx` responses for NFTs:

1. `nftoken_id`: This field is included in the `Tx` responses for
   `NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
   `NFTokenID` for the `NFToken` that was modified on the ledger by the
   transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
   `NFTokenCancelOffer`. This field provides a list of all the
   `NFTokenID`s for the `NFToken`s that were modified on the ledger by
   the transaction.
3. `offer_id`: This field is included in the `Tx` response for
   `NFTokenCreateOffer` transactions and shows the OfferID of the
   `NFTokenOffer` created.

The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.
2024-03-18 12:08:41 +01:00
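For illustration, the shape of the added metadata, built with the Json::Value type used throughout this codebase (field names from the commit; the ID values below are hypothetical placeholders):

#include <ripple/json/json_value.h>

Json::Value
exampleNFTFields()
{
    Json::Value meta(Json::objectValue);

    // NFTokenMint / NFTokenAcceptOffer responses gain a single ID:
    meta["nftoken_id"] = "<hypothetical NFTokenID>";

    // NFTokenCancelOffer responses gain a list of affected IDs:
    meta["nftoken_ids"] = Json::Value(Json::arrayValue);
    meta["nftoken_ids"].append("<hypothetical NFTokenID>");

    // NFTokenCreateOffer responses gain the ID of the created offer:
    meta["offer_id"] = "<hypothetical NFTokenOffer ID>";

    return meta;
}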
drlongle
c923970607 fix!: Prevent API from accepting seed or public key for account (#4404)
The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.

Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.

Resolves: #3329, #3330, #4337

BREAKING CHANGE: Remove non-strict account parsing (#3330)
2024-03-18 12:07:50 +01:00
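A self-contained sketch of the stricter behavior (the address check below is a toy stand-in for rippled's real base58 decoding, not its API):

#include <optional>
#include <string>

struct AccountID { std::string raw; };   // stand-in type

// Toy stand-in: accept only strings that superficially look like an
// r-address. The real implementation does full base58check decoding.
std::optional<AccountID>
parseAddress(std::string const& s)
{
    if (s.size() >= 25 && s.size() <= 35 && s.front() == 'r')
        return AccountID{s};
    return std::nullopt;
}

// Before this change, an unparseable "account" such as "foo" fell back to
// being interpreted as a seed, yielding actNotFound. After it, anything
// that is not an address is rejected up front (actMalformed), and no
// seed or passphrase ever needs to be sent.
std::optional<AccountID>
accountFromParam(std::string const& param)
{
    return parseAddress(param);   // no seed fallback path
}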
solmsted
2956f14de8 Fix typo (#4508) 2024-03-18 12:05:50 +01:00
John Freeman
e2f61ce86c Fix errors for Clang 16: (#4501)
Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).

- `std::{u,bi}nary_function` were removed in C++17. They were empty
  classes with a few associated types. We already have conditional code
  to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type
  inherits from a binary function class in the standard library, e.g.
  `std::equal_to`, and this causes an error when the type privately
  inherits from such a class. Change these instances to public
  inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
  inherit directly from an empty class rather than from
  `beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
  compilation of assertions. Add a `[[maybe_unused]]` annotation to
  suppress it.
- As a drive-by clean-up, remove commented code.

See related work in #4486.
2024-03-18 12:04:41 +01:00
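An illustration of the first bullet: rather than inheriting from the removed std::unary_function, the associated types it used to provide are defined directly (hypothetical functor, not one of the classes actually touched):

#include <cstddef>

struct Key { int v = 0; };

// Before (breaks under C++17 / Clang 16):
//   struct KeyHasher : std::unary_function<Key, std::size_t> { ... };

// After: the empty base is gone and its typedefs are spelled out.
struct KeyHasher
{
    using argument_type = Key;          // formerly supplied by unary_function
    using result_type = std::size_t;

    result_type
    operator()(argument_type const& k) const
    {
        return static_cast<result_type>(k.v);
    }
};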
Mark Travis
1fafd1059d Use quorum specified via command line: (#4489)
If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.

Fix #4488.

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2024-03-18 12:04:09 +01:00
John Freeman
cee8409d60 Fix the fix for std::result_of (#4496)
Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.

This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
  forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
  * Define `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
    1.77 with a newer compiler.
2024-03-18 12:01:25 +01:00
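The effect of the flag, for illustration: it tells Boost.Asio to use std::invoke_result in place of the removed std::result_of. It must be defined wherever Asio headers are compiled, which is why the fix passes it through the boost b2 flags; the source-level equivalent looks like this:

// Source-level equivalent of -DBOOST_ASIO_HAS_STD_INVOKE_RESULT=1:
#define BOOST_ASIO_HAS_STD_INVOKE_RESULT 1
#include <boost/asio.hpp>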
RichardAH
f23c32cc00 Prevent replay attacks with NetworkID field: (#4370)
Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.

The new field must be used when the server is using a network id > 1024.

To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.

Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.

The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.

This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:

- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`

To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html

Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html

In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.

An `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.
2024-03-18 12:00:37 +01:00
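A condensed sketch of the rule described above (illustrative code, not the actual preflight logic):

#include <cstdint>
#include <optional>

enum TxCheckResult {
    tesSUCCESS,
    telNETWORK_ID_MAKES_TX_NON_CANONICAL,
    telREQUIRES_NETWORK_ID,
    telWRONG_NETWORK
};

// Legacy chains (network id <= 1024) must omit NetworkID; newer chains must
// carry one that matches the node's configured network.
TxCheckResult
checkNetworkID(std::uint32_t nodeNetID, std::optional<std::uint32_t> txNetID)
{
    if (nodeNetID <= 1024)
        return txNetID ? telNETWORK_ID_MAKES_TX_NON_CANONICAL : tesSUCCESS;

    if (!txNetID)
        return telREQUIRES_NETWORK_ID;

    return *txNetID == nodeNetID ? tesSUCCESS : telWRONG_NETWORK;
}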
Nik Bougalis
1b835b7c05 Avoid using std::shared_ptr when not necessary: (#4218)
The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.

The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocation were avoided by using stack-based
alternatives.

Commit 3 of 3 in #4218.
2024-03-18 11:57:30 +01:00
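The shape of the change, sketched with placeholder types:

#include <memory>

struct SHAMap { /* placeholder */ };

// Before: each map heap-allocated and reference-counted, although its
// lifetime simply matched that of the owning Ledger.
struct LedgerBefore
{
    std::shared_ptr<SHAMap> stateMap = std::make_shared<SHAMap>();
    std::shared_ptr<SHAMap> txMap = std::make_shared<SHAMap>();
};

// After: plain members; no separate allocations, no reference counting.
struct LedgerAfter
{
    SHAMap stateMap;
    SHAMap txMap;
};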
Nik Bougalis
9ec1e527a3 Optimize SHAMapItem and leverage new slab allocator: (#4218)
The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.

Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped in a `std::shared_ptr`, this meant
that an instantiation might result in up to three separate
memory allocations.

This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.

In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.

Commit 2 of 3 in #4218.
2024-03-18 11:57:15 +01:00
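A minimal sketch of the intrusive reference-count pattern the commit adopts (greatly simplified; the real SHAMapItem also carries the slab-allocated payload):

#include <atomic>
#include <boost/intrusive_ptr.hpp>

class Item
{
    mutable std::atomic<int> refs_{0};   // the count lives inside the object

    friend void
    intrusive_ptr_add_ref(Item const* p)
    {
        ++p->refs_;
    }

    friend void
    intrusive_ptr_release(Item const* p)
    {
        if (--p->refs_ == 0)
            delete p;
    }

public:
    int value = 0;
};

// One allocation for object plus count, instead of separate allocations for
// the object, the shared_ptr control block, and the data buffer.
boost::intrusive_ptr<Item>
makeItem()
{
    return boost::intrusive_ptr<Item>(new Item);
}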
Nik Bougalis
469eb2b8ac Introduce support for a slabbed allocator: (#4218)
When instantiating a large number of fixed-size objects on the heap,
the overhead that dynamic memory allocation APIs impose quickly
becomes significant.

In some cases, allocating a large amount of memory at once and using
a slabbing allocator to carve the large block into fixed-size units
that are then used to service requests for memory will help to reduce
memory fragmentation significantly and, potentially, improve overall
performance.

This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator` but it is
not meant to be a general-purpose allocator.

It should not be used unless profiling and analysis of specific memory
allocation patterns indicate that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.

A helper class, `SlabAllocatorSet<>`, simplifies handling of variably
sized objects that benefit from slab allocations.

This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).

Commit 1 of 3 in #4218.
2024-03-18 11:57:00 +01:00
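A toy illustration of the idea, not the SlabAllocator<> API: reserve one large block up front, then hand out and recycle fixed-size chunks from it:

#include <cstddef>
#include <vector>

class ToySlab
{
    std::vector<unsigned char> block_;   // one big allocation up front
    std::vector<void*> free_;            // chunks currently available

public:
    ToySlab(std::size_t chunkSize, std::size_t count) : block_(chunkSize * count)
    {
        free_.reserve(count);
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(block_.data() + i * chunkSize);
    }

    // O(1) and fragmentation-free while the slab has room.
    void*
    allocate()
    {
        if (free_.empty())
            return nullptr;   // caller falls back to the general-purpose allocator
        void* p = free_.back();
        free_.pop_back();
        return p;
    }

    void
    deallocate(void* p)
    {
        free_.push_back(p);
    }
};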
ledhed2222
0dc262dd9c Add jss fields used by Clio nft_info: (#4320)
Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.

Clio can be found here: https://github.com/XRPLF/clio

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
2024-03-18 11:56:20 +01:00
32 changed files with 228 additions and 2398 deletions

View File

@@ -392,7 +392,6 @@ target_sources (rippled PRIVATE
src/ripple/app/misc/NegativeUNLVote.cpp
src/ripple/app/misc/NetworkOPs.cpp
src/ripple/app/misc/SHAMapStoreImp.cpp
src/ripple/app/misc/StateAccounting.cpp
src/ripple/app/misc/detail/impl/WorkSSL.cpp
src/ripple/app/misc/impl/AccountTxPaging.cpp
src/ripple/app/misc/impl/AmendmentTable.cpp

View File

@@ -67,5 +67,5 @@ git-subtree. See those directories' README files for more details.
- [explorer.xahau.network](https://explorer.xahau.network)
- **Testnet & Faucet**: Test applications and obtain test XAH at [xahau-test.net](https://xahau-test.net) and use the testnet explorer at [explorer.xahau.network](https://explorer.xahau.network).
- **Supporting Wallets**: A list of wallets that support XAH and Xahau-based assets.
- [Xaman](https://xaman.app)
- [Xumm](https://xumm.app)
- [Crossmark](https://crossmark.io)

View File

@@ -152,9 +152,6 @@ public:
std::string
getCompleteLedgers();
RangeSet<std::uint32_t>
getCompleteLedgersRangeSet();
/** Apply held transactions to the open ledger
This is normally called as we close the ledger.
The open ledger remains open to handle new transactions

View File

@@ -1714,13 +1714,6 @@ LedgerMaster::getCompleteLedgers()
return to_string(mCompleteLedgers);
}
RangeSet<std::uint32_t>
LedgerMaster::getCompleteLedgersRangeSet()
{
std::lock_guard sl(mCompleteLock);
return mCompleteLedgers;
}
std::optional<NetClock::time_point>
LedgerMaster::getCloseTimeBySeq(LedgerIndex ledgerIndex)
{

View File

@@ -37,7 +37,6 @@
#include <ripple/app/main/NodeStoreScheduler.h>
#include <ripple/app/main/Tuning.h>
#include <ripple/app/misc/AmendmentTable.h>
#include <ripple/app/misc/DatagramMonitor.h>
#include <ripple/app/misc/HashRouter.h>
#include <ripple/app/misc/LoadFeeTrack.h>
#include <ripple/app/misc/NetworkOPs.h>
@@ -168,8 +167,6 @@ public:
std::unique_ptr<Logs> logs_;
std::unique_ptr<TimeKeeper> timeKeeper_;
std::unique_ptr<DatagramMonitor> datagram_monitor_;
std::uint64_t const instanceCookie_;
beast::Journal m_journal;
@@ -1526,14 +1523,6 @@ ApplicationImp::setup(boost::program_options::variables_map const& cmdline)
if (reportingETL_)
reportingETL_->start();
// Datagram monitor if applicable
if (!config_->standalone() && !config_->DATAGRAM_MONITOR.empty())
{
datagram_monitor_ = std::make_unique<DatagramMonitor>(*this);
if (datagram_monitor_)
datagram_monitor_->start();
}
return true;
}

File diff suppressed because it is too large

View File

@@ -33,7 +33,6 @@
#include <ripple/app/misc/HashRouter.h>
#include <ripple/app/misc/LoadFeeTrack.h>
#include <ripple/app/misc/NetworkOPs.h>
#include <ripple/app/misc/StateAccounting.h>
#include <ripple/app/misc/Transaction.h>
#include <ripple/app/misc/TxQ.h>
#include <ripple/app/misc/ValidatorKeys.h>
@@ -68,9 +67,9 @@
#include <ripple/rpc/CTID.h>
#include <ripple/rpc/DeliveredAmount.h>
#include <ripple/rpc/impl/RPCHelpers.h>
#include <ripple/rpc/impl/UDPInfoSub.h>
#include <boost/asio/ip/host_name.hpp>
#include <boost/asio/steady_timer.hpp>
#include <exception>
#include <mutex>
#include <set>
@@ -117,6 +116,81 @@ class NetworkOPsImp final : public NetworkOPs
running,
};
static std::array<char const*, 5> const states_;
/**
* State accounting records two attributes for each possible server state:
* 1) Amount of time spent in each state (in microseconds). This value is
* updated upon each state transition.
* 2) Number of transitions to each state.
*
* This data can be polled through server_info and represented by
* monitoring systems similarly to how bandwidth, CPU, and other
* counter-based metrics are managed.
*
* State accounting is more accurate than periodic sampling of server
* state. With periodic sampling, it is very likely that state transitions
* are missed, and accuracy of time spent in each state is very rough.
*/
class StateAccounting
{
struct Counters
{
explicit Counters() = default;
std::uint64_t transitions = 0;
std::chrono::microseconds dur = std::chrono::microseconds(0);
};
OperatingMode mode_ = OperatingMode::DISCONNECTED;
std::array<Counters, 5> counters_;
mutable std::mutex mutex_;
std::chrono::steady_clock::time_point start_ =
std::chrono::steady_clock::now();
std::chrono::steady_clock::time_point const processStart_ = start_;
std::uint64_t initialSyncUs_{0};
static std::array<Json::StaticString const, 5> const states_;
public:
explicit StateAccounting()
{
counters_[static_cast<std::size_t>(OperatingMode::DISCONNECTED)]
.transitions = 1;
}
/**
* Record state transition. Update duration spent in previous
* state.
*
* @param om New state.
*/
void
mode(OperatingMode om);
/**
* Output state counters in JSON format.
*
* @obj Json object to which to add state accounting data.
*/
void
json(Json::Value& obj) const;
struct CounterData
{
decltype(counters_) counters;
decltype(mode_) mode;
decltype(start_) start;
decltype(initialSyncUs_) initialSyncUs;
};
CounterData
getCounterData() const
{
std::lock_guard lock(mutex_);
return {counters_, mode_, start_, initialSyncUs_};
}
};
//! Server fees published on `server` subscription
struct ServerFeeSummary
{
@@ -198,9 +272,6 @@ public:
std::string
strOperatingMode(bool const admin = false) const override;
StateAccounting::CounterData
getStateAccountingData();
//
// Transaction operations.
//
@@ -705,17 +776,11 @@ private:
DispatchState mDispatchState = DispatchState::none;
std::vector<TransactionStatus> mTransactions;
StateAccounting accounting_;
StateAccounting accounting_{};
std::set<uint256> pendingValidations_;
std::mutex validationsMutex_;
RCLConsensus&
getConsensus();
LedgerMaster&
getLedgerMaster();
private:
struct Stats
{
@@ -778,6 +843,19 @@ private:
//------------------------------------------------------------------------------
static std::array<char const*, 5> const stateNames{
{"disconnected", "connected", "syncing", "tracking", "full"}};
std::array<char const*, 5> const NetworkOPsImp::states_ = stateNames;
std::array<Json::StaticString const, 5> const
NetworkOPsImp::StateAccounting::states_ = {
{Json::StaticString(stateNames[0]),
Json::StaticString(stateNames[1]),
Json::StaticString(stateNames[2]),
Json::StaticString(stateNames[3]),
Json::StaticString(stateNames[4])}};
static auto const genesisAccountId = calcAccountID(
generateKeyPair(KeyType::secp256k1, generateSeed("masterpassphrase"))
.first);
@@ -1052,7 +1130,7 @@ NetworkOPsImp::strOperatingMode(OperatingMode const mode, bool const admin)
}
}
return {StateAccounting::states_[static_cast<std::size_t>(mode)].c_str()};
return states_[static_cast<std::size_t>(mode)];
}
void
@@ -2318,19 +2396,6 @@ NetworkOPsImp::getConsensusInfo()
return mConsensus.getJson(true);
}
// RHTODO: not threadsafe?
RCLConsensus&
NetworkOPsImp::getConsensus()
{
return mConsensus;
}
LedgerMaster&
NetworkOPsImp::getLedgerMaster()
{
return m_ledgerMaster;
}
Json::Value
NetworkOPsImp::getServerInfo(bool human, bool admin, bool counters)
{
@@ -4128,12 +4193,6 @@ NetworkOPsImp::stateAccounting(Json::Value& obj)
accounting_.json(obj);
}
StateAccounting::CounterData
NetworkOPsImp::getStateAccountingData()
{
return accounting_.getCounterData();
}
// <-- bool: true=erased, false=was not there
bool
NetworkOPsImp::unsubValidations(std::uint64_t uSeq)
@@ -4604,6 +4663,50 @@ NetworkOPsImp::collect_metrics()
counters[static_cast<std::size_t>(OperatingMode::FULL)].transitions);
}
void
NetworkOPsImp::StateAccounting::mode(OperatingMode om)
{
auto now = std::chrono::steady_clock::now();
std::lock_guard lock(mutex_);
++counters_[static_cast<std::size_t>(om)].transitions;
if (om == OperatingMode::FULL &&
counters_[static_cast<std::size_t>(om)].transitions == 1)
{
initialSyncUs_ = std::chrono::duration_cast<std::chrono::microseconds>(
now - processStart_)
.count();
}
counters_[static_cast<std::size_t>(mode_)].dur +=
std::chrono::duration_cast<std::chrono::microseconds>(now - start_);
mode_ = om;
start_ = now;
}
void
NetworkOPsImp::StateAccounting::json(Json::Value& obj) const
{
auto [counters, mode, start, initialSync] = getCounterData();
auto const current = std::chrono::duration_cast<std::chrono::microseconds>(
std::chrono::steady_clock::now() - start);
counters[static_cast<std::size_t>(mode)].dur += current;
obj[jss::state_accounting] = Json::objectValue;
for (std::size_t i = static_cast<std::size_t>(OperatingMode::DISCONNECTED);
i <= static_cast<std::size_t>(OperatingMode::FULL);
++i)
{
obj[jss::state_accounting][states_[i]] = Json::objectValue;
auto& state = obj[jss::state_accounting][states_[i]];
state[jss::transitions] = std::to_string(counters[i].transitions);
state[jss::duration_us] = std::to_string(counters[i].dur.count());
}
obj[jss::server_state_duration_us] = std::to_string(current.count());
if (initialSync)
obj[jss::initial_sync_duration_us] = std::to_string(initialSync);
}
//------------------------------------------------------------------------------
std::unique_ptr<NetworkOPs>

View File

@@ -20,10 +20,8 @@
#ifndef RIPPLE_APP_MISC_NETWORKOPS_H_INCLUDED
#define RIPPLE_APP_MISC_NETWORKOPS_H_INCLUDED
#include <ripple/app/consensus/RCLConsensus.h>
#include <ripple/app/consensus/RCLCxPeerPos.h>
#include <ripple/app/ledger/Ledger.h>
#include <ripple/app/misc/StateAccounting.h>
#include <ripple/core/JobQueue.h>
#include <ripple/ledger/ReadView.h>
#include <ripple/net/InfoSub.h>
@@ -44,6 +42,35 @@ class LedgerMaster;
class Transaction;
class ValidatorKeys;
// This is the primary interface into the "client" portion of the program.
// Code that wants to do normal operations on the network such as
// creating and monitoring accounts, creating transactions, and so on
// should use this interface. The RPC code will primarily be a light wrapper
// over this code.
//
// Eventually, it will check the node's operating mode (synched, unsynched,
// etectera) and defer to the correct means of processing. The current
// code assumes this node is synched (and will continue to do so until
// there's a functional network.
//
/** Specifies the mode under which the server believes it's operating.
This has implications about how the server processes transactions and
how it responds to requests (e.g. account balance request).
@note Other code relies on the numerical values of these constants; do
not change them without verifying each use and ensuring that it is
not a breaking change.
*/
enum class OperatingMode {
DISCONNECTED = 0, //!< not ready to process requests
CONNECTED = 1, //!< convinced we are talking to the network
SYNCING = 2, //!< fallen slightly behind
TRACKING = 3, //!< convinced we agree with the network
FULL = 4 //!< we have the ledger and can even validate
};
/** Provides server functionality for clients.
Clients include backend applications, local commands, and connected
@@ -194,13 +221,6 @@ public:
virtual Json::Value
getConsensusInfo() = 0;
virtual RCLConsensus&
getConsensus() = 0;
virtual LedgerMaster&
getLedgerMaster() = 0;
virtual Json::Value
getServerInfo(bool human, bool admin, bool counters) = 0;
virtual void
@@ -208,9 +228,6 @@ public:
virtual Json::Value
getLedgerFetchInfo() = 0;
virtual StateAccounting::CounterData
getStateAccountingData() = 0;
/** Accepts the current transaction tree, return the new ledger's sequence
This API is only used via RPC with the server in STANDALONE mode and

View File

@@ -1,49 +0,0 @@
#include <ripple/app/misc/StateAccounting.h>
namespace ripple {
void
StateAccounting::mode(OperatingMode om)
{
std::lock_guard lock(mutex_);
auto now = std::chrono::steady_clock::now();
++counters_[static_cast<std::size_t>(om)].transitions;
if (om == OperatingMode::FULL &&
counters_[static_cast<std::size_t>(om)].transitions == 1)
{
initialSyncUs_ = std::chrono::duration_cast<std::chrono::microseconds>(
now - processStart_)
.count();
}
counters_[static_cast<std::size_t>(mode_)].dur +=
std::chrono::duration_cast<std::chrono::microseconds>(now - start_);
mode_ = om;
start_ = now;
}
void
StateAccounting::json(Json::Value& obj)
{
auto [counters, mode, start, initialSync] = getCounterData();
auto const current = std::chrono::duration_cast<std::chrono::microseconds>(
std::chrono::steady_clock::now() - start);
counters[static_cast<std::size_t>(mode)].dur += current;
obj[jss::state_accounting] = Json::objectValue;
for (std::size_t i = static_cast<std::size_t>(OperatingMode::DISCONNECTED);
i <= static_cast<std::size_t>(OperatingMode::FULL);
++i)
{
obj[jss::state_accounting][states_[i]] = Json::objectValue;
auto& state = obj[jss::state_accounting][states_[i]];
state[jss::transitions] = std::to_string(counters[i].transitions);
state[jss::duration_us] = std::to_string(counters[i].dur.count());
}
obj[jss::server_state_duration_us] = std::to_string(current.count());
if (initialSync)
obj[jss::initial_sync_duration_us] = std::to_string(initialSync);
}
} // namespace ripple

View File

@@ -1,99 +0,0 @@
#ifndef RIPPLE_APP_MAIN_STATEACCOUNTING_H_INCLUDED
#define RIPPLE_APP_MAIN_STATEACCOUNTING_H_INCLUDED
#include <ripple/basics/chrono.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/json/json_value.h>
#include <ripple/protocol/jss.h>
#include <array>
#include <mutex>
namespace ripple {
// This is the primary interface into the "client" portion of the program.
// Code that wants to do normal operations on the network such as
// creating and monitoring accounts, creating transactions, and so on
// should use this interface. The RPC code will primarily be a light wrapper
// over this code.
//
// Eventually, it will check the node's operating mode (synched, unsynched,
// etectera) and defer to the correct means of processing. The current
// code assumes this node is synched (and will continue to do so until
// there's a functional network.
//
/** Specifies the mode under which the server believes it's operating.
This has implications about how the server processes transactions and
how it responds to requests (e.g. account balance request).
@note Other code relies on the numerical values of these constants; do
not change them without verifying each use and ensuring that it is
not a breaking change.
*/
enum class OperatingMode {
DISCONNECTED = 0, //!< not ready to process requests
CONNECTED = 1, //!< convinced we are talking to the network
SYNCING = 2, //!< fallen slightly behind
TRACKING = 3, //!< convinced we agree with the network
FULL = 4 //!< we have the ledger and can even validate
};
class StateAccounting
{
public:
constexpr static std::array<Json::StaticString const, 5> const states_ = {
{Json::StaticString("disconnected"),
Json::StaticString("connected"),
Json::StaticString("syncing"),
Json::StaticString("tracking"),
Json::StaticString("full")}};
struct Counters
{
explicit Counters() = default;
std::uint64_t transitions = 0;
std::chrono::microseconds dur = std::chrono::microseconds(0);
};
private:
OperatingMode mode_ = OperatingMode::DISCONNECTED;
std::array<Counters, 5> counters_;
mutable std::mutex mutex_;
std::chrono::steady_clock::time_point start_ =
std::chrono::steady_clock::now();
std::chrono::steady_clock::time_point const processStart_ = start_;
std::uint64_t initialSyncUs_{0};
public:
explicit StateAccounting()
{
counters_[static_cast<std::size_t>(OperatingMode::DISCONNECTED)]
.transitions = 1;
}
//! Record state transition. Update duration spent in previous state.
void
mode(OperatingMode om);
//! Output state counters in JSON format.
void
json(Json::Value& obj);
using CounterData = std::tuple<
decltype(counters_),
decltype(mode_),
decltype(start_),
decltype(initialSyncUs_)>;
CounterData
getCounterData()
{
return {counters_, mode_, start_, initialSyncUs_};
}
};
} // namespace ripple
#endif

View File

@@ -458,13 +458,6 @@ Change::activateXahauGenesis()
bool const isTest =
(ctx_.tx.getFlags() & tfTestSuite) && ctx_.app.config().standalone();
// RH NOTE: we'll only configure xahau governance structure on networks that
// begin with 2133... so production xahau: 21337 and its testnet 21338
// with 21330-21336 and 21339 also valid and reserved for dev nets etc.
// all other Network IDs will be conventionally configured.
if ((ctx_.app.config().NETWORK_ID / 10) != 2133 && !isTest)
return;
auto [ng_entries, l1_entries, l2_entries, gov_params] =
normalizeXahauGenesis(
isTest ? TestNonGovernanceDistribution : NonGovernanceDistribution,

View File

@@ -889,45 +889,6 @@ Import::preclaim(PreclaimContext const& ctx)
}
auto const& sle = ctx.view.read(keylet::account(ctx.tx[sfAccount]));
auto const tt = stpTrans->getTxnType();
if ((tt == ttSIGNER_LIST_SET || tt == ttREGULAR_KEY_SET) &&
ctx.view.rules().enabled(fixReduceImport) && sle)
{
// blackhole check
do
{
// if master key is not set then it is not blackholed
if (!(sle->getFlags() & lsfDisableMaster))
break;
// if a regular key is set then it must be acc 0, 1, or 2 otherwise
// not blackholed
if (sle->isFieldPresent(sfRegularKey))
{
AccountID rk = sle->getAccountID(sfRegularKey);
static const AccountID ACCOUNT_ZERO(0);
static const AccountID ACCOUNT_ONE(1);
static const AccountID ACCOUNT_TWO(2);
if (rk != ACCOUNT_ZERO && rk != ACCOUNT_ONE &&
rk != ACCOUNT_TWO)
break;
}
// if a signer list is set then it's not blackholed
auto const signerListKeylet = keylet::signers(ctx.tx[sfAccount]);
if (ctx.view.exists(signerListKeylet))
break;
// execution to here means it's blackholed
JLOG(ctx.j.warn())
<< "Import: during preclaim target account is blackholed "
<< ctx.tx[sfAccount] << ", bailing.";
return tefIMPORT_BLACKHOLED;
} while (0);
}
if (sle && sle->isFieldPresent(sfImportSequence))
{
uint32_t sleImportSequence = sle->getFieldU32(sfImportSequence);

View File

@@ -155,8 +155,6 @@ public:
std::map<std::string, PublicKey>
IMPORT_VL_KEYS; // hex string -> class PublicKey (for caching purposes)
std::vector<std::string> DATAGRAM_MONITOR;
enum StartUpType {
FRESH,
NORMAL,

View File

@@ -101,7 +101,6 @@ struct ConfigSection
#define SECTION_SWEEP_INTERVAL "sweep_interval"
#define SECTION_NETWORK_ID "network_id"
#define SECTION_IMPORT_VL_KEYS "import_vl_keys"
#define SECTION_DATAGRAM_MONITOR "datagram_monitor"
} // namespace ripple

View File

@@ -281,9 +281,6 @@ Config::setupControl(bool bQuiet, bool bSilent, bool bStandalone)
// RAM and CPU resources. We default to "tiny" for standalone mode.
if (!bStandalone)
{
NODE_SIZE = 4;
return;
// First, check against 'minimum' RAM requirements per node size:
auto const& threshold =
sizedItems[std::underlying_type_t<SizedItem>(SizedItem::ramSizeGB)];
@@ -468,24 +465,26 @@ Config::loadFromString(std::string const& fileContents)
SNTP_SERVERS = *s;
// if the user has specified ip:port then replace : with a space.
auto replaceColons = [](std::vector<std::string>& strVec) {
const static std::regex e(":([0-9]+)$");
for (auto& line : strVec)
{
// skip anything that might be an ipv6 address
if (std::count(line.begin(), line.end(), ':') != 1)
continue;
{
auto replaceColons = [](std::vector<std::string>& strVec) {
const static std::regex e(":([0-9]+)$");
for (auto& line : strVec)
{
// skip anything that might be an ipv6 address
if (std::count(line.begin(), line.end(), ':') != 1)
continue;
std::string result = std::regex_replace(line, e, " $1");
// sanity check the result of the replace, should be same length
// as input
if (result.size() == line.size())
line = result;
}
};
std::string result = std::regex_replace(line, e, " $1");
// sanity check the result of the replace, should be same length
// as input
if (result.size() == line.size())
line = result;
}
};
replaceColons(IPS_FIXED);
replaceColons(IPS);
replaceColons(IPS_FIXED);
replaceColons(IPS);
}
{
std::string dbPath;
@@ -510,12 +509,6 @@ Config::loadFromString(std::string const& fileContents)
NETWORK_ID = beast::lexicalCastThrow<uint32_t>(strTemp);
}
if (auto s = getIniFileSection(secConfig, SECTION_DATAGRAM_MONITOR))
{
DATAGRAM_MONITOR = *s;
replaceColons(DATAGRAM_MONITOR);
}
if (getSingleSection(secConfig, SECTION_PEER_PRIVATE, strTemp, j_))
PEER_PRIVATE = beast::lexicalCastThrow<bool>(strTemp);

View File

@@ -710,7 +710,10 @@ Shard::finalize(bool writeSQLite, std::optional<uint256> const& referenceHash)
if (writeSQLite && !storeSQLite(ledger))
return fail("failed storing to SQLite databases");
assert(ledger->info().seq == ledgerSeq && ledger->read(keylet::fees()));
assert(
ledger->info().seq == ledgerSeq &&
(ledger->info().seq < XRP_LEDGER_EARLIEST_FEES ||
ledger->read(keylet::fees())));
hash = ledger->info().parentHash;
next = std::move(ledger);

View File

@@ -74,7 +74,7 @@ namespace detail {
// Feature.cpp. Because it's only used to reserve storage, and determine how
// large to make the FeatureBitset, it MAY be larger. It MUST NOT be less than
// the actual number of amendments. A LogicError on startup will verify this.
static constexpr std::size_t numFeatures = 75;
static constexpr std::size_t numFeatures = 74;
/** Amendments that this server supports and the default voting behavior.
Whether they are enabled depends on the Rules defined in the validated
@@ -362,7 +362,6 @@ extern uint256 const fix240819;
extern uint256 const fixPageCap;
extern uint256 const fix240911;
extern uint256 const fixFloatDivide;
extern uint256 const fixReduceImport;
} // namespace ripple

View File

@@ -184,7 +184,6 @@ enum TEFcodes : TERUnderlyingType {
tefPAST_IMPORT_SEQ,
tefPAST_IMPORT_VL_SEQ,
tefNONDIR_EMIT,
tefIMPORT_BLACKHOLED,
};
//------------------------------------------------------------------------------

View File

@@ -468,7 +468,6 @@ REGISTER_FIX (fix240819, Supported::yes, VoteBehavior::De
REGISTER_FIX (fixPageCap, Supported::yes, VoteBehavior::DefaultYes);
REGISTER_FIX (fix240911, Supported::yes, VoteBehavior::DefaultYes);
REGISTER_FIX (fixFloatDivide, Supported::yes, VoteBehavior::DefaultYes);
REGISTER_FIX (fixReduceImport, Supported::yes, VoteBehavior::DefaultYes);
// The following amendments are obsolete, but must remain supported
// because they could potentially get enabled.

View File

@@ -116,7 +116,6 @@ transResults()
MAKE_ERROR(tefNO_TICKET, "Ticket is not in ledger."),
MAKE_ERROR(tefNFTOKEN_IS_NOT_TRANSFERABLE, "The specified NFToken is not transferable."),
MAKE_ERROR(tefNONDIR_EMIT, "An emitted txn was injected into the ledger without a corresponding directory entry."),
MAKE_ERROR(tefIMPORT_BLACKHOLED, "Cannot import keying because target account is blackholed."),
MAKE_ERROR(telLOCAL_ERROR, "Local failure."),
MAKE_ERROR(telBAD_DOMAIN, "Domain too long."),

View File

@@ -30,7 +30,6 @@
#include <ripple/rpc/Context.h>
#include <ripple/rpc/Role.h>
#include <ripple/rpc/impl/RPCHelpers.h>
#include <ripple/rpc/impl/UDPInfoSub.h>
namespace ripple {
@@ -43,7 +42,7 @@ doSubscribe(RPC::JsonContext& context)
if (!context.infoSub && !context.params.isMember(jss::url))
{
// Must be a JSON-RPC call.
JLOG(context.j.warn()) << "doSubscribe: RPC subscribe requires a url";
JLOG(context.j.info()) << "doSubscribe: RPC subscribe requires a url";
return rpcError(rpcINVALID_PARAMS);
}
@@ -374,13 +373,6 @@ doSubscribe(RPC::JsonContext& context)
}
}
if (ispSub)
{
if (std::shared_ptr<UDPInfoSub> udp =
std::dynamic_pointer_cast<UDPInfoSub>(ispSub))
udp->increment();
}
return jvResult;
}

View File

@@ -25,7 +25,6 @@
#include <ripple/rpc/Context.h>
#include <ripple/rpc/Role.h>
#include <ripple/rpc/impl/RPCHelpers.h>
#include <ripple/rpc/impl/UDPInfoSub.h>
namespace ripple {
@@ -246,12 +245,6 @@ doUnsubscribe(RPC::JsonContext& context)
context.netOps.tryRemoveRpcSub(context.params[jss::url].asString());
}
if (ispSub)
{
if (auto udp = std::dynamic_pointer_cast<UDPInfoSub>(ispSub))
udp->destroy();
}
return jvResult;
}

View File

@@ -361,67 +361,6 @@ ServerHandlerImp::onWSMessage(
}
}
void
ServerHandlerImp::onUDPMessage(
std::string const& message,
boost::asio::ip::tcp::endpoint const& remoteEndpoint,
std::function<void(std::string const&)> sendResponse)
{
Json::Value jv;
if (message.size() > RPC::Tuning::maxRequestSize ||
!Json::Reader{}.parse(message, jv) || !jv.isObject())
{
Json::Value jvResult(Json::objectValue);
jvResult[jss::type] = jss::error;
jvResult[jss::error] = "jsonInvalid";
jvResult[jss::value] = message;
std::string const response = to_string(jvResult);
JLOG(m_journal.trace())
<< "UDP sending error response: '" << jvResult << "'";
sendResponse(response);
return;
}
JLOG(m_journal.trace())
<< "UDP received '" << jv << "' from " << remoteEndpoint;
auto const postResult = m_jobQueue.postCoro(
jtCLIENT_RPC, // Using RPC job type since this is admin RPC
"UDP-RPC",
[this,
remoteEndpoint,
jv = std::move(jv),
sendResponse = std::move(sendResponse)](
std::shared_ptr<JobQueue::Coro> const& coro) {
// Process the request similar to WebSocket but with UDP context
Role const role = Role::ADMIN; // UDP-RPC is admin-only
auto const jr =
this->processUDP(jv, role, coro, sendResponse, remoteEndpoint);
std::string const response = to_string(jr);
JLOG(m_journal.trace())
<< "UDP sending '" << jr << "' to " << remoteEndpoint;
// Send response back via UDP
sendResponse(response);
});
if (postResult == nullptr)
{
// Request rejected, probably shutting down
Json::Value jvResult(Json::objectValue);
jvResult[jss::type] = jss::error;
jvResult[jss::error] = "serverShuttingDown";
jvResult[jss::value] = "Server is shutting down";
std::string const response = to_string(jvResult);
JLOG(m_journal.trace())
<< "UDP sending shutdown response to " << remoteEndpoint;
sendResponse(response);
}
}
void
ServerHandlerImp::onClose(Session& session, boost::system::error_code const&)
{
@@ -458,145 +397,6 @@ logDuration(
<< " microseconds. request = " << request;
}
Json::Value
ServerHandlerImp::processUDP(
Json::Value const& jv,
Role const& role,
std::shared_ptr<JobQueue::Coro> const& coro,
std::optional<std::function<void(std::string const&)>>
sendResponse /* used for subscriptions */,
boost::asio::ip::tcp::endpoint const& remoteEndpoint)
{
std::shared_ptr<InfoSub> is;
// Requests without "command" are invalid.
Json::Value jr(Json::objectValue);
try
{
auto apiVersion =
RPC::getAPIVersionNumber(jv, app_.config().BETA_RPC_API);
if (apiVersion == RPC::apiInvalidVersion ||
(!jv.isMember(jss::command) && !jv.isMember(jss::method)) ||
(jv.isMember(jss::command) && !jv[jss::command].isString()) ||
(jv.isMember(jss::method) && !jv[jss::method].isString()) ||
(jv.isMember(jss::command) && jv.isMember(jss::method) &&
jv[jss::command].asString() != jv[jss::method].asString()))
{
jr[jss::type] = jss::response;
jr[jss::status] = jss::error;
jr[jss::error] = apiVersion == RPC::apiInvalidVersion
? jss::invalid_API_version
: jss::missingCommand;
jr[jss::request] = jv;
if (jv.isMember(jss::id))
jr[jss::id] = jv[jss::id];
if (jv.isMember(jss::jsonrpc))
jr[jss::jsonrpc] = jv[jss::jsonrpc];
if (jv.isMember(jss::ripplerpc))
jr[jss::ripplerpc] = jv[jss::ripplerpc];
if (jv.isMember(jss::api_version))
jr[jss::api_version] = jv[jss::api_version];
return jr;
}
auto required = RPC::roleRequired(
apiVersion,
app_.config().BETA_RPC_API,
jv.isMember(jss::command) ? jv[jss::command].asString()
: jv[jss::method].asString());
if (Role::FORBID == role)
{
jr[jss::result] = rpcError(rpcFORBIDDEN);
}
else
{
Resource::Consumer c;
Resource::Charge loadType = Resource::feeReferenceRPC;
if (sendResponse.has_value())
is = UDPInfoSub::getInfoSub(
m_networkOPs, *sendResponse, remoteEndpoint);
RPC::JsonContext context{
{app_.journal("RPCHandler"),
app_,
loadType,
app_.getOPs(),
app_.getLedgerMaster(),
c,
role,
coro,
is,
apiVersion},
jv};
auto start = std::chrono::system_clock::now();
RPC::doCommand(context, jr[jss::result]);
auto end = std::chrono::system_clock::now();
logDuration(jv, end - start, m_journal);
}
}
catch (std::exception const& ex)
{
jr[jss::result] = RPC::make_error(rpcINTERNAL);
JLOG(m_journal.error())
<< "Exception while processing WS: " << ex.what() << "\n"
<< "Input JSON: " << Json::Compact{Json::Value{jv}};
}
if (is)
{
if (auto udp = std::dynamic_pointer_cast<UDPInfoSub>(is))
udp->destroy();
}
// Currently we will simply unwrap errors returned by the RPC
// API, in the future maybe we can make the responses
// consistent.
//
// Regularize result. This is duplicate code.
if (jr[jss::result].isMember(jss::error))
{
jr = jr[jss::result];
jr[jss::status] = jss::error;
auto rq = jv;
if (rq.isObject())
{
if (rq.isMember(jss::passphrase.c_str()))
rq[jss::passphrase.c_str()] = "<masked>";
if (rq.isMember(jss::secret.c_str()))
rq[jss::secret.c_str()] = "<masked>";
if (rq.isMember(jss::seed.c_str()))
rq[jss::seed.c_str()] = "<masked>";
if (rq.isMember(jss::seed_hex.c_str()))
rq[jss::seed_hex.c_str()] = "<masked>";
}
jr[jss::request] = rq;
}
else
{
if (jr[jss::result].isMember("forwarded") &&
jr[jss::result]["forwarded"])
jr = jr[jss::result];
jr[jss::status] = jss::success;
}
if (jv.isMember(jss::id))
jr[jss::id] = jv[jss::id];
if (jv.isMember(jss::jsonrpc))
jr[jss::jsonrpc] = jv[jss::jsonrpc];
if (jv.isMember(jss::ripplerpc))
jr[jss::ripplerpc] = jv[jss::ripplerpc];
if (jv.isMember(jss::api_version))
jr[jss::api_version] = jv[jss::api_version];
jr[jss::type] = jss::response;
return jr;
}
Json::Value
ServerHandlerImp::processSession(
std::shared_ptr<WSSession> const& session,

View File

@@ -24,7 +24,6 @@
#include <ripple/core/JobQueue.h>
#include <ripple/json/Output.h>
#include <ripple/rpc/RPCHandler.h>
#include <ripple/rpc/impl/UDPInfoSub.h>
#include <ripple/rpc/impl/WSInfoSub.h>
#include <ripple/server/Server.h>
#include <ripple/server/Session.h>
@@ -165,12 +164,6 @@ public:
std::shared_ptr<WSSession> session,
std::vector<boost::asio::const_buffer> const& buffers);
void
onUDPMessage(
std::string const& message,
boost::asio::ip::tcp::endpoint const& remoteEndpoint,
std::function<void(std::string const&)> sendResponse);
void
onClose(Session& session, boost::system::error_code const&);
@@ -184,14 +177,6 @@ private:
std::shared_ptr<JobQueue::Coro> const& coro,
Json::Value const& jv);
Json::Value
processUDP(
Json::Value const& jv,
Role const& role,
std::shared_ptr<JobQueue::Coro> const& coro,
std::optional<std::function<void(std::string const&)>> sendResponse,
boost::asio::ip::tcp::endpoint const& remoteEndpoint);
void
processSession(
std::shared_ptr<Session> const&,

View File

@@ -1,140 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2012, 2013 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_RPC_UDPINFOSUB_H
#define RIPPLE_RPC_UDPINFOSUB_H
#include <ripple/beast/net/IPAddressConversion.h>
#include <ripple/json/json_writer.h>
#include <ripple/json/to_string.h>
#include <ripple/net/InfoSub.h>
#include <ripple/rpc/Role.h>
#include <ripple/server/WSSession.h>
#include <boost/utility/string_view.hpp>
#include <memory>
#include <string>
namespace ripple {
class UDPInfoSub : public InfoSub
{
std::function<void(std::string const&)> send_;
boost::asio::ip::tcp::endpoint endpoint_;
UDPInfoSub(
Source& source,
std::function<void(std::string const&)>& sendResponse,
boost::asio::ip::tcp::endpoint const& remoteEndpoint)
: InfoSub(source), send_(sendResponse), endpoint_(remoteEndpoint)
{
}
struct RefCountedSub
{
std::shared_ptr<UDPInfoSub> sub;
size_t refCount;
RefCountedSub(std::shared_ptr<UDPInfoSub> s)
: sub(std::move(s)), refCount(1)
{
}
};
static inline std::mutex mtx_;
static inline std::map<boost::asio::ip::tcp::endpoint, RefCountedSub> map_;
public:
static std::shared_ptr<UDPInfoSub>
getInfoSub(
Source& source,
std::function<void(std::string const&)>& sendResponse,
boost::asio::ip::tcp::endpoint const& remoteEndpoint)
{
std::lock_guard<std::mutex> lock(mtx_);
auto it = map_.find(remoteEndpoint);
if (it != map_.end())
{
it->second.refCount++;
return it->second.sub;
}
auto sub = std::shared_ptr<UDPInfoSub>(
new UDPInfoSub(source, sendResponse, remoteEndpoint));
map_.emplace(remoteEndpoint, RefCountedSub(sub));
return sub;
}
static bool
increment(boost::asio::ip::tcp::endpoint const& remoteEndpoint)
{
std::lock_guard<std::mutex> lock(mtx_);
auto it = map_.find(remoteEndpoint);
if (it != map_.end())
{
it->second.refCount++;
return true;
}
return false;
}
bool
increment()
{
return increment(endpoint_);
}
static bool
destroy(boost::asio::ip::tcp::endpoint const& remoteEndpoint)
{
std::lock_guard<std::mutex> lock(mtx_);
auto it = map_.find(remoteEndpoint);
if (it != map_.end())
{
if (--it->second.refCount == 0)
{
map_.erase(it);
return true;
}
}
return false;
}
bool
destroy()
{
return destroy(endpoint_);
}
void
send(Json::Value const& jv, bool) override
{
std::string const str = to_string(jv);
send_(str);
}
boost::asio::ip::tcp::endpoint const&
endpoint() const
{
return endpoint_;
}
};
} // namespace ripple
#endif

View File

@@ -86,15 +86,6 @@ struct Port
// Returns a string containing the list of protocols
std::string
protocols() const;
bool
has_udp() const
{
return protocol.count("udp") > 0;
}
// Maximum UDP packet size (default 64KB)
std::size_t udp_packet_size = 65536;
};
std::ostream&

View File

@@ -244,13 +244,6 @@ parse_Port(ParsedPort& port, Section const& section, std::ostream& log)
optResult->begin(), optResult->end()))
port.protocol.insert(s);
}
if (port.protocol.count("udp") > 0 && port.protocol.size() > 1)
{
log << "Port " << section.name()
<< " cannot mix UDP with other protocols";
Throw<std::exception>();
}
}
{

View File

@@ -24,7 +24,6 @@
#include <ripple/beast/core/List.h>
#include <ripple/server/Server.h>
#include <ripple/server/impl/Door.h>
#include <ripple/server/impl/UDPDoor.h>
#include <ripple/server/impl/io_list.h>
#include <boost/asio.hpp>
#include <array>
@@ -163,35 +162,18 @@ ServerImpl<Handler>::ports(std::vector<Port> const& ports)
{
if (closed())
Throw<std::logic_error>("ports() on closed Server");
ports_.reserve(ports.size());
Endpoints eps;
eps.reserve(ports.size());
for (auto const& port : ports)
{
ports_.push_back(port);
if (port.has_udp())
if (auto sp = ios_.emplace<Door<Handler>>(
handler_, io_service_, ports_.back(), j_))
{
// UDP-RPC door
if (auto sp = ios_.emplace<UDPDoor<Handler>>(
handler_, io_service_, ports_.back(), j_))
{
eps.push_back(sp->get_endpoint());
sp->run();
}
}
else
{
// Standard TCP door
if (auto sp = ios_.emplace<Door<Handler>>(
handler_, io_service_, ports_.back(), j_))
{
list_.push_back(sp);
eps.push_back(sp->get_endpoint());
sp->run();
}
list_.push_back(sp);
eps.push_back(sp->get_endpoint());
sp->run();
}
}
return eps;

View File

@@ -1,284 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2012, 2013 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SERVER_UDPDOOR_H_INCLUDED
#define RIPPLE_SERVER_UDPDOOR_H_INCLUDED
#include <ripple/basics/Log.h>
#include <ripple/basics/contract.h>
#include <ripple/server/impl/PlainHTTPPeer.h>
#include <ripple/server/impl/SSLHTTPPeer.h>
#include <ripple/server/impl/io_list.h>
#include <boost/asio/basic_waitable_timer.hpp>
#include <boost/asio/buffer.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/beast/core/detect_ssl.hpp>
#include <boost/beast/core/multi_buffer.hpp>
#include <boost/beast/core/tcp_stream.hpp>
#include <boost/container/flat_map.hpp>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <memory>
#include <mutex>
namespace ripple {
template <class Handler>
class UDPDoor : public io_list::work,
public std::enable_shared_from_this<UDPDoor<Handler>>
{
private:
using error_code = boost::system::error_code;
using endpoint_type = boost::asio::ip::tcp::endpoint;
using udp_socket = boost::asio::ip::udp::socket;
beast::Journal const j_;
Port const& port_;
Handler& handler_;
boost::asio::io_context& ioc_;
boost::asio::strand<boost::asio::io_context::executor_type> strand_;
udp_socket socket_;
std::vector<char> recv_buffer_;
endpoint_type local_endpoint_; // Store TCP-style endpoint
public:
UDPDoor(
Handler& handler,
boost::asio::io_context& io_context,
Port const& port,
beast::Journal j)
: j_(j)
, port_(port)
, handler_(handler)
, ioc_(io_context)
, strand_(io_context.get_executor())
, socket_(io_context)
, recv_buffer_(port.udp_packet_size)
, local_endpoint_(port.ip, port.port) // Store as TCP endpoint
{
error_code ec;
// Create UDP endpoint from port configuration
auto const addr = port_.ip.to_v4();
boost::asio::ip::udp::endpoint udp_endpoint(addr, port_.port);
socket_.open(boost::asio::ip::udp::v4(), ec);
if (ec)
{
JLOG(j_.error()) << "UDP socket open failed: " << ec.message();
return;
}
// Set socket options
socket_.set_option(boost::asio::socket_base::reuse_address(true), ec);
if (ec)
{
JLOG(j_.error())
<< "UDP set reuse_address failed: " << ec.message();
return;
}
socket_.bind(udp_endpoint, ec);
if (ec)
{
JLOG(j_.error()) << "UDP socket bind failed: " << ec.message();
return;
}
JLOG(j_.info()) << "UDP-RPC listening on " << udp_endpoint;
}
endpoint_type
get_endpoint() const
{
return local_endpoint_;
}
void
run()
{
if (!socket_.is_open())
return;
do_receive();
}
void
close() override
{
error_code ec;
socket_.close(ec);
}
private:
void
do_receive()
{
if (!socket_.is_open())
return;
socket_.async_receive_from(
boost::asio::buffer(recv_buffer_),
sender_endpoint_,
boost::asio::bind_executor(
strand_,
std::bind(
&UDPDoor::on_receive,
this->shared_from_this(),
std::placeholders::_1,
std::placeholders::_2)));
}
void
on_receive(error_code ec, std::size_t bytes_transferred)
{
if (ec)
{
if (ec != boost::asio::error::operation_aborted)
{
JLOG(j_.error()) << "UDP receive failed: " << ec.message();
do_receive();
}
return;
}
// Convert UDP endpoint to TCP endpoint for compatibility
endpoint_type tcp_endpoint(
sender_endpoint_.address(), sender_endpoint_.port());
// Handle the received UDP message
handler_.onUDPMessage(
std::string(recv_buffer_.data(), bytes_transferred),
tcp_endpoint,
[this, tcp_endpoint](std::string const& response) {
do_send(response, tcp_endpoint);
});
do_receive();
}
void
do_send(std::string const& response, endpoint_type const& tcp_endpoint)
{
if (!socket_.is_open())
{
std::cout << "UDP SOCKET NOT OPEN WHEN SENDING\n\n";
return;
}
const size_t HEADER_SIZE = 16;
const size_t MAX_DATAGRAM_SIZE =
65487; // Allow for ipv6 header 40 bytes + 8 bytes of udp header
const size_t MAX_PAYLOAD_SIZE = MAX_DATAGRAM_SIZE - HEADER_SIZE;
// Convert TCP endpoint back to UDP for sending
boost::asio::ip::udp::endpoint udp_endpoint(
tcp_endpoint.address(), tcp_endpoint.port());
// If message fits in single datagram, send normally
if (response.length() <= MAX_DATAGRAM_SIZE)
{
socket_.async_send_to(
boost::asio::buffer(response),
udp_endpoint,
boost::asio::bind_executor(
strand_,
[this, self = this->shared_from_this()](
error_code ec, std::size_t bytes_transferred) {
if (ec && ec != boost::asio::error::operation_aborted)
{
JLOG(j_.error())
<< "UDP send failed: " << ec.message();
}
}));
return;
}
// Calculate number of packets needed
const size_t payload_size = MAX_PAYLOAD_SIZE;
const uint16_t total_packets =
(response.length() + payload_size - 1) / payload_size;
// Get current timestamp in microseconds
auto now = std::chrono::system_clock::now();
auto micros = std::chrono::duration_cast<std::chrono::microseconds>(
now.time_since_epoch())
.count();
uint64_t timestamp = static_cast<uint64_t>(micros);
// Send fragmented packets
for (uint16_t packet_num = 0; packet_num < total_packets; packet_num++)
{
std::string fragment;
fragment.reserve(MAX_DATAGRAM_SIZE);
// Add header - 4 bytes of zeros
fragment.push_back(0);
fragment.push_back(0);
fragment.push_back(0);
fragment.push_back(0);
// Add packet number (little endian)
fragment.push_back(packet_num & 0xFF);
fragment.push_back((packet_num >> 8) & 0xFF);
// Add total packets (little endian)
fragment.push_back(total_packets & 0xFF);
fragment.push_back((total_packets >> 8) & 0xFF);
// Add timestamp (8 bytes, little endian)
fragment.push_back(timestamp & 0xFF);
fragment.push_back((timestamp >> 8) & 0xFF);
fragment.push_back((timestamp >> 16) & 0xFF);
fragment.push_back((timestamp >> 24) & 0xFF);
fragment.push_back((timestamp >> 32) & 0xFF);
fragment.push_back((timestamp >> 40) & 0xFF);
fragment.push_back((timestamp >> 48) & 0xFF);
fragment.push_back((timestamp >> 56) & 0xFF);
// Calculate payload slice
size_t start = packet_num * payload_size;
size_t length = std::min(payload_size, response.length() - start);
fragment.append(response.substr(start, length));
socket_.async_send_to(
boost::asio::buffer(fragment),
udp_endpoint,
boost::asio::bind_executor(
strand_,
[this, self = this->shared_from_this()](
error_code ec, std::size_t bytes_transferred) {
if (ec && ec != boost::asio::error::operation_aborted)
{
JLOG(j_.error())
<< "UDP send failed: " << ec.message();
}
}));
}
}
boost::asio::ip::udp::endpoint sender_endpoint_;
};
} // namespace ripple
#endif

View File

@@ -79,7 +79,7 @@ class Import_test : public beast::unit_test::suite
importVLSequence(jtx::Env const& env, PublicKey const& pk)
{
auto const sle = env.le(keylet::import_vlseq(pk));
if (sle && sle->isFieldPresent(sfImportSequence))
if (sle->isFieldPresent(sfImportSequence))
return (*sle)[sfImportSequence];
return 0;
}
@@ -2672,134 +2672,6 @@ class Import_test : public beast::unit_test::suite
env(import::import(alice, tmpXpop), ter(temMALFORMED));
}
// tefIMPORT_BLACKHOLED - SetRegularKey (w/seed) AccountZero
{
test::jtx::Env env{
*this, network::makeNetworkVLConfig(21337, keys)};
auto const feeDrops = env.current()->fees().base;
auto const alice = Account("alice");
env.fund(XRP(1000), alice);
env.close();
// Set Regular Key
Json::Value jv;
jv[jss::Account] = alice.human();
const AccountID ACCOUNT_ZERO(0);
jv["RegularKey"] = to_string(ACCOUNT_ZERO);
jv[jss::TransactionType] = jss::SetRegularKey;
env(jv, alice);
// Disable Master Key
env(fset(alice, asfDisableMaster), sig(alice));
env.close();
// Import with Master Key
Json::Value tmpXpop =
import::loadXpop(ImportTCSetRegularKey::w_seed);
env(import::import(alice, tmpXpop),
ter(tefIMPORT_BLACKHOLED),
fee(feeDrops * 10),
sig(alice));
env.close();
}
// tefIMPORT_BLACKHOLED - SetRegularKey (w/seed) AccountOne
{
test::jtx::Env env{
*this, network::makeNetworkVLConfig(21337, keys)};
auto const feeDrops = env.current()->fees().base;
auto const alice = Account("alice");
env.fund(XRP(1000), alice);
env.close();
// Set Regular Key
Json::Value jv;
jv[jss::Account] = alice.human();
const AccountID ACCOUNT_ONE(1);
jv["RegularKey"] = to_string(ACCOUNT_ONE);
jv[jss::TransactionType] = jss::SetRegularKey;
env(jv, alice);
// Disable Master Key
env(fset(alice, asfDisableMaster), sig(alice));
env.close();
// Import with Master Key
Json::Value tmpXpop =
import::loadXpop(ImportTCSetRegularKey::w_seed);
env(import::import(alice, tmpXpop),
ter(tefIMPORT_BLACKHOLED),
fee(feeDrops * 10),
sig(alice));
env.close();
}
// tefIMPORT_BLACKHOLED - SetRegularKey (w/seed) AccountTwo
{
test::jtx::Env env{
*this, network::makeNetworkVLConfig(21337, keys)};
auto const feeDrops = env.current()->fees().base;
auto const alice = Account("alice");
env.fund(XRP(1000), alice);
env.close();
// Set Regular Key
Json::Value jv;
jv[jss::Account] = alice.human();
const AccountID ACCOUNT_TWO(2);
jv["RegularKey"] = to_string(ACCOUNT_TWO);
jv[jss::TransactionType] = jss::SetRegularKey;
env(jv, alice);
// Disable Master Key
env(fset(alice, asfDisableMaster), sig(alice));
env.close();
// Import with Master Key
Json::Value tmpXpop =
import::loadXpop(ImportTCSetRegularKey::w_seed);
env(import::import(alice, tmpXpop),
ter(tefIMPORT_BLACKHOLED),
fee(feeDrops * 10),
sig(alice));
env.close();
}
// tefIMPORT_BLACKHOLED - SignersListSet (w/seed)
{
test::jtx::Env env{
*this, network::makeNetworkVLConfig(21337, keys)};
auto const feeDrops = env.current()->fees().base;
auto const alice = Account("alice");
env.fund(XRP(1000), alice);
env.close();
// Set Regular Key
Json::Value jv;
jv[jss::Account] = alice.human();
const AccountID ACCOUNT_ZERO(0);
jv["RegularKey"] = to_string(ACCOUNT_ZERO);
jv[jss::TransactionType] = jss::SetRegularKey;
env(jv, alice);
// Disable Master Key
env(fset(alice, asfDisableMaster), sig(alice));
env.close();
// Import with Master Key
Json::Value tmpXpop =
import::loadXpop(ImportTCSignersListSet::w_seed);
env(import::import(alice, tmpXpop),
ter(tefIMPORT_BLACKHOLED),
fee(feeDrops * 10),
sig(alice));
env.close();
}
// tefPAST_IMPORT_SEQ
{
test::jtx::Env env{
@@ -4708,22 +4580,14 @@ class Import_test : public beast::unit_test::suite
// confirm signers set
auto const [signers, signersSle] =
signersKeyAndSle(*env.current(), alice);
auto const signerEntries =
signersSle->getFieldArray(sfSignerEntries);
BEAST_EXPECT(signerEntries.size() == 2);
BEAST_EXPECT(signerEntries[0u].getFieldU16(sfSignerWeight) == 1);
BEAST_EXPECT(
signersSle && signersSle->isFieldPresent(sfSignerEntries));
if (signersSle && signersSle->isFieldPresent(sfSignerEntries))
{
auto const signerEntries =
signersSle->getFieldArray(sfSignerEntries);
BEAST_EXPECT(signerEntries.size() == 2);
BEAST_EXPECT(
signerEntries[0u].getFieldU16(sfSignerWeight) == 1);
BEAST_EXPECT(
signerEntries[0u].getAccountID(sfAccount) == carol.id());
BEAST_EXPECT(
signerEntries[1u].getFieldU16(sfSignerWeight) == 1);
BEAST_EXPECT(
signerEntries[1u].getAccountID(sfAccount) == bob.id());
}
signerEntries[0u].getAccountID(sfAccount) == carol.id());
BEAST_EXPECT(signerEntries[1u].getFieldU16(sfSignerWeight) == 1);
BEAST_EXPECT(signerEntries[1u].getAccountID(sfAccount) == bob.id());
// confirm multisign tx
env.close();
@@ -6122,69 +5986,6 @@ class Import_test : public beast::unit_test::suite
}
}
void
testBlackhole(FeatureBitset features)
{
testcase("blackhole");
using namespace test::jtx;
using namespace std::literals;
auto blackholeAccount = [&](Env& env, Account const& acct) {
// Set Regular Key
Json::Value jv;
jv[jss::Account] = acct.human();
const AccountID ACCOUNT_ZERO(0);
jv["RegularKey"] = to_string(ACCOUNT_ZERO);
jv[jss::TransactionType] = jss::SetRegularKey;
env(jv, acct);
// Disable Master Key
env(fset(acct, asfDisableMaster), sig(acct));
env.close();
};
auto burnHeader = [&](Env& env) {
// confirm total coins header
auto const initCoins = env.current()->info().drops;
BEAST_EXPECT(initCoins == 100'000'000'000'000'000);
// burn 10'000 xrp
auto const master = Account("masterpassphrase");
env(noop(master), fee(100'000'000'000'000), ter(tesSUCCESS));
env.close();
// confirm total coins header
auto const burnCoins = env.current()->info().drops;
BEAST_EXPECT(burnCoins == initCoins - 100'000'000'000'000);
};
// AccountSet (w/seed)
{
test::jtx::Env env{
*this, network::makeNetworkVLConfig(21337, keys)};
auto const feeDrops = env.current()->fees().base;
// Burn Header
burnHeader(env);
auto const alice = Account("alice");
env.fund(XRP(1000), alice);
env.close();
// Blackhole Account
blackholeAccount(env, alice);
// Import with Master Key
Json::Value tmpXpop = import::loadXpop(ImportTCAccountSet::w_seed);
env(import::import(alice, tmpXpop),
ter(tesSUCCESS),
fee(feeDrops * 10),
sig(alice));
env.close();
}
}
public:
void
run() override
@@ -6225,7 +6026,6 @@ public:
testMaxSupply(features);
testMinMax(features);
testHalving(features - featureOwnerPaysFee);
testBlackhole(features);
}
};


@@ -19,7 +19,6 @@
#include <ripple/app/misc/HashRouter.h>
#include <ripple/app/tx/apply.h>
#include <ripple/app/tx/impl/XahauGenesis.h>
#include <ripple/core/Config.h>
#include <ripple/json/json_reader.h>
#include <ripple/protocol/Feature.h>
#include <ripple/protocol/Indexes.h>
@@ -28,7 +27,6 @@
#include <ripple/protocol/jss.h>
#include <string>
#include <test/jtx.h>
#include <test/jtx/envconfig.h>
#include <vector>
#define BEAST_REQUIRE(x) \
@@ -61,18 +59,7 @@ maybe_to_string(T val, std::enable_if_t<!std::is_integral_v<T>, int> = 0)
using namespace XahauGenesis;
namespace ripple {
inline std::unique_ptr<Config>
makeNetworkConfig(uint32_t networkID)
{
using namespace test::jtx;
return envconfig([&](std::unique_ptr<Config> cfg) {
cfg->NETWORK_ID = networkID;
return cfg;
});
}
namespace test {
/*
Accounts used in this test suite:
alice: AE123A8556F3CF91154711376AFB0F894F832B3D,
@@ -138,8 +125,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
bool burnedViaTest =
false, // means the calling test already burned some of the genesis
bool skipTests = false,
bool const testFlag = false,
bool const badNetID = false)
bool const testFlag = false)
{
using namespace jtx;
@@ -197,20 +183,6 @@ struct XahauGenesis_test : public beast::unit_test::suite
if (skipTests)
return;
if (badNetID)
{
BEAST_EXPECT(
100000000000000000ULL ==
env.app().getLedgerMaster().getClosedLedger()->info().drops);
auto genesisAccRoot = env.le(keylet::account(genesisAccID));
BEAST_REQUIRE(!!genesisAccRoot);
BEAST_EXPECT(
genesisAccRoot->getFieldAmount(sfBalance) ==
XRPAmount(100000000000000000ULL));
return;
}
// sum the initial distribution balances, these should equal total coins
// in the closed ledger
std::vector<std::pair<std::string, XRPAmount>> const& l1membership =
@@ -470,59 +442,17 @@ struct XahauGenesis_test : public beast::unit_test::suite
{
testcase("Test activation");
using namespace jtx;
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
activate(__LINE__, env, false, false, false);
}
void
testBadNetworkIDActivation(FeatureBitset features)
{
testcase("Test Bad Network ID activation");
using namespace jtx;
std::vector<int> badNetIDs{
0,
1,
2,
10,
100,
1000,
10000,
20000,
21000,
21328,
21329,
21340,
21341,
65535};
for (int netid : badNetIDs)
{
Env env{
*this,
makeNetworkConfig(netid),
features - featureXahauGenesis};
activate(__LINE__, env, false, false, false, true);
}
for (int netid = 21330; netid <= 21339; ++netid)
{
Env env{
*this,
makeNetworkConfig(netid),
features - featureXahauGenesis};
activate(__LINE__, env, false, false, false, false);
}
}
void
testWithSignerList(FeatureBitset features)
{
using namespace jtx;
testcase("Test signerlist");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
Account const alice{"alice", KeyType::ed25519};
env.fund(XRP(1000), alice);
@@ -538,8 +468,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
{
using namespace jtx;
testcase("Test regkey");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
env.memoize(env.master);
Account const alice("alice");
@@ -738,11 +667,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
{
using namespace jtx;
testcase("Test governance membership voting L1");
Env env{
*this,
makeNetworkConfig(21337),
features - featureXahauGenesis,
nullptr};
Env env{*this, envconfig(), features - featureXahauGenesis, nullptr};
auto const alice = Account("alice");
auto const bob = Account("bob");
@@ -2186,8 +2111,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace jtx;
testcase("Test governance membership voting L2");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
auto const alice = Account("alice");
auto const bob = Account("bob");
@@ -3784,7 +3708,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test last close time");
Env env{*this, makeNetworkConfig(21337), features};
Env env{*this, envconfig(), features};
validateTime(lastClose(env), 0);
// last close = 0
@@ -3814,8 +3738,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace jtx;
testcase("test claim reward rate is == 0");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
STAmount const feesXRP = XRP(1);
@@ -3860,8 +3783,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace jtx;
testcase("test claim reward rate is > 1");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
STAmount const feesXRP = XRP(1);
@@ -3906,8 +3828,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace jtx;
testcase("test claim reward delay is == 0");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
STAmount const feesXRP = XRP(1);
@@ -3952,8 +3873,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace jtx;
testcase("test claim reward delay is < 0");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
STAmount const feesXRP = XRP(1);
@@ -3998,8 +3918,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace jtx;
testcase("test claim reward before time");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
STAmount const feesXRP = XRP(1);
@@ -4049,8 +3968,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test claim reward valid without unl report");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
bool const has240819 = env.current()->rules().enabled(fix240819);
double const rateDrops = 0.00333333333 * 1'000'000;
@@ -4197,8 +4115,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test claim reward valid with unl report");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -4333,7 +4250,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
{
FeatureBitset _features = features - featureXahauGenesis;
auto const amend = withXahauV1 ? _features : _features - fixXahauV1;
Env env{*this, makeNetworkConfig(21337), amend};
Env env{*this, envconfig(), amend};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -4470,8 +4387,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test claim reward optin optout");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
bool const has240819 = env.current()->rules().enabled(fix240819);
double const rateDrops = 0.00333333333 * 1'000'000;
@@ -4583,8 +4499,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test claim reward bal == 1");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -4672,8 +4587,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test claim reward elapsed_since_last == 1");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -4754,8 +4668,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test claim reward elapsed_since_last == 0");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
STAmount const feesXRP = XRP(1);
@@ -5016,8 +4929,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test compound interest over 12 claims");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -5115,8 +5027,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test deposit");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -5206,8 +5117,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test deposit withdraw");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -5299,8 +5209,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test deposit late");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -5390,8 +5299,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test deposit late withdraw");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -5484,8 +5392,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test no claim");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -5573,8 +5480,7 @@ struct XahauGenesis_test : public beast::unit_test::suite
using namespace std::chrono_literals;
testcase("test no claim late");
Env env{
*this, makeNetworkConfig(21337), features - featureXahauGenesis};
Env env{*this, envconfig(), features - featureXahauGenesis};
double const rateDrops = 0.00333333333 * 1'000'000;
STAmount const feesXRP = XRP(1);
@@ -5688,7 +5594,6 @@ struct XahauGenesis_test : public beast::unit_test::suite
testGovernHookWithFeats(FeatureBitset features)
{
testPlainActivation(features);
testBadNetworkIDActivation(features);
testWithSignerList(features);
testWithRegularKey(features);
testGovernanceL1(features);


@@ -144,14 +144,6 @@ public:
{
}
void
onUDPMessage(
std::string const& message,
boost::asio::ip::tcp::endpoint const& remoteEndpoint,
std::function<void(std::string const&)> sendResponse)
{
}
void
onClose(Session& session, boost::system::error_code const&)
{
@@ -357,14 +349,6 @@ public:
{
}
void
onUDPMessage(
std::string const& message,
boost::asio::ip::tcp::endpoint const& remoteEndpoint,
std::function<void(std::string const&)> sendResponse)
{
}
void
onClose(Session& session, boost::system::error_code const&)
{