Multiple transactions per account in TxQ (RIPD-1048):

* Tweak account XRP balance and sequence if needed before preclaim.
* Limit total fees in flight to minimum reserve / account balance.
* LastLedgerSequence must be at least 2 more than the current ledger to be queued.
* Limit each account to 10 transactions in the queue at a time.
* Limit queuing multiple transactions after transactions that affect authentication.
* Zero base fee transactions are treated as having a fixed fee level of 256000 instead of infinite.
* Full queue: a new transaction can only kick out a queued transaction if its fee is higher than that account's average fee.
* Queued tx retry limit prevents indefinitely stuck txns.
* Return escalation factors in server_info and server_state when escalated.
* Update documentation.
* Update experimental config to only include the % increase.
* Convert TxQ metric magic numbers to experimental config.
Edward Hennis
2015-11-02 19:18:16 -05:00
committed by Vinnie Falco
parent 7f97b7bc05
commit 2e2a7509cd
28 changed files with 2473 additions and 592 deletions


@@ -418,16 +418,77 @@
#
# The queue will be limited to this <number> of average ledgers'
# worth of transactions. If the queue fills up, the transactions
-# with the lowest fees will be dropped from the queue any time a
-# transaction with a higher fee level is added. Default: 20.
# with the lowest fee levels will be dropped from the queue any
# time a transaction with a higher fee level is added.
# Default: 20.
#
# retry_sequence_percent = <number>
#
-# If a client resubmits a transaction, the new transaction's fee
-# must be more than <number> percent higher than the original
-# transaction's fee, or meet the current open ledger fee to be
-# considered. Default: 125.
# If a client replaces a transaction in the queue (same sequence
# number as a transaction already in the queue), the new
# transaction's fee must be more than <number> percent higher
# than the original transaction's fee, or meet the current open
# ledger fee to be considered. Default: 25.
#
# multi_txn_percent = <number>
#
# If a client submits multiple transactions (different sequence
# numbers), later transactions must pay a fee at least <number>
# percent higher than the transaction with the previous sequence
# number.
# Default: -90.
#
# minimum_escalation_multiplier = <number>
#
# At ledger close time, the median fee level of the transactions
# in that ledger is used as a multiplier in escalation
# calculations of the next ledger. This minimum value ensures that
# the escalation is significant. Default: 500.
#
# minimum_txn_in_ledger = <number>
#
# Minimum number of transactions that must be allowed into the
# ledger at the minimum required fee before the required fee
# escalates. Default: 5.
#
# minimum_txn_in_ledger_standalone = <number>
#
# Like minimum_txn_in_ledger when rippled is running in standalone
# mode. Default: 1000.
#
# target_txn_in_ledger = <number>
#
# Number of transactions allowed into the ledger at the minimum
# required fee that the queue will "work toward" as long as
# consensus stays healthy. The limit will grow quickly until it
# reaches or exceeds this number. After that the limit may still
# change, but will stay above the target. If consensus is not
# healthy, the limit will be clamped to this value or lower.
# Default: 50.
#
# maximum_txn_in_ledger = <number>
#
# (Optional) Maximum number of transactions that will be allowed
# into the ledger at the minimum required fee before the required
# fee escalates. Default: no maximum.
#
# maximum_txn_per_account = <number>
#
# Maximum number of transactions that one account can have in the
# queue at any given time. Default: 10.
#
# minimum_last_ledger_buffer = <number>
#
# If a transaction has a LastLedgerSequence, it must be at least
# this much larger than the current open ledger sequence number.
# Default: 2.
#
# zero_basefee_transaction_feelevel = <number>
#
# So we don't deal with infinite fee levels, treat any transaction
# with a 0 base fee (ie. SetRegularKey password recovery) as
# having this fee level.
# Default: 256000.
#
#
#-------------------------------------------------------------------------------
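To make the `retry_sequence_percent` and `multi_txn_percent` defaults concrete, here is a small standalone C++ sketch of the arithmetic they imply. The names and structure are illustrative only and are not taken from rippled's configuration or TxQ code.

```
// Illustrative arithmetic only, not rippled code. Shows the thresholds
// implied by retry_sequence_percent = 25 and multi_txn_percent = -90 for a
// transaction already queued at the reference fee level of 256.
#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t const queuedFeeLevel = 256;      // fee level already in the queue
    std::int64_t const retrySequencePercent = 25;  // replacement threshold
    std::int64_t const multiTxnPercent = -90;      // follow-on threshold

    // Replacing the queued transaction (same sequence number) requires more
    // than 25% above its fee level: 256 * (100 + 25) / 100 = 320.
    auto const replacementThreshold =
        queuedFeeLevel * (100 + retrySequencePercent) / 100;

    // A follow-on transaction (next sequence number) only needs about 10% of
    // the prior fee level: 256 * (100 - 90) / 100 = 25 with integer math.
    auto const followOnThreshold =
        queuedFeeLevel * (100 + multiTxnPercent) / 100;

    std::cout << "replacement needs a level above " << replacementThreshold
              << ", a follow-on needs at least " << followOnThreshold << '\n';
}
```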


@@ -57,9 +57,18 @@ in the fee escalation formula for the next open ledger.
which is 15,000 drops for a
[reference transaction](#reference-transaction).
* This will cause the first 21 transactions to require only 10
-drops, but the 22st transaction will require
drops, but the 22nd transaction will require
a level of about 110,000,000 or about 4.3 million drops (4.3XRP).
* This example assumes a cold-start scenario, with a single, possibly
malicious, user willing to pay arbitrary amounts to get transactions
into the open ledger. It ignores the effects of the [Transaction
Queue](#transaction-queue). Any lower fee level transactions submitted
by other users at the same time as this user's transactions will go into
the transaction queue, and will have the first opportunity to be applied
to the _next_ open ledger. The next section describes how that works in
more detail.
## Transaction Queue
An integral part of making fee escalation work for users of the network
@@ -91,19 +100,36 @@ is opened up for normal transaction processing.
4. A transaction in the queue can stay there indefinitely in principle,
but in practice, either
* it will eventually get applied to the ledger,
-* it will attempt to apply to the ledger and fail,
* it will attempt to apply to the ledger and retry [10
times](#other-constants),
* its last ledger sequence number will expire,
* the user will replace it by submitting another transaction with the same
-sequence number and a higher fee, or
sequence number and at least a 25% higher fee, or
* it will get dropped when the queue fills up with more valuable transactions.
The size limit is computed dynamically, and can hold transactions for
the next [20 ledgers](#other-constants).
The lower the transaction's fee, the more likely that it will get dropped if the
network is busy.
-Currently, there is an additional restriction that the queue can only hold one
-transaction per account at a time. Future development will make the queue
-aware of transaction dependencies so that more than one can be
-queued up at a time per account.
If a transaction is submitted for an account with one or more transactions
already in the queue, and a sequence number that is sequential with the other
transactions in the queue for that account, it will be considered
for the queue if it meets these additional criteria:
* the account has fewer than [10](#other-constants) transactions
already in the queue.
* it pays a [fee level](#fee-level) that is greater than 10% of the
fee level for the transaction with the previous sequence number,
* all other queued transactions for that account, in the case where
they spend the maximum possible XRP, leave enough XRP balance to pay
the fee,
* the total fees for the other queued transactions are less than both
the network's minimum reserve and the account's XRP balance, and
* none of the prior queued transactions affect the ability of subsequent
transactions to claim a fee.
Currently, there is an additional restriction that the queue can not work with
transactions using the `sfPreviousTxnID` or `sfAccountTxnID` fields.
`sfPreviousTxnID` is deprecated and shouldn't be used anyway. Future
development will make the queue aware of `sfAccountTxnID` mechanisms.
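The per-account rules in this section boil down to a handful of checks. The following is a simplified, self-contained C++ sketch of those checks; the names, types, and helper structure are illustrative and do not come from rippled's TxQ implementation.

```
// Simplified sketch of the per-account admission rules described above.
// Illustrative only; not rippled's TxQ internals.
#include <cstddef>
#include <cstdint>
#include <vector>

struct QueuedTx
{
    std::uint64_t feeLevel;      // fee level committed to the queue
    std::int64_t feeDrops;       // XRP fee, in drops
    std::int64_t maxSpendDrops;  // worst-case XRP spend, excluding the fee
    bool blocker;                // e.g. SetRegularKey or SetSignerList
};

// queued must be ordered by sequence number; the new transaction carries the
// next sequence number for this account.
bool canQueueAnother(std::vector<QueuedTx> const& queued,
    std::uint64_t newFeeLevel, std::int64_t accountBalanceDrops,
    std::int64_t reserveDrops, std::size_t maxPerAccount = 10)
{
    if (queued.size() >= maxPerAccount)   // at most 10 per account
        return false;

    std::int64_t totalFees = 0;
    std::int64_t totalSpend = 0;
    for (auto const& tx : queued)
    {
        if (tx.blocker)                   // nothing may queue behind a blocker
            return false;
        totalFees += tx.feeDrops;
        totalSpend += tx.feeDrops + tx.maxSpendDrops;
    }

    // Total queued fees must stay below both the minimum reserve and the
    // account's XRP balance, and the prior transactions, even spending their
    // maximum, must leave something behind to pay this transaction's fee.
    if (totalFees >= reserveDrops || totalFees >= accountBalanceDrops)
        return false;
    if (accountBalanceDrops - totalSpend <= 0)
        return false;

    // The multi_txn_percent = -90 rule: pay more than 10% of the fee level
    // of the transaction with the previous sequence number.
    return queued.empty() || newFeeLevel > queued.back().feeLevel / 10;
}
```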
## Technical Details
@@ -115,15 +141,21 @@ transaction](#reference-transaction), the base fee
level is 256. If a transaction is submitted with a higher `Fee` field,
the fee level is scaled appropriately.
-Examples, assuming a base fee of 10 drops:
Examples, assuming a [reference transaction](#reference-transaction)
base fee of 10 drops:
1. A single-signed [reference transaction](#reference-transaction)
-with `Fee=20` will have a fee level of 512.
-2. A multi-signed transaction with 3 signatures (base fee = 40 drops)
-and `Fee=60` will have a fee level of 384.
with `Fee=20` will have a fee level of
`20 drop fee * 256 fee level / 10 drop base fee = 512 fee level`.
2. A multi-signed [reference transaction](#reference-transaction) with
3 signatures (base fee = 40 drops) and `Fee=60` will have a fee level of
`60 drop fee * 256 fee level / ((1tx + 3sigs) * 10 drop base fee) = 384
fee level`.
3. A hypothetical future non-reference transaction with a base
fee of 15 drops multi-signed with 5 signatures and `Fee=90` will
-have a fee level of 256.
have a fee level of
`90 drop fee * 256 fee level / ((1tx + 5sigs) * 15 drop base fee) = 256
fee level`.
This demonstrates that a simpler transaction paying less XRP can be more
likely to get into the open ledger, or be sorted earlier in the queue
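The three examples reduce to a single formula: fee level = drops paid * 256 / base fee in drops, where a multi-signed transaction's base fee is multiplied by one plus the number of signatures. A standalone sketch that reproduces the numbers above (the function name is illustrative, not from the rippled sources); each assert mirrors one of the numbered examples.

```
// Reproduces the three fee level examples above. Standalone illustration.
#include <cassert>
#include <cstdint>

std::uint64_t feeLevel(std::uint64_t feeDrops, std::uint64_t baseFeeDrops,
    std::uint64_t signatures = 0)
{
    std::uint64_t const baseLevel = 256;
    // A multi-signed transaction's base fee is (1 tx + N signatures) times
    // the per-transaction base fee.
    auto const effectiveBaseFee = (1 + signatures) * baseFeeDrops;
    return feeDrops * baseLevel / effectiveBaseFee;
}

int main()
{
    assert(feeLevel(20, 10) == 512);     // single-signed, Fee=20
    assert(feeLevel(60, 10, 3) == 384);  // 3 signatures, Fee=60
    assert(feeLevel(90, 15, 5) == 256);  // 15-drop base fee, 5 signatures
}
```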
@@ -168,7 +200,28 @@ to process successfully. The limit of 20 ledgers was used to provide
a balance between resource (specifically memory) usage, and giving
transactions a realistic chance to be processed. This exact value was
chosen experimentally, and can easily change in the future.
* *Maximum retries*. A transaction in the queue can attempt to apply
to the open ledger, but get a retry (`ter`) code up to 10 times, at
which point, it will be removed from the queue and dropped. The
value was chosen to be large enough to allow temporary failures to clear
up, but small enough that the queue doesn't fill up with stale
transactions which prevent lower fee level, but more likely to succeed,
transactions from queuing.
* *Maximum transactions per account*. A single account can have up to 10
transactions in the queue at any given time. This is primarily to
mitigate the lost cost of broadcasting multiple transactions if one of
the earlier ones fails or is otherwise removed from the queue without
being applied to the open ledger. The value was chosen arbitrarily, and
can easily change in the future.
* *Minimum last ledger sequence buffer*. If a transaction has a
`LastLedgerSequence` value, and can not be processed into the open
ledger, that `LastLedgerSequence` must be at least 2 more than the
sequence number of the open ledger to be considered for the queue. The
value was chosen to provide a balance between letting the user control
the lifespan of the transaction, and giving a queued transaction a
chance to get processed out of the queue before getting discarded,
particularly since it may have dependent transactions also in the queue,
which will never succeed if this one is discarded.
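Two of these constants translate directly into simple checks. Below is a hedged, standalone sketch; the constant names follow the prose above rather than the TxQ source.

```
// Illustrative checks for the LastLedgerSequence buffer and the retry budget
// described above. Not the TxQ implementation.
#include <cstdint>
#include <optional>

constexpr std::uint32_t minimumLastLedgerBuffer = 2;
constexpr int retriesAllowed = 10;

// A transaction that cannot be applied immediately may only be queued if any
// LastLedgerSequence it carries leaves the queue room to act on it.
bool lastLedgerAllowsQueuing(std::optional<std::uint32_t> lastLedgerSeq,
    std::uint32_t openLedgerSeq)
{
    return !lastLedgerSeq ||
        *lastLedgerSeq >= openLedgerSeq + minimumLastLedgerBuffer;
}

// Each queued transaction gets a bounded number of retryable (ter) failures
// before it is dropped.
bool consumeRetry(int& retriesRemaining)
{
    return retriesRemaining-- > 0;
}

int main()
{
    int retries = retriesAllowed;
    while (consumeRetry(retries))
        ;  // attempt to apply; give up after 10 retryable failures
    return lastLedgerAllowsQueuing(std::nullopt, 100) ? 0 : 1;
}
```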
### `fee` command
**The `fee` RPC and WebSocket command is still experimental, and may
@@ -209,14 +262,52 @@ Result format:
}
```
### Enabling Fee Escalation
### [`server_info`](https://ripple.com/build/rippled-apis/#server-info) command
These features are disabled by default and need to be activated by a
feature in your rippled.cfg. Add a `[features]` section if one is not
already present, and add `FeeEscalation` (case-sensitive) to that
list, then restart rippled.
**The fields listed here are still experimental, and may change
without warning.**
Up to two fields in `server_info` output are related to fee escalation.
1. `load_factor_fee_escalation`: The factor on base transaction cost
that a transaction must pay to get into the open ledger. This value can
change quickly as transactions are processed from the network and
ledgers are closed. If not escalated, the value is 1, so will not be
returned.
2. `load_factor_fee_queue`: If the queue is full, this is the factor on
base transaction cost that a transaction must pay to get into the queue.
If not full, the value is 1, so will not be returned.
In all cases, the transaction fee must be high enough to overcome both
`load_factor_fee_queue` and `load_factor` to be considered. It does not
need to overcome `load_factor_fee_escalation`, though if it does not, it
is more likely to be queued than immediately processed into the open
ledger.
### [`server_state`](https://ripple.com/build/rippled-apis/#server-state) command
**The fields listed here are still experimental, and may change
without warning.**
Three fields in `server_state` output are related to fee escalation.
1. `load_factor_fee_escalation`: The factor on base transaction cost
that a transaction must pay to get into the open ledger. This value can
change quickly as transactions are processed from the network and
ledgers are closed. The ratio between this value and
`load_factor_fee_reference` determines the multiplier for transaction
fees to get into the current open ledger.
2. `load_factor_fee_queue`: This is the factor on base transaction cost
that a transaction must pay to get into the queue. The ratio between
this value and `load_factor_fee_reference` determines the multiplier for
transaction fees to get into the transaction queue to be considered for
a later ledger.
3. `load_factor_fee_reference`: Like `load_base`, this is the baseline
that is used to scale fee escalation computations.
In all cases, the transaction fee must be high enough to overcome both
`load_factor_fee_queue` and `load_factor` to be considered. It does not
need to overcome `load_factor_fee_escalation`, though if it does not, it
is more likely to be queued than immediately processed into the open
ledger.
```
[features]
FeeEscalation
```
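For a client, the ratios described in the `server_info` and `server_state` sections above turn into a drops estimate in one step. A minimal sketch, assuming the base fee in drops (10 for a reference transaction) is known from elsewhere; the function name and sample values are illustrative. With the level of about 110,000,000 from the earlier example, this comes out to roughly 4.3 million drops, matching the 4.3 XRP figure quoted above.

```
// Client-side estimate of the fee, in drops, implied by the server_state
// fee escalation fields. Illustrative only.
#include <cstdint>

std::uint64_t scaledFeeDrops(std::uint64_t baseFeeDrops,
    std::uint64_t loadFactorFee, std::uint64_t loadFactorFeeReference)
{
    // Round up so the estimate never falls below the required fee.
    return (baseFeeDrops * loadFactorFee + loadFactorFeeReference - 1) /
        loadFactorFeeReference;
}

int main()
{
    // Open-ledger fee: base fee scaled by
    // load_factor_fee_escalation / load_factor_fee_reference.
    auto const openLedgerFee = scaledFeeDrops(10, 110000000, 256);
    // Queue entry fee: base fee scaled by
    // load_factor_fee_queue / load_factor_fee_reference.
    auto const queueFee = scaledFeeDrops(10, 256, 256);
    return openLedgerFee > queueFee ? 0 : 1;
}
```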


@@ -2053,10 +2053,31 @@ Json::Value NetworkOPsImp::getServerInfo (bool human, bool admin)
if (admin)
info[jss::load] = m_job_queue.getJson ();
auto const escalationMetrics = app_.getTxQ().getMetrics(
app_, *app_.openLedger().current());
if (!human)
{
info[jss::load_base] = app_.getFeeTrack ().getLoadBase ();
info[jss::load_factor] = app_.getFeeTrack ().getLoadFactor ();
if (escalationMetrics)
{
/* Json::Value doesn't support uint64, so clamp to max
uint32 value. This is mostly theoretical, since there
probably isn't enough extant XRP to drive the factor
that high.
*/
constexpr std::uint64_t max =
std::numeric_limits<std::uint32_t>::max();
info[jss::load_factor_fee_escalation] =
static_cast<std::uint32_t> (std::min(
max, escalationMetrics->expFeeLevel));
info[jss::load_factor_fee_queue] =
static_cast<std::uint32_t> (std::min(
max, escalationMetrics->minFeeLevel));
info[jss::load_factor_fee_reference] =
static_cast<std::uint32_t> (std::min(
max, escalationMetrics->referenceFeeLevel));
}
}
else
{
@@ -2079,6 +2100,19 @@ Json::Value NetworkOPsImp::getServerInfo (bool human, bool admin)
info[jss::load_factor_cluster] =
static_cast<double> (fee) / base;
}
if (escalationMetrics)
{
if (escalationMetrics->expFeeLevel !=
escalationMetrics->referenceFeeLevel)
info[jss::load_factor_fee_escalation] =
static_cast<double> (escalationMetrics->expFeeLevel) /
escalationMetrics->referenceFeeLevel;
if (escalationMetrics->minFeeLevel !=
escalationMetrics->referenceFeeLevel)
info[jss::load_factor_fee_queue] =
static_cast<double> (escalationMetrics->minFeeLevel) /
escalationMetrics->referenceFeeLevel;
}
}
bool valid = false;


@@ -33,105 +33,18 @@ namespace ripple {
class Application;
namespace detail {
class FeeMetrics
{
private:
// Fee escalation
// Limit of the txnsExpected value after a
// time leap.
std::size_t const targetTxnCount_;
// Minimum value of txnsExpected.
std::size_t minimumTxnCount_;
// Number of transactions expected per ledger.
// One more than this value will be accepted
// before escalation kicks in.
std::size_t txnsExpected_;
// Minimum value of escalationMultiplier.
std::uint32_t const minimumMultiplier_;
// Based on the median fee of the LCL. Used
// when fee escalation kicks in.
std::uint32_t escalationMultiplier_;
beast::Journal j_;
std::mutex mutable lock_;
public:
static const std::uint64_t baseLevel = 256;
public:
FeeMetrics(bool standAlone, beast::Journal j)
: targetTxnCount_(50)
, minimumTxnCount_(standAlone ? 1000 : 5)
, txnsExpected_(minimumTxnCount_)
, minimumMultiplier_(500)
, escalationMultiplier_(minimumMultiplier_)
, j_(j)
{
}
/**
Updates fee metrics based on the transactions in the ReadView
for use in fee escalation calculations.
@param view View of the LCL that was just closed or received.
@param timeLeap Indicates that rippled is under load so fees
should grow faster.
*/
std::size_t
updateFeeMetrics(Application& app,
ReadView const& view, bool timeLeap);
/** Used by tests only.
*/
std::size_t
setMinimumTx(int m)
{
std::lock_guard <std::mutex> sl(lock_);
auto const old = minimumTxnCount_;
minimumTxnCount_ = m;
txnsExpected_ = m;
return old;
}
std::size_t
getTxnsExpected() const
{
std::lock_guard <std::mutex> sl(lock_);
return txnsExpected_;
}
std::uint32_t
getEscalationMultiplier() const
{
std::lock_guard <std::mutex> sl(lock_);
return escalationMultiplier_;
}
std::uint64_t
scaleFeeLevel(OpenView const& view) const;
};
}
/**
Transaction Queue. Used to manage transactions in conjunction with
-fee escalation. See also: RIPD-598, and subissues
-RIPD-852, 853, and 854.
fee escalation.
Once enough transactions are added to the open ledger, the required
fee will jump dramatically. If additional transactions are added,
the fee will grow exponentially.
Transactions that don't have a high enough fee to be applied to
-the ledger are added to the queue in order from highest fee to
the ledger are added to the queue in order from highest fee level to
lowest. Whenever a new ledger is accepted as validated, transactions
-are first applied from the queue to the open ledger in fee order
are first applied from the queue to the open ledger in fee level order
until either all transactions are applied or the fee again jumps
too high for the remaining transactions.
*/
@@ -141,7 +54,25 @@ public:
struct Setup
{
std::size_t ledgersInQueue = 20;
-std::uint32_t retrySequencePercent = 125;
std::uint32_t retrySequencePercent = 25;
// TODO: eahennis. Can we remove the multi tx factor?
std::int32_t multiTxnPercent = -90;
std::uint32_t minimumEscalationMultiplier = 500;
std::uint32_t minimumTxnInLedger = 5;
std::uint32_t minimumTxnInLedgerSA = 1000;
std::uint32_t targetTxnInLedger = 50;
boost::optional<std::uint32_t> maximumTxnInLedger;
std::uint32_t maximumTxnPerAccount = 10;
std::uint32_t minimumLastLedgerBuffer = 2;
/* So we don't deal with infinite fee levels, treat
any transaction with a 0 base fee (ie. SetRegularKey
password recovery) as having this fee level.
Should the network behavior change in the future such
that these transactions are unable to be processed,
we can make this more complicated. But avoid
bikeshedding for now.
*/
std::uint64_t zeroBaseFeeTransactionFeeLevel = 256000;
bool standAlone = false;
};
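The `zeroBaseFeeTransactionFeeLevel` comment above amounts to a one-line guard when converting a fee to a fee level. A standalone sketch of that special case (not the actual TxQ code):

```
// Sketch of the zero-base-fee special case: rather than treating a 0-drop
// base fee as an infinite fee level, assign a large fixed level.
#include <cstdint>

std::uint64_t feeLevelForQueue(std::uint64_t feeDrops,
    std::uint64_t baseFeeDrops,
    std::uint64_t zeroBaseFeeTransactionFeeLevel = 256000)
{
    if (baseFeeDrops == 0)
        return zeroBaseFeeTransactionFeeLevel;  // e.g. SetRegularKey recovery
    return feeDrops * 256 / baseFeeDrops;
}
```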
@@ -166,32 +97,10 @@ public:
Add a new transaction to the open ledger, hold it in the queue,
or reject it.
How the decision is made:
1. Is there already a transaction for the same account with the
same sequence number in the queue?
Yes: Is `txn`'s fee higher than the queued transaction's fee?
Yes: Remove the queued transaction. Continue to step 2.
No: Reject `txn` with a low fee TER code. Stop.
No: Continue to step 2.
2. Is the `txn`s fee level >= the required fee level?
Yes: `txn` can be applied to the ledger. Pass it
to the engine and return that result.
No: Can it be held in the queue? (See TxQImpl::canBeHeld).
No: Reject `txn` with a low fee TER code.
Yes: Is the queue full?
No: Put `txn` in the queue.
Yes: Is the `txn`'s fee higher than the end item's
fee?
Yes: Remove the end item, and add `txn`.
No: Reject `txn` with a low fee TER code.
If the transaction is queued, addTransaction will return
{ TD_held, terQUEUED }
@param txn The transaction to be attempted.
@param params Flags to control engine behaviors.
@param engine Transaction Engine.
@return A pair with the TER and a bool indicating
whether or not the transaction was applied.
If the transaction is queued, will return
{ terQUEUED, false }.
*/
std::pair<TER, bool>
apply(Application& app, OpenView& view,
@@ -203,93 +112,164 @@ public:
As we apply more transactions to the ledger, the required
fee will increase.
Iterate over the transactions from highest fee to lowest.
For each transaction, compute the required fee.
Is the transaction fee less than the required fee?
Yes: Stop. We're done.
No: Try to apply the transaction. Did it apply?
Yes: Take it out of the queue.
No: Leave it in the queue, and continue iterating.
@return Whether any txs were added to the view.
*/
bool
accept(Application& app, OpenView& view);
/**
-We have a new last validated ledger, update and clean up the
A new ledger has been validated. Update and clean up the
queue.
1) Keep track of the average non-empty ledger size. Once there
are enough data points, the maximum queue size will be
enough to hold 20 ledgers. (Parameters for this are
experimentally configurable, but should be left alone.)
1a) If the new limit makes the queue full, trim excess
transactions from the end of the queue.
2) Remove any transactions from the queue whose
`LastLedgerSequence` has passed.
*/
void
processValidatedLedger(Application& app,
OpenView const& view, bool timeLeap);
-/** Used by tests only.
/** Returns fee metrics in reference fee level units.
*/
-std::size_t
-setMinimumTx(int m);
-/** Returns fee metrics in reference fee (level) units.
-*/
-struct Metrics
-getMetrics(OpenView const& view) const;
boost::optional<Metrics>
getMetrics(Application& app, OpenView const& view) const;
/** Packages up fee metrics for the `fee` RPC command.
*/
Json::Value
doRPC(Application& app) const;
/** Return the instantaneous fee to get into the current
open ledger for a reference transaction.
*/
XRPAmount
openLedgerFee(OpenView const& view) const;
private:
class CandidateTxn
class FeeMetrics
{
private:
// Fee escalation
// Minimum value of txnsExpected.
std::size_t const minimumTxnCount_;
// Limit of the txnsExpected value after a
// time leap.
std::size_t const targetTxnCount_;
// Maximum value of txnsExpected
boost::optional<std::size_t> const maximumTxnCount_;
// Number of transactions expected per ledger.
// One more than this value will be accepted
// before escalation kicks in.
std::size_t txnsExpected_;
// Minimum value of escalationMultiplier.
std::uint32_t const minimumMultiplier_;
// Based on the median fee of the LCL. Used
// when fee escalation kicks in.
std::uint32_t escalationMultiplier_;
beast::Journal j_;
std::mutex mutable lock_;
public:
static constexpr std::uint64_t baseLevel = 256;
public:
FeeMetrics(Setup const& setup, beast::Journal j)
: minimumTxnCount_(setup.standAlone ?
setup.minimumTxnInLedgerSA :
setup.minimumTxnInLedger)
, targetTxnCount_(setup.targetTxnInLedger < minimumTxnCount_ ?
minimumTxnCount_ : setup.targetTxnInLedger)
, maximumTxnCount_(setup.maximumTxnInLedger ?
*setup.maximumTxnInLedger < targetTxnCount_ ?
targetTxnCount_ : *setup.maximumTxnInLedger :
boost::optional<std::size_t>(boost::none))
, txnsExpected_(minimumTxnCount_)
, minimumMultiplier_(setup.minimumEscalationMultiplier)
, escalationMultiplier_(minimumMultiplier_)
, j_(j)
{
}
/**
Updates fee metrics based on the transactions in the ReadView
for use in fee escalation calculations.
@param view View of the LCL that was just closed or received.
@param timeLeap Indicates that rippled is under load so fees
should grow faster.
*/
std::size_t
update(Application& app,
ReadView const& view, bool timeLeap,
TxQ::Setup const& setup);
std::size_t
getTxnsExpected() const
{
std::lock_guard <std::mutex> sl(lock_);
return txnsExpected_;
}
std::uint32_t
getEscalationMultiplier() const
{
std::lock_guard <std::mutex> sl(lock_);
return escalationMultiplier_;
}
std::uint64_t
scaleFeeLevel(OpenView const& view) const;
};
-// Alternate name: MaybeTx
class MaybeTx
{
public:
// Used by the TxQ::FeeHook and TxQ::FeeMultiSet below
-// to put each candidate object into more than one
// to put each MaybeTx object into more than one
// set without copies, pointers, etc.
boost::intrusive::set_member_hook<> byFeeListHook;
std::shared_ptr<STTx const> txn;
boost::optional<TxConsequences const> consequences;
uint64_t const feeLevel;
TxID const txID;
boost::optional<TxID> priorTxID;
AccountID const account;
boost::optional<LedgerIndex> lastValid;
int retriesRemaining;
TxSeq const sequence;
ApplyFlags const flags;
-// pfresult_ is never allowed to be empty. The
// Invariant: pfresult is never allowed to be empty. The
// boost::optional is leveraged to allow `emplace`d
// construction and replacement without a copy
// assignment operation.
boost::optional<PreflightResult const> pfresult;
/* In TxQ::accept, the required fee level may be low
enough that this transaction gets a chance to apply
to the ledger, but it may get a retry ter result for
another reason (eg. insufficient balance). When that
happens, the transaction is left in the queue to try
again later, but it shouldn't be allowed to fail
indefinitely. The number of failures allowed is
essentially arbitrary. It should be large enough to
allow temporary failures to clear up, but small enough
that the queue doesn't fill up with stale transactions
which prevent lower fee level transactions from queuing.
*/
static constexpr int retriesAllowed = 10;
public:
CandidateTxn(std::shared_ptr<STTx const> const&,
MaybeTx(std::shared_ptr<STTx const> const&,
TxID const& txID, std::uint64_t feeLevel,
ApplyFlags const flags,
PreflightResult const& pfresult);
std::pair<TER, bool>
apply(Application& app, OpenView& view);
};
class GreaterFee
{
public:
bool operator()(const CandidateTxn& lhs, const CandidateTxn& rhs) const
bool operator()(const MaybeTx& lhs, const MaybeTx& rhs) const
{
return lhs.feeLevel > rhs.feeLevel;
}
@@ -298,11 +278,13 @@ private:
class TxQAccount
{
public:
using TxMap = std::map <TxSeq, MaybeTx>;
AccountID const account;
uint64_t totalFees;
// Sequence number will be used as the key.
std::map <TxSeq, CandidateTxn> transactions;
TxMap transactions;
bool retryPenalty = false;
bool dropPenalty = false;
public:
explicit TxQAccount(std::shared_ptr<STTx const> const& txn);
@@ -320,31 +302,29 @@ private:
return !getTxnCount();
}
CandidateTxn&
addCandidate(CandidateTxn&&);
MaybeTx&
add(MaybeTx&&);
bool
removeCandidate(TxSeq const& sequence);
CandidateTxn const*
findCandidateAt(TxSeq const& sequence) const;
remove(TxSeq const& sequence);
};
using FeeHook = boost::intrusive::member_hook
<CandidateTxn, boost::intrusive::set_member_hook<>,
&CandidateTxn::byFeeListHook>;
<MaybeTx, boost::intrusive::set_member_hook<>,
&MaybeTx::byFeeListHook>;
using FeeMultiSet = boost::intrusive::multiset
< CandidateTxn, FeeHook,
< MaybeTx, FeeHook,
boost::intrusive::compare <GreaterFee> >;
using AccountMap = std::map <AccountID, TxQAccount>;
Setup const setup_;
beast::Journal j_;
detail::FeeMetrics feeMetrics_;
FeeMetrics feeMetrics_;
FeeMultiSet byFee_;
std::map <AccountID, TxQAccount> byAccount_;
AccountMap byAccount_;
boost::optional<size_t> maxSize_;
// Most queue operations are done under the master lock,
@@ -352,14 +332,20 @@ private:
std::mutex mutable mutex_;
private:
bool isFull() const
{
return maxSize_ && byFee_.size() >= *maxSize_;
}
template<size_t fillPercentage = 100>
bool
isFull() const;
bool canBeHeld(std::shared_ptr<STTx const> const&);
bool canBeHeld(STTx const&, OpenView const&,
AccountMap::iterator,
boost::optional<FeeMultiSet::iterator>);
// Erase and return the next entry in byFee_ (lower fee level)
FeeMultiSet::iterator_type erase(FeeMultiSet::const_iterator_type);
// Erase and return the next entry for the account (if fee level
// is higher), or next entry in byFee_ (lower fee level).
// Used to get the next "applyable" MaybeTx for accept().
FeeMultiSet::iterator_type eraseAndAdvance(FeeMultiSet::const_iterator_type);
};
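The body of `scaleFeeLevel` lives in TxQ.cpp, whose diff is suppressed below, so the exact escalation formula is not shown here. Purely as an illustration of how `txnsExpected_` and `escalationMultiplier_` might combine, here is a sketch in which the required level stays at the base level until the open ledger exceeds the expected size and then grows with the square of the ledger size; the real formula may differ.

```
// Illustration only; the actual computation is in TxQ.cpp (not shown) and
// may differ.
#include <cstddef>
#include <cstdint>

std::uint64_t sketchScaleFeeLevel(std::size_t txnsInOpenLedger,
    std::size_t txnsExpected, std::uint64_t escalationMultiplier)
{
    std::uint64_t const baseLevel = 256;
    if (txnsInOpenLedger <= txnsExpected)
        return baseLevel;
    // Required level grows super-linearly once the ledger is "full".
    return escalationMultiplier * txnsInOpenLedger * txnsInOpenLedger /
        (txnsExpected * txnsExpected);
}
```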

File diff suppressed because it is too large.


@@ -19,6 +19,7 @@
#include <BeastConfig.h>
#include <ripple/test/jtx.h>
#include <ripple/app/tx/applySteps.h>
#include <ripple/protocol/digest.h>
#include <ripple/protocol/Feature.h>
#include <ripple/protocol/Indexes.h>
@@ -343,17 +344,68 @@ struct SusPay_test : public beast::unit_test::suite
{
using namespace jtx;
using namespace std::chrono;
-using S = seconds;
Env env(*this, features(featureSusPay));
auto T = [&env](NetClock::duration const& d)
{ return env.now() + d; };
env.fund(XRP(5000), "alice", "bob", "carol");
auto const c = cond("receipt");
env(condpay("alice", "carol", XRP(1000), c.first, T(S{1})));
env(condpay("alice", "carol", XRP(1000), c.first, T(1s)));
auto const m = env.meta();
expect((*m)[sfTransactionResult] == tesSUCCESS);
}
void testConsequences()
{
using namespace jtx;
using namespace std::chrono;
Env env(*this, features(featureSusPay));
auto T = [&env](NetClock::duration const& d)
{
return env.now() + d;
};
env.memoize("alice");
env.memoize("bob");
env.memoize("carol");
auto const c = cond("receipt");
{
auto const jtx = env.jt(
condpay("alice", "carol", XRP(1000), c.first, T(1s)),
seq(1), fee(10));
auto const pf = preflight(env.app(), env.current()->rules(),
*jtx.stx, tapNONE, env.journal);
expect(pf.ter == tesSUCCESS);
auto const conseq = calculateConsequences(pf);
expect(conseq.category == TxConsequences::normal);
expect(conseq.fee == drops(10));
expect(conseq.potentialSpend == XRP(1000));
}
{
auto const jtx = env.jt(cancel("bob", "alice", 3),
seq(1), fee(10));
auto const pf = preflight(env.app(), env.current()->rules(),
*jtx.stx, tapNONE, env.journal);
expect(pf.ter == tesSUCCESS);
auto const conseq = calculateConsequences(pf);
expect(conseq.category == TxConsequences::normal);
expect(conseq.fee == drops(10));
expect(conseq.potentialSpend == XRP(0));
}
{
auto const jtx = env.jt(
finish("bob", "alice", 3, c.first, c.second),
seq(1), fee(10));
auto const pf = preflight(env.app(), env.current()->rules(),
*jtx.stx, tapNONE, env.journal);
expect(pf.ter == tesSUCCESS);
auto const conseq = calculateConsequences(pf);
expect(conseq.category == TxConsequences::normal);
expect(conseq.fee == drops(10));
expect(conseq.potentialSpend == XRP(0));
}
}
void run() override
{
testEnablement();
@@ -362,6 +414,7 @@ struct SusPay_test : public beast::unit_test::suite
testLockup();
testCondPay();
testMeta();
testConsequences();
}
};

File diff suppressed because it is too large.


@@ -92,6 +92,37 @@ public:
PreclaimResult& operator=(PreclaimResult const&) = delete;
};
struct TxConsequences
{
enum Category
{
// Moves currency around, creates offers, etc.
normal = 0,
// Affects the ability of subsequent transactions
// to claim a fee. Eg. SetRegularKey
blocker
};
Category const category;
XRPAmount const fee;
// Does NOT include the fee.
XRPAmount const potentialSpend;
TxConsequences(Category const category_,
XRPAmount const fee_, XRPAmount const spend_)
: category(category_)
, fee(fee_)
, potentialSpend(spend_)
{
}
TxConsequences(TxConsequences const&) = default;
TxConsequences& operator=(TxConsequences const&) = delete;
TxConsequences(TxConsequences&&) = default;
TxConsequences& operator=(TxConsequences&&) = delete;
};
/** Gate a transaction based on static information.
The transaction is checked against all possible
@@ -142,6 +173,13 @@ preclaim(PreflightResult const& preflightResult,
std::uint64_t
calculateBaseFee(Application& app, ReadView const& view,
STTx const& tx, beast::Journal j);
/** Determine the XRP balance consequences if a transaction
consumes the maximum XRP allowed.
*/
TxConsequences
calculateConsequences(PreflightResult const& preflightResult);
/** Apply a prechecked transaction to an OpenView.
See also: apply()
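As an editorial aside (not part of this diff), the three `TxConsequences` fields are all a queue needs in order to reason about an account's worst case. A self-contained sketch of that kind of consumer, using stand-in types rather than the rippled ones:

```
// Stand-in types; illustrative use of the category / fee / potentialSpend
// triple defined above.
#include <cstdint>
#include <vector>

enum class Category { normal, blocker };

struct Consequences
{
    Category category;
    std::int64_t feeDrops;
    std::int64_t potentialSpendDrops;  // does NOT include the fee
};

// Worst-case XRP outflow if every queued transaction applied and spent its
// maximum.
std::int64_t worstCaseOutflow(std::vector<Consequences> const& queued)
{
    std::int64_t total = 0;
    for (auto const& c : queued)
        total += c.feeDrops + c.potentialSpendDrops;
    return total;
}

// A blocker (e.g. SetRegularKey) means later transactions' ability to claim
// a fee can no longer be predicted, so nothing should queue behind it.
bool anyBlocker(std::vector<Consequences> const& queued)
{
    for (auto const& c : queued)
        if (c.category == Category::blocker)
            return true;
    return false;
}
```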


@@ -32,6 +32,15 @@
namespace ripple {
XRPAmount
CreateOffer::calculateMaxSpend(STTx const& tx)
{
auto const& saTakerGets = tx[sfTakerGets];
return saTakerGets.native() ?
saTakerGets.xrp() : beast::zero;
}
TER
CreateOffer::preflight (PreflightContext const& ctx)
{


@@ -46,6 +46,10 @@ public:
{
}
static
XRPAmount
calculateMaxSpend(STTx const& tx);
static
TER
preflight (PreflightContext const& ctx);


@@ -30,6 +30,20 @@ namespace ripple {
// See https://ripple.com/wiki/Transaction_Format#Payment_.280.29
XRPAmount
Payment::calculateMaxSpend(STTx const& tx)
{
if (tx.isFieldPresent(sfSendMax))
{
auto const& sendMax = tx[sfSendMax];
return sendMax.native() ? sendMax.xrp() : beast::zero;
}
/* If there's no sfSendMax in XRP, and the sfAmount isn't
in XRP, then the transaction can not send XRP. */
auto const& saDstAmount = tx.getFieldAmount(sfAmount);
return saDstAmount.native() ? saDstAmount.xrp() : beast::zero;
}
TER
Payment::preflight (PreflightContext const& ctx)
{


@@ -44,6 +44,10 @@ public:
{
}
static
XRPAmount
calculateMaxSpend(STTx const& tx);
static
TER
preflight (PreflightContext const& ctx);


@@ -29,6 +29,25 @@
namespace ripple {
bool
SetAccount::affectsSubsequentTransactionAuth(STTx const& tx)
{
auto const uTxFlags = tx.getFlags();
if(uTxFlags & (tfRequireAuth | tfOptionalAuth))
return true;
auto const uSetFlag = tx[~sfSetFlag];
if(uSetFlag && (*uSetFlag == asfRequireAuth ||
*uSetFlag == asfDisableMaster ||
*uSetFlag == asfAccountTxnID))
return true;
auto const uClearFlag = tx[~sfClearFlag];
return uClearFlag && (*uClearFlag == asfRequireAuth ||
*uClearFlag == asfDisableMaster ||
*uClearFlag == asfAccountTxnID);
}
TER
SetAccount::preflight (PreflightContext const& ctx)
{


@@ -41,6 +41,10 @@ public:
{
}
static
bool
affectsSubsequentTransactionAuth(STTx const& tx);
static
TER
preflight (PreflightContext const& ctx);


@@ -36,6 +36,13 @@ public:
{
}
static
bool
affectsSubsequentTransactionAuth(STTx const& tx)
{
return true;
}
static
TER
preflight (PreflightContext const& ctx);


@@ -53,6 +53,13 @@ public:
{
}
static
bool
affectsSubsequentTransactionAuth(STTx const& tx)
{
return true;
}
static
TER
preflight (PreflightContext const& ctx);


@@ -128,6 +128,12 @@ namespace ripple {
//------------------------------------------------------------------------------
XRPAmount
SusPayCreate::calculateMaxSpend(STTx const& tx)
{
return tx[sfAmount].xrp();
}
TER
SusPayCreate::preflight (PreflightContext const& ctx)
{


@@ -34,6 +34,10 @@ public:
{
}
static
XRPAmount
calculateMaxSpend(STTx const& tx);
static
TER
preflight (PreflightContext const& ctx);


@@ -152,10 +152,22 @@ std::uint64_t Transactor::calculateBaseFee (
return baseFee + (signerCount * baseFee);
}
XRPAmount
Transactor::calculateFeePaid(STTx const& tx)
{
return tx[sfFee].xrp();
}
XRPAmount
Transactor::calculateMaxSpend(STTx const& tx)
{
return beast::zero;
}
TER
Transactor::checkFee (PreclaimContext const& ctx, std::uint64_t baseFee)
{
-auto const feePaid = ctx.tx[sfFee].xrp ();
auto const feePaid = calculateFeePaid(ctx.tx);
if (!isLegalAmount (feePaid) || feePaid < beast::zero)
return temBAD_FEE;
@@ -198,7 +210,7 @@ Transactor::checkFee (PreclaimContext const& ctx, std::uint64_t baseFee)
TER Transactor::payFee ()
{
-auto const feePaid = ctx_.tx[sfFee].xrp();
auto const feePaid = calculateFeePaid(ctx_.tx);
auto const sle = view().peek(
keylet::account(account_));
@@ -237,11 +249,6 @@ Transactor::checkSeq (PreclaimContext const& ctx)
{
if (a_seq < t_seq)
{
if (ctx.flags & tapPOST_SEQ)
{
return tesSUCCESS;
}
JLOG(ctx.j.trace()) <<
"applyTransaction: has future sequence number " <<
"a_seq=" << a_seq << " t_seq=" << t_seq;


@@ -70,6 +70,9 @@ public:
PreclaimContext& operator=(PreclaimContext const&) = delete;
};
struct TxConsequences;
struct PreflightResult;
class Transactor
{
protected:
@@ -126,6 +129,21 @@ public:
calculateBaseFee (
PreclaimContext const& ctx);
static
bool
affectsSubsequentTransactionAuth(STTx const& tx)
{
return false;
}
static
XRPAmount
calculateFeePaid(STTx const& tx);
static
XRPAmount
calculateMaxSpend(STTx const& tx);
static
TER
preclaim(PreclaimContext const &ctx)


@@ -156,6 +156,47 @@ invoke_calculateBaseFee(PreclaimContext const& ctx)
}
}
template<class T>
static
TxConsequences
invoke_calculateConsequences(STTx const& tx)
{
auto const category = T::affectsSubsequentTransactionAuth(tx) ?
TxConsequences::blocker : TxConsequences::normal;
auto const feePaid = T::calculateFeePaid(tx);
auto const maxSpend = T::calculateMaxSpend(tx);
return{ category, feePaid, maxSpend };
}
static
TxConsequences
invoke_calculateConsequences(STTx const& tx)
{
switch (tx.getTxnType())
{
case ttACCOUNT_SET: return invoke_calculateConsequences<SetAccount>(tx);
case ttOFFER_CANCEL: return invoke_calculateConsequences<CancelOffer>(tx);
case ttOFFER_CREATE: return invoke_calculateConsequences<CreateOffer>(tx);
case ttPAYMENT: return invoke_calculateConsequences<Payment>(tx);
case ttSUSPAY_CREATE: return invoke_calculateConsequences<SusPayCreate>(tx);
case ttSUSPAY_FINISH: return invoke_calculateConsequences<SusPayFinish>(tx);
case ttSUSPAY_CANCEL: return invoke_calculateConsequences<SusPayCancel>(tx);
case ttREGULAR_KEY_SET: return invoke_calculateConsequences<SetRegularKey>(tx);
case ttSIGNER_LIST_SET: return invoke_calculateConsequences<SetSignerList>(tx);
case ttTICKET_CANCEL: return invoke_calculateConsequences<CancelTicket>(tx);
case ttTICKET_CREATE: return invoke_calculateConsequences<CreateTicket>(tx);
case ttTRUST_SET: return invoke_calculateConsequences<SetTrust>(tx);
case ttAMENDMENT:
case ttFEE:
// fall through to default
default:
assert(false);
return { TxConsequences::blocker, Transactor::calculateFeePaid(tx),
beast::zero };
}
}
static
std::pair<TER, bool>
invoke_apply (ApplyContext& ctx)
@@ -245,6 +286,17 @@ calculateBaseFee(Application& app, ReadView const& view,
return invoke_calculateBaseFee(ctx);
}
TxConsequences
calculateConsequences(PreflightResult const& preflightResult)
{
assert(preflightResult.ter == tesSUCCESS);
if (preflightResult.ter != tesSUCCESS)
return{ TxConsequences::blocker,
Transactor::calculateFeePaid(preflightResult.tx),
beast::zero };
return invoke_calculateConsequences(preflightResult.tx);
}
std::pair<TER, bool>
doApply(PreclaimResult const& preclaimResult,
Application& app, OpenView& view)


@@ -122,7 +122,7 @@ public:
std::string START_LEDGER;
// Network parameters
-int TRANSACTION_FEE_BASE = 10; // The number of fee units a reference transaction costs
int const TRANSACTION_FEE_BASE = 10; // The number of fee units a reference transaction costs
/** Operate in stand-alone mode.


@@ -96,8 +96,6 @@ public:
}
Json::Value getJson (std::uint64_t baseFee, std::uint32_t referenceFeeUnits) const;
void setClusterFee (std::uint32_t fee)
{
ScopedLockType sl (mLock);


@@ -102,29 +102,6 @@ LoadFeeTrack::scaleFeeLoad (std::uint64_t fee, std::uint64_t baseFee,
return fee;
}
Json::Value
LoadFeeTrack::getJson (std::uint64_t baseFee,
std::uint32_t referenceFeeUnits) const
{
Json::Value j (Json::objectValue);
{
ScopedLockType sl (mLock);
// base_fee = The cost to send a "reference" transaction under
// no load, in millionths of a Ripple
j[jss::base_fee] = Json::Value::UInt (baseFee);
// load_fee = The cost to send a "reference" transaction now,
// in millionths of a Ripple
j[jss::load_fee] = Json::Value::UInt (
mulDivThrow(baseFee, std::max(mLocalTxnLoadFee,
mRemoteTxnLoadFee), lftNormalFee));
}
return j;
}
bool
LoadFeeTrack::raiseLocalFee ()
{


@@ -32,10 +32,6 @@ enum ApplyFlags
// Signature already checked
tapNO_CHECK_SIGN = 0x01,
-// We expect the transaction to have a later
-// sequence number than the account in the ledger
-tapPOST_SEQ = 0x04,
// This is not the transaction's last pass
// Transaction can be retried, soft failures allowed
tapRETRY = 0x20,


@@ -43,6 +43,7 @@ JSS ( DeliverMin ); // in: TransactionSign
JSS ( Fee ); // in/out: TransactionSign; field.
JSS ( Flags ); // in/out: TransactionSign; field.
JSS ( Invalid ); //
JSS ( LastLedgerSequence ); // in: TransactionSign; field
JSS ( LimitAmount ); // field.
JSS ( OfferSequence ); // field.
JSS ( Paths ); // in/out: TransactionSign
@@ -231,6 +232,9 @@ JSS ( load ); // out: NetworkOPs, PeerImp
JSS ( load_base ); // out: NetworkOPs
JSS ( load_factor ); // out: NetworkOPs
JSS ( load_factor_cluster ); // out: NetworkOPs
JSS ( load_factor_fee_escalation ); // out: NetworkOPs
JSS ( load_factor_fee_queue ); // out: NetworkOPs
JSS ( load_factor_fee_reference ); // out: NetworkOPs
JSS ( load_factor_local ); // out: NetworkOPs
JSS ( load_factor_net ); // out: NetworkOPs
JSS ( load_fee ); // out: LoadFeeTrackImp


@@ -45,6 +45,7 @@ enum TER
telFAILED_PROCESSING,
telINSUF_FEE_P,
telNO_DST_PARTIAL,
telCAN_NOT_QUEUE,
// -299 .. -200: M Malformed (bad signature)
// Causes:


@@ -89,6 +89,7 @@ bool transResultInfo (TER code, std::string& token, std::string& text)
{ telFAILED_PROCESSING, { "telFAILED_PROCESSING", "Failed to correctly process transaction." } },
{ telINSUF_FEE_P, { "telINSUF_FEE_P", "Fee insufficient." } },
{ telNO_DST_PARTIAL, { "telNO_DST_PARTIAL", "Partial payment to create account not allowed." } },
{ telCAN_NOT_QUEUE, { "telCAN_NOT_QUEUE", "Can not queue at this time." } },
{ temMALFORMED, { "temMALFORMED", "Malformed transaction." } },
{ temBAD_AMOUNT, { "temBAD_AMOUNT", "Can only send positive amounts." } },
@@ -132,7 +133,7 @@ bool transResultInfo (TER code, std::string& token, std::string& text)
{ terNO_LINE, { "terNO_LINE", "No such line." } },
{ terPRE_SEQ, { "terPRE_SEQ", "Missing/inapplicable prior transaction." } },
{ terOWNERS, { "terOWNERS", "Non-zero owner count." } },
{ terQUEUED, { "terQUEUED", "Held until fee drops." } },
{ terQUEUED, { "terQUEUED", "Held until escalated fee drops." } },
{ tesSUCCESS, { "tesSUCCESS", "The transaction was applied. Only final in a validated ledger." } },
};