Grow the open ledger expected transactions quickly (RIPD-1630):

* When increasing the expected ledger size, add on an extra 20%.
* When decreasing the expected ledger size, take the minimum of the
  validated ledger size and the old expected size, and subtract another 50%.
* Update fee escalation documentation.
* Refactor the FeeMetrics object to use values from Setup.
Edward Hennis
2018-07-20 19:27:47 -04:00
committed by Nik Bougalis
parent e14f913244
commit 7295cf979b
6 changed files with 261 additions and 39 deletions

View File

@@ -538,6 +538,20 @@
# into the ledger at the minimum required fee before the required
# fee escalates. Default: no maximum.
#
# normal_consensus_increase_percent = <number>
#
# (Optional) When the ledger has more transactions than "expected",
# and performance is humming along nicely, the expected ledger size
# is updated to the previous ledger size plus this percentage.
# Default: 20
#
# slow_consensus_decrease_percent = <number>
#
# (Optional) When consensus takes longer than appropriate, the
# expected ledger size is updated to the minimum of the previous
# ledger size or the "expected" ledger size minus this percentage.
# Default: 50
#
# maximum_txn_per_account = <number>
#
# Maximum number of transactions that one account can have in the
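
For reference, both new knobs live in the `[transaction_queue]` stanza (the same section name the tests below configure). A stanza that simply spells out the documented defaults would look like:

```
[transaction_queue]
normal_consensus_increase_percent = 20
slow_consensus_decrease_percent = 50
```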

View File

@@ -26,17 +26,24 @@ can get into an open ledger for that base fee level. The limit
will vary based on the [health](#consensus-health) of the
consensus process, but will be at least [5](#other-constants).
* If consensus stays [healthy](#consensus-health), the limit will
-be the max of the current limit or the number of transactions in
-the validated ledger until it gets to [50](#other-constants), at
-which point, the limit will be the largest number of transactions
+be the max of the number of transactions in the validated ledger
+plus [20%](#other-constants) or the current limit until it gets
+to [50](#other-constants), at which point, the limit will be the
+largest number of transactions plus [20%](#other-constants)
in the last [20](#other-constants) validated ledgers which had
-more than 50 transactions. Any time the limit decreases (ie. a
-large ledger is no longer recent), the limit will decrease to the
-new largest value by 10% each time the ledger has more than 50
-transactions.
+more than [50](#other-constants) transactions. Any time the limit
+decreases (i.e. a large ledger is no longer recent), the limit will
+decrease to the new largest value by 10% each time the ledger has
+more than 50 transactions.
* If consensus does not stay [healthy](#consensus-health),
-the limit will clamp down to the smaller of [50](#other-constants)
-or the number of transactions in the validated ledger.
+the limit will clamp down to the smaller of the number of
+transactions in the validated ledger minus [50%](#other-constants)
+or the previous limit minus [50%](#other-constants).
* The intended effect of these mechanisms is to allow as many base fee
level transactions to get into the ledger as possible while the
network is [healthy](#consensus-health), but to respond quickly to
any condition that makes it [unhealthy](#consensus-health), including,
but not limited to, malicious attacks.
3. Once there are more transactions in the open ledger than indicated
by the limit, the required fee level jumps drastically.
* The formula is `( lastLedgerMedianFeeLevel *
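
A compact sketch of the two update rules above, as described in this excerpt. It is illustrative only: names are invented for the sketch, and it omits the 20-ledger window, the 50-transaction target behavior, and the 10% decay, all of which the `TxQ::FeeMetrics::update` excerpt later in this commit implements in full.

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of the limit ("expected ledger size") update; identifiers
// here are illustrative, not the actual rippled names.
std::uint32_t
updateLimit(std::uint32_t limit, std::uint32_t validatedSize,
    bool healthy, std::uint32_t minimum = 5)
{
    if (healthy)
    {
        // Grow: the validated ledger size plus 20%, but never
        // let the limit shrink along this path.
        return std::max(limit, validatedSize * 120 / 100);
    }
    // Unhealthy: the smaller of the validated ledger size and the
    // previous limit, minus 50%, floored at the minimum.
    return std::max(minimum,
        std::min(validatedSize, limit) * 50 / 100);
}
```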
@@ -57,12 +64,12 @@ in the fee escalation formula for the next open ledger.
* Continuing the example above, if ledger consensus completes with
only those 20 transactions, and all of those transactions paid the
minimum required fee at each step, the limit will be adjusted from
-6 to 20, and the `lastLedgerMedianFeeLevel` will be about 322,000,
+6 to 24, and the `lastLedgerMedianFeeLevel` will be about 322,000,
which is 12,600 drops for a
[reference transaction](#reference-transaction).
-* This will cause the first 21 transactions only require 10
-drops, but the 22nd transaction will require
-a level of about 355,000 or about 13,800 drops.
+* This will only require 10 drops for the first 25 transactions,
+but the 26th transaction will require a level of about 349,150
+or about 13,649 drops.
* This example assumes a cold-start scenario, with a single, possibly
malicious, user willing to pay arbitrary amounts to get transactions
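
Sanity-checking the revised numbers: once the open ledger holds more transactions than the limit, the escalation formula (truncated in the excerpt above) works out to roughly `lastLedgerMedianFeeLevel * txnsInLedger² / limit²`. A quick check with the example's rounded figures:

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t const medianLevel = 322'000; // ~lastLedgerMedianFeeLevel
    std::uint64_t const limit = 24;            // adjusted from 6 to 24
    std::uint64_t const inLedger = 25;         // the 26th transaction arrives

    std::uint64_t const level =
        medianLevel * inLedger * inLedger / (limit * limit);
    std::uint64_t const drops = level * 10 / 256; // level 256 == 10 drops

    // Prints ~349392 and ~13648; the doc's 349,150 / 13,649 reflect
    // the unrounded median rather than the 322,000 approximation.
    std::cout << level << " " << drops << "\n";
}
```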
@@ -99,8 +106,8 @@ external transactions, transactions are applied from the queue to the
ledger from highest [fee level](#fee-level) to lowest. These transactions
count against the open ledger limit, so the required [fee level](#fee-level)
may start rising during this process.
-3. Once the queue is empty, or required the [fee level](#fee-level)
-jumps too high for the remaining transactions in the queue, the ledger
+3. Once the queue is empty, or the required [fee level](#fee-level)
+rises too high for the remaining transactions in the queue, the ledger
is opened up for normal transaction processing.
4. A transaction in the queue can stay there indefinitely in principle,
but in practice, either
@@ -133,7 +140,7 @@ for the queue if it meets these additional criteria:
* none of the prior queued transactions affect the ability of subsequent
transactions to claim a fee.
-Currently, there is an additional restriction that the queue can not work with
+Currently, there is an additional restriction that the queue cannot work with
transactions using the `sfPreviousTxnID` or `sfAccountTxnID` fields.
`sfPreviousTxnID` is deprecated and shouldn't be used anyway. Future
development will make the queue aware of `sfAccountTxnID` mechanisms.
@@ -195,6 +202,13 @@ unusable. The "target" value of 50 was chosen so the limit never gets large
enough to invite abuse, but keeps up if the network stays healthy and
active. These exact values were chosen experimentally, and can easily
change in the future.
* *Expected ledger size growth and reduction percentages*. The growth
value of 20% was chosen to allow the limit to grow quickly as load
increases, but not so quickly as to allow bad actors to run unrestricted.
The reduction value of 50% was chosen to cause the limit to drop
significantly, but not so drastically that the limit cannot quickly
recover if the problem is temporary. These exact values were chosen
experimentally, and can easily change in the future.
* *Minimum `lastLedgerMedianFeeLevel`*. The value of 500 was chosen to
ensure that the first escalated fee was more significant and noticable
than what the default would allow. This exact value was chosen
@@ -223,7 +237,7 @@ the earlier ones fails or is otherwise removed from the queue without
being applied to the open ledger. The value was chosen arbitrarily, and
can easily change in the future.
* *Minimum last ledger sequence buffer*. If a transaction has a
-`LastLedgerSequence` value, and can not be processed into the open
+`LastLedgerSequence` value, and cannot be processed into the open
ledger, that `LastLedgerSequence` must be at least 2 more than the
sequence number of the open ledger to be considered for the queue. The
value was chosen to provide a balance between letting the user control
@@ -251,8 +265,8 @@ neccessary fees for other types of transactions. (E.g. multiply all drop
values by 5 for a multi-signed transaction with 4 signatures.)
The `fee` result is always instantanteous, and relates to the open
-ledger. Thus, it does not include any sequence number or IDs, and may
-not make sense if rippled is not synced to the network.
+ledger. It includes the sequence number of the current open ledger,
+but may not make sense if rippled is not synced to the network.
Result format:
```
@@ -262,6 +276,7 @@ Result format:
"current_queue_size" : "2", // number of transactions waiting in the queue
"expected_ledger_size" : "15", // one less than the number of transactions that can get into the open ledger for the base fee.
"max_queue_size" : "300", // number of transactions allowed into the queue
"ledger_current_index" : 123456789, // sequence number of the current open ledger
"levels" : {
"reference_level" : "256", // level of a reference transaction. Always 256.
"minimum_level" : "256", // minimum fee level to get into the queue. If >256, indicates the queue is full.

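The `fee` method takes no parameters, so a minimal WebSocket request is just the snippet below (JSON-RPC callers use `"method": "fee"` instead); the new `ledger_current_index` field ties the returned snapshot to the open ledger it describes:

```
{ "command": "fee" }
```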
View File

@@ -104,7 +104,7 @@ public:
std::int32_t multiTxnPercent = -90;
/// Minimum value of the escalation multiplier, regardless
/// of the prior ledger's median fee level.
-std::uint32_t minimumEscalationMultiplier = baseLevel * 500;
+std::uint64_t minimumEscalationMultiplier = baseLevel * 500;
/// Minimum number of transactions to allow into the ledger
/// before escalation, regardless of the prior ledger's size.
std::uint32_t minimumTxnInLedger = 5;
@@ -125,6 +125,32 @@ public:
values. Can it be removed?
*/
boost::optional<std::uint32_t> maximumTxnInLedger;
/** When the ledger has more transactions than "expected", and
performance is humming along nicely, the expected ledger size
is updated to the previous ledger size plus this percentage.
Calculations are subject to configured limits, and the recent
transactions counts buffer.
Example: If the "expectation" is for 500 transactions, and a
ledger is validated normally with 501 transactions, then the
expected ledger size will be updated to 601.
*/
std::uint32_t normalConsensusIncreasePercent = 20;
/** When consensus takes longer than appropriate, the expected
ledger size is updated to the lesser of the previous ledger
size and the current expected ledger size minus this
percentage.
Calculations are subject to configured limits.
Example: If the ledger has 15000 transactions, and it is
validated slowly, then the expected ledger size will be
updated to 7500. If there are only 6 transactions, the
expected ledger size will be updated to 5, assuming the
default minimum.
*/
std::uint32_t slowConsensusDecreasePercent = 50;
/// Maximum number of transactions that can be queued by one account.
std::uint32_t maximumTxnPerAccount = 10;
/** Minimum difference between the current ledger sequence and a
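
The worked numbers in these new comments are plain integer percentage arithmetic, matching the `mulDiv`-based calculations in the implementation excerpted below; a quick check:

```cpp
#include <algorithm>
#include <cassert>

int main()
{
    // Healthy growth: previous ledger size plus 20%.
    assert(501 * (100 + 20) / 100 == 601);

    // Slow consensus: lesser of the previous ledger size and the
    // current expectation, minus 50% (expectation >= 15000 assumed).
    assert(15000 * (100 - 50) / 100 == 7500);

    // 6 transactions - 50% = 3 falls below the default minimum
    // of 5, so the expectation clamps to 5.
    assert(std::max(6 * (100 - 50) / 100, 5) == 5);
}
```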
@@ -135,7 +161,7 @@ public:
*/
std::uint32_t minimumLastLedgerBuffer = 2;
/** So we don't deal with "infinite" fee levels, treat
-any transaction with a 0 base fee (ie SetRegularKey
+any transaction with a 0 base fee (i.e. SetRegularKey
password recovery) as having this fee level.
Should the network behavior change in the future such
that these transactions are unable to be processed,
@@ -347,8 +373,6 @@ private:
/// Recent history of transaction counts that
/// exceed the targetTxnCount_
boost::circular_buffer<std::size_t> recentTxnCounts_;
-/// Minimum value of escalationMultiplier.
-std::uint64_t const minimumMultiplier_;
/// Based on the median fee of the LCL. Used
/// when fee escalation kicks in.
std::uint64_t escalationMultiplier_;
@@ -369,8 +393,7 @@ private:
boost::optional<std::size_t>(boost::none))
, txnsExpected_(minimumTxnCount_)
, recentTxnCounts_(setup.ledgersInQueue)
-, minimumMultiplier_(setup.minimumEscalationMultiplier)
-, escalationMultiplier_(minimumMultiplier_)
+, escalationMultiplier_(setup.minimumEscalationMultiplier)
, j_(j)
{
}
@@ -469,7 +492,7 @@ private:
};
/**
-Represents transactions in the queue which may be applied
+Represents a transaction in the queue which may be applied
later to the open ledger.
*/
class MaybeTx
@@ -498,6 +521,8 @@ private:
/// Expiration ledger for the transaction
/// (`sfLastLedgerSequence` field).
boost::optional<LedgerIndex> lastValid;
/// Transaction sequence number (`sfSequence` field).
TxSeq const sequence;
/**
A transaction at the front of the queue will be given
several attempts to succeed before being dropped from
@@ -507,12 +532,16 @@ private:
penalty.
*/
int retriesRemaining;
-/// Transaction sequence number (`sfSequence` field).
-TxSeq const sequence;
/// Flags provided to `apply`. If the transaction is later
/// attempted with different flags, it will need to be
/// `preflight`ed again.
ApplyFlags const flags;
/** If the transactor attempted to apply the transaction to the open
ledger from the queue and *failed*, then this is the transactor
result from the last attempt. Should never be a `tec`, `tef`,
`tem`, or `tesSUCCESS`, because those results cause the
transaction to be removed from the queue.
*/
boost::optional<TER> lastResult;
/** Cached result of the `preflight` operation. Because
`preflight` is expensive, minimize the number of times

View File

@@ -105,14 +105,19 @@ TxQ::FeeMetrics::update(Application& app,
{
// Ledgers are taking to long to process,
// so clamp down on limits.
-txnsExpected_ = boost::algorithm::clamp(feeLevels.size(),
-minimumTxnCount_, targetTxnCount_);
+auto const cutPct = 100 - setup.slowConsensusDecreasePercent;
+txnsExpected_ = boost::algorithm::clamp(
+mulDiv(size, cutPct, 100).second,
+minimumTxnCount_,
+mulDiv(txnsExpected_, cutPct, 100).second);
recentTxnCounts_.clear();
}
-else if (feeLevels.size() > txnsExpected_ ||
-feeLevels.size() > targetTxnCount_)
+else if (size > txnsExpected_ ||
+size > targetTxnCount_)
{
-recentTxnCounts_.push_back(feeLevels.size());
+recentTxnCounts_.push_back(
+mulDiv(size, 100 + setup.normalConsensusIncreasePercent,
+100).second);
auto const iter = std::max_element(recentTxnCounts_.begin(),
recentTxnCounts_.end());
BOOST_ASSERT(iter != recentTxnCounts_.end());
@@ -135,9 +140,9 @@ TxQ::FeeMetrics::update(Application& app,
maximumTxnCount_.value_or(next));
}
-if (feeLevels.empty())
+if (!size)
{
-escalationMultiplier_ = minimumMultiplier_;
+escalationMultiplier_ = setup.minimumEscalationMultiplier;
}
else
{
@@ -148,7 +153,7 @@ TxQ::FeeMetrics::update(Application& app,
escalationMultiplier_ = (feeLevels[size / 2] +
feeLevels[(size - 1) / 2] + 1) / 2;
escalationMultiplier_ = std::max(escalationMultiplier_,
-minimumMultiplier_);
+setup.minimumEscalationMultiplier);
}
JLOG(j_.debug()) << "Expected transactions updated to " <<
txnsExpected_ << " and multiplier updated to " <<
@@ -250,8 +255,8 @@ TxQ::MaybeTx::MaybeTx(
, feeLevel(feeLevel_)
, txID(txID_)
, account(txn_->getAccountID(sfAccount))
-, retriesRemaining(retriesAllowed)
, sequence(txn_->getSequence())
+, retriesRemaining(retriesAllowed)
, flags(flags_)
, pfresult(pfresult_)
{
@@ -1545,6 +1550,27 @@ setup_TxQ(Config const& config)
std::uint32_t max;
if (set(max, "maximum_txn_in_ledger", section))
setup.maximumTxnInLedger.emplace(max);
/* The math works as expected for any value up to and including
MAXINT, but put a reasonable limit on this percentage so that
the factor can't be configured to render escalation effectively
moot. (There are other ways to do that, including
minimum_txn_in_ledger.)
*/
set(setup.normalConsensusIncreasePercent,
"normal_consensus_increase_percent", section);
setup.normalConsensusIncreasePercent = boost::algorithm::clamp(
setup.normalConsensusIncreasePercent, 0, 1000);
/* If this percentage is outside of the 0-100 range, the results
are nonsensical (uint overflows happen, so the limit grows
instead of shrinking). 0 is not recommended.
*/
set(setup.slowConsensusDecreasePercent, "slow_consensus_decrease_percent",
section);
setup.slowConsensusDecreasePercent = boost::algorithm::clamp(
setup.slowConsensusDecreasePercent, 0, 100);
set(setup.maximumTxnPerAccount, "maximum_txn_per_account", section);
set(setup.minimumLastLedgerBuffer,
"minimum_last_ledger_buffer", section);

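One consequence of these clamps shows up in the second `testScaling` case below: `slow_consensus_decrease_percent = 150` is clamped to 100, so a single slow ledger drives the expectation straight down to the minimum, while `normal_consensus_increase_percent = 150` survives intact, being within the 0-1000 bound. A minimal check, using only the Boost call already shown above:

```cpp
#include <boost/algorithm/clamp.hpp>
#include <cassert>

int main()
{
    // Same clamping as setup_TxQ above, with the values from the
    // second testScaling configuration below.
    assert(boost::algorithm::clamp(150u, 0u, 1000u) == 150); // kept
    assert(boost::algorithm::clamp(150u, 0u, 100u) == 100);  // clamped
}
```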
View File

@@ -108,6 +108,7 @@ class TxQ_test : public beast::unit_test::suite
section.set("max_ledger_counts_to_store", "100");
section.set("retry_sequence_percent", "25");
section.set("zero_basefee_transaction_feelevel", "100000000000");
section.set("normal_consensus_increase_percent", "0");
for (auto const& value : extraTxQ)
section.set(value.first, value.second);
@@ -2811,6 +2812,141 @@ public:
}
}
void
testScaling()
{
using namespace jtx;
using namespace std::chrono_literals;
{
Env env(*this,
makeConfig({ { "minimum_txn_in_ledger_standalone", "3" },
{ "normal_consensus_increase_percent", "25" },
{ "slow_consensus_decrease_percent", "50" },
{ "target_txn_in_ledger", "10" },
{ "maximum_txn_per_account", "200" } }));
auto alice = Account("alice");
checkMetrics(env, 0, boost::none, 0, 3, 256);
env.fund(XRP(50000000), alice);
fillQueue(env, alice);
checkMetrics(env, 0, boost::none, 4, 3, 256);
auto seqAlice = env.seq(alice);
auto txCount = 140;
for (int i = 0; i < txCount; ++i)
env(noop(alice), seq(seqAlice++), ter(terQUEUED));
checkMetrics(env, txCount, boost::none, 4, 3, 256);
// Close a few ledgers successfully, so the limit grows
env.close();
// 4 + 25% = 5
txCount -= 6;
checkMetrics(env, txCount, 10, 6, 5, 257);
env.close();
// 6 + 25% = 7
txCount -= 8;
checkMetrics(env, txCount, 14, 8, 7, 257);
env.close();
// 8 + 25% = 10
txCount -= 11;
checkMetrics(env, txCount, 20, 11, 10, 257);
env.close();
// 11 + 25% = 13
txCount -= 14;
checkMetrics(env, txCount, 26, 14, 13, 257);
env.close();
// 14 + 25% = 17
txCount -= 18;
checkMetrics(env, txCount, 34, 18, 17, 257);
env.close();
// 18 + 25% = 22
txCount -= 23;
checkMetrics(env, txCount, 44, 23, 22, 257);
env.close();
// 23 + 25% = 28
txCount -= 29;
checkMetrics(env, txCount, 56, 29, 28, 256);
// From 3 expected to 28 in 7 "fast" ledgers.
// Close the ledger with a delay.
env.close(env.now() + 5s, 10000ms);
txCount -= 15;
checkMetrics(env, txCount, 56, 15, 14, 256);
// Close the ledger with a delay.
env.close(env.now() + 5s, 10000ms);
txCount -= 8;
checkMetrics(env, txCount, 56, 8, 7, 256);
// Close the ledger with a delay.
env.close(env.now() + 5s, 10000ms);
txCount -= 4;
checkMetrics(env, txCount, 56, 4, 3, 256);
// From 28 expected back down to 3 in 3 "slow" ledgers.
// Confirm the minimum sticks
env.close(env.now() + 5s, 10000ms);
txCount -= 4;
checkMetrics(env, txCount, 56, 4, 3, 256);
BEAST_EXPECT(!txCount);
}
{
Env env(*this,
makeConfig({ { "minimum_txn_in_ledger_standalone", "3" },
{ "normal_consensus_increase_percent", "150" },
{ "slow_consensus_decrease_percent", "150" },
{ "target_txn_in_ledger", "10" },
{ "maximum_txn_per_account", "200" } }));
auto alice = Account("alice");
checkMetrics(env, 0, boost::none, 0, 3, 256);
env.fund(XRP(50000000), alice);
fillQueue(env, alice);
checkMetrics(env, 0, boost::none, 4, 3, 256);
auto seqAlice = env.seq(alice);
auto txCount = 43;
for (int i = 0; i < txCount; ++i)
env(noop(alice), seq(seqAlice++), ter(terQUEUED));
checkMetrics(env, txCount, boost::none, 4, 3, 256);
// Close a few ledgers successfully, so the limit grows
env.close();
// 4 + 150% = 10
txCount -= 11;
checkMetrics(env, txCount, 20, 11, 10, 257);
env.close();
// 11 + 150% = 27
txCount -= 28;
checkMetrics(env, txCount, 54, 28, 27, 256);
// From 3 expected to 27 in 2 "fast" ledgers.
// Close the ledger with a delay.
env.close(env.now() + 5s, 10000ms);
txCount -= 4;
checkMetrics(env, txCount, 54, 4, 3, 256);
// From 27 expected back down to 3 in 1 "slow" ledger,
// since the decrease percent clamps to 100.
BEAST_EXPECT(!txCount);
}
}
void run() override
{
testQueue();
@@ -2835,6 +2971,7 @@ public:
testServerInfo();
testServerSubscribe();
testClearQueuedAccountTxs();
testScaling();
}
};

View File

@@ -1283,8 +1283,9 @@ class LedgerRPC_test : public beast::unit_test::suite
using namespace test::jtx;
Env env { *this,
envconfig([](std::unique_ptr<Config> cfg) {
-cfg->section("transaction_queue")
-.set("minimum_txn_in_ledger_standalone", "3");
+auto& section = cfg->section("transaction_queue");
+section.set("minimum_txn_in_ledger_standalone", "3");
+section.set("normal_consensus_increase_percent", "0");
return cfg;
})};