Introduce a new ledger type: ltCHECK
Introduce three new transactions that operate on checks:
- "CheckCreate" which adds the check entry to the ledger. The
check is a promise from the source of the check that the
destination of the check may cash the check and receive up to
the SendMax specified on the check. The check may have an
expiration, after which the check may no longer be cashed.
- "CheckCash" is a request by the destination of the check to
transfer a requested amount of funds, up to the check's SendMax,
from the source to the destination. The destination may receive
less than the SendMax due to transfer fees.
When cashing a check, the destination specifies the smallest
amount of funds that will be acceptable. If the transfer
completes and delivers the requested amount, then the check is
considered cashed and removed from the ledger. If enough funds
cannot be delivered, then the transaction fails and the check
remains in the ledger.
Attempting to cash the check after its expiration will fail.
- "CheckCancel" removes the check from the ledger without
transferring funds. Either the check's source or destination
can cancel the check at any time. After a check has expired,
any account can cancel the check.
Facilities related to checks are gated by the "Checks" amendment.
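As a rough illustration, a CheckCreate transaction could be assembled
along these lines (a minimal sketch; the placeholder addresses and the
exact spelling of the Expiration field are assumptions, not taken from
the implementation):

    #include <ripple/json/json_value.h>

    Json::Value tx;
    tx["TransactionType"] = "CheckCreate";
    tx["Account"]         = "rAlice...";     // the check's source (placeholder address)
    tx["Destination"]     = "rBob...";       // the only account that may cash it
    tx["SendMax"]         = "100000000";     // deliver at most 100 XRP, in drops
    tx["Expiration"]      = 600000000;       // optional; cashing after this time fails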
Performance counter modifications:
* Create and display counters to track:
1) Pending transaction limit overruns.
2) Total peer disconnections.
3) Peer disconnections due to resource consumption.
Avoid a potential double-free in the Json library.
The DepositAuth feature allows an account to require that
it signs for any funds that are deposited to the account.
For the time being this limits the account to accepting
only XRP, although there are plans to allow IOU payments
in the future.
The lsfDepositAuth protections are not extended to offers.
If an account creates an offer it is in effect saying, “I
will accept funds from anyone who takes this offer.”
Therefore, the typical user of the lsfDepositAuth flag
will choose never to create any offers. But they can if
they so choose.
The DepositAuth feature leaves a small gap in its
protections. An XRP payment is allowed to a destination
account with the lsfDepositAuth flag set if:
- The Destination XRP balance is less than or equal to
the base reserve and
- The value of the XRP Payment is less than or equal to
the base reserve.
This exception is intended to make it impossible for an
account to wedge itself by spending all of its XRP on fees
and leave itself unable to pay the fee to get more XRP.
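In code terms the exception amounts to a test along these lines (a
simplified sketch, not the actual logic in Payment.cpp; the function
and parameter names are illustrative):

    #include <ripple/basics/XRPAmount.h>

    // Allow an XRP payment into an account with lsfDepositAuth set only
    // when both the destination's balance and the payment itself fit
    // within the base reserve, so a depleted account can still top up
    // enough XRP to pay transaction fees.
    bool
    depositAuthExceptionApplies(
        ripple::XRPAmount destBalance,
        ripple::XRPAmount amount,
        ripple::XRPAmount baseReserve)
    {
        return destBalance <= baseReserve && amount <= baseReserve;
    }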
This commit
- adds featureDepositAuth,
- adds the lsfDepositAuth flag,
- adds support for lsfDepositAuth in SetAccount.cpp,
- adds support in Payment.cpp for rejecting payments that
don't meet the lsfDepositAuth requirements,
- adds unit tests for Payment transactions to an account
with lsfDepositAuth set, and
- adds Escrow and PayChan support for lsfDepositAuth along
with unit tests.
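One of the new Payment unit tests follows roughly this shape (a sketch
using the jtx helpers; the asfDepositAuth flag name and the
tecNO_PERMISSION result code are assumptions for illustration):

    using namespace test::jtx;
    Env env{*this};
    env.fund(XRP(10000), "alice", "bob");
    env.close();

    // bob opts in to deposit authorization.
    env(fset("bob", asfDepositAuth));
    env.close();

    // An ordinary XRP payment from alice to bob is now rejected.
    env(pay("alice", "bob", XRP(100)), ter(tecNO_PERMISSION));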
* Null JSON values can be interpreted as either objects or arrays.
* JSON arrays are now interpreted as batch commands.
* JSON objects are single commands.
* Null JSON values are ambiguous as to whether they are single or batch
commands and should be avoided.
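For example (shapes only; the contents of each command are elided):

    { ... }               one command
    [ { ... }, { ... } ]  a batch of commands
    null                  ambiguous; avoid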
Profiling and research indicate that the SQLite query planner executed
our existing SQL queries sub-optimally by not using the index efficiently.
Restructuring the SQL query works around this issue and allows queries
to be executed efficiently and without unnecessary delay.
* Update unity build for RocksDB changes
* Log RocksDB options on startup
* Support RocksDB option strings
* Support full file bloom filters
You can now configure most RocksDB options with RocksDB's option
string scheme.
Set "filter_full" to 1 to make bloom filters for an
entire file rather than each block. More memory will be
needed during compaction but less memory will be needed
during fetching for large databases. Does nothing unless
bloom filters are enabled with "filter_bits".
Example:
options = max_compaction_bytes=64;max_bytes_for_level_multiplier=64
clock_cache_mb = 96
filter_bits = 10
filter_full = 1
Do not dispatch a transaction received from a peer for
processing if it has already been dispatched within the
past ten seconds.
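A minimal sketch of that suppression (illustrative only; the actual
implementation tracks this state differently):

    #include <chrono>
    #include <unordered_map>

    template <class TxID, class Hash = std::hash<TxID>>
    class RecentlyDispatched
    {
        using Clock = std::chrono::steady_clock;
        std::unordered_map<TxID, Clock::time_point, Hash> lastDispatch_;

    public:
        // Returns true if the transaction should be dispatched now, or
        // false if it was already dispatched within the past ten seconds.
        bool
        shouldDispatch(TxID const& id)
        {
            auto const now = Clock::now();
            auto const result = lastDispatch_.emplace(id, now);
            if (!result.second &&
                now - result.first->second < std::chrono::seconds(10))
                return false;
            result.first->second = now;
            return true;
        }
    };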
Increase the number of transaction handlers that can be in
flight in the job queue and decrease the relative cost for
peers to share transaction and ledger data.
Additionally, make better use of resources by reverting commit
68b8ffdb63, which adjusts the number of threads we initialize.
Previously if you mistyped the "submit_multisigned" command as
"submit_multisign", the returned message was "Internal error". Not
very helpful. It turns out this was caused by a small amount of
code in RPCCall.cpp. Removing that code improves two situations:
1. It improves the situation with a mistyped command. Now the
command returns "Unknown method" and provides the string of
the mistyped command.
2. The "transaction_entry", if properly entered in its command
line form, would fire an assert. That assert is now removed.
In the process, it was discovered that the command line form of
the "transaction_entry" command has not worked correctly for at
least a year. Therefore support for that the command line form
of "transaction_entry" is added along with appropriate unit
tests.
It is common for a validator operator to connect their validator
only to other nodes under their control, using clustering. Relaying of
untrusted validations and proposals must be unreliable to prevent
denial of service attacks. But currently, such relays are unreliable
even within a cluster.
With this change, a cluster member's decision to relay (or originate)
a validation or proposal is honored by other cluster members. This
ensures that validators in a cluster will get reliable relaying to
hubs outside the cluster, even if other members of the cluster do not
have that validator on their UNL.
* Can be exercised from the command line with json2
* Rewrite Env::do_rpc to call the same code as
rpc from the command line. This puts rpc
handling logic in one place.
* If any of the destructor, copy assignment operator, or copy
constructor is user-declared, both copy members should be
user-declared; otherwise the compiler generation of them is deprecated.
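For example (illustrative only):

    struct Widget
    {
        Widget() = default;
        ~Widget() = default;                        // user-declared destructor, so...

        Widget(Widget const&) = default;            // ...declare the copy constructor
        Widget& operator=(Widget const&) = default; // ...and copy assignment explicitly
                                                    // instead of relying on deprecated
                                                    // implicit generation.
    };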
* Remove composite helper functions
* Add set difference and Bitset/uint256 operators
* Convert tests to use new feature bitset set difference operator
In order to automatically run unit tests with newly created
amendments, prefer to start with jtx::supported_features() and
then subtract unwanted features.
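The intended pattern looks roughly like this (a sketch using the helper
and operator names described in these notes; featureFlowCross is just an
example amendment to subtract):

    using namespace test::jtx;

    // Run the test with every supported amendment except FlowCross.
    Env env{*this, supported_features() - featureFlowCross};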
These changes identified a few bugs that were hiding in
amendments. One of those bugs, in FlowCross, is not yet fixed.
By uncommenting the test in CrossingLimits_test.cpp you can see
failures relating to that bug. Since FlowCross is not yet
enabled on the network we can fix the bug at our convenience.
Both Tickets and SHAMapV2 have been around for a while and don't
look like they will be enabled on the network soon. So they are
removed from the supportedAmendments list. This prevents Env
from automatically testing with Tickets or SHAMapV2 enabled,
although testing with those features can still be explicitly
specified.
Drive-by cleanups:
o supportedAmendments() returns a const reference rather than
a fresh vector on each call.
o supportedAmendments() implementation moved from Amendments.cpp
to Feature.cpp. Amendments.cpp deleted.
o supportedAmendments() declared in Feature.h. All other
declarations deleted.
o preEnabledAmendments() removed, since it was empty and only
used in one place. It will be easy to re-add when it is needed.
o jtx::all_features_except() renamed to
jtx::supported_features_except(), which is more descriptive.
o jtx::all_amendments() renamed to jtx::supported_amendments()
o jtx::with_features() renamed to with_only_features()
o Env_test.cpp adjusted since featureTickets is no longer
automatically enabled for unit tests.
- Separate `Scheduler` from `BasicNetwork`.
- Add an event/collector framework for monitoring invariants and calculating statistics.
- Allow distinct network and trust connections between Peers.
- Add a simple routing strategy to support broadcasting arbitrary messages.
- Add a common directed graph (`Digraph`) class for representing network and trust topologies.
- Add a `PeerGroup` class for simpler specification of the trust and network topologies.
- Add a `LedgerOracle` class to ensure distinct ledger histories and simplify branch checking.
- Add a `Submitter` to send transactions at fixed or random intervals to fixed or random peers.
Co-authored-by: Joseph McGee
We need to ensure that pointers and/or references to existing
elements will not be invalidated during the course of element
insertion and removal.
Containers like std::vector do not offer this guarantee, and
cannot be used as the underlying container for the stack.
By choosing to explicitly specify std::deque as the underlying
container, we avoid:
- the unlikely possibility of the C++ standards committee
changing the default template parameter to a different container;
- the more likely possibility of an accidental change by
a programmer who has not fully considered the consequences.
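Concretely (illustrative):

    #include <deque>
    #include <stack>

    // Name std::deque explicitly so that pointers and references to
    // existing elements stay valid as items are pushed and popped.
    template <class T>
    using StableStack = std::stack<T, std::deque<T>>;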
* Stores recent history of "good" ledgers. Uses the maximum as the
expected ledger size. When a large value drops off, use a 90%
backoff to come down to the new maximum (see the sketch below).
* If consensus is unhealthy, wipe the history in addition to the current
clamping.
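A rough sketch of the scheme (illustrative only; the window length and
the exact backoff bookkeeping are assumptions, not the actual
implementation):

    #include <algorithm>
    #include <cstddef>
    #include <deque>

    class ExpectedLedgerSize
    {
        std::deque<std::size_t> history_;
        std::size_t expected_ = 0;
        static constexpr std::size_t window = 20;   // assumed history length

    public:
        void
        onGoodLedger(std::size_t txCount)
        {
            history_.push_back(txCount);
            if (history_.size() > window)
                history_.pop_front();

            auto const windowMax =
                *std::max_element(history_.begin(), history_.end());

            // Grow immediately; when a large value ages out, back off by
            // 90% steps until reaching the new maximum.
            expected_ = std::max(
                windowMax, static_cast<std::size_t>(expected_ * 0.9));
        }

        // If consensus is unhealthy, forget the history entirely.
        void
        reset()
        {
            history_.clear();
            expected_ = 0;
        }

        std::size_t
        expected() const
        {
            return expected_;
        }
    };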
* Include .md doc files in xcode and VS projects