This flag, if present, suppresses the output of incoming
trustlines in default state.
This is primarily motivated by the observation that users of Xumm often
have many unwanted incoming trustlines in the default state, which are
not useful in the vast majority of cases.
Being able to suppress them in `account_lines` responses saves bandwidth
and resources.
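As a rough illustration of the server-side effect, the sketch below filters a list of trust lines when a suppression flag is set; the `TrustLineInfo` type, its `defaultState` field, and the flag name are simplifications invented for this example, not the actual rippled code.

```cpp
#include <vector>

// Hypothetical, simplified view of a trust line as returned by account_lines.
struct TrustLineInfo
{
    bool defaultState;  // true if the line has no balance, limits or flags set
    // ... currency, peer account, balance, etc.
};

// When the suppression flag is set, incoming trust lines that are still in
// their default state are omitted from the response.
std::vector<TrustLineInfo>
filterLines(std::vector<TrustLineInfo> const& all, bool suppressDefault)
{
    std::vector<TrustLineInfo> out;
    out.reserve(all.size());

    for (auto const& line : all)
    {
        if (suppressDefault && line.defaultState)
            continue;  // skip unwanted default-state lines
        out.push_back(line);
    }

    return out;
}
```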
* Only requires adding new feature names in one place. (A counter also
needs to be incremented, but a check on startup will catch it if that is missed.)
* Allows rippled to include the code to support a given amendment, but
not vote for it by default. This allows the amendment to be enabled in
a future version without necessarily amendment blocking these older
versions.
* The default vote is carried with the amendment name in the list of
supported amendments.
* The amendment table is constructed with the amendment and its default
vote (see the sketch below).
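A minimal sketch of the idea, using invented names (`Supported`, `DefaultVote`, and the example amendment strings) rather than the actual rippled types:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Whether a server should vote for an amendment it supports, absent explicit
// operator configuration. (Names here are illustrative only.)
enum class DefaultVote : std::uint8_t { no, yes };

// Each supported amendment carries its default vote alongside its name, so a
// new feature only has to be registered in this one place.
struct Supported
{
    std::string name;
    DefaultVote vote;
};

std::vector<Supported> const supportedAmendments = {
    {"ExampleAmendmentA", DefaultVote::yes},
    {"ExampleAmendmentB", DefaultVote::no},  // code shipped, not voted for yet
};

// On startup the amendment table is built from this list, so a later release
// can flip a default vote without amendment blocking older servers that
// already ship the supporting code.
```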
The existing logic involves every server sending every transaction
that it receives to all its peers (except the one that it received
a transaction from).
This commit instead uses a randomized algorithm: a node randomly
selects the peers to relay a given transaction to, caches the
hashes of transactions that were not relayed, and forwards those
hashes to its peers once every second. Peers can then determine whether
there are transactions that they have not seen and can request them from
the node that has them.
It is expected that this feature will further reduce the bandwidth
needed to operate a server.
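The following is a simplified sketch of that idea, not the actual relay code; the selection probability, container choices, and helper names are assumptions for illustration.

```cpp
#include <cstdint>
#include <random>
#include <unordered_set>
#include <vector>

using TxHash = std::uint64_t;   // stand-in for a 256-bit transaction hash
using PeerId = std::uint32_t;

class RelaySketch
{
    std::unordered_set<TxHash> deferred_;  // hashes not relayed directly
    std::mt19937 rng_{std::random_device{}()};

public:
    // Relay the transaction to a random subset of peers and remember its
    // hash so the remaining peers can learn about it later.
    void relay(TxHash hash, std::vector<PeerId> const& peers)
    {
        std::bernoulli_distribution pick(0.25);  // assumed selection ratio

        for (auto peer : peers)
        {
            if (pick(rng_))
                sendTransaction(peer, hash);
        }

        deferred_.insert(hash);
    }

    // Called roughly once per second: advertise the accumulated hashes so
    // peers can request any transactions they have not yet seen.
    void sweep(std::vector<PeerId> const& peers)
    {
        for (auto peer : peers)
            sendHashes(peer, deferred_);

        deferred_.clear();
    }

private:
    void sendTransaction(PeerId, TxHash) { /* network I/O elided */ }
    void sendHashes(PeerId, std::unordered_set<TxHash> const&) { /* elided */ }
};
```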
This commit removes the `ltINVALID` pseudo-type identifier from
`LedgerEntryType` and the `ttINVALID` pseudo-type identifier from
`TxType` and includes several small additional improvements that
help to simplify the code base.
It also improves the documentation of `LedgerEntryType` and `TxType`,
which was previously scattered, and highlights some important caveats
associated with making changes to the ledger and transaction type
identifiers.
The commit also adds a safety check to the `KnownFormats<>` class
that will catch the accidental reuse of format identifiers.
Ideally, this should be done at compile time but C++ does not (yet?)
allow for the sort of introspection that would enable this.
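A rough sketch of that kind of runtime check is shown below; the registry shape and names are assumptions for illustration, not the actual `KnownFormats<>` implementation.

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Illustrative registry that refuses duplicate format identifiers.
template <class KeyType>
class FormatRegistry
{
    std::map<KeyType, std::string> formats_;

public:
    void add(KeyType id, std::string name)
    {
        // The safety check: accidental reuse of an identifier is detected
        // when the registry is populated, i.e. at startup.
        if (!formats_.emplace(id, std::move(name)).second)
            throw std::logic_error("duplicate format identifier");
    }
};
```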
The Negative UNL is a feature of the XRP Ledger consensus protocol that
improves liveness (the network's ability to make forward progress) during
a partial outage. Using the Negative UNL, servers adjust their effective
UNLs based on which validators are currently online and operational, so
that a new ledger version can be declared validated even if several trusted
validators are offline.
The Negative UNL has no impact on how the network processes transactions
or what transactions' outcomes are, except that it improves the network's
ability to declare outcomes final during some types of partial outages.
The feature was originally introduced with version **1.6.0**, but it could
only be enabled manually. If merged, this commit introduces
the amendment associated with the feature so that server operators can
vote on whether to enable it.
For more details, please see https://xrpl.org/negative-unl.html
This commit closes #3898.
Under some circumstances, it is possible to induce an out-of-bounds
memory read in the base58 decoder.
This commit addresses this issue.
Acknowledgements:
Guido Vranken for discovering and responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
Ripple is generously sponsoring a bug bounty program for the
rippled project. For more information please visit:
https://ripple.com/bug-bounty
The HardenedValidations amendment introduces additional fields
in validations:
- `sfValidatedHash`, if present, is the hash of the last ledger that
the validator considers to be fully validated.
- `sfCookie`, if present, is a 64-bit cookie (the default
implementation selects it randomly at startup but other
implementations are possible), which can be used to improve the
detection and classification of duplicate validations.
- `sfServerVersion`, if present, reports the version of the software
that the validator is running. By surfacing this information,
server operators gain additional insight into the variety of software
running on the network.
If merged, this commit fixes #3797 by adding the fields to the
`validations` stream as shown below:
- `sfValidatedHash` as `validated_hash`: a 256-bit hex string;
- `sfCookie` as `cookie`: a 64-bit integer as a string; and
- `sfServerVersion` as `server_version`: a 64-bit integer as
a string.
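A sketch of how the optional fields might be surfaced on a stream message, assuming a jsoncpp-style `Json::Value`; the function and parameter names are invented for illustration and are not the actual rippled code.

```cpp
#include <json/json.h>

#include <cstdint>
#include <string>

// Attach the new optional fields to a `validations` stream message.
// Only fields that are present in the validation are added.
Json::Value
appendHardenedFields(
    Json::Value msg,
    std::string const* validatedHash,   // 256-bit hash as a hex string
    std::uint64_t const* cookie,        // random per-run cookie
    std::uint64_t const* serverVersion) // software version identifier
{
    if (validatedHash)
        msg["validated_hash"] = *validatedHash;

    if (cookie)
        msg["cookie"] = std::to_string(*cookie);  // 64-bit integer as a string

    if (serverVersion)
        msg["server_version"] = std::to_string(*serverVersion);

    return msg;
}
```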
With this amendment, the CheckCash transaction creates a TrustLine
if needed. The change is modeled after offer crossing. And,
similar to offer crossing, cashing a check allows an account to
exceed its trust line limit.
The `[node_size]` configuration parameter is used to tune various
parameters based on the hardware that the code is running on. The
parameter can take five distinct values: `tiny`, `small`, `medium`,
`large` and `huge`.
The default value in the code is `tiny`, but the default configuration
file sets the value to `medium`. This commit attempts to detect the
amount of RAM on the system and adjusts the default node size
based on the available RAM and the number of hardware execution
threads.
The decision matrix currently used is:
| RAM \ Threads | 1 | 2 or 3 | ≥ 4 |
|:-------------:|:----:|:------:|:------:|
| > ~8GB | tiny | tiny | tiny |
| > ~12GB | tiny | small | small |
| > ~16GB | tiny | small | medium |
| > ~24GB | tiny | small | large |
| > ~32GB | tiny | small | huge |
Some systems exclude memory reserved by the hardware, the kernel,
or the underlying hypervisor, so the automatic detection code may end
up determining the `node_size` to be one step smaller than "appropriate"
given the above table.
The detection algorithm is simplistic and does not take into account
other relevant factors. Therefore, for production-quality servers it
is recommended that server operators examine the system holistically
and determine what the appropriate size is instead of relying on the
automatic detection code.
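As a sketch only, the matrix above translates into something like the following; the RAM query itself is platform specific (e.g. `sysinfo(2)` on Linux) and is elided here, and the thresholds simply mirror the table rather than the actual detection code.

```cpp
#include <cstdint>
#include <string>
#include <thread>

// Pick a node size from total RAM (in GB) and the number of hardware
// execution threads, following the decision matrix above.
std::string
guessNodeSize(std::uint64_t totalRamGB, unsigned threads)
{
    if (threads <= 1 || totalRamGB <= 8)
        return "tiny";

    if (threads <= 3)
        return totalRamGB > 12 ? "small" : "tiny";

    // 4 or more hardware threads:
    if (totalRamGB > 32)
        return "huge";
    if (totalRamGB > 24)
        return "large";
    if (totalRamGB > 16)
        return "medium";
    if (totalRamGB > 12)
        return "small";

    return "tiny";
}

// Example: guessNodeSize(totalRamGB, std::thread::hardware_concurrency());
```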
To aid server operators, the node size will now be reported in the
`server_info` API as `node_size` when the command is invoked in
'admin' mode.
While most of the code associated with secp256k1 operations had
been migrated to libsecp256k1, the deterministic key derivation
code was still using calls to OpenSSL.
If merged, this commit replaces the OpenSSL-based routines with
new libsecp256k1-based implementations. No functional change is
expected and the change should be transparent.
This commit also removes several support classes and utility
functions that wrapped or adapted various OpenSSL types that
are no longer needed.
A tip of the hat to the original author of this truly superb
library, Dr. Pieter Wuille, and to all other contributors.
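For illustration only, deriving and serializing a compressed public key with libsecp256k1 looks roughly like the sketch below; the actual deterministic derivation in rippled involves additional hashing and tweaking steps that are omitted here.

```cpp
#include <secp256k1.h>

#include <array>
#include <cstddef>

// Derive a compressed 33-byte public key from a 32-byte secret key.
// Returns false if the secret key is out of range or derivation fails.
bool
derivePublicKey(
    std::array<unsigned char, 32> const& seckey,
    std::array<unsigned char, 33>& pubkeyOut)
{
    secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);

    bool ok = false;

    if (secp256k1_ec_seckey_verify(ctx, seckey.data()) == 1)
    {
        secp256k1_pubkey pubkey;

        if (secp256k1_ec_pubkey_create(ctx, &pubkey, seckey.data()) == 1)
        {
            std::size_t len = pubkeyOut.size();
            ok = secp256k1_ec_pubkey_serialize(
                     ctx, pubkeyOut.data(), &len, &pubkey,
                     SECP256K1_EC_COMPRESSED) == 1;
        }
    }

    secp256k1_context_destroy(ctx);
    return ok;
}
```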
This commit expands the detection capabilities of the Byzantine
validation detector. Prior to this commit, only validators that
were on a server's UNL were monitored. Now, all the validations
that a server receives are passed through the detector.
The existing class offered several constructors which were mostly
unnecessary. This commit eliminates all existing constructors and
introduces a single new one, taking a `Slice`.
The internal buffer is switched from `std::vector` to `Buffer` to
save a minimum of 8 bytes (plus the buffer slack that is inherent
in `std::vector`) per `SHAMapItem` instance.
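The shape of the change, with simplified stand-ins for `Slice`, `Buffer`, and the item class (these are not the actual rippled types):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <memory>

// A non-owning view of contiguous bytes.
struct Slice
{
    std::uint8_t const* data = nullptr;
    std::size_t size = 0;
};

// A fixed-size owning buffer: unlike std::vector, it carries no separate
// capacity, saving bookkeeping overhead per item.
class Buffer
{
    std::unique_ptr<std::uint8_t[]> p_;
    std::size_t size_ = 0;

public:
    explicit Buffer(Slice s) : p_(new std::uint8_t[s.size]), size_(s.size)
    {
        if (s.size != 0)
            std::memcpy(p_.get(), s.data, s.size);
    }

    std::size_t size() const { return size_; }
};

// The single remaining constructor takes the item's serialized data as a
// slice and copies it into the owning buffer.
class ItemSketch
{
    Buffer data_;

public:
    explicit ItemSketch(Slice data) : data_(data) {}

    std::size_t size() const { return data_.size(); }
};
```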
Before this change any non-zero `Sequence` field was handled as
a non-ticketed transaction, even if a `TicketSequence` was
present. We learned that this could lead to user confusion.
So the rules are tightened up.
Now if any transaction contains both a non-zero `Sequence`
field and a `TicketSequence` field then that transaction
returns a `temSEQ_AND_TICKET` error code.
The (deprecated) `sign` and `submit` RPC commands are tuned
up so they auto-insert a `Sequence` field of zero if they see
a `TicketSequence` in the transaction.
No amendment is needed because this change is going into
the first release that supports the TicketBatch amendment.
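A minimal sketch of the tightened rule; the type and function names are illustrative, not the actual transactor code.

```cpp
#include <cstdint>
#include <optional>

// Illustrative result codes.
enum TER { tesSUCCESS, temSEQ_AND_TICKET };

// Illustrative view of the relevant transaction fields.
struct TxFields
{
    std::uint32_t sequence = 0;                   // Sequence
    std::optional<std::uint32_t> ticketSequence;  // TicketSequence, if present
};

// A transaction may be sequenced or ticketed, but not both: a non-zero
// Sequence combined with a TicketSequence is now rejected as malformed.
TER
checkSeqAndTicket(TxFields const& tx)
{
    if (tx.sequence != 0 && tx.ticketSequence.has_value())
        return temSEQ_AND_TICKET;

    return tesSUCCESS;
}
```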
The existing code that deserialized an `STAmount` was sub-optimal and performed
poorly. In some rare cases, otherwise valid serialized amounts could
overflow during deserialization. This commit helps
detect error conditions more quickly and eliminates the problematic corner cases.
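A sketch of the kind of up-front range checking involved, not the actual `STAmount` code; the bounds shown are the usual XRPL limits for issued-currency amounts (mantissa in [10^15, 10^16 - 1], exponent in [-96, 80]) and should be treated as an assumption here.

```cpp
#include <cstdint>
#include <stdexcept>

constexpr std::uint64_t minMantissa = 1000000000000000ull;   // 10^15
constexpr std::uint64_t maxMantissa = 9999999999999999ull;   // 10^16 - 1
constexpr int minExponent = -96;
constexpr int maxExponent = 80;

// Validate the mantissa/exponent of a deserialized issued-currency amount
// before any normalization, so out-of-range inputs fail fast instead of
// overflowing later.
void
checkIssuedAmount(std::uint64_t mantissa, int exponent)
{
    if (mantissa == 0)
        return;  // canonical zero

    if (mantissa < minMantissa || mantissa > maxMantissa ||
        exponent < minExponent || exponent > maxExponent)
        throw std::runtime_error("invalid serialized amount");
}
```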