Entries in the ledger are located using 256-bit locators. The locators
are calculated from parameters specific to the entry whose locator is
being calculated (e.g. an account's locator is derived from the
account's address, whereas the locator for an offer is derived from
the owning account and the offer sequence).
Keylets enhance type safety during lookup and make the code more robust,
so this commit removes most of the earlier code, which used naked
uint256 values.
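A rough sketch of the idea, using illustrative stand-ins for the
actual rippled types:

    #include <array>
    #include <cstdint>

    // Illustrative stand-ins for rippled's actual types:
    using uint256 = std::array<std::uint8_t, 32>;  // 256-bit locator
    enum class LedgerEntryType : std::uint16_t { ltACCOUNT_ROOT, ltOFFER };

    // A keylet pairs the locator with the ledger entry type it should
    // resolve to, letting a lookup verify the type of the entry it
    // finds instead of trusting a naked uint256.
    struct Keylet
    {
        LedgerEntryType type;
        uint256 key;
    };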
This commit removes obsolete comments, dead or no longer useful
code, and workarounds for several issues that were present in older
compilers that we no longer support.
Specifically:
- It improves the transaction metadata handling class, simplifying
its use and making it less error-prone.
- It reduces the footprint of the Serializer class by consolidating
code and leveraging templates.
- It cleans up the ST* class hierarchy, removing dead code and
  consolidating logic to reduce complexity and code duplication.
- It shores up the handling of currency codes and the conversion
  between 160-bit currency codes and their string representation.
- It migrates beast::secure_erase to the ripple namespace and uses
a call to OpenSSL_cleanse instead of the custom implementation.
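A minimal sketch of what the last item could look like (the OpenSSL
function is spelled OPENSSL_cleanse; the exact wrapper in the
codebase may differ):

    #include <openssl/crypto.h>
    #include <cstddef>

    namespace ripple {

    // OPENSSL_cleanse is guaranteed not to be elided by the optimizer,
    // unlike a hand-rolled zeroing loop.
    inline void secure_erase(void* dest, std::size_t bytes)
    {
        OPENSSL_cleanse(dest, bytes);
    }

    } // namespace ripple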
This commit introduces the "HardenedValidations" amendment which,
if enabled, allows validators to include additional information in
their validations that can increase the robustness of consensus.
Specifically, the commit introduces a new optional field that can
be set in validation messages and used to attest to the hash of
the latest ledger that the validator considers to be fully validated.
Additionally, the commit leverages the previously introduced "cookie"
field to improve the robustness of the network by making it possible
for servers to automatically detect accidental misconfiguration which
results in two or more validators using the same validation key.
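A hypothetical sketch of how cookie-based detection could work (all
names and types here are illustrative, not the actual implementation):

    #include <cstdint>
    #include <map>
    #include <utility>

    // If two validations for the same ledger sequence carry the same
    // validation public key but different cookies, two servers are
    // likely configured with the same key.
    using PublicKey = std::uint64_t;  // stand-in for the real key type

    bool looksLikeSharedKey(
        std::map<std::pair<PublicKey, std::uint32_t>, std::uint64_t>& seen,
        PublicKey key,
        std::uint32_t seq,
        std::uint64_t cookie)
    {
        auto [it, inserted] = seen.emplace(std::make_pair(key, seq), cookie);
        return !inserted && it->second != cookie;  // same key+seq, new cookie
    }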
* Peers negotiate compression via HTTP Header "X-Offer-Compression: lz4"
* Messages larger than 70 bytes of the protocol message types
  MANIFESTS, ENDPOINTS, TRANSACTION, GET_LEDGER, LEDGER_DATA,
  GET_OBJECT, and VALIDATORLIST are compressed
* If the compressed message is larger than the uncompressed message,
  the uncompressed message is sent (see the sketch after this list)
* Compression flag and the compression algorithm type are included
in the message header
* Only LZ4 block compression is currently supported
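A hedged sketch of the size-fallback rule using the LZ4 block API
(the function name and surrounding plumbing are illustrative):

    #include <lz4.h>
    #include <cstddef>
    #include <string>

    // Returns true if the compressed form should be sent instead of
    // the original message.
    bool compressIfSmaller(std::string const& msg, std::string& out)
    {
        out.resize(LZ4_compressBound(static_cast<int>(msg.size())));
        int const n = LZ4_compress_default(
            msg.data(),
            out.data(),
            static_cast<int>(msg.size()),
            static_cast<int>(out.size()));
        if (n <= 0 || static_cast<std::size_t>(n) >= msg.size())
            return false;  // incompressible or grew: send uncompressed
        out.resize(static_cast<std::size_t>(n));
        return true;
    }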
* Reduce lock scope on all public functions
* Use TaskQueue to process shard finalization in separate thread
* Store shard last ledger hash and other info in backend
* Use a temporary SQLite DB instead of a control file when acquiring
* Remove boost serialization from cmake files
* Whenever a node downloads a new VL, it sends it to all peers that
  haven't already sent or received it (see the sketch after this
  list). It also saves it to the database_dir as a JSON text file
  named "cache." plus the public key of the list signer. Any files
  that exist for public keys provided in [validator_list_keys] will
  be loaded and processed if any download from [validator_list_sites]
  fails or no [validator_list_sites] are configured.
* Whenever a node receives a broadcast VL message, it treats it as if
  it had downloaded it on its own, broadcasting to other peers as
  described above.
* Because nodes normally download the VL once every 5 minutes, a single
node downloading a VL with an updated sequence number could
potentially propagate across a large part of a well-connected network
before any other nodes attempt to download, decreasing the amount of
time that different parts of the network are using different VLs.
* Send all of our current valid VLs to new peers on connection.
This is probably the "noisiest" part of this change, but will give
poorly connected or poorly networked nodes the best chance of syncing
quickly. Nodes which have no http(s) access configured or available
can get a VL with no extra effort.
* Requests on the peer port to the /vl/<pubkey> endpoint will return
  that VL in the same JSON format currently used for downloads, IF
  the node trusts and has a valid instance of that VL.
* Upgrade protocol version to 2.1. VLs will only be sent to 2.1 and
higher nodes.
* Resolves #2953
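A rough sketch of the relay rule described in the first bullet above
(the Peer interface is illustrative, not rippled's actual one):

    #include <cstdint>
    #include <vector>

    // Forward a newly acquired VL to every peer that has neither sent
    // it to us nor already received it from us.
    struct Peer
    {
        virtual ~Peer() = default;
        virtual bool hasSeen(std::uint64_t vlHash) const = 0;  // sent or received
        virtual void send(std::uint64_t vlHash) = 0;           // relay the VL
    };

    void relayValidatorList(std::uint64_t vlHash, std::vector<Peer*> const& peers)
    {
        for (auto* peer : peers)
            if (!peer->hasSeen(vlHash))
                peer->send(vlHash);
    }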
* use tagged containers for pkg build
* update build images
* continue to build container images in pipeline, but allow
  failure (non-blocking)
* limit travis macos cache
* add vs2019 windows to travis
* remove xcode 9 travis build
* remove clang5/6 from CI and update min version of Clang required in
cmake
* break windows CI build into stages to reduce timeouts
* update datelib
* add if condition to travis builds to allow commit message to limit
builds by platform
The 'network_id' option allows an administrator to specify to which
network they intend a server to connect. Servers can leverage this
information to optimize routing and prune automatically discovered
cross-network connections.
This commit will, if merged:
- add support for the devnet keyword, which corresponds to network ID #2;
- report the network ID, if one is configured, in server_info
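For example, an operator could select the development network in the
config file like this (stanza name from this commit; 'devnet' maps to
network ID #2 per the bullet above):

    [network_id]
    devnet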
This commit restructures the HTTP-based protocol negotiation that
`rippled` executes and introduces support for negotiating compression
on peer links which, when enabled, should result in significant
bandwidth savings for some server roles.
This commit also introduces the new `[network_id]` configuration option
that administrators can use to specify which network the server is part of
and intends to join. This makes it possible for servers from different
networks to drop the link early.
The changeset also improves the log messages generated when negotiation
of a peer link upgrade fails. In the past, no useful information would
be logged, making it more difficult for admins to troubleshoot errors.
This commit also fixes RIPD-237 and RIPD-451.
* replace boost::beast::detail::iequals with boost::iequals
* replace deprecated `buffers` function with `make_printable`
* replace boost::beast::detail::ascii_tolower with lambda
* add missing includes
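A hedged before/after sketch of the first two replacements:

    #include <boost/algorithm/string/predicate.hpp>
    #include <boost/beast/core/flat_buffer.hpp>
    #include <boost/beast/core/make_printable.hpp>
    #include <iostream>
    #include <string>

    int main()
    {
        // boost::beast::detail::iequals -> the public boost::iequals:
        std::string const field = "UPGRADE";
        bool const match = boost::iequals(field, "upgrade");  // true

        // deprecated buffers(b) -> make_printable(b) when streaming
        // buffer contents:
        boost::beast::flat_buffer b;
        std::cout << match << boost::beast::make_printable(b.data()) << '\n';
    }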
This commit allows server operators to reserve slots for specific
peers (identified by the peer's public node identity) and to make
changes to the reservations while the server is operating.
This commit closes #2938
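Hypothetical invocations, assuming admin commands of roughly this
shape accompany the feature (the key shown is a placeholder):

    rippled peer_reservations_add <node_public_key> "test validator, lab 3"
    rippled peer_reservations_del <node_public_key>
    rippled peer_reservations_list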
A running instance of the server tracks the number of protocol messages
and the number of bytes it sends and receives.
This commit makes the counters more granular, allowing server operators
to better track and understand bandwidth usage.
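A minimal sketch of the granular-counter idea (the category names and
layout are illustrative):

    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    // One counter set per traffic category instead of a single
    // global total.
    enum class Traffic : std::size_t
    {
        base, transaction, proposal, validation, count
    };

    struct Counters
    {
        std::atomic<std::uint64_t> messagesIn{0}, bytesIn{0};
        std::atomic<std::uint64_t> messagesOut{0}, bytesOut{0};
    };

    std::array<Counters, static_cast<std::size_t>(Traffic::count)> counters;

    void addRecv(Traffic cat, std::uint64_t bytes)
    {
        auto& c = counters[static_cast<std::size_t>(cat)];
        ++c.messagesIn;
        c.bytesIn += bytes;
    }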
* Add construction and assignment from a generic
  contiguous container. Both compile-time and run-time
  checks are made to ensure the safety of this
  conversion (see the sketch after this list).
* Remove base_uint::copyFrom. The generic copy assignment
  operator now provides this functionality with enhanced
  safety and better syntax.
* Remove construction from, and dependence on, Blob.
  The generic constructor and assignment now handle this
  functionality.
* Fix client code to adhere to this new API.
* Remove the use of fromVoid in PeerImp.cpp, as it was
  an inappropriate use of this dangerous API. The
  generic container constructors do the same with
  enhanced safety and better syntax.
* Rename data member pn to data_ and make it private.
* Remove constraint from hash_append
* Remove array_type alias
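A hedged sketch of the generic construction described in the first
bullet (the real base_uint differs in layout and in the checks it
performs):

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <stdexcept>
    #include <type_traits>

    template <std::size_t Bits>
    class base_uint
    {
        static_assert(Bits % 8 == 0, "Bits must be a multiple of 8");
        std::array<std::uint8_t, Bits / 8> data_{};  // was 'pn', now private

    public:
        base_uint() = default;

        // Accept any contiguous container of single-byte values (such
        // as Blob, i.e. std::vector<std::uint8_t>). The element size
        // is checked at compile time; the length at run time.
        template <
            class Container,
            class = std::enable_if_t<sizeof(typename Container::value_type) == 1>>
        explicit base_uint(Container const& c)
        {
            if (c.size() != data_.size())
                throw std::invalid_argument("base_uint: size mismatch");
            std::memcpy(data_.data(), c.data(), data_.size());
        }
    };

    using uint256 = base_uint<256>;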
This patch removes calls to several deprecated asio functions.
* `io_service::post` becomes `post` (free function)
* `io_service::work` becomes `executor_work_guard`
* `io_service::wrap` becomes `bind_executor`
* `get_io_context` becomes `get_executor` or `get_executor().context()`
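A hedged before/after sketch of these replacements:

    #include <boost/asio.hpp>

    int main()
    {
        boost::asio::io_context ioc;

        // io_service::work -> executor_work_guard (via make_work_guard):
        auto guard = boost::asio::make_work_guard(ioc);

        // io_service::post -> the free function post:
        boost::asio::post(ioc, [] { /* handler */ });

        // io_service::wrap -> bind_executor:
        auto handler = boost::asio::bind_executor(
            ioc.get_executor(), [] { /* handler */ });

        guard.reset();  // let run() return once queued work completes
        ioc.run();
        (void)handler;
    }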
This patch was tested with Boost 1.69 and 1.70. The functions
`ripple::get_lowest_layer` and `beast::create_waitable_timer` are
required to handle a breaking difference between these versions. When
rippled no longer needs to support pre-1.70 Boost versions, both of
these functions, as well as the waitable timer injections, may be
removed.
All of the jss::* names are defined in a single file, previously
named JsonFields.h. That name has little to do with JsonStaticStrings
(which is what jss is short for) or with jss itself, so the file is
renamed to jss.h to better reflect what it contains.
All includes of that file are fixed. A few include-order
issues are tidied up along the way.
The new 'Domain' field allows validator operators to associate a domain
name with their manifest in a transparent and independently verifiable
fashion.
It is important to point out that while this system can
cryptographically prove that a particular validator claims to be
associated with a domain, it does *NOT* prove that the validator is
actually associated with that domain.
Domain owners will have to cryptographically attest to operating the
validators that claim to be associated with their domain. One option
for doing so would be to make a file available over HTTPS under the
domain being claimed, served in a separately verifiable way (e.g. by
ensuring that the certificate used to serve the file matches the
domain being claimed) and containing the long-term master public keys
of the validator(s) associated with that domain.
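One hypothetical shape such a file could take (the path and field
names are illustrative; this commit does not mandate a format):

    # Served from, e.g., https://example.com/.well-known/xrp-ledger.toml
    [[VALIDATORS]]
    public_key = "nHB..."   # long-term master public key (placeholder)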
Credit for an early prototype of this idea goes to GitHub user @cryptobrad
who introduced a PR that would allow a validator list publisher to attest
that a particular validator was associated with a domain. The idea may be
worth revisiting as a way of verifying the domain name claimed by the
validator's operator.
If a server is configured to support crawl, it will report the
IP addresses of all peers it is connected to, unless those peers
have explicitly opted out by setting the `peer_private` option
in their config file.
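The opt-out looks like this in the server's config file:

    [peer_private]
    1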
This commit makes servers that are configured as validators
opt out of crawling.
Several commands allow a user to retrieve a server's status. To make
it more difficult to identify validators via fingerprinting, these
commands typically limit, for unverified connections, the disclosure
of information that could reveal that a particular server is a
validator.
Prior to this commit, servers configured to operate as validators
would, instead of simply reporting their server state as 'full',
augment their state information to indicate whether they were
'proposing' or 'validating'.
Servers now provide this enhanced state information only to
connections that have elevated privileges.
Acknowledgements:
Ripple thanks Markus Teufelberger for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
The /crawl API endpoint allows developers to examine the structure of
the XRP Ledger's overlay network.
This commit adds additional information about the local server to the
/crawl endpoint, making it possible for developers to create data-rich
network-wide status dashboards.
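An illustrative, abridged shape of the response (field names may
differ from the actual output):

    {
      "overlay": {
        "active": [
          {"public_key": "...", "type": "peer", "uptime": 1234}
        ]
      },
      "server": {"build_version": "...", "uptime": 567890}
    }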
Related:
- https://developers.ripple.com/peer-protocol.html
- https://github.com/ripple/rippled-network-crawler
Specially crafted messages could cause the server to buffer large
amounts of data, increasing memory pressure.
This commit changes how messages are buffered and imposes a limit
on the amount of data that the server is willing to buffer.
Acknowledgements:
Ripple thanks Aaron Hook for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find. For information
on Ripple's Bug Bounty program, please visit:
https://ripple.com/bug-bounty
When validators publish a proposal, they include the close time that they
believe the new ledger should have, and the network attempts to reach
consensus on that.
Instead of delaying consensus when no close time has the required
majority, the servers can "agree to disagree": if this happens, they
switch to proposing a close time of 0, and the network avalanches to
that value.
If that occurs, deterministic rules record the new ledger's close
time as being one second later than its parent's, and set a flag
indicating that no consensus on the close time was reached.
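A sketch of the deterministic rule just described (the names are
illustrative):

    #include <cstdint>

    // A proposed close time of 0 means "no consensus"; the ledger then
    // records its parent's close time plus one second and flags that
    // the close time was not agreed upon.
    std::uint32_t effectiveCloseTime(
        std::uint32_t agreedCloseTime,  // 0 if no close time won a majority
        std::uint32_t parentCloseTime,
        bool& closeTimeAgreed)
    {
        closeTimeAgreed = (agreedCloseTime != 0);
        if (!closeTimeAgreed)
            return parentCloseTime + 1;  // one second after the parent
        return agreedCloseTime;
    }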
The wire protocol decoder would incorrectly filter out such proposals,
so that they would never be seen by the higher-level consensus engine.
This commit removes the low-level filtering and allows higher-level
code to filter out stale proposals instead.