This reverts commit 002893f280.
There were two files with conflicts in the automated revert:
- src/ripple/rpc/impl/RPCHelpers.h and
- src/test/rpc/JSONRPC_test.cpp
Those files were manually resolved.
* Revert "Optimize calculation of close time to avoid impasse and minimize gratuitous proposal changes (#4760)"
This reverts commit 8ce85a9750.
* Revert "Several changes to improve Consensus stability: (#4505)"
This reverts commit f259cc1ab6.
* Add missing include
---------
Co-authored-by: seelabs <scott.determan@yahoo.com>
Clean up the peer-to-peer protocol start/close sequences by introducing
START_PROTOCOL and GRACEFUL_CLOSE messages, which synchronize send/receive
between inbound and outbound peers. The GRACEFUL_CLOSE message
differentiates application-layer closes from link-layer failures (see the
sketch after the list below).
* Introduce the `InboundHandoff` class to manage inbound peer
instantiation and synchronize the send/receive protocol messages
between peers.
* Update `OverlayImpl` to utilize the `InboundHandoff` class to manage
inbound handshakes.
* Update `PeerImp` for improved handling of protocol messages.
* Modify the `Message` class for better maintainability.
* Introduce P2P protocol version `2.3`.
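A minimal sketch of the start/close sequencing, assuming illustrative names
(`Peer`, `onHandshakeComplete`, and `closeGracefully` are hypothetical; the
real messages are protobuf definitions, not a C++ enum):
```cpp
enum class ProtocolMessage { START_PROTOCOL, GRACEFUL_CLOSE };

struct Peer
{
    void send(ProtocolMessage) { /* serialize and write to the socket */ }
    void shutdown() { /* tear down the TCP connection */ }
};

// Inbound side: once the handshake completes, tell the outbound peer
// that it may begin sending protocol messages.
void onHandshakeComplete(Peer& peer)
{
    peer.send(ProtocolMessage::START_PROTOCOL);
}

// Either side: announce an intentional, application-layer close so the
// remote end can distinguish it from a link-layer failure.
void closeGracefully(Peer& peer)
{
    peer.send(ProtocolMessage::GRACEFUL_CLOSE);
    peer.shutdown();
}
```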
A bridge connects two blockchains: a locking chain and an issuing
chain (also called a mainchain and a sidechain). Both are independent
ledgers, with their own validators and potentially their own custom
transactions. Importantly, there is a way to move assets from the
locking chain to the issuing chain and a way to return those assets from
the issuing chain back to the locking chain: the bridge. This key
operation is called a cross-chain transfer. A cross-chain transfer is
not a single transaction. It happens on two chains, requires multiple
transactions, and involves an additional server type called a "witness".
A bridge does not exchange assets between two ledgers. Instead, it locks
assets on one ledger (the "locking chain") and represents those assets
with wrapped assets on another chain (the "issuing chain"). A good model
to keep in mind is a box with an infinite supply of wrapped assets.
Putting an asset from the locking chain into the box will release a
wrapped asset onto the issuing chain. Putting a wrapped asset from the
issuing chain back into the box will release one of the existing locking
chain assets back onto the locking chain. There is no other way to get
assets into or out of the box. Note that there is no way for the box to
"run out of" wrapped assets - it has an infinite supply.
Co-authored-by: Gregory Popovitch <greg7mdp@gmail.com>
* Verify accepted ledger becomes validated, and retry
with a new consensus transaction set if not.
* Always store proposals.
* Track proposals by ledger sequence. This helps slow peers catch
up with the rest of the network.
* Acquire transaction sets for proposals with future ledger sequences.
This also helps slow peers catch up.
* Optimize timer delay for the establish phase to wait based on how
long validators have been sending proposals. This also helps slow
peers catch up.
* Fix an impasse in achieving close time consensus.
* Don't wait between open and establish phases.
Add a new transaction submission API field, "sync", which
determines the behavior of the server when submitting transactions:
1) sync (default): Process transactions in a batch immediately,
and return only once the transaction has been processed.
2) async: Put transaction into the batch for the next processing
interval and return immediately.
3) wait: Put transaction into the batch for the next processing
interval and return only after it is processed.
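A hedged client-side sketch of a submission using the new field (the method
and other fields follow the standard `submit` API; the blob value is elided):
```cpp
#include <string>

// "wait" is one of the three modes listed above; tx_blob is elided.
std::string const submitRequest = R"({
    "method": "submit",
    "params": [{
        "tx_blob": "...",
        "sync": "wait"
    }]
})";
```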
- "Rename" the type `LedgerInfo` to `LedgerHeader` (but leave an alias
for `LedgerInfo` to not yet disturb existing uses). Put it in its own
public header, named after itself, so that it is more easily found.
- Move the type `Fees` and NFT serialization functions into public
(installed) headers.
- Compile the XRPL and gRPC protocol buffers directly into `libxrpl` and
install their headers. Fix the Conan recipe to correctly export these
types.
Addresses change (2) in
https://github.com/XRPLF/XRPL-Standards/discussions/121.
For context: This work supports Clio's dependence on libxrpl. Clio is
just an example consumer. These changes should benefit all current and
future consumers.
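A minimal sketch of the compatibility alias described in the first item
above (the header name and the field contents are illustrative):
```cpp
// Hypothetical public header named after the type (e.g. LedgerHeader.h).
struct LedgerHeader
{
    // ledger header fields elided
};

// Deprecated alias so existing uses of LedgerInfo keep compiling for now.
using LedgerInfo = LedgerHeader;
```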
---------
Co-authored-by: cyan317 <120398799+cindyyan317@users.noreply.github.com>
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
Use the most recent versions in ConanCenter.
* Due to a bug in Clang 16, you may get a compile error:
"call to 'async_teardown' is ambiguous"
* A compiler flag workaround is documented in `BUILD.md`.
* At this time, building this with gcc 13 may require editing some files
in `.conan/data`
* A patch to support gcc13 may be added in a later PR.
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
Log exception messages at several locations.
Previously, these were locations where an exception was caught, but the
exception message was not logged. Logging the exception messages can be
useful for analysis or debugging. The additional logging could have a
small negative performance impact.
Fix #3213.
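A minimal sketch of the pattern, with `std::cerr` standing in for rippled's
journal logging so the example stays self-contained (`doWork` is
hypothetical):
```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

// Hypothetical work that may throw; the catch site previously swallowed
// the message and now logs e.what() as well.
void doWork()
{
    throw std::runtime_error("example failure");
}

int main()
{
    try
    {
        doWork();
    }
    catch (std::exception const& e)
    {
        std::cerr << "Exception in doWork: " << e.what() << '\n';
    }
}
```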
- MSVC 19.x reported a warning about deprecated include paths in Boost
for the function_output_iterator class (boost::function_output_iterator).
- Eliminate that warning by updating the include paths, as suggested by
the compiler warnings.
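A sketch of the include-path change this implies, assuming the path that
Boost's deprecation notice points to:
```cpp
// Before (deprecated umbrella path, triggers the MSVC warning):
// #include <boost/function_output_iterator.hpp>

// After (path suggested by the deprecation warning):
#include <boost/iterator/function_output_iterator.hpp>
```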
Partially revert the functionality introduced
with #4195 / 5a15229 (part of 1.10.0-b1).
Acknowledgements:
Aaron Hook for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xrpl.org
---------
Co-authored-by: Nik Bougalis <nikb@bougalis.net>
Each node on the network is supposed to have a unique cryptographic
identity. Typically, this identity is generated randomly at startup
and stored for later reuse in the (poorly named) file `wallet.db`.
If the file is copied, it is possible for two nodes to share the
same node identity. This is generally not desirable and existing
servers will detect and reject connections to other servers that
have the same key.
This commit achieves three things:
1. It improves the detection code to pinpoint instances where two
distinct servers with the same key connect with each other. In
that case, servers will log an appropriate error and shut down
pending intervention by the server's operator.
2. It makes it possible for server administrators to securely and
easily generate new cryptographic identities for servers using
the new `--newnodeid` command line argument. When a server is
started using this command, it will generate and save a random
secure identity.
3. It makes it possible to configure the identity using a command
line option, so that it can be derived from data or parameters
associated with the container or hardware where the instance is
running: pass the `--nodeid` option, followed by a single
argument identifying the information from which the node's
identity is derived. For example, the following command will
result in nodes with different hostnames having different node
identities: `rippled --nodeid $HOSTNAME`
The last option is particularly useful for automated cloud-based
deployments that minimize the need for storing state and provide
unique deployment identifiers.
**Important note for server operators:**
Depending on variables outside of the control of this code,
such as operating system version or configuration, permissions,
and more, it may be possible for other users or programs to be
able to access the command line arguments of other processes
on the system.
If you are operating in a shared environment, you should avoid
using this option, preferring instead to use the `[node_seed]`
option in the configuration file, and use permissions to limit
exposure of the node seed.
A user who gains access to the value used to derive the node's
unique identity could impersonate that node.
The commit also updates the minimum supported server protocol
version to `XRPL/2.1`, which has been supported since version
1.5.0, and eliminates support for `XRPL/2.0`.
The existing code properly parses the `network_id` parameter from the
configuration file, but it does not properly set up the code to use
the value. As a result, the configured `network_id` is ignored.
When fetching ledgers, the existing code would isolate the peer
that sent the most useful responses and issue follow-up queries
only to that peer.
This commit increases query aggressiveness and changes the
mechanism used to select the peers that receive follow-up
queries, so as to spread the load more evenly across the peers
that provided useful responses.
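A hedged sketch of the selection idea (the names and the shuffle policy are
illustrative, not the actual implementation):
```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

using PeerId = int;

// Pick up to maxQueries peers, chosen at random from those that
// previously returned useful data, instead of always querying one.
std::vector<PeerId>
selectQueryPeers(std::vector<PeerId> usefulPeers, std::size_t maxQueries)
{
    static std::mt19937 rng{std::random_device{}()};
    std::shuffle(usefulPeers.begin(), usefulPeers.end(), rng);
    if (usefulPeers.size() > maxQueries)
        usefulPeers.resize(maxQueries);
    return usefulPeers;
}
```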
This commit combines the `apply_mutex` and `read_mutex` into a single `mutex_`
variable. This new `mutex_` is a `shared_mutex`, and most operations only need
to lock it with a `shared_lock`. The only exception is `applyManifest`, which
may need a `unique_lock`.
One consequence of removing the `apply_mutex` is that more than one
`applyManifest` call can run at the same time. To help reduce the lock
contention that a `unique_lock` would cause, checks that only require reading
data are run under a `shared_lock` (call these the "prewriteChecks"), then the
lock is released, then a `unique_lock` is acquired. Since a concurrently
running `applyManifest` may write data between the time the `shared_lock` is
released and the `unique_lock` is acquired, the "prewriteChecks" need to be
rerun. Duplicating this work isn't ideal, but the "prewriteChecks" are
relatively inexpensive.
A couple of other designs were considered. We could prevent more than one
`applyManifest` call from running concurrently - either with a mutex or by
setting the max number of manifest jobs on the job queue to one. The biggest
issue with this is that if any other function ever adds a write lock for any
reason, `applyManifest` would then be broken - data could be written between
the release of the `shared_lock` and the acquisition of the `unique_lock`.
Note: it is tempting to solve this problem by not releasing the `shared_lock`
and simply upgrading the lock. In the presence of concurrently running
`applyManifest` calls, this will deadlock (both functions need to wait for
the other to release their read locks before either can acquire a write lock).
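A minimal sketch of the locking pattern described above, using
`std::shared_mutex`; the class and function bodies are illustrative, not the
actual ManifestCache code:
```cpp
#include <mutex>
#include <shared_mutex>

class Cache
{
    mutable std::shared_mutex mutex_;

    bool prewriteChecks() const { /* read-only validation */ return true; }
    void write() { /* mutate shared state */ }

public:
    bool applyManifest()
    {
        {
            // Read-only checks run under a shared_lock, so concurrent
            // callers do not serialize on the common (rejecting) path.
            std::shared_lock lock{mutex_};
            if (!prewriteChecks())
                return false;
        }
        // Another applyManifest may have written between the unlock above
        // and the lock below, so the checks must be rerun before writing.
        std::unique_lock lock{mutex_};
        if (!prewriteChecks())
            return false;
        write();
        return true;
    }
};
```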
This is a refactor aimed at cleaning up and simplifying the existing
job queue.
As of now, all jobs are cancelled at the same time and in the same
way, so this commit removes the per-job cancellation token. If the
need for such support is demonstrated, support can be re-added.
* Revise documentation for ClosureCounter and Workers.
* Simplify code, removing unnecessary function arguments and
deduplicating expressions.
* Restructure job handlers to no longer need to pass a job's
handle to the job.
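A hedged before/after sketch of the handler change (the names are stand-ins,
not the real JobQueue interface):
```cpp
#include <functional>

struct Job {};

// Before: every handler received the job's handle, whether or not it
// used it.
void addJobOld(std::function<void(Job&)> handler);

// After: handlers are plain callables; no handle is threaded through.
void addJobNew(std::function<void()> handler);
```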
The existing logic involves every server sending every transaction
that it receives to all its peers (except the one that it received
a transaction from).
This commit instead uses a randomized algorithm, where a node will
randomly select peers to relay a given transaction to, caching the
list of transaction hashes that were not relayed and forwarding them
to peers once every second. Peers can then determine whether there
are transactions that they have not seen, and can request them from
the node that has them.
It is expected that this feature will further reduce the bandwidth
needed to operate a server.
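A hedged sketch of the randomized relay logic (types, names, and the
selection probability are illustrative; the real code also tracks deferred
hashes per peer rather than in one shared set):
```cpp
#include <cstdint>
#include <random>
#include <set>
#include <vector>

using TxHash = std::uint64_t;

struct Peer
{
    void sendTransaction(TxHash) { /* relay the full transaction */ }
    void sendHashes(std::set<TxHash> const&) { /* relay hashes only */ }
};

class TxRelay
{
    std::set<TxHash> deferred_;  // hashes not relayed in full
    std::mt19937 rng_{std::random_device{}()};

public:
    void relay(TxHash tx, std::vector<Peer*> const& peers)
    {
        std::bernoulli_distribution pick{0.25};  // illustrative probability
        for (auto* peer : peers)
        {
            if (pick(rng_))
                peer->sendTransaction(tx);  // random subset gets the tx
            else
                deferred_.insert(tx);  // the rest learn the hash later
        }
    }

    // Called once per second: peers seeing an unknown hash can request
    // the full transaction from this node.
    void flushHashes(std::vector<Peer*> const& peers)
    {
        for (auto* peer : peers)
            peer->sendHashes(deferred_);
        deferred_.clear();
    }
};
```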
A recent version of clang notes a number of places in range-for
loops where the code base was making unnecessary copies or using
const lvalue references to extend lifetimes. This fixes the places
that clang identified.
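A small illustration of the kind of fix involved (the function and container
are hypothetical):
```cpp
#include <string>
#include <vector>

void example(std::vector<std::string> const& names)
{
    // Before: `auto name` copies every string on each iteration.
    // for (auto name : names) { ... }

    // After: binding by const reference avoids the copies clang flagged.
    for (auto const& name : names)
    {
        (void)name;  // use name
    }
}
```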
This commit expands the detection capabilities of the Byzantine
validation detector. Prior to this commit, only validators that
were on a server's UNL were monitored. Now, all the validations
that a server receives are passed through the detector.