When rippled opens a connection to SQLite3, it issues a "PRAGMA"
statement that sets the maximum number of pages allowed in the database.
Update the max_page_count so it is consistent with the default for newer
versions of SQLite3. Increasing max_page_count is critical for keeping
full history servers online.
Fix #5102
* It is now an invariant that all constructed PublicKey objects are
valid, non-empty and contain 33 bytes of data.
* Additionally, the memory footprint of the PublicKey class is reduced:
the size_ data member is now declared static.
* Distinguish the PublisherLists retrieved from the local config file
from those obtained from other validators.
* Fixes #2942
Prior to this commit, `port_grpc` could not be added to the [server]
stanza. Instead of validating gRPC IP/Port/Protocol information in
ServerHandler, validate gRPC port info in the GRPCServer constructor.
This should not break backwards compatibility.
gRPC-related config info must be in a section (stanza) called
[port_grpc].
* Close #4015, which proposed an alternate solution. It was decided
that, with relaxed validation, it is not necessary to rename port_grpc.
* Fix #4557
* Add logging for Application.cpp sweep()
* Improve lifetime management of ledger objects (`SLE`s)
* Only store SLE digest in CachedView; get SLEs from CachedSLEs
* Also force release of the last ledger used for path finding if there
are no path-finding requests to process
* Count more ST objects (derive from `CountedObject`)
* Track CachedView stats in CountedObjects
* Rename the CachedView counters
* Fix the scope of the digest lookup lock
Before this patch, if you asked "is it caching?", the answer was always yes.
This reverts commit 002893f280.
There were two files with conflicts in the automated revert:
- src/ripple/rpc/impl/RPCHelpers.h and
- src/test/rpc/JSONRPC_test.cpp
Those files were manually resolved.
Add a new RPC / WS call for `server_definitions`, which returns an
SDK-compatible `definitions.json` (binary enum definitions) generated by
the server. This enables clients/libraries to dynamically work with new
fields and features, such as ones that may become available on side
chains. Clients query `server_definitions` on a node from the network
they want to work with, and immediately know how to speak that node's
binary "language", even if new features are added to it in the future
(as long as there are no new serialized types that the software doesn't
know how to serialize/deserialize).
Example:
```js
> {"command": "server_definitions"}
< {
    "result": {
      "FIELDS": [
        [
          "Generic",
          {
            "isSerialized": false,
            "isSigningField": false,
            "isVLEncoded": false,
            "nth": 0,
            "type": "Unknown"
          }
        ],
        [
          "Invalid",
          {
            "isSerialized": false,
            "isSigningField": false,
            "isVLEncoded": false,
            "nth": -1,
            "type": "Unknown"
          }
        ],
        [
          "ObjectEndMarker",
          {
            "isSerialized": false,
            "isSigningField": true,
            "isVLEncoded": false,
            "nth": 1,
            "type": "STObject"
          }
        ],
        ...
```
Close #3657
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
Amendment "flapping" (an amendment repeatedly gaining and losing
majority) usually occurs when an amendment is on the verge of gaining
majority, and a validator not in favor of the amendment goes offline or
loses sync. This fix makes two changes:
1. The number of validators in the UNL determines the threshold required
for an amendment to gain majority.
2. The AmendmentTable keeps a record of the most recent Amendment vote
received from each trusted validator (and, with `trustChanged`, stays
up-to-date when the set of trusted validators changes). If no
validation arrives from a given validator, then the AmendmentTable
assumes that the previously-received vote has not changed.
In other words, when missing an `STValidation` from a remote validator,
each server now uses the last vote seen. There is a 24-hour timeout for
recorded validator votes.
These changes do not require an amendment because they do not impact
transaction processing, but only the threshold at which each individual
validator decides to propose an EnableAmendment pseudo-transaction.
Fix #4350
A bridge connects two blockchains: a locking chain and an issuing
chain (also called a mainchain and a sidechain). Both are independent
ledgers, with their own validators and potentially their own custom
transactions. Importantly, there is a way to move assets from the
locking chain to the issuing chain and a way to return those assets from
the issuing chain back to the locking chain: the bridge. This key
operation is called a cross-chain transfer. A cross-chain transfer is
not a single transaction. It happens on two chains, requires multiple
transactions, and involves an additional server type called a "witness".
A bridge does not exchange assets between two ledgers. Instead, it locks
assets on one ledger (the "locking chain") and represents those assets
with wrapped assets on another chain (the "issuing chain"). A good model
to keep in mind is a box with an infinite supply of wrapped assets.
Putting an asset from the locking chain into the box will release a
wrapped asset onto the issuing chain. Putting a wrapped asset from the
issuing chain back into the box will release one of the existing locking
chain assets back onto the locking chain. There is no other way to get
assets into or out of the box. Note that there is no way for the box to
"run out of" wrapped assets - it has an infinite supply.
Co-authored-by: Gregory Popovitch <greg7mdp@gmail.com>
Add a new transaction submission API field, "sync", which
determines the behavior of the server when submitting transactions
(see the example below):
1) sync (default): Process transactions in a batch immediately,
and return only once the transaction has been processed.
2) async: Put transaction into the batch for the next processing
interval and return immediately.
3) wait: Put transaction into the batch for the next processing
interval and return only after it is processed.
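For illustration, a minimal sketch of how the field might be passed over
the WebSocket API, in the style of the other examples here; the `tx_blob`
value is a placeholder, and the rest of the `submit` call is unchanged:
```js
// Queue the transaction for the next batch and return immediately.
> {"command": "submit", "tx_blob": "<signed transaction hex>", "sync": "async"}

// Queue the transaction, but block until it has been processed.
> {"command": "submit", "tx_blob": "<signed transaction hex>", "sync": "wait"}
```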
* In namespace ripple, introduces a get_name function that takes a
std::thread::native_handle_type and returns a std::string.
* In namespace ripple, introduces a get_name function that takes a
std::thread or std::jthread and returns a std::string.
* In namespace ripple::this_thread, introduces a get_name function
that takes no parameters and returns the name of the current
thread as a std::string.
* In namespace ripple::this_thread, introduces a set_name function
that takes a std::string_view and sets the name of the current
thread.
* These are intended to replace the beast utilities
setCurrentThreadName and getCurrentThreadName.
* Commits 0b812cd (#4427) and 11e914f (#4516) conflict. The first added
references to `ServerHandlerImp` in files outside of that class's
organizational unit (which is technically incorrect). The second
removed `ServerHandlerImp`, but was not up to date with develop. This
results in the build failing.
* Fixes the build by changing references to `ServerHandlerImp` to
the more correct `ServerHandler`.
Enhance the /crawl endpoint by publishing WebSocket/RPC ports in the
server_info response. The function processing requests to the /crawl
endpoint actually calls server_info internally, so this change enables a
server to advertise its WebSocket/RPC port(s) to peers via the /crawl
endpoint. `grpc` and `peer` ports are included as well.
The new `ports` array contains objects, each with a `port` for the
listening port (a numeric string) and a `protocol` array listing the
supported protocol(s).
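As a hedged sketch, the array might look like this in the /crawl output
(the port numbers and protocol names below are illustrative only):
```js
"ports": [
  {"port": "51235", "protocol": ["peer"]},
  {"port": "6006", "protocol": ["ws", "wss"]},
  {"port": "5005", "protocol": ["http"]}
]
```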
This allows crawlers to build a richer topology without needing to
port-scan nodes. For non-admin users (including peers), the info about
*admin* ports is excluded.
Also increase test coverage for RPC ServerInfo.
Fix #2837.
Previously, the API allowed seeds (and public keys) to be used in place
of accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" was
treated like a seed, so the method returned `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, this behavior
was controlled by the `strict` option on some methods. With this commit,
since the API no longer interprets `account` as a seed, the option
`strict` is no longer needed and is removed.
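As an abbreviated illustration (the exact response fields may differ),
a request that previously returned `actNotFound` now fails up front:
```js
> {"command": "account_info", "account": "foo"}
< {
    "error": "actMalformed",
    ...
```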
Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.
Resolves: #3329, #3330, #4337
BREAKING CHANGE: Remove non-strict account parsing (#3330)
* Create the FeeSettings object in the genesis ledger.
* Initialize with default values from the config. Removes the need to
pass a Config down into the Ledger initialization functions, including
setup().
* Drop the undocumented fee config settings in favor of the [voting]
section.
* Fix #3734.
* If you previously used fee_account_reserve and/or fee_owner_reserve,
you should change to using the [voting] section instead. Example:
```
[voting]
account_reserve=10000000
owner_reserve=2000000
```
* Because old Mainnet ledgers (prior to 562177 - yes, I looked it up)
don't have FeeSettings, some of the other ctors will default them to
the config values before setup() tries to load the object.
* Update default Config fee values to match Mainnet.
* Fix unit tests:
* Updated fees: Some tests are converted to use computed values of the
fee object, but the default Env config was also updated to fix the rest.
* Unit tests that check the structure of the ledger have updated
hashes and counts.
Caching the base58check-encoded version of an `AccountID` has
performance advantages because of the computationally heavy cost of the
conversion, which requires applying SHA-256 twice.
This commit makes the cache significantly more memory-efficient: it
eliminates the map, using a vector whose size is determined by the
configured size of the node and a hash function that maps any given
`AccountID` directly to a specific slot in the cache. The eviction
policy is simple: in case of collision, the existing entry is removed
and replaced with the new data.
Previously, use of the cache was optional and required additional
effort by the programmer. Now the cache is automatic and does not
require any additional work or information.
The new cache also utilizes a 64-way spinlock, to help reduce
contention when the cache is under pressure.
Each node on the network is supposed to have a unique cryptographic
identity. Typically, this identity is generated randomly at startup
and stored for later reuse in the (poorly named) file `wallet.db`.
If the file is copied, it is possible for two nodes to share the
same node identity. This is generally not desirable and existing
servers will detect and reject connections to other servers that
have the same key.
This commit achieves three things:
1. It improves the detection code to pinpoint instances where two
distinct servers with the same key connect with each other. In
that case, servers will log an appropriate error and shut down
pending intervention by the server's operator.
2. It makes it possible for server administrators to securely and
easily generate new cryptographic identities for servers using
the new `--newnodeid` command line argument. When a server is
started using this command, it will generate and save a random
secure identity.
3. It makes it possible to configure the identity using a command
line option, which makes it possible to derive it from data or
parameters associated with the container or hardware where the
instance is running by passing the `--nodeid` option, followed
by a single argument identifying the information from which the
node's identity is derived. For example, the following command
will result in nodes with different hostnames having different
node identities: `rippled --nodeid $HOSTNAME`
The last option is particularly useful for automated cloud-based
deployments that minimize the need for storing state and provide
unique deployment identifiers.
**Important note for server operators:**
Depending on variables outside the control of this code,
such as operating system version or configuration, permissions,
and more, it may be possible for other users or programs to be
able to access the command line arguments of other processes
on the system.
If you are operating in a shared environment, you should avoid
using this option, preferring instead to use the `[node_seed]`
option in the configuration file, and use permissions to limit
exposure of the node seed.
A user who gains access to the value used to derive the node's
unique identity could impersonate that node.
The commit also updates the minimum supported server protocol
version to `XRPL/2.1`, which has been supported since version
1.5.0, and eliminates support for `XRPL/2.0`.
* Remove Application & Database dependency in PerfLog. Replace it with
a callback passed into the constructor.
* Fixes the circular dependency between ripple/nodestore and ripple/basics
This is a refactor aimed at cleaning up and simplifying the existing
job queue.
As of now, all jobs are cancelled at the same time and in the same
way, so this commit removes the per-job cancellation token. If the
need for such support is demonstrated, support can be re-added.
* Revise documentation for ClosureCounter and Workers.
* Simplify code, removing unnecessary function arguments and
deduplicating expressions.
* Restructure job handlers to no longer need to pass a job's
handle to the job.
- Only duplicate records from archive to writable during online_delete.
- Log duration of nodestore reads.
- Include nodestore counters in perf_log output.
- Remove gratuitous nodestore activity counting.
- Report initial sync duration in server_info and perfLog.
- Report state_accounting in perfLog.
- Make state_accounting durations more accurate.
- Parallel ledger loader.
- Config parameter to load ledgers on start.
This commit implements partitioned unordered maps and makes it possible
to traverse such a map in parallel, allowing for more efficient use of
CPU resources.
The `CachedSLEs`, `TaggedCache`, and `KeyCache` classes make use of the
new functionality, which should improve performance.
The existing calculation would limit the number of threads created
by default to at most 6; this may have been
reasonable a few years ago, but given both the load on the network
as of today and the increase in the number of CPU cores, the value
should be revisited.
This commit, if merged, changes the default calculation for nodes
that are configured as `large` or `huge` to allow for up to twelve
threads.
The "sweep interval" is the amount of time between successive sweeps of
of various in-memory data structures to remove stale items.
Prior to this commit, the interval was automatically adjusted, based on
the value of the `[node_size]` option in a server's configuration file.
If merged, this commit introduces a new configuration option that makes
it possible for a server operator to adjust the sweep interval and make
a CPU/memory tradeoff:
[sweep_interval]
<integer>
The specified value represents the number of seconds between successive
sweeps. The range of valid values is between 10 and 600.
Important operator notes:
This is an advanced configuration option that should not be used unless
there is empirical data which suggests that the default sweep frequency
is either resulting in performance problems or is causing undue load to
the server.
Note that adjusting the sweep interval may not have the intended effect
on the server. Lower values will not always translate to a reduction of
memory usage and higher values will not always translate to a reduction
of CPU usage and/or load.
* Only require adding the new feature names in one place. (Also need to
increment a counter, but a check on startup will catch that.)
* Allows rippled to have the code to support a given amendment, but
not vote for it by default. This allows the amendment to be enabled in
a future version without necessarily amendment blocking these older
versions.
* The default vote is carried with the amendment name in the list of
supported amendments.
* The amendment table is constructed with the amendment and default
vote.
The following changes were made:
- Removed dependency on template defined in beast detail namespace.
- Removed Section::find() method which had an obsolete interface.
- Made Section::get<>() easier to use for the common case of
retrieving a std::string. The revised get() method replaces old
calls to Section::find().
- Provided a default template parameter to free function
get<>(Section config, std::string name) so it stays similar to
Section::get<>().
Then the rest of the code was adapted to these changes.
- Calls to Section::find() were replaced with calls to Section::get.
- Unnecessary get<std::string>() arguments were reduced to get().
These changes dug up an interesting artifact in the SHAMap unit
tests. I'm not sure why the tests were working before, but there
was a problem with the capitalization of a Section key. The unit
test is fixed.
The `[node_size]` configuration parameter is used to tune various
parameters based on the hardware that the code is running on. The
parameter can take five distinct values: `tiny`, `small`, `medium`,
`large` and `huge`.
The default value in the code is `tiny` but the default configuration
file sets the value to `medium`. This commit attempts to detect the
amount of RAM on the system and adjusts the node size default value
based on the amount of RAM and the number of hardware execution
threads on the system.
The decision matrix currently used, with rows indexed by the approximate
amount of RAM and columns by the number of hardware threads, is:
| RAM | 1 | 2 or 3 | ≥ 4 |
|:-------:|:----:|:------:|:------:|
| > ~8GB | tiny | tiny | tiny |
| > ~12GB | tiny | small | small |
| > ~16GB | tiny | small | medium |
| > ~24GB | tiny | small | large |
| > ~32GB | tiny | small | huge |
Some systems exclude memory reserved by the hardware, the kernel
or the underlying hypervisor so the automatic detection code may end
up determining the node_size to be one less than "appropriate" given
the above table.
The detection algorithm is simplistic and does not take into account
other relevant factors. Therefore, for production-quality servers it
is recommended that server operators examine the system holistically
and determine what the appropriate size is instead of relying on the
automatic detection code.
To aid server operators, the node size will now be reported in the
`server_info` API as `node_size` when the command is invoked in
'admin' mode.
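For example (output abbreviated, and assuming the field appears
alongside the other fields of the `info` object), an admin query would
now include the detected size:
```js
> {"command": "server_info"}
< {
    "result": {
      "info": {
        "node_size": "medium",
        ...
```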
* Create SQLite database for mapping transaction IDs to shard indexes
* Create SQLite database for mapping ledger hashes to shard indexes
* Create additional test cases for the shard database