Commit Graph

254 Commits

Author SHA1 Message Date
John Freeman
c2a08a1f26 Simplify the Job Queue:
This is a refactor aimed at cleaning up and simplifying the existing
job queue.

As of now, all jobs are cancelled at the same time and in the same
way, so this commit removes the per-job cancellation token. If the
need for such support is demonstrated, support can be re-added.

* Revise documentation for ClosureCounter and Workers.
* Simplify code, removing unnecessary function arguments and
  deduplicating expressions
* Restructure job handlers to no longer need to pass a job's
  handle to the job.
2022-03-01 11:25:03 -08:00
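
A minimal sketch of the handler restructuring described above, using hypothetical names rather than rippled's actual JobQueue interface: once per-job cancellation is gone, a handler no longer needs to receive its own job handle and can be a plain callable.

    // Hypothetical sketch; not the real rippled JobQueue API.
    #include <functional>
    #include <queue>
    #include <utility>

    struct JobQueue
    {
        std::queue<std::function<void()>> jobs;

        // Handlers used to be invoked with the job (so they could check a
        // per-job cancellation token); now a job is just a callable.
        void addJob(std::function<void()> handler)
        {
            jobs.push(std::move(handler));
        }

        void runNext()
        {
            if (jobs.empty())
                return;
            auto handler = std::move(jobs.front());
            jobs.pop();
            handler();
        }
    };
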
Nik Bougalis
417cfc2fb0 Adjust pathfinding configuration defaults:
The pathfinding engine built into the code has several configurable
parameters to adjust the depth of the paths indexed and explored.

These parameters can dramatically impact the performance and memory
consumption of a server; higher values can result in resource usage
increasing exponentially.

These default values were decided early and somewhat arbitrarily at
a time when the network and the size of the network state were much
smaller.

This commit adjusts the default values to reduce the depth of paths
to more reasonable levels; unless explicitly overridden, the changes
mean that pathfinding operations will return fewer, shallower paths
than previous versions of the software.
2022-01-12 18:54:03 -08:00
Nik Bougalis
18584ef2fd Adjust number of concurrent ledger data jobs 2022-01-10 15:29:43 -08:00
Mark Travis
7c12f01358 Parallel ledger loader & I/O performance improvements:
- Only duplicate records from archive to writable during online_delete.
- Log duration of nodestore reads.
- Include nodestore counters in perf_log output.
- Remove gratuitous nodestore activity counting.
- Report initial sync duration in server_info and perfLog.
- Report state_accounting in perfLog.
- Make state_accounting durations more accurate.
- Parallel ledger loader.
- Config parameter to load ledgers on start.
2022-01-10 15:29:21 -08:00
Mark Travis
c663f1f62b Make tx() function against a read-only postgres instance. 2021-12-15 11:25:10 -08:00
Scott Schurr
d54f6278bb Improve names returned by server_info counters 2021-12-15 11:21:51 -08:00
Edward Hennis
b1c9b134dc Make transaction queue order deterministic:
* Sort by fee level (which is the current behavior) then by transaction
  ID (hash).
* Handle the edge case where the account at the end of the queue submits
  a higher-paying transaction by walking backwards and comparing against
  the cheapest transaction from a different account.
* Use std::any_of to simplify the JobQueue::isOverloaded loop.
2021-12-14 17:47:48 -08:00
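
A minimal sketch of the ordering rule above; the types are illustrative, not the actual TxQ implementation.

    #include <array>
    #include <cstdint>
    #include <tuple>

    using TxID = std::array<std::uint8_t, 32>;  // transaction hash

    struct QueuedTx
    {
        std::uint64_t feeLevel;
        TxID id;
    };

    // Higher fee level sorts first; ties are broken by transaction ID so
    // every server produces the same, deterministic queue order.
    inline bool higherPriority(QueuedTx const& a, QueuedTx const& b)
    {
        return std::tie(b.feeLevel, a.id) < std::tie(a.feeLevel, b.id);
    }
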
Edward Hennis
aaa601841c Logging & minor optimizations:
* Log load fee values (at debug) received from validations.
* Log remote and cluster fee values (at trace) when changed.
* Refactor JobQueue::isOverloaded to return sooner if overloaded.
* Refactor Transactor::checkFee to only compute fee if ledger is open.
2021-12-14 17:47:38 -08:00
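
A sketch of the short-circuiting check mentioned in this and the previous commit, with hypothetical worker and limit names:

    #include <algorithm>
    #include <vector>

    struct Worker { int queueDepth = 0; };

    // std::any_of stops at the first worker over the limit, so the scan
    // "returns sooner" once the queue is known to be overloaded.
    inline bool isOverloaded(std::vector<Worker> const& workers, int limit)
    {
        return std::any_of(workers.begin(), workers.end(),
            [limit](Worker const& w) { return w.queueDepth > limit; });
    }
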
Richard Holland
cf97dcb992 Make I/O and prefetch worker threads configurable 2021-12-14 17:43:50 -08:00
Richard Holland
6746b863b3 Configurable handling of untrusted validations and proposals 2021-12-14 17:43:04 -08:00
Nik Bougalis
7edfbbd8bd Disable pathfinding indexing on validator nodes:
The pathfinding engine requires pre-building large tables which is a
resource-intensive operation. Typically, one would not expect that a
server configured as a validator would also support pathfinding APIs
and so, building those tables by default wastes resources.

This commit, if merged, will disable pathfinding on servers that are
configured as validators, unless the server operator opts to support
it explicitly, by configuring the `[path_search_max]` parameter.

Validator operators that wish to support pathfinding on a validator
and want to use the default values can add the following stanza to
their server's configuration file:

    [path_search_max]
    7
2021-11-17 20:52:18 -08:00
Nik Bougalis
0c47cfad6f Adjust priorities of jobs:
The priority of different types of jobs was set back in the early
days of development, based on insight and observations that don't
necessarily apply any longer.

Specifically, job types used by the server to sync to the network
were being treated as lower priority than client requests, making
it more difficult to regain sync.

This commit adjusts the priority of several jobs and should allow
servers to prioritize resynchronizing to the network over serving
clients.
2021-11-17 20:52:11 -08:00
Nik Bougalis
eb6b79bed7 Adjust default number of threads for JobQueue:
The existing calculation would limit the maximum number of threads
that would be created by default to at most 6; this may have been
reasonable a few years ago, but given both the load on the network
as of today and the increase in the number of CPU cores, the value
should be revisited.

This commit, if merged, changes the default calculation for nodes
that are configured as `large` or `huge` to allow for up to twelve
threads.
2021-11-17 20:52:11 -08:00
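
An illustrative version of the calculation, with assumed names; the only figures taken from the text above are the old cap of six threads and the new cap of twelve for `large` and `huge` nodes.

    #include <algorithm>
    #include <thread>

    // Hypothetical sketch of the default thread count calculation.
    inline unsigned defaultJobQueueThreads(bool isLargeOrHuge)
    {
        unsigned const hw  = std::max(1u, std::thread::hardware_concurrency());
        unsigned const cap = isLargeOrHuge ? 12u : 6u;
        return std::min(hw, cap);
    }
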
Nik Bougalis
eaff0d30fb Make the sweep_interval individually configurable:
The "sweep interval" is the amount of time between successive sweeps of
various in-memory data structures to remove stale items.

Prior to this commit, the interval was automatically adjusted, based on
the value of the `[node_size]` option in a server's configuration file.

If merged, this commit introduces a new configuration option that makes
it possible for a server operator to adjust the sweep interval and make
a CPU/memory tradeoff:

    [sweep_interval]
    <integer>

The specified value represents the number of seconds between successive
sweeps. The range of valid values is between 10 and 600.

Important operator notes:

This is an advanced configuration option that should not be used unless
there is empirical data which suggests that the default sweep frequency
is either resulting in performance problems or is causing undue load to
the server.

Note that adjusting the sweep interval may not have the intended effect
on the server. Lower values will not always translate to a reduction of
memory usage and higher values will not always translate to a reduction
of CPU usage and/or load.
2021-11-17 20:52:10 -08:00
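
A sketch of how the option might be read and validated against the documented 10-600 second range; the helper and its signature are assumptions.

    #include <chrono>
    #include <optional>
    #include <stdexcept>

    // Validate a [sweep_interval] value; fall back to the automatic,
    // node_size-derived interval when the option is absent.
    inline std::chrono::seconds
    checkedSweepInterval(std::optional<int> configured,
                         std::chrono::seconds fallback)
    {
        if (!configured)
            return fallback;
        if (*configured < 10 || *configured > 600)
            throw std::runtime_error(
                "[sweep_interval] must be between 10 and 600 seconds");
        return std::chrono::seconds(*configured);
    }
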
Gregory Tsipenyuk
ea145d12c7 Improve transaction relaying logic:
The existing logic involves every server sending every transaction
that it receives to all its peers (except the one that it received
a transaction from).

This commit instead uses a randomized algorithm, where a node will
randomly select peers to relay a given transaction to, caching the
list of transaction hashes that are not relayed and forwarding them
to peers once every second. Peers can then determine whether there
are transactions that they have not seen and can request them from
the node which has them.

It is expected that this feature will further reduce the bandwidth
needed to operate a server.
2021-09-13 15:13:15 -07:00
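
A simplified sketch of the relay strategy described above, with placeholder types; the real code tracks considerably more per-peer state.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <random>
    #include <set>
    #include <vector>

    using TxHash = std::uint64_t;  // stand-in for a 256-bit transaction hash
    struct Peer { int id = 0; };

    struct TxRelay
    {
        std::set<TxHash> deferred;  // hashes not relayed immediately
        std::mt19937 rng{std::random_device{}()};

        // Relay the transaction to a random subset of peers and remember
        // its hash so the remaining peers can learn of it later.
        void relay(TxHash h, std::vector<Peer> peers, std::size_t fanout)
        {
            std::shuffle(peers.begin(), peers.end(), rng);
            if (peers.size() > fanout)
                peers.resize(fanout);
            // ... send the full transaction to `peers` here ...
            deferred.insert(h);
        }

        // Called roughly once per second: hand back the deferred hashes so
        // they can be advertised to every peer, which may then request any
        // transactions they have not yet seen.
        std::vector<TxHash> flushHashes()
        {
            std::vector<TxHash> out(deferred.begin(), deferred.end());
            deferred.clear();
            return out;
        }
    };
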
seelabs
cd27b5f2bd Some code cleanups tagged by static analysis 2021-07-27 11:35:50 -07:00
Scott Schurr
a1fd579756 Clean up Section in BasicConfig.h:
The following changes were made:
- Removed dependency on template defined in beast detail namespace.
- Removed Section::find() method which had an obsolete interface.
- Made Section::get<>() easier to use for the common case of
  retrieving a std::string.  The revised get() method replaces old
  calls to Section::find().
- Provided a default template parameter to free function
  get<>(Section config, std::string name) so it stays similar to
  Section::get<>().

Then the rest of the code was adapted to these changes.

- Calls to Section::find() were replaced with calls to Section::get.
- Unnecessary get<std::string>() arguments were reduced to get().

These changes dug up an interesting artifact in the SHAMap unit
tests.  I'm not sure why the tests were working before, but there
was a problem with the case of a Section key.  The unit test is
fixed.
2021-07-27 11:35:50 -07:00
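
An illustrative reconstruction of the revised interface; the container and conversion details are assumptions, and only the default-to-std::string behavior comes from the description above.

    #include <map>
    #include <optional>
    #include <string>

    // Hypothetical reconstruction of the revised interface: a default
    // template parameter makes the common lookup read as section.get(name).
    struct Section
    {
        std::map<std::string, std::string> values;

        template <class T = std::string>
        std::optional<T> get(std::string const& name) const
        {
            auto const it = values.find(name);
            if (it == values.end())
                return std::nullopt;
            return T(it->second);  // the real code converts non-string types
        }
    };

    // Free function with the same default, mirroring the description above.
    template <class T = std::string>
    std::optional<T> get(Section const& section, std::string const& name)
    {
        return section.get<T>(name);
    }
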
Nik Bougalis
433feade5d Automatically determine the node size:
The `[node_size]` configuration parameter is used to tune various
parameters based on the hardware that the code is running on. The
parameter can take five distinct values: `tiny`, `small`, `medium`,
`large` and `huge`.

The default value in the code is `tiny` but the default configuration
file sets the value to `medium`. This commit attempts to detect the
amount of RAM on the system and adjusts the node size default value
based on the amount of RAM and the number of hardware execution
threads on the system.

The decision matrix currently used, with approximate RAM as rows and
the number of hardware execution threads as columns, is:

|   RAM   |   1  | 2 or 3 |   ≥ 4  |
|:-------:|:----:|:------:|:------:|
|  > ~8GB | tiny |   tiny |   tiny |
| > ~12GB | tiny |  small |  small |
| > ~16GB | tiny |  small | medium |
| > ~24GB | tiny |  small |  large |
| > ~32GB | tiny |  small |   huge |

Some systems exclude memory reserved by the hardware, the kernel
or the underlying hypervisor so the automatic detection code may end
up determining the node_size to be one less than "appropriate" given
the above table.

The detection algorithm is simplistic and does not take into account
other relevant factors. Therefore, for production-quality servers it
is recommended that server operators examine the system holistically
and determine what the appropriate size is instead of relying on the
automatic detection code.

To aid server operators, the node size will now be reported in the
`server_info` API as `node_size` when the command is invoked in
'admin' mode.
2021-06-03 10:58:24 -07:00
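
A sketch of the decision matrix as code, reading each row as "the largest RAM threshold the system exceeds"; the function name and string return type are illustrative, not the actual detection code.

    #include <cstdint>
    #include <string>

    // Illustrative mapping of (RAM, hardware threads) to a node size,
    // following the table above.
    inline std::string guessNodeSize(std::uint64_t ramGB, unsigned threads)
    {
        if (threads <= 1)
            return "tiny";
        if (ramGB > 32)
            return threads >= 4 ? "huge" : "small";
        if (ramGB > 24)
            return threads >= 4 ? "large" : "small";
        if (ramGB > 16)
            return threads >= 4 ? "medium" : "small";
        if (ramGB > 12)
            return "small";
        return "tiny";
    }
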
Peng Wang
2eb1c6a396 Enable testing beta RPC API version with config 2021-06-02 13:37:40 -07:00
John Freeman
a2a37a928a Redesign stoppable object pattern 2021-06-01 15:36:28 -07:00
cdy20
6d82fb83a0 Relational DB interface 2021-04-01 10:38:22 -07:00
Howard Hinnant
9932a19139 Reduce coupling to date.h by calling C++17 chrono functions 2021-03-17 15:02:15 -07:00
Scott Schurr
3b33318dc8 Prefer std::optional over boost::optional:
Some of the boost::optionals must remain for now.  Both
boost::beast and SOCI have interfaces that require
boost::optional.
2021-03-11 14:35:31 -08:00
John Freeman
0a1fb4e6ca Reduce nesting and remove dead code 2021-01-25 18:49:49 -08:00
Peng Wang
7e97bfce10 Implement ledger forward replay 2021-01-25 18:49:49 -08:00
CJ Cobb
27543170d0 Add Reporting Mode
* Add a new operating mode to rippled called reporting mode
* Add ETL mechanism for a reporting node to extract data from a p2p node
* Add new gRPC methods to facilitate ETL
* Use Postgres in place of SQLite in reporting mode
* Add Cassandra as a nodestore option
* Update logic of RPC handlers when running in reporting mode
* Add ability to forward RPCs to a p2p node
2021-01-20 11:30:03 -08:00
Mark Travis
b0a39c5f86 Add Postgres functionality 2021-01-20 10:53:24 -08:00
Gregory Tsipenyuk
74d96ff4bd Add experimental validation & proposal relay reduction support:
- Add validation/proposal reduce-relay feature negotiation to
  the handshake
- Make squelch duration proportional to the number of peers that
  can be squelched
- Refactor makeRequest()/makeResponse() to facilitate handshake
  unit-testing
- Fix compression enable flag for inbound peer
- Fix compression algorithm parsing in the header parser
- Fix squelch duration in onMessage(TMSquelch)

This commit fixes #3624, fixes #3639 and fixes #3641
2021-01-09 13:49:40 -08:00
John Freeman
78245a072c Clean-up the Stoppable architecture 2021-01-08 14:43:06 -05:00
JoelKatz
02ccdeb94e Rework deferred node logic and async fetch behavior
This change significantly improves ledger sync and fetch
times while reducing memory consumption. The change affects
the code path that begins with SHAMap::getMissingNodes and runs
through to Database::threadEntry.

The existing code issues a number of async fetches which are then
handed off to the Database's pool of read threads to execute.
The results of each read are placed in the Database's positive
and negative caches. The caller waits for all reads to complete
and then retrieves the results out of these caches.

Among other issues, this means that the results of the first read
cannot be processed until the last read completes. Additionally,
all the results must sit in memory.

This patch changes the behavior so that each read operation has a
completion handler associated with it. The completion of the read
calls the handler, allowing the results of each read to be
processed as it completes. As this was the only reason the
negative and positive caches were needed, they can now be removed.

The read generation code is also no longer needed and is removed.
The batch fetch logic was never implemented or supported and is
removed.
2020-12-17 09:11:39 -08:00
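
A minimal sketch of the completion-handler pattern described above, with placeholder types; the actual SHAMap/Database interfaces are more elaborate.

    #include <functional>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct NodeObject {};
    using FetchHandler =
        std::function<void(std::shared_ptr<NodeObject> const&)>;

    // Before this change, results were parked in shared positive/negative
    // caches and the caller waited for every read. With a completion
    // handler per read, each result is processed as soon as it arrives,
    // so those caches are no longer needed.
    struct Database
    {
        std::vector<std::pair<std::string, FetchHandler>> pending;

        void asyncFetch(std::string const& key, FetchHandler handler)
        {
            pending.emplace_back(key, std::move(handler));
        }

        // Run on a read thread: perform one read and invoke its handler
        // immediately with whatever was found (possibly nullptr).
        void runOne()
        {
            if (pending.empty())
                return;
            auto [key, handler] = std::move(pending.back());
            pending.pop_back();
            handler(lookup(key));
        }

        std::shared_ptr<NodeObject> lookup(std::string const&)
        {
            return nullptr;  // placeholder for the real backend read
        }
    };
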
Nik Bougalis
cba6b4a749 Improve handling of peers that aren't synced:
When evaluating the fitness and usefulness of an outbound peer, the code
would incorrectly calculate the amount of time that the peer spent in
a non-useful state.

This commit, if merged, corrects the calculation and makes the timeout
values configurable by server operators.

Two new options are introduced in the 'overlay' stanza of the config
file. The default values, in seconds, are:

[overlay]
max_unknown_time = 600
max_diverged_time = 300
2020-12-04 12:45:09 -08:00
Gregory Tsipenyuk
bec6c626d8 Add finer-grained control for incoming & outgoing peer limits:
This commit replaces the `peers_max` configuration element which had
a predetermined split between incoming and outgoing connections with
two new configuration options, `peers_in_max` and `peers_out_max`,
which server operators can use to explicitly control the number of
incoming and outgoing peer slots.
2020-12-04 12:44:19 -08:00
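
For example, an operator could cap the server at 20 inbound and 10 outbound peers with stanzas like the following (the values are illustrative, not defaults):

    [peers_in_max]
    20

    [peers_out_max]
    10
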
Devon White
cf5ca9a5cf Download queued shards only if there is space for them all 2020-11-19 10:57:13 -08:00
Miguel Portilla
8707c15b9c Use NuDB burst size and use NuDB version 2.0.5 2020-11-18 13:07:26 -08:00
Miguel Portilla
03c809371a Add Shard pool management 2020-10-14 11:17:44 -07:00
Nik Bougalis
d282b0bf85 Report server domain to other servers:
This commit introduces a new configuration option that server
operators can set. The value is communicated to other servers
and is also reported via the `server_info` API.

The value is meant to allow third-party applications or tools
to group servers together. For example, a tool that visualizes
the network's topology could use it to group related servers.

Similar to the "Domain" field in validator manifests, an operator
can claim any domain. Prior to relying on the value returned, the
domain should be verified by retrieving the xrp-ledger.toml file
from the domain and looking for the server's public key in the
`nodes` array.
2020-10-14 11:17:44 -07:00
Nathan Nichols
a26a175957 Add GRPCServer to Stoppable Hierarchy 2020-09-17 15:05:06 -07:00
Nathan Nichols
660d9c1602 Make the transaction job queue limit adjustable:
The job queue can impose limits on how many jobs of a particular
type can be queued.

This commit makes the previously hard-coded limit associated with
transactions configurable by the server's operator. Operators whose
servers have increased memory capacity, or who expect to see an
influx of transactions, can increase the number of transactions
their server will be able to queue.

This commit fixes #3556.
2020-09-01 16:39:00 -07:00
Devon White
6c268a3e9c Allow multiple paths for shard storage:
* Distinguish between recent and historical shards
* Allow multiple storage paths for historical shards
* Add documentation for this feature
* Add unit tests
2020-09-01 16:39:00 -07:00
Gregory Tsipenyuk
9b9f34f881 Optimize relaying of validation and proposal messages:
With few exceptions, a server will typically receive multiple copies
of any given message from its directly connected peers. For servers
with several peers this can increase processing latency and force
them to do redundant work. Proposal and validation messages are often
relayed with extremely high redundancy.

This commit, if merged, introduces experimental code that attempts
to optimize the relaying of proposals and validations by allowing
servers to instruct their peers to "squelch" delivery of selected
proposals and validations. Servers make squelching decisions via
a process that evaluates the fitness and performance of a given
server and randomly selects a subset of the best candidates.

The experimental code is presently disabled and must be explicitly
enabled by server operators that wish to test it.
2020-09-01 09:07:32 -07:00
seelabs
8cf542abb0 Fix memory management issues with checkpointers:
The checkpointer class had assumed that the database would exist for the
lifetime of the application. This is no longer true. These changes resolve bugs
involving dangling pointers.
2020-08-06 10:05:43 -07:00
Edward Hennis
4702c8b591 Improve online_delete configuration and DB tuning:
* Document delete_batch, back_off_milliseconds, age_threshold_seconds.
* Convert those time values to chrono types.
* Fix bug that ignored age_threshold_seconds.
* Add a "recovery buffer" to the config that gives the node a chance to
  recover before aborting online delete.
* Add begin/end log messages around the SQL queries.
* Add a new configuration section: [sqlite] to allow tuning the sqlite
  database operations. Ignored on full/large history servers.
* Update documentation of [node_db] and [sqlite] in the
  rippled-example.cfg file.

Resolves #3321
2020-06-25 19:46:43 -07:00
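
An illustrative [node_db] excerpt showing the tuning keys named above next to a typical online_delete setting; the values are examples, not recommendations.

    [node_db]
    type=NuDB
    path=/var/lib/rippled/db/nudb
    online_delete=2000
    delete_batch=100
    back_off_milliseconds=100
    age_threshold_seconds=60
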
Gregory Tsipenyuk
df29e98ea5 Improve amendment processing and activation logic:
* The amendment ballot counting code contained a minor technical
  flaw, caused by the use of integer arithmetic and rounding
  semantics, that could allow amendments to reach majority with
  slightly less than 80% support. This commit introduces an
  amendment which, if enabled, will ensure that activation
  requires at least 80% support.
* This commit also introduces a configuration option to adjust
  the amendment activation hysteresis. This option is useful on
  test networks, but should not be used on the main network, as it
  is a network-wide consensus parameter that should not be
  changed on a per-server basis; doing so can result in a
  hard-fork.

Fixes #3396
2020-06-25 19:46:43 -07:00
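
A small worked example of the rounding flaw (the constants in the actual ballot-counting code differ):

    #include <cstdio>

    int main()
    {
        int const validators = 34;
        int const threshold  = (validators * 80) / 100;  // 27, truncated from 27.2

        // 27 of 34 is about 79.4%, so an amendment could reach "majority"
        // with slightly less than 80% support.
        std::printf("threshold %d of %d = %.1f%%\n",
                    threshold, validators, 100.0 * threshold / validators);
        return 0;
    }
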
Nik Bougalis
268e28a278 Tune relaying of untrusted proposals & validations:
In deciding whether to relay a proposal or validation, a server would
consider whether it was issued by a validator on that server's UNL.

While both trusted proposals and validations were always relayed,
the code prioritized relaying of untrusted proposals over untrusted
validations. While not technically incorrect, validations are
generally more "valuable" because they are required during the
consensus process, whereas proposals are not strictly required.

The commit introduces two new configuration options, allowing server
operators to fine-tune the relaying behavior:

The `[relay_proposals]` option controls the relaying behavior for
proposals received by this server. It has two settings: "trusted"
and "all" and the default is "trusted".

The `[relay_validations]` option controls the relaying behavior for
validations received by this server. It has two settings: "trusted"
and "all" and the default is "all".

This change does not require an amendment as it does not affect
transaction processing.
2020-05-26 18:36:06 -07:00
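
For example, a server that should relay only messages from trusted validators in both categories could set (stanza form shown is illustrative):

    [relay_proposals]
    trusted

    [relay_validations]
    trusted
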
John Freeman
5b5226d518 Cleanup the 'PeerSet' hierarchy:
This commit introduces no functional changes but cleans up the
code and shrinks the surface area by removing dead and unused
code, leveraging std:: alternatives to hand-rolled code and
improving comments and documentation.
2020-05-05 16:05:23 -07:00
Nik Bougalis
dbee3f01b7 Clean up and modernize code:
This commit removes obsolete comments, dead or no longer useful
code, and workarounds for several issues that were present in older
compilers that we no longer support.

Specifically:

- It improves the transaction metadata handling class, simplifying
  its use and making it less error-prone.
- It reduces the footprint of the Serializer class by consolidating
  code and leveraging templates.
- It cleans up the ST* class hierarchy, removing dead code, improving
  and consolidating code to reduce complexity and code duplication.
- It shores up the handling of currency codes and the conversion
  between 160-bit currency codes and their string representation.
- It migrates beast::secure_erase to the ripple namespace and uses
  a call to OpenSSL_cleanse instead of the custom implementation.
2020-05-05 16:05:22 -07:00
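
A sketch of the last bullet, assuming a signature along these lines (the real declaration may differ):

    #include <cstddef>
    #include <openssl/crypto.h>

    namespace ripple {

    // Zeroize sensitive memory using OpenSSL's cleanse routine, which is
    // written so the compiler cannot optimize the wipe away.
    inline void secure_erase(void* dest, std::size_t bytes)
    {
        OPENSSL_cleanse(dest, bytes);
    }

    }  // namespace ripple
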
Pretty Printer
50760c6935 Format first-party source according to .clang-format 2020-04-23 10:02:04 -07:00
John Freeman
65dfc5d19e Prepare code for formatting
- Add missing `#include` in `ripple/core/JobTypeInfo.h`
- Protect version string from clang-format in
  `ripple/protocol/impl/BuildInfo.cpp`.
  `Builds/CMake/RippledVersion.cmake` searches for this line by pattern.
2020-04-23 09:53:49 -07:00
Howard Hinnant
9470558ecc Remove all uses of the name scoped_lock
*  scoped_lock is now a std name with subtly different semantics
   compared to lock_guard.  Namely it can be used to lock 0 or
   more mutexes.  This is valuable, but can also be accidentally
   used to lock 0 mutexes when 1 was intended, creating a
   run-time error.

   Therefore, if and when we use scoped_lock, extra care needs to
   be taken in reviewing that code to ensure it doesn't
   accidentally lock 0 mutexes when 1 was intended.  To aid in
   such careful reviewing, the use of the name scoped_lock should
   be limited to those cases where the number of mutexes is not
   exactly one.
2020-04-07 16:25:09 -07:00
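
A short illustration of the pitfall described above:

    #include <mutex>

    std::mutex m;

    void with_lock_guard()
    {
        // std::lock_guard requires exactly one mutex; omitting it is a
        // compile-time error, so the mistake cannot slip through.
        std::lock_guard<std::mutex> guard(m);
    }

    void with_scoped_lock()
    {
        // std::scoped_lock accepts zero or more mutexes, so this line
        // compiles yet locks nothing -- the run-time hazard the commit
        // message describes. The intended call is std::scoped_lock lock(m);
        std::scoped_lock lock;
    }
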
Gregory Tsipenyuk
758a3792eb Add protocol message compression support:
* Peers negotiate compression via HTTP Header "X-Offer-Compression: lz4"
* Messages larger than 70 bytes of the protocol message types MANIFESTS,
  ENDPOINTS, TRANSACTION, GET_LEDGER, LEDGER_DATA, GET_OBJECT,
  and VALIDATORLIST are compressed
* If the compressed message is larger than the uncompressed message
  then the uncompressed message is sent
* Compression flag and the compression algorithm type are included
  in the message header
* Only LZ4 block compression is currently supported
2020-04-06 17:22:59 -07:00
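
A sketch of the "send uncompressed if compression does not help" rule using the public LZ4 block API; message framing and the header flag are omitted, and the 70-byte threshold is taken from the list above.

    #include <lz4.h>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Compress a payload with LZ4 block compression; fall back to the
    // original bytes if the payload is small or the result is not smaller.
    inline std::vector<char>
    maybeCompress(std::string const& payload, bool& compressed)
    {
        compressed = false;
        if (payload.size() <= 70)
            return {payload.begin(), payload.end()};

        std::vector<char> out(LZ4_compressBound(static_cast<int>(payload.size())));
        int const n = LZ4_compress_default(
            payload.data(), out.data(),
            static_cast<int>(payload.size()), static_cast<int>(out.size()));

        if (n <= 0 || static_cast<std::size_t>(n) >= payload.size())
            return {payload.begin(), payload.end()};

        out.resize(n);
        compressed = true;  // the real header also records the algorithm
        return out;
    }
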