The nodestore includes a built-in cache to reduce disk I/O load
but, prior to this commit, the cache was not initialized unless it
was explicitly configured by the server operator.
This commit introduces sensible defaults based on the server's
configured node size.
It remains possible to completely disable the cache, if desired,
by explicitly setting the cache size and age parameters to 0:
[node_db]
...
cache_size = 0
cache_age = 0
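Conversely, operators can still override the new defaults by setting
explicit values. The numbers below are placeholders chosen only to show
the syntax; appropriate values depend on the node size and the memory
available to the server:
[node_db]
...
cache_size = 262144
cache_age = 60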
- Only duplicate records from archive to writable during online_delete.
- Log duration of nodestore reads.
- Include nodestore counters in perf_log output.
- Remove gratuitous nodestore activity counting.
- Report initial sync duration in server_info and perfLog.
- Report state_accounting in perfLog.
- Make state_accounting durations more accurate.
- Parallel ledger loader.
- Config parameter to load ledgers on start.
1) Acquire fewer nodes per pass; the previous amount was likely
far more than needed.
2) Right-size the finishedReads_ vector on passes other than
the first.
* Sort by fee level (which is the current behavior) then by transaction
ID (hash); see the comparator sketch after this list.
* Handle the edge case in which the account at the end of the queue
submits a higher-paying transaction: walk backwards and compare against
the cheapest transaction from a different account.
* Use std::any_of to simplify the JobQueue::isOverloaded loop.
* Log load fee values (at debug) received from validations.
* Log remote and cluster fee values (at trace) when changed.
* Refactor JobQueue::isOverloaded to return sooner if overloaded.
* Refactor Transactor::checkFee to only compute fee if ledger is open.
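The following is a minimal sketch of the two-key ordering described in
the first item above (sort by fee level, break ties by transaction
hash). The type and field names are assumptions for illustration, not
the server's actual transaction-queue implementation:

```cpp
#include <array>
#include <cstdint>

struct QueuedTx
{
    std::uint64_t feeLevel;              // relative fee paid by the transaction
    std::array<std::uint8_t, 32> txnID;  // transaction hash
};

// Higher fee levels sort first; equal fee levels fall back to the
// transaction ID, producing a deterministic total order.
inline bool
betterPosition(QueuedTx const& lhs, QueuedTx const& rhs)
{
    if (lhs.feeLevel != rhs.feeLevel)
        return lhs.feeLevel > rhs.feeLevel;
    return lhs.txnID < rhs.txnID;
}
```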
This flag, if present, suppresses the output of incoming
trustlines in default state.
This is primarily motivated by observing that users of Xumm often
have many unwanted incoming trustlines in a default state, which are
not useful in the vast majority of cases.
Being able to suppress those when doing `account_lines` saves bandwidth
and resources.
This commit implements partitioned unordered maps and makes it possible
to traverse such a map in parallel, allowing for more efficient use of
CPU resources.
The `CachedSLEs`, `TaggedCache`, and `KeyCache` classes make use of the
new functionality, which should improve performance.
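The change is internal to the server, but the general technique can be
sketched as follows. This is a minimal illustration; the class name,
partition count, and locking scheme are assumptions, not the actual
implementation:

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <mutex>
#include <thread>
#include <unordered_map>
#include <vector>

template <class Key, class Value, std::size_t Partitions = 16>
class PartitionedMap
{
    struct Partition
    {
        std::mutex mutex;
        std::unordered_map<Key, Value> map;
    };

    std::array<Partition, Partitions> partitions_;

    Partition& partitionFor(Key const& key)
    {
        return partitions_[std::hash<Key>{}(key) % Partitions];
    }

public:
    void insert(Key const& key, Value value)
    {
        auto& p = partitionFor(key);
        std::lock_guard lock(p.mutex);
        p.map.insert_or_assign(key, std::move(value));
    }

    // Visit every partition concurrently, e.g. to sweep stale entries.
    template <class Fn>
    void forEachParallel(Fn fn)
    {
        std::vector<std::thread> workers;
        for (auto& p : partitions_)
            workers.emplace_back([&p, &fn] {
                std::lock_guard lock(p.mutex);
                for (auto& [key, value] : p.map)
                    fn(key, value);
            });
        for (auto& w : workers)
            w.join();
    }
};
```

The number of partitions trades lock contention against per-partition
overhead: more partitions allow more worker threads to sweep the
structure concurrently, at the cost of slightly more bookkeeping.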
The pathfinding engine requires pre-building large tables, which is a
resource-intensive operation. Typically, one would not expect a server
configured as a validator to also support pathfinding APIs, so building
those tables by default wastes resources.
This commit, if merged, will disable pathfinding on servers that are
configured as validators, unless the server operator opts to support
it explicitly, by configuring the `[path_search_max]` parameter.
Validator operators that wish to support pathfinding on a validator
and want to use the default values can add the following stanza to
their server's configuration file:
[path_search_max]
7
The priority of different types of jobs was set back in the early
days of development, based on insight and observations that don't
necessarily apply any longer.
Specifically, job types used by the server to sync to the network
were being treated as lower priority than client requests, making
it more difficult to regain sync.
This commit adjusts the priority of several jobs and should allow
servers to prioritize resynchronizing to the network over serving
clients.
The existing calculation limited the number of threads created by
default to at most 6; this may have been reasonable a few years ago,
but given both today's load on the network and the increase in the
number of CPU cores, the value should be revisited.
This commit, if merged, changes the default calculation for nodes
that are configured as `large` or `huge` to allow for up to twelve
threads.
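The exact formula is not reproduced here; the sketch below only
illustrates the kind of capped calculation being described, under the
assumption that the default is derived from the available hardware
concurrency. The function name is illustrative, with the caps taken
from the text above:

```cpp
#include <algorithm>
#include <thread>

// Illustrative only: derive a default worker-thread count from the
// available hardware concurrency, capped at 12 for `large`/`huge`
// node sizes and at 6 otherwise.
unsigned
defaultWorkerThreads(bool largeOrHuge)
{
    unsigned const cores = std::max(1u, std::thread::hardware_concurrency());
    unsigned const cap = largeOrHuge ? 12u : 6u;
    return std::min(cores, cap);
}
```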
The "sweep interval" is the amount of time between successive sweeps of
of various in-memory data structures to remove stale items.
Prior to this commit, the interval was automatically adjusted, based on
the value of the `[node_size]` option in a server's configuration file.
If merged, this commit introduces a new configuration option that makes
it possible for a server operator to adjust the sweep interval and make
a CPU/memory tradeoff:
[sweep_interval]
<integer>
The specified value represents the number of seconds between successive
sweeps. The range of valid values is between 10 and 600.
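For example, to sweep every 30 seconds (a value within the allowed
range):
[sweep_interval]
30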
Important operator notes:
This is an advanced configuration option that should not be used unless
there is empirical data which suggests that the default sweep frequency
is either resulting in performance problems or is causing undue load to
the server.
Note that adjusting the sweep interval may not have the intended effect
on the server. Lower values will not always translate to a reduction of
memory usage and higher values will not always translate to a reduction
of CPU usage and/or load.
The performance characteristics of `std::unordered_map` are better
than those of `std::map`, and the former should be preferred when the
strict ordering offered by the latter is not required.
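As a small illustration of the substitution, any lookup table whose
iteration order is irrelevant can switch containers directly:

```cpp
#include <map>
#include <string>
#include <unordered_map>

// Before: tree-based map, O(log n) lookups, ordering unused.
std::map<std::string, int> countsOrdered;

// After: hash-based map, average O(1) lookups.
std::unordered_map<std::string, int> counts;
```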