o Fixes an off-by-one when determining which NFTokenPage an
NFToken belongs on.
o Improves handling of packed sets of 32 NFTs with
identical low 96 bits.
o Fixes marker handling in the account_nfts RPC command.
o Tightens constraints of NFTokenPage invariant checks.
Adds unit tests to exercise the fixed cases as well as tests
for previously untested functionality.
The amendment increases the maximum size of an account's signer
list from 8 to 32 entries.
Like all new features, the associated amendment is configured with
a default vote of "no" and server operators will have to vote for
it explicitly if they believe it is useful.
* "A path is considered invalid if and only if it enters and exits an
address node through trust lines where No Ripple has been enabled for
that address." (https://xrpl.org/rippling.html#specifics)
* When loading trust lines for an account "Alice" which was reached
via a trust line that has the No Ripple flag set on Alice's side, do
not use or cache any of Alice's trust lines which have the No Ripple
flag set on Alice's side. For typical "end-user" accounts, this will
return no trust lines.
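A minimal sketch of that filtering rule, assuming hypothetical
`TrustLine` and flag names rather than rippled's actual types:

```cpp
#include <vector>

// Hypothetical stand-in for a loaded trust line; the real type
// carries balances, limits, and flags.
struct TrustLine
{
    bool noRippleOnAccount;  // No Ripple set on this account's side
};

// `enteredViaNoRipple` is true when the account was reached through
// a trust line with No Ripple set on its side. In that case, lines
// that also have No Ripple set on this side can never continue a
// valid path, so they are neither used nor cached.
std::vector<TrustLine>
usableLines(std::vector<TrustLine> const& all, bool enteredViaNoRipple)
{
    std::vector<TrustLine> out;
    for (auto const& line : all)
    {
        if (enteredViaNoRipple && line.noRippleOnAccount)
            continue;
        out.push_back(line);
    }
    return out;
}
```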
One of the two versions of the `rngfill` function accepts a pointer
to a buffer and a size (in bytes). The function aims to fill the
provided `buffer` with `size` random bytes. It does this in chunks
of 8 bytes, for as long as possible, and then fills any left-over gap
one byte at a time.
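A minimal sketch of this pattern (not the exact rippled
implementation), assuming an engine whose `operator()` yields
`std::uint64_t` values:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

template <class Engine>
void
rngfill(void* buffer, std::size_t size, Engine& engine)
{
    auto* p = static_cast<std::uint8_t*>(buffer);

    // Fill eight bytes at a time for as long as possible.
    while (size >= sizeof(std::uint64_t))
    {
        std::uint64_t const v = engine();
        std::memcpy(p, &v, sizeof(v));
        p += sizeof(v);
        size -= sizeof(v);
    }

    // The "trailing write": without it, between 1 and 7 of the
    // requested bytes would be left unfilled.
    if (size > 0)
    {
        std::uint64_t const v = engine();
        std::memcpy(p, &v, size);
    }
}
```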
To avoid an annoying and incorrect warning about a potential buffer
overflow in the "trailing write", commit 78bc2727f7
used a `#pragma` to instruct the compiler to not generate the incorrect
diagnostic. Unfortunately, this change _also_ eliminated the trailing
write code, which means that, in some cases, the `rngfill` function
would generate between 1 and 7 fewer random bytes than requested.
This problem would only manifest on builds that do not define
`__GNUC__`, which, as of this writing, means MSVC.
Several hard-coded parameters control the behavior of the ledger
acquisition engine. The values of many of these parameters were
set by intuition and have complex and non-intuitive interactions
with each other and with other parts of the code.
An earlier commit attempted to adjust several of these parameters
to improve syncing performance; initial testing was promising but
a number of operators reported experiencing syncing and stability
issues with their servers. As a result, this commit reverts parts
of commit 18235067af.
This commit further adjusts some tunables so as to increase the
aggressiveness of the ledger acquisition engine.
This commit addresses minor bugs introduced with commit
6faaa91850:
- The number of threads used by the database engine was
incorrectly clamped to the lowest possible value, such
that the database was effectively operating in
single-threaded mode.
- The number of requests to extract at once was so high
that it could result in increased latency. The bundle
size is now limited to 4 and can be adjusted by a new
configuration option `rq_bundle` in the `[node_db]`
stanza. This is an advanced tunable and adjusting it
should not be needed.
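For illustration, a `[node_db]` stanza that sets the new option might
look like the following; the `type` and `path` lines are placeholders
for an operator's actual settings:

```
[node_db]
type=NuDB
path=/var/lib/rippled/db/nudb
rq_bundle=4
```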
This commit modernizes the `AcceptedLedger` and `AcceptedLedgerTx`
classes, reduces their memory footprint and reduces unnecessary
dynamic memory allocations.
This commit optimizes the way asynchronous nodestore operations are
processed both by reducing the amount of time locks are held and by
minimizing the number of memory allocations and data copying.
* Adds smart-replacement support to TaggedCache
(needed to avoid race conditions with negative caching).
* Creates a "hotDUMMY" object that represents the absence
of an object.
* Allows DatabaseNodeImp::asyncFetch to complete immediately
if the object is in the cache (positive or negative).
* Fixes a bug in asyncFetch where the returned object may not
be the correct canonical version, because the object was
stashed in the results array before being canonicalized.
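A minimal sketch of the negative-caching idea, using a simplified
map-based cache and hypothetical names in place of the nodestore's
actual API:

```cpp
#include <map>
#include <memory>
#include <optional>

struct NodeObject {};

// One shared dummy instance stands in for "known to be absent",
// playing the role of the "hotDUMMY" object described above.
std::shared_ptr<NodeObject> const dummy = std::make_shared<NodeObject>();

std::map<int, std::shared_ptr<NodeObject>> cache;

// nullopt  -> unknown: the backend must be queried asynchronously.
// nullptr  -> known absent: the fetch can complete immediately.
// non-null -> known present: the fetch can complete immediately.
std::optional<std::shared_ptr<NodeObject>>
cacheFetch(int key)
{
    auto const it = cache.find(key);
    if (it == cache.end())
        return std::nullopt;
    if (it->second == dummy)
        return std::shared_ptr<NodeObject>();
    return it->second;
}

// After a backend miss, record the absence so that later fetches
// for the same key complete immediately instead of hitting the
// backend again.
void
recordAbsent(int key)
{
    cache.emplace(key, dummy);
}
```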
When fetching ledgers, the existing code would isolate the peer
that sent the most useful responses and issue follow up queries
only to that peer.
This commit increases the query aggressiveness and changes the
mechanism used to select which peers to issue follow-up queries
to, so as to more evenly spread the load among those peers which
provided useful responses.
The `SHAMapInnerNode` class had a global mutex to protect the
array of node children. Profiling suggested that around 4% of
all attempts to lock the global would block.
This commit removes that global mutex and replaces it with a
new per-node 16-way spinlock (implemented so as not to affect
the size of an inner node object), effectively eliminating the
lock contention.
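A minimal sketch of the idea, assuming sixteen spinlocks packed into
a single 16-bit atomic (one bit per child slot), so that adding the
locks does not grow the node:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

class packed_spinlock
{
    std::atomic<std::uint16_t>& bits_;
    std::uint16_t const mask_;

public:
    // Locking bit `index` protects the index-th child slot.
    packed_spinlock(std::atomic<std::uint16_t>& bits, int index)
        : bits_(bits), mask_(static_cast<std::uint16_t>(1 << index))
    {
    }

    void
    lock()
    {
        // Spin until this lock's bit transitions from 0 to 1.
        while (bits_.fetch_or(mask_, std::memory_order_acquire) & mask_)
            std::this_thread::yield();
    }

    void
    unlock()
    {
        bits_.fetch_and(
            static_cast<std::uint16_t>(~mask_),
            std::memory_order_release);
    }
};
```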
* Txs with the same fee level will sort by TxID XORed with the parent
ledger hash (see the sketch after this list).
* The TxQ is re-sorted after every ledger.
* Attempt to future-proof the TxQ tie breaking test
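A minimal sketch of the tie-break described in the first item above;
the names are illustrative, not rippled's actual types:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

using Hash256 = std::array<std::uint8_t, 32>;

// XOR the transaction's ID with the parent ledger's hash. The result
// is unpredictable before the parent ledger closes, yet every server
// computes the same ordering afterwards.
inline Hash256
xorHash(Hash256 const& txId, Hash256 const& parentHash)
{
    Hash256 r{};
    for (std::size_t i = 0; i < r.size(); ++i)
        r[i] = static_cast<std::uint8_t>(txId[i] ^ parentHash[i]);
    return r;
}

// Among transactions with equal fee levels, `a` sorts before `b` if
// its XORed key is lexicographically smaller.
inline bool
tieBreak(Hash256 const& a, Hash256 const& b, Hash256 const& parent)
{
    return xorHash(a, parent) < xorHash(b, parent);
}
```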
* Abort background path finding when closed or disconnected
* Exit pathfinding job thread if there are no requests left
* Don't bother creating the path find job if there are no requests
* Refactor to remove circular dependency between InfoSub and PathRequest
The existing trust line caching code was suboptimal in that it stored
redundant information, pinned SLEs into memory and required multiple
memory allocations per cached object.
This commit eliminates redundant data, reducing the size of cached
objects and unpinning SLEs from memory, and uses value types to
avoid the need for `std::shared_ptr`. As a result of these changes,
the effective size of a cached object, including the overhead of the
memory allocator and the `std::shared_ptr`, should be reduced by at
least 64 bytes. This is significant, as there can easily be tens of
millions of these objects.
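An illustrative before-and-after of the data layout, with hypothetical
names in place of rippled's actual types:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct SLE;

// Before: each cached entry held a shared_ptr, pinning the SLE in
// memory and paying for a control block plus an extra allocation
// per entry.
using CachedLinesBefore = std::vector<std::shared_ptr<SLE const>>;

// After: a plain value type carrying only the fields path finding
// needs; entries live inline in the vector, with no per-entry
// allocation and no pinned SLE.
struct TrustLineValue
{
    std::int64_t balance;
    std::int64_t limit;
    std::uint32_t flags;
};
using CachedLinesAfter = std::vector<TrustLineValue>;
```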
This commit combines the `apply_mutex` and `read_mutex` into a single `mutex_`
variable. This new `mutex_` is a `shared_mutex`, and most operations only need
to lock it with a `shared_lock`. The only exception is `applyManifest`, which
may need a `unique_lock`.
One consequence of removing the `apply_mutex` is that more than one
`applyManifest` call can run at the same time. To help reduce the lock
contention that a `unique_lock` would cause, checks that only require reading
data are run under a `shared_lock` (call these the "prewriteChecks"); then the
lock is released, and a `unique_lock` is acquired. Since a concurrently running
`applyManifest` may write data between the time the `shared_lock` is released
and the `unique_lock` is acquired, the "prewriteChecks" need to be rerun.
Duplicating this work isn't ideal, but the "prewriteChecks" are relatively
inexpensive.
A couple of other designs were considered. We could prevent more than one
`applyManifest` call from running concurrently - either with a dedicated mutex
or by setting the maximum number of manifest jobs on the job queue to one. The
biggest issue with this is that if any other function ever takes a write lock
for any reason, `applyManifest` would now be broken - data could be written
between the release of the `shared_lock` and the acquisition of the
`unique_lock`. Note: it is tempting to solve this problem by not releasing the
`shared_mutex` and simply upgrading the lock. In the presence of concurrently
running `applyManifest` calls, this would deadlock (both calls need to wait for
the other to release its read lock before acquiring the write lock).
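A minimal sketch of the locking pattern described above; the types and
helpers (`Cache`, `Manifest`, `prewriteChecks`, `doApply`) are
illustrative stand-ins, not rippled's actual API:

```cpp
#include <mutex>
#include <shared_mutex>

struct Manifest {};
struct Cache {};

bool prewriteChecks(Cache const&, Manifest const&) { return true; }  // stub
void doApply(Cache&, Manifest const&) {}                             // stub

std::shared_mutex mutex_;

bool
applyManifestSketch(Cache& cache, Manifest const& m)
{
    {
        // Cheap, read-only "prewriteChecks" under a shared_lock.
        std::shared_lock<std::shared_mutex> read(mutex_);
        if (!prewriteChecks(cache, m))
            return false;
    }  // Release before upgrading: holding the shared lock while
       // waiting for the unique lock would deadlock against another
       // writer doing the same thing.

    std::unique_lock<std::shared_mutex> write(mutex_);

    // Another writer may have changed state between the two locks,
    // so the checks must be rerun under the unique_lock.
    if (!prewriteChecks(cache, m))
        return false;

    doApply(cache, m);
    return true;
}
```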
In order to preserve the Hooks ABI, it is important that field
values used for hooks be stable going forward.
This commit reserves the required codes so that they will not
be repurposed before Hooks can be proposed for inclusion in
the codebase.