* Create SQLite database for mapping transaction IDs to shard indexes
* Create SQLite database for mapping ledger hashes to shard indexes
* Create additional test cases for the shard database
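A minimal sketch of the kind of lookup table involved, using the SQLite C API
directly; the database, table, and column names below are illustrative
assumptions, not rippled's actual schema.

    #include <cstdio>
    #include <sqlite3.h>

    // Create a small database mapping a transaction ID (hash) to the index
    // of the shard whose ledgers contain that transaction. The ledger-hash
    // database described above would follow the same pattern.
    int main()
    {
        sqlite3* db = nullptr;
        if (sqlite3_open("transaction_map.db", &db) != SQLITE_OK)
            return 1;

        char* err = nullptr;
        int const rc = sqlite3_exec(
            db,
            "CREATE TABLE IF NOT EXISTS TransactionToShard ("
            "  TransactionHash BLOB PRIMARY KEY,"    // 256-bit transaction ID
            "  ShardIndex      INTEGER NOT NULL);",  // shard holding it
            nullptr, nullptr, &err);
        if (rc != SQLITE_OK)
        {
            std::fprintf(stderr, "sqlite error: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return rc == SQLITE_OK ? 0 : 1;
    }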
Add support to allow multiple independent nodes to produce a binary identical
shard for a given range of ledgers. The advantage is that servers can use
content-addressable storage, and can more efficiently retrieve shards by
downloading from multiple peers at once and then verifying the integrity of
a shard by cross-checking its checksum with the checksum other servers report.
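The cross-check amounts to hashing the downloaded shard and comparing the
digest with the value peers advertise. The sketch below assumes SHA-256 over
the archive file and a hypothetical verifyShardDigest helper; it does not
claim to be rippled's actual deterministic-shard hashing scheme.

    #include <openssl/evp.h>
    #include <fstream>
    #include <string>
    #include <vector>

    // Hash a downloaded shard archive and compare the digest against the
    // value a peer reported. Returns true only when the digests match.
    bool verifyShardDigest(
        std::string const& path,
        std::vector<unsigned char> const& peerDigest)
    {
        std::ifstream in(path, std::ios::binary);
        if (!in)
            return false;

        EVP_MD_CTX* ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);

        char buf[64 * 1024];
        while (in.read(buf, sizeof(buf)) || in.gcount() > 0)
            EVP_DigestUpdate(ctx, buf, static_cast<size_t>(in.gcount()));

        unsigned char digest[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        EVP_DigestFinal_ex(ctx, digest, &len);
        EVP_MD_CTX_free(ctx);

        return peerDigest ==
            std::vector<unsigned char>(digest, digest + len);
    }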
* Add a new operating mode to rippled called reporting mode
* Add ETL mechanism for a reporting node to extract data from a p2p node
* Add new gRPC methods to facilitate ETL
* Use Postgres in place of SQLite in reporting mode
* Add Cassandra as a nodestore option
* Update logic of RPC handlers when running in reporting mode
* Add ability to forward RPCs to a p2p node
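The forwarding decision can be as simple as checking whether a method can be
served from the reporting node's own data; the command list and function name
below are hypothetical illustrations, not the set rippled actually uses.

    #include <string>
    #include <unordered_set>

    // Commands a reporting node can answer from its own Postgres/Cassandra
    // data; anything else is proxied to a connected p2p node.
    static std::unordered_set<std::string> const handledLocally = {
        "account_info", "account_tx", "ledger", "tx"};

    bool shouldForwardToP2p(std::string const& method)
    {
        // Submission and subscription-style calls need a full p2p node.
        return handledLocally.count(method) == 0;
    }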
This commit combines a number of cleanups, targeting both the
code structure and the code logic. Large changes include:
- Using more strongly-typed classes for SHAMap nodes, instead of relying
  on run-time detection of class types. This change saves 16 bytes
of memory per node.
- Improving the interface of SHAMap::addGiveItem and SHAMap::addItem to
avoid the need for passing two bool arguments.
- Documenting the "copy-on-write" semantics that SHAMap uses to
efficiently track changes in individual nodes.
- Removing unused code and simplifying several APIs.
- Improving function naming.
- Providing separate serialization functions, depending on whether one
  wants a "wire" version of a node or one suitable for hashing.
- Removing unused functions.
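One way to remove the two bool arguments is to replace them with scoped enums
so each call site states its intent; the sketch below only illustrates the
pattern and is not SHAMap's actual signature.

    struct Item {};  // stand-in for SHAMapItem

    enum class IsTransaction : bool { no, yes };
    enum class HasMetadata : bool { no, yes };

    struct SHAMapSketch
    {
        // Before: bool addGiveItem(Item, bool isTransaction, bool hasMetadata);
        bool addGiveItem(Item, IsTransaction, HasMetadata)
        {
            return true;  // real insertion logic omitted
        }
    };

    // The call site now reads unambiguously:
    //   map.addGiveItem(Item{}, IsTransaction::yes, HasMetadata::no);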
* Distinguish between recent and historical shards
* Allow multiple storage paths for historical shards
* Add documentation for this feature
* Add unit tests
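A minimal sketch of placing a historical shard across multiple configured
paths, here by picking the directory with the most available space; the
function name and placement policy are assumptions, not rippled's actual
behavior.

    #include <cstdint>
    #include <filesystem>
    #include <system_error>
    #include <vector>

    namespace fs = std::filesystem;

    // Choose a directory for a newly finalized historical shard.
    fs::path
    chooseHistoricalPath(std::vector<fs::path> const& historicalPaths)
    {
        fs::path best;
        std::uintmax_t bestAvail = 0;
        for (auto const& p : historicalPaths)
        {
            std::error_code ec;
            auto const info = fs::space(p, ec);
            if (!ec && info.available > bestAvail)
            {
                bestAvail = info.available;
                best = p;
            }
        }
        return best;  // empty if no usable path was found
    }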
Commit 4dc08f8202 introduced support for
deterministic shards, which makes the sharding functionality provided
by rippled more useful.
After merging, several opportunities for further improvements to the
deterministic sharding implementation were identified and a significant
increase in memory usage during shard finalization was detected.
Because of these issues, the commit is being reverted and the feature is
being rolled back. It will be reintroduced in a future release.
This commit, if merged, adds support to allow multiple independent nodes to
produce a binary identical shard for a given range of ledgers. The advantage
is that servers can use content-addressable storage, and can more efficiently
retrieve shards by downloading from multiple peers at once and then verifying
the integrity of a shard by cross-checking its checksum with the checksum
other servers report.
* Document delete_batch, back_off_milliseconds, age_threshold_seconds.
* Convert those time values to chrono types.
* Fix bug that ignored age_threshold_seconds.
* Add a "recovery buffer" to the config that gives the node a chance to
recover before aborting online delete.
* Add begin/end log messages around the SQL queries.
* Add a new configuration section, [sqlite], to allow tuning SQLite
  database operations. Ignored on full/large history servers.
* Update documentation of [node_db] and [sqlite] in the
rippled-example.cfg file.
Resolves #3321
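A minimal sketch of the chrono-typed settings and the recovery buffer check
described above; the struct, field names, and defaults are assumptions made
for illustration, not rippled's actual configuration code.

    #include <chrono>

    struct OnlineDeleteSettings
    {
        std::chrono::milliseconds backOff{100};  // back_off_milliseconds
        std::chrono::seconds ageThreshold{60};   // age_threshold_seconds
        std::chrono::seconds recoveryBuffer{0};  // extra grace period
    };

    // Abort online delete only after the validated-ledger age exceeds the
    // threshold plus the recovery buffer, giving the node a chance to
    // recover first.
    bool shouldAbort(
        OnlineDeleteSettings const& s,
        std::chrono::seconds validatedLedgerAge)
    {
        return validatedLedgerAge > s.ageThreshold + s.recoveryBuffer;
    }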
* Make sure variables are always initialized
* Use lround instead of adding .5 and casting (see the sketch after this list)
* Remove some unneeded vars
* Check for null before calling strcmp
* Remove redundant if conditions
* Remove make_TxQ factory function
* Reduce lock scope on all public functions
* Use TaskQueue to process shard finalization in separate thread
* Store shard last ledger hash and other info in backend
* Use a temporary SQLite DB instead of a control file when acquiring
* Remove boost serialization from cmake files
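The lround change above matters because adding .5 and casting truncates toward
zero, which misrounds negative values; a small illustration:

    #include <cmath>

    // static_cast<long>(-2.5 + 0.5) == -2, but std::lround(-2.5) == -3:
    // lround rounds halfway cases away from zero, while the add-and-cast
    // idiom truncates toward zero.
    long roundFee(double fee)
    {
        return std::lround(fee);
    }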
Fixes: RIPD-1575. Fix argument passing to the runner. Allow multiple unit
test selectors to be passed via the --unittest argument. Add an optional
integer priority value to the test suite list. Fix several failing manual
tests. Update the CLI usage message to make it clearer.
* Tally and duration counters for Job Queue tasks and RPC calls,
  optionally rendered by server_info and server_state, and
  optionally printed to a distinct log file.
- Tally each Job Queue task as it is queued, starts, and
finishes running. Track total duration queued and running.
- Tally each RPC call as it starts and either finishes
successfully or throws an exception. Track total running
duration for each.
* Track currently executing Job Queue tasks and RPC methods
along with durations.
* JSON-formatted performance log file, written by a dedicated
  thread, containing the data described above.
* New optional parameter, "counters", for server_info and
server_state. If set, render Job Queue and RPC call counters
as well as currently executing tasks.
* New configuration section, "[perf]", to optionally control
performance logging to a file.
* Support optional sub-second periods when rendering human-readable
time points.
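A minimal sketch of rendering a time point with a sub-second component,
assuming microsecond precision and UTC; the exact format the performance log
uses is not claimed here.

    #include <chrono>
    #include <cstdio>
    #include <ctime>
    #include <string>

    // Render a system_clock time point as "YYYY-MM-DD HH:MM:SS.uuuuuu".
    std::string
    toHumanReadable(std::chrono::system_clock::time_point tp)
    {
        using namespace std::chrono;
        auto const micros =
            duration_cast<microseconds>(tp.time_since_epoch()) % seconds{1};
        std::time_t const t = system_clock::to_time_t(tp);
        std::tm const tm = *std::gmtime(&t);

        char buf[64];
        std::snprintf(
            buf, sizeof(buf), "%04d-%02d-%02d %02d:%02d:%02d.%06lld",
            tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
            tm.tm_hour, tm.tm_min, tm.tm_sec,
            static_cast<long long>(micros.count()));
        return buf;
    }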
The DatabaseImp class has threads that asynchronously call JobQueue to
perform database reads. Formerly these threads had the same
lifespan as Database, which was until the end-of-life of
ApplicationImp. During shutdown these threads could call JobQueue
after JobQueue had already stopped. Or, even worse, occasionally
call JobQueue after JobQueue's destructor had run.
To avoid these shutdown conditions, Database is made a Stoppable,
with JobQueue as its parent. When Database stops, it shuts down
its asynchronous read threads. This prevents Database from
accessing JobQueue after JobQueue has stopped, but allows
Database to perform stores for the remainder of shutdown.
During development it was noted that the Database::close()
method was never called. So that method is removed from Database
and all derived classes.
Stoppable is also adjusted so it can be constructed using either
a char const* or a std::string.
For those files touched for other reasons, unneeded #includes
are removed.
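A sketch of the shutdown pattern described above, using hypothetical names
rather than rippled's Stoppable interface: when the owner is told to stop, it
joins its asynchronous read threads before the job queue they post to goes
away.

    #include <atomic>
    #include <thread>
    #include <vector>

    class AsyncReaders
    {
        std::vector<std::thread> readers_;
        std::atomic<bool> stopping_{false};

    public:
        void start(int count)
        {
            for (int i = 0; i < count; ++i)
                readers_.emplace_back([this] {
                    while (!stopping_)
                        std::this_thread::yield();  // real code would read
                });
        }

        // Called while the rest of shutdown proceeds; afterwards no reader
        // can touch the job queue again, but stores can continue elsewhere.
        void onStop()
        {
            stopping_ = true;
            for (auto& t : readers_)
                if (t.joinable())
                    t.join();
            readers_.clear();
        }

        ~AsyncReaders()
        {
            onStop();
        }
    };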