* Tally and duration counters for Job Queue tasks and RPC calls,
optionally rendered by server_info and server_state and
optionally printed to a distinct log file.
- Tally each Job Queue task as it is queued, starts, and
finishes running. Track total duration queued and running
(see the sketch after this list).
- Tally each RPC call as it starts and either finishes
successfully or throws an exception. Track total running
duration for each.
* Track currently executing Job Queue tasks and RPC methods
along with durations.
* JSON-formatted performance log file, written by a dedicated
thread, for the above-described data.
* New optional parameter, "counters", for server_info and
server_state. If set, render Job Queue and RPC call counters
as well as currently executing tasks.
* New configuration section, "[perf]", to optionally control
performance logging to a file.
* Support optional sub-second periods when rendering human-readable
time points.
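The sketch below shows the general shape of the tally and duration bookkeeping described above: per task type, a count of how many times it was queued, started, and finished, plus the total time spent queued and running. Names such as `PerfTally`, `JobCounters`, and `jobQueued` are illustrative only, not the interface used by rippled's performance logging; this is a minimal standalone example of the technique.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <map>
#include <mutex>
#include <string>

// Hypothetical per-task-type counters: tallies plus total durations.
struct JobCounters
{
    std::uint64_t queued = 0;
    std::uint64_t started = 0;
    std::uint64_t finished = 0;
    std::chrono::microseconds queuedDuration{0};
    std::chrono::microseconds runningDuration{0};
};

class PerfTally
{
public:
    // Called when a task of the given type is placed on the queue.
    void jobQueued(std::string const& type)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        ++counters_[type].queued;
    }

    // Called when the task starts running; sinceQueued is the time
    // it spent waiting in the queue.
    void jobStarted(std::string const& type,
                    std::chrono::microseconds sinceQueued)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        auto& c = counters_[type];
        ++c.started;
        c.queuedDuration += sinceQueued;
    }

    // Called when the task finishes; running is how long it executed.
    void jobFinished(std::string const& type,
                     std::chrono::microseconds running)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        auto& c = counters_[type];
        ++c.finished;
        c.runningDuration += running;
    }

    // Render the counters, e.g. for a "counters"-style report.
    void report(std::ostream& os) const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        for (auto const& [type, c] : counters_)
            os << type << ": queued=" << c.queued
               << " started=" << c.started
               << " finished=" << c.finished
               << " queued_us=" << c.queuedDuration.count()
               << " running_us=" << c.runningDuration.count() << '\n';
    }

private:
    mutable std::mutex mutex_;
    std::map<std::string, JobCounters> counters_;
};

int main()
{
    using namespace std::chrono;
    PerfTally tally;
    tally.jobQueued("ledgerData");
    tally.jobStarted("ledgerData", microseconds{150});
    tally.jobFinished("ledgerData", microseconds{2300});
    tally.report(std::cout);
}
```

A mutex-protected map is the simplest correct form of this bookkeeping; a production implementation would likely shard or preallocate the counters to keep the hot path cheap.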
Basics
Utility functions and classes.
ripple/basics should contain no dependencies on other modules.
Choosing a rippled container.
- `std::vector`
  - For ordered containers with most insertions or erases at the end.
- `std::deque`
  - For ordered containers with most insertions or erases at the start or end.
- `std::list`
  - For ordered containers with inserts and erases to the middle.
  - For containers with iterators stable over insert and erase.
  - Generally slower and bigger than `std::vector` or `std::deque` except for those cases.
- `std::set`
  - For sorted containers.
- `ripple::hash_set`
  - Where inserts and contains need to be O(1).
  - For "small" sets, `std::set` might be faster and smaller.
- `ripple::hardened_hash_set`
  - For data sets where the key could be manipulated by an attacker in an attempt to mount an algorithmic complexity attack: see http://en.wikipedia.org/wiki/Algorithmic_complexity_attack
The following container is deprecated:
- `std::unordered_set`
  - Use `ripple::hash_set` instead, which uses a better hashing algorithm.
  - Or use `ripple::hardened_hash_set` to prevent algorithmic complexity attacks.
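The sketch below illustrates the guidance above using only standard containers. It is a minimal example, not rippled code: `ripple::hash_set` and `ripple::hardened_hash_set` are rippled-specific types, so `std::unordered_set` stands in here purely so the snippet compiles on its own.

```cpp
#include <deque>
#include <iostream>
#include <set>
#include <string>
#include <unordered_set>
#include <vector>

int main()
{
    // std::vector: ordered data, insertions/erases mostly at the end.
    std::vector<int> recentLedgerSeqs;
    recentLedgerSeqs.push_back(100);
    recentLedgerSeqs.push_back(101);

    // std::deque: ordered data, insertions/erases at either end.
    std::deque<std::string> pendingJobs;
    pendingJobs.push_back("fetch");
    pendingJobs.push_front("validate");

    // std::set: keeps elements sorted; iteration is in key order.
    std::set<int> sortedFees{12, 10, 11};

    // std::unordered_set: O(1) insert and contains. In rippled code,
    // prefer ripple::hash_set, or ripple::hardened_hash_set when the
    // keys could be chosen by an attacker; std::unordered_set is used
    // here only to keep the example self-contained.
    std::unordered_set<std::string> seenNodes;
    seenNodes.insert("node-a");
    bool const seen = seenNodes.count("node-a") != 0;

    std::cout << recentLedgerSeqs.size() << ' ' << pendingJobs.front()
              << ' ' << *sortedFees.begin() << ' ' << seen << '\n';
}
```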