Compare commits

..

204 Commits

Author SHA1 Message Date
Nicholas Dudfield
04077c1a55 test(testnet): assert zero entropy in degraded ledgers 2026-04-10 12:04:46 +07:00
Nicholas Dudfield
d94079d762 test(rng): relax PartialReveals sync assertion 2026-04-10 11:18:52 +07:00
Nicholas Dudfield
92ec07a1be chore: regenerate hook/sfcodes.h + format fix
Regenerate sfcodes.h to include the new sfSidecarType field
(UINT8, code 20).  Also apply clang-format to ConsensusExtensions.cpp.
2026-04-10 10:36:50 +07:00
Nicholas Dudfield
664db62588 fix: sidecar kind lost on cache hit + harden export sig parse
1. Record SidecarKind in pendingRngFetches_ before calling
   onAcquiredSidecarSet on local-cache-hit path. Without this,
   cached reveal/exportSig sets silently fell back to commit kind
   and were rejected by the sfSidecarType check.

2. Wrap export sig visitLeaves callback in try/catch (matching the
   RNG path) and enforce sfSidecarType == sidecarExportSig before
   processing — closes the shape-only acceptance gap.
2026-04-10 10:22:58 +07:00
Nicholas Dudfield
03a436d918 refactor: convert sidecar SHAMap entries from STTx to STObject
Replace STTx-based sidecar entries with plain STObject(sfGeneric)
using sfSidecarType (UINT8) discriminator. Eliminates unnecessary
transaction envelope overhead (sfSequence, sfFee, sfFlags) and
content-sniffing heuristics from the parse path.

Build: STObject with sidecarRngCommit/sidecarRngReveal/sidecarExportSig
Parse: sfSidecarType dispatch + typed field accessors
2026-04-10 10:14:06 +07:00
Nicholas Dudfield
7474048295 refactor: typed sidecar dispatch — eliminate content-sniffing heuristic
Replace the content-sniffing heuristic in onAcquiredSidecarSet with
typed dispatch based on SidecarKind.

The type is already known at fetch time:
- commitSetHash → SidecarKind::commit
- entropySetHash → SidecarKind::reveal
- exportSigSetHash → SidecarKind::exportSig

pendingRngFetches_ changes from hash_set<uint256> to
hash_map<uint256, SidecarKind>.  When the set arrives,
look up the kind by hash and dispatch — no leaf inspection.

This is the set-classification fix (Option E from the design doc):
no new SField, no STTx changes, no protocol additions, no RNG
proof-chain churn.
2026-04-10 09:18:43 +07:00
Nicholas Dudfield
1ee660529e fix: RPC handler sync, unused local, idiomatic Buffer comparison
- Add rng_poll_ms, no_export_sig, bootstrap_fast_start to the
  runtime_config RPC handler (SET and GET paths) so all ConfigVals
  fields are configurable live via admin RPC.
- Remove unused `added` counter in CSF fetchRngSetIfNeeded (was
  causing compiler warnings after debug logging removal).
- Use Buffer::operator== instead of std::memcmp in upgradeSignature,
  drop <cstring> include.
2026-04-10 08:56:16 +07:00
Nicholas Dudfield
311dfa1c23 chore: add TODO for RuntimeConfig activation gating
Both runtime_config and disconnect RPC handlers are already
Role::ADMIN.  Add a TODO to consider gating the entire
RuntimeConfig system on a config flag or compile-time define
for production nodes.
2026-04-10 08:31:54 +07:00
Nicholas Dudfield
f27cd2c567 refactor: consolidate env vars into RuntimeConfig
Move XAHAU_RNG_POLL_MS and XAHAUD_NO_EXPORT_SIG into RuntimeConfig
as rngPollMs and noExportSig fields.  Both are now configurable via
the XAHAU_RUNTIME_CONFIG JSON blob or individual env vars, and
controllable at runtime via the runtime_config RPC.

rngPollMs is clamped to minimum 50ms (prevents tight-loop polling).
Default remains 250ms.

This removes the last loose std::getenv calls from production code
outside of RuntimeConfig.  All env-var-based configuration now flows
through a single system.
2026-04-10 08:24:20 +07:00
Nicholas Dudfield
f34fdc297c fix(export): close upgradeSignature TOCTOU with buffer comparison
upgradeSignature now takes the verified buffer and compares it against
the currently stored buffer before promoting to verified.  This guards
against concurrent overlay threads overwriting the buffer between the
caller's unverifiedSignatures() snapshot and the upgrade call.

If the stored buffer was overwritten (different size or content), the
upgrade is silently skipped — the new buffer will be verified on its
next encounter.
2026-04-10 08:19:45 +07:00
Nicholas Dudfield
65fa63883d chore: remove CSF debug logging that floods CI output
Strip JLOG(j_.debug()) calls from buildEntropySet, fetchRngSetIfNeeded,
and finalizeRoundEntropy in CSF Peer.h.  These were added for local
debugging and caused CI failures due to output size limits.
2026-04-09 20:21:37 +07:00
Nicholas Dudfield
d8c683fb4c test(rng): fix AlignmentRequired test to run 1 round not 3
Running 3 rounds caused peer 0 to desync on round 2, dropping
prevProposers for the majority on round 3, triggering bootstrap
skip → zero entropy on the last round.  The gate works correctly
(logs show aligned=3, peersSeen=3) but the test was checking the
LAST round's entropy, not the round where the gate was exercised.

Run 1 round after warmup — sufficient to exercise the gate.
2026-04-09 18:09:17 +07:00
Nicholas Dudfield
fd53af304b fix(rng): measure entropy deadline from publish time, not reveal start
The entropy convergence deadline was measured from revealPhaseStart_,
which is set when entering ConvergingReveal.  By the time the entropy
set is published (after reveal timeout + observation tick), most of
the deadline budget was already spent — leaving insufficient time
for peer alignment.

Add entropyPublishStart_ timestamp set when the entropy set is first
published.  All convergence gate deadlines now measure from this
point, giving the full 2x rngREVEAL_TIMEOUT window for peer
proposals to propagate and alignment to be observed.
2026-04-09 18:06:18 +07:00
Nicholas Dudfield
2a3f0ec923 fix(rng): bounded wait for alignment instead of immediate fallback
When peers have published entropySetHash but none match ours yet
(e.g. a subset peer is the only one seen so far), wait for the
bounded deadline instead of immediately falling back to zero.
Other aligned peers may not have published yet — give them time.

Only fall back to zero if no alignment is observed within the
deadline (2x rngREVEAL_TIMEOUT).
2026-04-09 17:58:41 +07:00
Nicholas Dudfield
00f1f7ba30 fix(rng): subset-aware conflict detection in entropy convergence gate
After fetch/merge, if our entropy set hash didn't change, the
conflicting peer had a subset of our data — not a real threat.
Clear the conflict flag so we don't fall back to zero when a peer
simply has fewer reveals than us.

If the hash DID change (merge added data), re-count alignment
with the updated hash before treating it as a real conflict.

This prevents the majority from falling back to zero just because
one peer (e.g. isolated) has a smaller reveal set.
2026-04-09 17:53:58 +07:00
Nicholas Dudfield
49f05e4e47 fix(rng): require positive peer alignment for non-zero entropy
The observation tick alone was insufficient — a node could pass the
gate without any peer confirming its entropySetHash.  Now the gate
requires at least one tx-converged peer with a matching hash before
accepting non-zero entropy.

Three cases after the observation tick:
1. aligned > 0: peers confirm our hash → proceed with entropy
2. conflict: fetch/merge/rebuild → bounded wait → zero fallback
3. aligned=0, peersSeen=0: no peers published yet → bounded wait →
   zero fallback if still no peers at deadline
4. aligned=0, peersSeen>0: peers published but none match → zero

Also:
- CSF finalizeRoundEntropy now uses shouldZeroEntropy() (quorum check)
- Two new TDD tests:
  - testRngNoEntropyWithoutPeerAlignment: healthy network must agree
  - testRngAlignmentRequiredForNonZeroEntropy: isolated peer must not
    produce non-zero entropy that differs from majority
2026-04-09 17:51:51 +07:00
Nicholas Dudfield
1f51b9c594 fix(csf): quorum threshold in shouldZeroEntropy + test adjustments
CSF shouldZeroEntropy() now checks reveals < quorumThreshold (80% of
UNL), matching production.  MajorRevealLoss test adjusted to verify
majority group agreement rather than requiring full synchronization
(peer 0 may desync when it misses most reveals).

All 15 ConsensusRng tests now pass.
2026-04-09 17:40:05 +07:00
Nicholas Dudfield
88a548a8ef fix(rng): observation tick + CSF quorum threshold in shouldZeroEntropy
Two fixes addressing the asymmetric-view problem:

1. Convergence gate now forces one observation tick after first
   publishing the entropySet before accepting.  Previously a node
   could publish + accept in the same tick, never seeing a peer's
   different hash.  The entropySetPublished_ flag ensures at least
   one round-trip for proposal propagation.

2. CSF shouldZeroEntropy() now checks quorum threshold (80% of UNL),
   matching production behavior.  Previously it only checked empty().

Result: PartialReveals test now passes — all 6 peers converge on
the same entropy (count=6) via union merge after the observation tick.
14/15 ConsensusRng tests pass.
2026-04-09 17:31:36 +07:00
Nicholas Dudfield
db302a0f78 fix(rng): add selfSeedReveal to fix CSF reveal counting
The CSF never self-seeded its own reveal into pendingReveals_ because
harvestRngData only processes peer proposals, not self.  The real code
handles this in decorateMessage, but the CSF has no equivalent.

Add selfSeedReveal() called from the tick at reveal transition.
Both the real ConsensusExtensions and the CSF Extensions implement it.
The real code now has belt-and-suspenders: tick + decorateMessage.

This fixes CSF peers having N-1 reveals instead of N, which caused
every peer to compute entropy from a different subset.
2026-04-09 17:23:53 +07:00
Nicholas Dudfield
383d9ec2e7 feat(csf): add SidecarStore for sidecar set fetch/merge simulation
Add a content-addressed SidecarStore to the CSF, simulating the
InboundTransactions SHAMap fetch pipeline.  Tagged entries (commit
or reveal) are published by hash during buildCommitSet/buildEntropySet
and fetched by hash during fetchRngSetIfNeeded, with type-aware
union merge into the correct local pending set.

Also adds debug logging to CSF Extensions for entropy pipeline
troubleshooting.
2026-04-09 17:17:53 +07:00
Nicholas Dudfield
52671bfc99 test(rng): add XAHAU_RNG_TEST env var filter for focused test runs
Set XAHAU_RNG_TEST=<substring> to run only matching test methods.
e.g. XAHAU_RNG_TEST=SingleByzantine runs only that test.
2026-04-09 16:51:26 +07:00
Nicholas Dudfield
8307fca3b9 fix(rng): add entropySetHash convergence gate before accept
Add a bounded pre-accept convergence check for entropySetHash,
closing the gap where two honest validators could accept with
different reveal subsets and compute different entropy (ledger fork).

After publishing the entropy set, the gate:
1. Inspects tx-converged peer positions for conflicting entropySetHash
2. Fetches differing sets via fetchRngSetIfNeeded (union merge)
3. Rebuilds and re-publishes the local entropy set after merge
4. Waits within a bounded window (2x rngREVEAL_TIMEOUT)
5. Falls back to zero entropy if conflict persists past deadline

This follows the same pattern as the existing commitSetHash conflict
handling and exportSigSetHash convergence gate.  Union merge ensures
monotonic set growth — honest timing skew resolves quickly, and
hostile hash spam hits the hard deadline and falls back safely.

The "one bad actor shouldn't deny entropy" optimization (supermajority
vote) is deferred to a follow-up patch per codex recommendation.
2026-04-09 16:30:02 +07:00
Nicholas Dudfield
6526621c16 test(rng): add TDD tests for entropySetHash convergence gate
Three new CSF tests that document expected behavior for the
entropySetHash convergence gate (not yet implemented):

1. testRngEntropyConvergesWithPartialReveals: two groups each drop
   one peer's reveal, creating different quorate subsets.  Must not
   fork — either converge via SHAMap merge or both fall back to zero.

2. testRngEntropyFallbackOnMajorRevealLoss: one peer drops most
   reveals (below quorum locally).  Network must still agree.

3. testRngSingleByzantineCannotDenyEntropy: one Byzantine peer
   (future: forced garbage entropySetHash) should not prevent the
   other 80% from producing valid entropy.

Also adds dropRevealFrom_ test knob to CSF Peer::Extensions for
simulating asymmetric reveal delivery.
2026-04-09 16:26:30 +07:00
Nicholas Dudfield
2a9b1c9c22 fix(export): guard against empty verified sigs + add invariant asserts
- Skip addVerifiedSignature in decorateMessage when sigBuf is empty
  (sign() threw — don't mark a failed sign as "verified")
- Add XRPL_ASSERT in addVerifiedSignature and addUnverifiedSignature
  requiring non-empty signature buffers
- Add XRPL_ASSERT in checkQuorumAndSnapshot verifying that every
  entry in the verified set exists in the signatures map with a
  non-empty buffer
2026-04-09 16:02:35 +07:00
Nicholas Dudfield
54ca21b604 fix(export): verified-only quorum, SHAMap, and transactor upgrade pass
Enforce the contract: source chain finalizes an export only when it
has a quorum of cryptographically verified multisignatures.

ExportSigCollector changes:
- signatureCount() now counts verified entries only
- checkQuorumAndSnapshot() returns verified-only snapshot
- snapshot() and snapshotWithSigs() return verified-only data
- buildExportSigSet (via snapshot) publishes verified-only entries
- unverifiedSignatures() returns sigs needing verification
- upgradeSignature() promotes unverified to verified
- addStandaloneSignature() marks as verified (no consensus to check)
- All add methods now set firstSeenSeq (fixes stale cleanup bug)

Export::doApply changes:
- Upgrade pass before quorum check: deserializes the inner tx (which
  is always available as ctx_.tx), verifies any unverified sigs via
  buildMultiSigningData + verify(), upgrades them in the collector
- Then checks quorum on verified-only count
- Assembles blob from verified-only snapshot

This means:
- Unverified sigs (relay ordering) are local cache only
- They don't count toward quorum until upgraded
- SHAMap convergence operates on verified sigs only
- Destination chain verification remains defense-in-depth
2026-04-09 15:54:41 +07:00
Nicholas Dudfield
462db6004c fix(rng): replace nonexistent leafCount() with std::distance
SHAMap has no leafCount() method — it was a local variable in
SHAMap.cpp, not a public API.  Use std::distance(begin(), end())
on the SHAMap's ForwardRange iterators instead.  Cost is O(n) but
the set is bounded by UNL size (~20-35 entries).
2026-04-09 15:42:04 +07:00
Nicholas Dudfield
cfca708aae fix(rng): remove pendingReveals fallback from entropy output path
shouldZeroEntropy() and sfEntropyCount no longer fall back to
pendingReveals_.  If entropySetMap_ is null, entropy failed — the
pipeline didn't complete, and the map is the only canonical source.

pendingReveals_ is now strictly an internal staging area for the
commit/reveal pipeline.  All final entropy decisions flow through
entropySetMap_, which is the consensus-agreed set.
2026-04-09 15:40:22 +07:00
Nicholas Dudfield
5f70e5259c fix(rng): use entropySetMap for shouldZeroEntropy and sfEntropyCount
The H2 entropy fix switched the digest computation to entropySetMap_
but shouldZeroEntropy() and sfEntropyCount still used pendingReveals_.
Since pendingReveals_ can diverge from the published entropySetMap_
(late reveals mutate it after the map hash is published), two nodes
agreeing on the same entropySetHash could still build different
ttCONSENSUS_ENTROPY pseudo-transactions.

Now shouldZeroEntropy() checks entropySetMap_ leaf count when the map
is available, and sfEntropyCount uses the map's leaf count.  Both
fall back to pendingReveals_ only during pipeline stages before the
map is built.
2026-04-09 15:35:00 +07:00
Nicholas Dudfield
8697c5d821 refactor(export): explicit verified/unverified sig API in collector
Replace the ambiguous addSignature/hasSignature API with clearly
named methods that make verification state explicit:

  addVerifiedSignature()   — sig passed buildMultiSigningData + verify()
  addUnverifiedSignature() — trusted source but no multisign check yet
  addStandaloneSignature() — pubkey-only for standalone/test mode
  hasVerifiedSignature()   — only returns true for verified sigs

Unverified sigs (relay ordering fallback) are no longer treated as
verified by the cache.  When the same sig is encountered again via a
path that CAN verify (e.g. SHAMap merge after the tx arrives), the
verification runs and upgrades it to verified.

addUnverifiedSignature() won't overwrite a verified sig, preventing
downgrade.  SigEntry tracks verified validators in a separate set.
2026-04-09 15:34:13 +07:00
Nicholas Dudfield
9436e5868e fix(export): soften hard reject to best-effort verify for relay ordering
Revert the hard reject when ttEXPORT is not in the open ledger.
Under relay ordering, a node can receive a proposal with export sigs
before the ttEXPORT tx itself arrives.  Dropping these sigs loses a
valid validator contribution for the entire round with no recovery
path until terRETRY_EXPORT on the next round.

Post C1+C2, the proposal-level authentication is sufficient trust:
checkSign() verified the sender holds the private key, and sender
binding verified the embedded pubkey matches.  Store the sig and
let the multisign content be verified on the destination chain.
The collector's stale cleanup (256 ledgers) bounds retention.

When the tx IS in the open ledger (common case), the multisign sig
is still fully verified via buildMultiSigningData + verify().
2026-04-09 15:22:20 +07:00
Nicholas Dudfield
c6fa973cf6 fix(rng): compute entropy from entropySetMap instead of pendingReveals
H2: Compute final entropy from the agreed-upon entropySetMap_ SHAMap
rather than from the local pendingReveals_ in-memory map.

Previously, two nodes with different reveal subsets at timeout would
compute different entropy from their local pendingReveals_ maps,
despite both passing haveConsensus() (which only checks txSetHash).
This could cause a ledger fork.

Now the entropy computation reads directly from the entropySetMap_
whose hash was published in proposals and converged via SHAMap
fetch/merge.  Nodes that agree on entropySetHash deterministically
produce the same entropy regardless of local pendingReveals_ state.

If entropySetMap_ is null (bootstrap skip, pipeline failure), the
existing shouldZeroEntropy() fallback handles it.
2026-04-09 15:18:45 +07:00
Nicholas Dudfield
939e03714c fix(export): cap exportSignatures count per proposal
Reject proposals with more than ExportLimits::maxPendingExports (8)
export sig entries.  Honest validators attach at most one sig per
pending export, bounded by the same limit.  Prevents DoS via
proposals with millions of entries triggering lock contention on
the validator list and collector mutexes.
2026-04-09 15:13:15 +07:00
Nicholas Dudfield
969f98f57e perf(export): skip redundant sig verification via collector lookup
Add hasSignature() to ExportSigCollector — checks if a verified sig
already exists for a given (txHash, validator) pair.  Both the
proposal ingestion path and the SHAMap merge path now check this
before calling verify(), avoiding redundant ed25519 verification
when the same sig arrives via multiple paths.

No external sig cache exists in rippled, so the collector itself
serves as the verification cache: once a sig is stored (always
post-verify), subsequent encounters skip the crypto work.
2026-04-09 15:03:57 +07:00
Nicholas Dudfield
435deb0e78 fix(export): close remaining sig verification gaps
Three fixes from codex review:

1. Remove unsafe fallback in proposal ingestion path: reject export
   sigs when the ttEXPORT tx is not in the open ledger instead of
   storing them unverified.  The tx must be in the open ledger for
   validators to have signed it, so this is not a legitimate case.

2. Add full sig verification to the SHAMap merge path
   (onAcquiredSidecarSet): verify each export sig entry against
   buildMultiSigningData + verify() before storing in the collector.
   Previously this path only checked trusted() on the pubkey,
   allowing a malicious UNL validator to publish a sidecar set with
   forged sigs for other validators.

3. Close cluster mode bypass: always call checkSign() and gate export
   sig harvesting on sigValid, even when cluster() is true.  Cluster
   trust is for relay/resource charging, not for accepting on-chain
   cryptographic artifacts.
2026-04-09 14:59:20 +07:00
Nicholas Dudfield
b80352e512 fix(export): verify multisign signatures at ingestion time
C3: Cryptographically verify each export signature blob against the
inner transaction's signing data before storing in the collector.
Looks up the ttEXPORT tx from the open ledger, reconstructs the
signing data via buildMultiSigningData, and calls verify().

If the tx isn't in our open ledger yet (timing/relay), the sig is
stored unverified as a fallback — it can be verified later at the
SHAMap merge path or will be rejected at Export::doApply if invalid.

This runs on the jtPROPOSAL_t job queue thread (not the IO strand
or transactor), so the verify() cost has no impact on consensus
critical path performance.
2026-04-09 14:43:30 +07:00
Nicholas Dudfield
57c46c61fc fix(export): two-pass sender validation and atomic quorum+snapshot
C2 hardening: validate all export sig blobs before committing any,
preventing partial state if a later blob fails the sender binding
check. Also moves the trusted() check before the loop since senderPK
is constant.

H1: Add checkQuorumAndSnapshot() to ExportSigCollector that performs
the quorum threshold check and signature snapshot under a single lock
acquisition. Export::doApply now uses this instead of separate
signatureCount() + snapshotWithSigs() calls, eliminating the TOCTOU
window where concurrent overlay threads could mutate the collector
between the two operations.
2026-04-09 14:39:35 +07:00
Nicholas Dudfield
37ff13df50 fix(export): move sig harvesting after checkSign and bind pubkey to sender
C1: Move onTrustedPeerMessage() from the synchronous onMessage(TMProposeSet)
handler into checkPropose(), after checkSign() verifies the proposal's
cryptographic signature. Previously, export sigs were ingested before
signature verification, allowing any peer to inject forged sigs by
spoofing nodepubkey to a trusted validator's key.

C2: Add sender binding in onTrustedPeerMessage() — each export sig
blob's embedded validator pubkey must match the proposal sender's
nodepubkey. Reject the entire proposal's export sigs on any mismatch,
preventing a compromised validator from impersonating other validators
to single-handedly forge quorum.
2026-04-09 14:32:38 +07:00
Nicholas Dudfield
1b363b7eac fix: correct stale function name in ConsensusExtensionsTick comment 2026-04-09 14:03:07 +07:00
Nicholas Dudfield
9562b457cf chore: remove stale Peer-level RNG forwarders
All callsites now go through ce() consistently.
2026-03-23 09:57:53 +07:00
Nicholas Dudfield
724633ceb5 refactor(consensus): decouple CSF tests from xrpld.app via PeerTick.h
Move ConsensusExtensionsTick.h from xrpld/app/consensus/ to
xrpld/consensus/ — it's a pure template with no app-layer deps.
Extract Peer::Extensions::onTick() definition into test/csf/PeerTick.h
so Peer.h no longer includes from xrpld/app/.

Eliminates the test.csf > xrpld.app levelization edge.

Add --explain flag to levelization.py for tracing dependency edges.
2026-03-23 09:36:59 +07:00
Nicholas Dudfield
152d82e798 refactor(consensus): extract RNG/Export into ConsensusExtensions
Extract all Xahau consensus extension logic (RNG commit-reveal entropy
and Export validator multisign collection) from Consensus.h and
RCLCxAdaptor into a dedicated ConsensusExtensions class owned by
Application.

Implements all 10 lifecycle hooks from the design doc:
  onRoundStart, onTrustedPeerMessage, onTrustedPeerProposal,
  decoratePosition, decorateMessage, onTick, onPreBuild,
  onAcceptComplete, isSidecarSet, onAcquiredSidecarSet

Key design decisions:
- ConsensusTick<> template in ConsensusTypes.h keeps dependency
  direction clean (generic consensus defines contract, extensions
  implement it)
- extensionsTick<> shared template in ConsensusExtensionsTick.h
  ensures CSF test framework runs the same state machine as production
- ExportSigCollector ownership moved from global singleton to CE
- Sidecar acquisition routed through RCLConsensus mutex for thread
  safety (isExtensionSet + gotExtensionSet)
- RCLCxAdaptor reduced to thin ce() accessor + generic consensus
  interface methods

Files:
  new: ConsensusExtensions.h/.cpp, ConsensusExtensionsTick.h
  reduced: Consensus.h (-1060 lines), RCLConsensus.cpp (-1400 lines)
  updated: ConsensusTypes.h, Application.h/.cpp, NetworkOPs.cpp,
           PeerImp.cpp, Export.cpp, ExportSigCollector.h, Peer.h

7/7 testnet scenarios + 1463 consensus + 260 Export unit tests pass.
2026-03-19 20:23:19 +07:00
Nicholas Dudfield
0bb31ce7ce chore: add projected-source markers for consensus extension docs
Non-functional comment markers (//@@start, //@@end) for projected-source
documentation extraction.
2026-03-19 12:14:11 +07:00
Nicholas Dudfield
4cb3de0497 refactor(export): build xport wrapper via STObject then serialise to STTx
Replace the duplicated throwaway-STTx + real-STTx pattern with a
single STObject: set all fields including fee=0, serialise to compute
the fee, patch the fee, then serialise once into the final STTx.

20 lines shorter, no duplication.
2026-03-18 17:36:30 +07:00
Nicholas Dudfield
c6b315412d test(export): harden scenario tests with proper assertions
Replace warnings and logs with hard assertions across all export
scenario tests:

- export_helpers: add assert_hook_accepted(), assert_export_result(),
  assert_shadow_ticket() shared assertion helpers
- steady_state_export: assert hook ACCEPT + emitCount, ExportResult
  contents (signers, inner tx fields), shadow ticket exists
- retriable_export: assert ExportResult well-formed, shadow ticket
  created, payment not blocked
- export_degradation: assert export FAILS (not just log), no shadow
  ticket, payment still works
- export_unanimity: assert ExportResult + shadow ticket on success,
  absence on failure
2026-03-18 17:18:12 +07:00
Nicholas Dudfield
72395bec75 chore: clang-format 2026-03-18 17:12:41 +07:00
Nicholas Dudfield
8ed4d86f0f test(export): verify emitted ttEXPORT lifecycle end-to-end
testXportPayment now asserts the full emitted tx lifecycle:
- hook fires with ACCEPT, emitCount=1, returnCode=0
- sfHookEmissions present with 1 entry
- ltEMITTED_TXN in AffectedNodes
- emitted dir is not empty
- after close, emitted ttEXPORT appears in closed ledger

Also add FOCUSED_TEST env var gate for fast iteration during
development (set FOCUSED_TEST=1 to run only focused_test()).
2026-03-18 17:11:19 +07:00
Nicholas Dudfield
419fd16b9a chore: move export scenarios to export-suite.yml, add SuiteLogsWithOverrides
Move export scenario tests from suite.yml into their own
export-suite.yml file. The defaults already set CE+Export features
so individual test entries no longer need to repeat them.

Add SuiteLogsWithOverrides test utility: a Logs subclass that routes
specified journal partitions to stderr (always visible) while keeping
others on suite_.log (only on failure). Useful for debugging specific
subsystems during test development.
2026-03-18 17:09:47 +07:00
Nicholas Dudfield
a8097cd9a6 fix(export): compute emit fee before STTx construction
Mutating the fee via const_cast after STTx construction left a stale
cached getTransactionID(). When the emitted ttEXPORT was serialised
into the emitted directory and later deserialised, the round-tripped
txid differed from the original, causing tefNONDIR_EMIT in
Transactor::preclaim (the emitted dir entry was keyed with the stale
hash).

Build a throwaway STTx with fee=0 to calculate the fee size, then
construct the real STTx with the correct fee from the start.
2026-03-18 17:04:46 +07:00
Nicholas Dudfield
02a0552325 docs(export): clarify LLS semantics for retriable exports
Add comment explaining the three-state outcome table for exports
relative to LastLedgerSequence:
  ledger < LLS:  tesSUCCESS or terRETRY_EXPORT
  ledger == LLS: tesSUCCESS or tecEXPORT_EXPIRED
  ledger > LLS:  tefMAX_LEDGER (never reaches doApply)
2026-03-18 14:43:50 +07:00
Nicholas Dudfield
3698193b0a chore: clang-format 2026-03-18 14:21:00 +07:00
Nicholas Dudfield
de43ca2385 refactor(export): store multisigned tx as sfExportedTxn object in metadata
Use sfExportedTxn (OBJECT) instead of sfBlob (VL) for the multisigned
transaction in sfExportResult metadata. This renders as readable JSON
with all fields visible (Account, Signers, etc.) instead of an opaque
hex blob.

Also compute the tx hash directly via getHash(HashPrefix::transactionID)
on the STObject instead of serializing/deserializing through STTx.
2026-03-18 14:16:25 +07:00
Nicholas Dudfield
8c747a1916 feat(export): produce multisigned blob in export metadata
Export::doApply now builds the fully multisigned inner tx and stores
it as sfBlob in sfExportResult metadata. In standalone mode, the node
signs directly with its own validator keys (no consensus needed).

Key changes:
- ExportSigCollector stores actual multisign signatures, not just pubkeys
- RCLConsensus proposal attachment computes real multisign sigs over inner tx
- PeerImp harvests variable-length sig entries from proposals
- Export::doApply assembles Signers array, builds multisigned blob first,
  then uses its hash for the shadow ticket (getTransactionID includes all
  fields including Signers)
- Import skips OperationLimit and signing key checks for export callback
  path (sfTicketSequence present) — shadow ticket proves the relationship
- Full Export→XRPL→Import round-trip test: export on Xahau, submit
  multisigned blob to XRPL (with matching SignerList), build XPOP,
  import back, verify shadow ticket consumed
2026-03-18 13:54:29 +07:00
Nicholas Dudfield
cea110f29a feat: add XPOP test helper and XPOP_test suite
- src/test/jtx/xpop.h: test utilities for building XPOPs from Env ledgers
  (TestValidator, TestVLPublisher, TestXPOPContext, buildTestXPOP)
- src/test/app/XPOP_test.cpp: 4 tests (173 assertions)
  - LedgerProof construction from payment tx
  - XPOP v1 JSON structure verification
  - Merkle proof verification for multi-tx ledgers
  - Full Import round-trip: source Env payment → XPOP → dest Env Import → tesSUCCESS
2026-03-18 11:59:34 +07:00
Nicholas Dudfield
3ca056a94b feat: add XPOP test helper and XPOP_test suite
- src/test/jtx/xpop.h: test utilities for building XPOPs from Env ledgers
  (TestValidator, TestVLPublisher, buildTestXPOP)
- src/test/app/XPOP_test.cpp: 3 tests (133 assertions)
  - LedgerProof construction from payment tx
  - XPOP v1 JSON structure verification
  - Merkle proof verification for multi-tx ledgers
2026-03-18 11:41:16 +07:00
Nicholas Dudfield
705d8400db feat: add proof module for XPOP construction
New module at src/xrpld/app/proof/ with layered design:

- ProofBuilder: SHAMap merkle proof extraction (extractProofV1)
  Binary trie proof with 16-way branching, root hash verification.
- LedgerProof: proof-of-ledger (header fields + tx blob/meta + merkle proof)
  buildLedgerProof() extracts everything from a closed Ledger.
- XPOPv1: JSON format builder matching Import.cpp expectations
  buildXPOPv1() creates complete XPOP with validation signatures.

Designed for versioning: v1 JSON (current Import compat), future v2
binary proofs and account state proofs layer on the same core.
2026-03-18 11:29:29 +07:00
Nicholas Dudfield
655b751698 chore: regenerate hook/sfcodes.h for new sfields
Adds sfCancelTicketSequence (UINT32 101) and sfExportResult (OBJECT 98).
2026-03-18 11:06:04 +07:00
Nicholas Dudfield
f324081277 fix(import): verify XPOP tx hash matches shadow ticket
Shadow tickets store the exported transaction hash. Import now verifies
the XPOP's inner tx hash matches, preventing use of a different XPOP
with the same TicketSequence.
2026-03-18 10:52:19 +07:00
Nicholas Dudfield
24a284180a fix(export): address review findings from code audit
- Validate incoming export sig pubkeys against trusted UNL (PeerImp)
- Fix handleAcquiredRngSet misclassifying export sets when local map is null
- Add stale-entry cleanup (cleanupStale with 256-ledger timeout)
- Gate sig attachment on featureExport amendment (RCLConsensus)
- Fail with tefINTERNAL if ExportResult metadata can't be written
- Make export and cancel mutually exclusive (reject both in preflight)
- Remove dead ExportLimits.h include
2026-03-18 10:30:41 +07:00
Nicholas Dudfield
6f003cc983 feat(export): add SHAMap-based export sig convergence
Add deterministic export sig set convergence using CE infrastructure:
- exportSigSetHash in ExtendedPosition (flag 0x10)
- buildExportSigSet() builds SHAMap from collected sigs
- hasPendingExportSigs() gates convergence in phaseEstablish
- Parallel convergence gate alongside RNG sub-states
- handleAcquiredRngSet extended to merge export sig sets
- Tiered quorum: 80% with CE, 100% unanimity without CE

Scenario tests for all 3 feature combos:
- CE+Export, 1 node down: 4/5=80% → tesSUCCESS
- Export only, all up: 5/5=100% → tesSUCCESS
- Export only, 1 node down: 4/5≠100% → tecEXPORT_EXPIRED
2026-03-18 10:17:28 +07:00
Nicholas Dudfield
3a58020388 fix(export): tiered quorum threshold based on CE availability
Without CE: require unanimity (100% UNL) to avoid non-deterministic
quorum disagreement. With CE: use standard 80% quorum via
calculateQuorumThreshold (SHAMap convergence will ensure agreement).
Standalone/unit tests: require 1 sig.
2026-03-18 09:46:36 +07:00
Nicholas Dudfield
829441b52e fix(export): deduplicate export sigs across proposals within a round 2026-03-18 09:32:27 +07:00
Nicholas Dudfield
3a055663cc chore: add export-sig-attachment marker for projected-source 2026-03-18 09:26:33 +07:00
Nicholas Dudfield
985a194bdc feat(export): migrate to retriable ttEXPORT with proposal-based sigs
Replace the old ltEXPORTED_TXN + ttEXPORT_FINALIZE (validation-based
sigs, TxQ injection) approach with a retriable ttEXPORT that collects
validator signatures via TMProposeSet during consensus.

Added:
- terRETRY_EXPORT: keeps tx in retry set across ledger boundaries
- tecEXPORT_EXPIRED (200): LLS expiry frees sequence cleanly
- sfExportResult (OBJECT 98): signed export result in tx metadata
- ExportSigCollector: minimal thread-safe sig tracker
- Proposal sig attachment (RCLConsensus) + harvesting (PeerImp)
- exportSignatures field in TMProposeSet (ripple.proto)
- Metadata plumbing (TxMeta, ApplyViewImpl, ApplyStateTable)
- Hook xport() now emits ttEXPORT via normal emitted txn path

Removed:
- ttEXPORT_FINALIZE (type 90) pseudo-tx and Change::applyExportFinalize
- ltEXPORTED_TXN ledger entry and exportedDir/exportedTxn keylets
- ExportSignatureCollector (replaced by ExportSigCollector)
- TxQ export injection (quorum check + rawTxInsert)
- Validation-based export signing in RCLConsensus
- Application::getExportSignatureCollector

Verified on 5-node testnet: golden path (same-ledger finalization with
ExportResult in metadata), degraded path (tecEXPORT_EXPIRED on sub-quorum),
and hook xport() path (emitted ttEXPORT with shadow ticket creation).
2026-03-18 09:23:52 +07:00
Nicholas Dudfield
869f366d8a feat(export): add sfCancelTicketSequence for shadow ticket cancellation
Add sfCancelTicketSequence (UINT32 field 101) to ttEXPORT, allowing
users to cancel shadow tickets via a transaction. Both sfExportedTxn
and sfCancelTicketSequence are optional — at least one must be present.
This allows export, cancel, or both in a single transaction.

Test: create shadow ticket via export, cancel via sfCancelTicketSequence,
verify ticket is gone and owner reserve is freed.
2026-03-17 14:13:57 +07:00
Nicholas Dudfield
03936aa928 fix(export): require TicketSequence on exported transactions
Exported transactions must use TicketSequence (with Sequence=0)
because a bounced tx on the destination chain would jam sequential
sequence numbers. This is enforced in both the hook xport() API
and the Export transactor via ExportLedgerOps::validateTicketSequence().

Adds test: ttEXPORT rejects export without TicketSequence.
Updates existing test hooks to include TicketSequence in exported txns.
2026-03-17 12:30:55 +07:00
Nicholas Dudfield
6d180307ad feat(export): split Import into B2M and export callback paths
When the inner XPOP transaction has sfTicketSequence, Import now
takes the export callback path: consume the shadow ticket via
ExportLedgerOps::cancelShadowTicket() and return. No B2M balance
crediting, no account creation. Hooks fire normally and can inspect
the result via xpop_slot().

The B2M path is unchanged for non-ticket imports.

Also migrates the shadow ticket check in preclaim from the old
hookState namespace approach to keylet::shadowTicket().

Removes the unused shadowTicketNamespace constant.
2026-03-17 12:23:27 +07:00
Nicholas Dudfield
f2ca499c97 feat(export): add ltSHADOW_TICKET and xport_cancel hook API
Introduce shadow tickets for export replay protection:

- ltSHADOW_TICKET ledger entry: account-owned, keyed by
  account + ticket sequence. Fields: sfAccount, sfTicketSequence,
  sfTransactionHash, sfLedgerSequence, sfOwnerNode.

- ExportLedgerOps::createShadowTicket(): creates shadow ticket
  when exported tx has sfTicketSequence. Charges owner reserve.
  Called from both hook xport() path and Export transactor.

- ExportLedgerOps::cancelShadowTicket(): deletes shadow ticket,
  frees reserve. Used by xport_cancel hook API.

- xport_cancel(ticket_seq) hook API: allows hooks to cancel
  shadow tickets for exports that will never get a callback.

- InvariantCheck: add ltSHADOW_TICKET to valid entry types.

- Test: verify shadow ticket creation with correct fields and
  owner count bump via ttEXPORT with TicketSequence.
2026-03-17 12:13:41 +07:00
Nicholas Dudfield
bd68364f25 feat(export): add ttEXPORT user transaction and extract ExportLedgerOps
Rename the existing ttEXPORT pseudo-tx to ttEXPORT_FINALIZE (type 90)
to make room for a user-submittable ttEXPORT (type 91).

ttEXPORT allows non-hook users to submit export transactions directly,
creating the same ltEXPORTED_TXN entries that the hook xport() API
creates inline.

Extract shared logic into ExportLedgerOps.h:
- createExportedTxn(): creates ltEXPORTED_TXN, enforces directory cap
- validateNetworkID(): self-target and unconfigured guards
- validateExportAccount(): account ownership check

Both the hook API (HookAPI.cpp) and the Export transactor now call
into ExportLedgerOps, eliminating duplicated validation and ledger
mutation code.
2026-03-17 11:43:45 +07:00
Nicholas Dudfield
42a6407815 fix(export): reject exports when NETWORK_ID is unconfigured
If the node's NETWORK_ID is 0 (default/unconfigured) and the exported
transaction has no sfNetworkID field, we can't distinguish self-targeting
from legitimate cross-chain export. Reject to be safe.

Also adds exportTestConfig() helper and test for the unconfigured case.
2026-03-17 07:41:36 +07:00
Nicholas Dudfield
a387c853ab test(export): add NetworkID self-target guard test
Verify that xport() rejects exported transactions whose sfNetworkID
matches the local network. The hook builds a Payment with
NetworkID=21337 (matching the test env), and the guard correctly
returns EXPORT_FAILURE causing tecHOOK_REJECTED.

Also fix log level for the guard rejection to warn (not trace).
2026-03-17 07:31:35 +07:00
Nicholas Dudfield
9311e567d3 fix(export): reject exports targeting the local network
Explicitly forbid exported transactions whose sfNetworkID matches the
local network's ID. An exported txn re-executing on its origin chain
could cause exploits or logic issues.

The check is intentionally non-mandatory: XRPL mainnet (the primary
export target) doesn't use NetworkID, so absent = allowed.
2026-03-16 17:30:14 +07:00
Nicholas Dudfield
c26582bdf9 fix(export): move ExportLimits.h to xrpl/protocol
Both xrpld.overlay and xrpl.hook depend on xrpl.protocol, so placing
the header there avoids introducing a new xrpld.overlay > xrpl.hook
levelization dependency.
2026-03-16 15:58:58 +07:00
Nicholas Dudfield
417b999c7f chore(levelization): add xrpld.overlay > xrpl.hook dependency
New include of ExportLimits.h in PeerImp.cpp introduces this
module dependency (from feat(export) commit 89274b538).
2026-03-16 15:47:45 +07:00
Nicholas Dudfield
0205be4500 chore: add testnet scenario scripts
Entropy and export scenario scripts for local testnet validation.
2026-03-16 15:17:32 +07:00
Nicholas Dudfield
89274b5387 feat(export): wip export system limits
- max_export per hook: 4 → 2
- maxPendingExports: cap exported directory at 8 entries (tecDIR_FULL)
- clamp inbound signature processing in PeerImp to directory cap

The directory cap is the root DoS constraint: each pending export
requires every validator to sign and broadcast every round. Inbound
processing and signing throughput are transitively bounded by it.
2026-03-16 13:59:06 +07:00
Nicholas Dudfield
b65d9faf12 docs(consensus): add MERGE NOTE comments for upstream 86ef16dbeb resolution
Extends merge guidance to cover the empty-disputes bugfix (not yet in
sync-2.5.0): !disputes.empty() guard, stalled() j/clog params,
"should be rare" doc wording, debug→warn promotion, and auto-merged
testDisputes duplicate warning.
2026-03-11 10:45:04 +07:00
Nicholas Dudfield
aa1a7e5320 docs(consensus): add MERGE NOTE comments for sync-2.5.0 resolution
Inline comments at all 6 conflict points guiding the maintainer
through the expected merge conflicts when sync-2.5.0 lands:
ledgerMAX_CONSENSUS const, bootstrap params, calculateQuorumThreshold,
effectiveParms+stalled in haveConsensus, DisputedTx::stalled() j/clog
params, and testDisputes placement.
2026-03-11 10:09:02 +07:00
Nicholas Dudfield
6f0f17aad9 fix(consensus): cherry-pick upstream 86ef16dbeb empty-disputes stall fix
Cherry-pick of ripple/rippled@86ef16dbeb ("Fix: Don't flag consensus
as stalled prematurely (#5627)"). Not yet in any xahau sync branch.

Fixes false stall detection when there are no disputed transactions:
std::ranges::all_of on an empty set is vacuously true, so consensus
was incorrectly flagged as stalled. Adds !result_->disputes.empty()
guard.

Also adds diagnostic logging to DisputedTx::stalled() and the
stall detection path in haveConsensus(), and promotes the
"Need validated ledger" log from debug to warn.
2026-03-11 09:47:11 +07:00
Nicholas Dudfield
407bfa1467 feat(consensus): cherry-pick dd085e5d8 (upstream d22a5057b9) anti-stall mechanisms
Cherry-pick of ripple/rippled@d22a5057b9 / xahau dd085e5d8 ("Prevent
consensus from getting stuck in the establish phase (#5277)"), resolved
against our RNG pipeline and bootstrap fast-start changes.

Upstream adds three layered anti-stall mechanisms:
- Stateful per-dispute avalanche state machine (init→mid→late→stuck)
- Stall detection: declares consensus when all disputes individually settled
- Hard expiration: clamp(10× prev round, 15s, 120s) wall-clock safety net

Conflict resolution:
- ConsensusParms.h: kept both avalanche state machine (const members,
  avMIN_ROUNDS, avSTALLED_ROUNDS, getNeededWeight) and our bootstrap
  params (bootstrapRoundTimeSeed, bootstrapStableRoundsRequired).
  ledgerMAX_CONSENSUS left non-const for bootstrap override.
- Consensus.h: pass both stalled flag and effectiveParms to checkConsensus.
  Stall check uses original parms, bootstrap override only affects max
  consensus timeout.
- Consensus_test.cpp: kept all 12 RNG tests and new testDisputes test.
2026-03-11 09:36:38 +07:00
Nicholas Dudfield
f0dfcf6b81 fix(consensus): cap bootstrap ledgerMAX_CONSENSUS at 5s
Use an explicit 5s cap instead of dividing the default 15s.
5s is the sweet spot: long enough for peers to exchange proposals
and converge naturally, short enough to avoid wasted time.
Shorter values (e.g. 3.75s) cause nodes to hit reachedMax before
peers converge, cascading into slower subsequent rounds.
2026-03-10 14:30:20 +07:00
Nicholas Dudfield
503d2ebf98 feat(consensus): add XAHAUD_BOOTSTRAP_FAST_START for faster cold-start
Seed prevRoundTime_ to 3s instead of 15s on first round, override
idle interval to bypass closeTimeResolution (10-30s on early ledgers),
and halve ledgerMAX_CONSENSUS during bootstrap. Auto-disables after 3
consecutive rounds with UNL quorum participation.

Cuts 5-node testnet cold-start from ~28s to ~13s.

Also adds projected-source markers to TxQ, NetworkOPs, and Submit for
the transaction-submission documentation template.
2026-03-10 12:52:56 +07:00
Nicholas Dudfield
e52bc51384 refactor(consensus): extract shouldZeroEntropy() for quorum-gated entropy
Consolidate the repeated entropy fallback condition
(entropyFailed || no reveals || sub-quorum reveals) into a single
method. Fixes EntropyCount field reporting non-zero when the digest
was correctly zeroed due to sub-quorum reveals.
2026-03-10 08:42:10 +07:00
Nicholas Dudfield
91860db578 fix(consensus): require quorum-many reveals for non-zero entropy
Sub-quorum reveals (e.g. 3/4 threshold) were producing real entropy,
allowing a minority of validators to disproportionately influence the
output. Both injectEntropyPseudoTx and buildExplicitFinalProposalTxSet
now fall back to zero entropy when reveals < quorumThreshold().
2026-03-09 17:13:02 +07:00
Nicholas Dudfield
0b317a8e7a fix(consensus): skip rng pipeline during bootstrap convergence
When prevProposers < quorumThreshold, the network is still converging
and RNG can only produce zero entropy. Skip the commit/reveal pipeline
to avoid PIPELINE_TIMEOUT and conflict-wait delays that compound across
staggered startup rounds.
2026-03-09 16:27:36 +07:00
Nicholas Dudfield
dbd230b695 feat(rpc): add rng state to consensus_info response 2026-03-09 16:05:42 +07:00
Nicholas Dudfield
30cefcba85 chore: clang-format alignment fixes 2026-03-06 18:39:37 +07:00
Nicholas Dudfield
94edb5759d fix(export): gate pre-quorum on verified signature count
hasQuorum() and getExportsWithQuorum() were using raw signerMap.size()
which includes unverified signatures. TxQ could inject a ttEXPORT
pseudo-tx that then fails the stricter verified-signature check in
Change::applyExport(). Use verifiedSignatureCount() instead so TxQ
only injects when cryptographically verified quorum is actually met.

Also add cmake plumbing for enhanced logging: link date::date-tz when
available and enable BEAST_ENHANCED_LOGGING for Debug builds.
2026-03-06 18:38:54 +07:00
Nicholas Dudfield
ce57b6a3a0 fix(consensus): fix rng quorum to active UNL and demote rng log noise
Quorum fix:
- Rename expectedProposers_ → likelyParticipants_ to clarify role
- Fix commit quorum to 80% of active UNL snapshot (not shrinkable by
  recent proposer count, which was allowing 2/3 to pass as quorum)
- hasQuorumOfCommits() now uses simple threshold check only
- Add CSF test: persistent loss does not shrink quorum

Log level cleanup:
- Demote ~30 RNG/STALLDIAG per-peer/per-tick lines from info/debug to
  debug/trace across Consensus.h and RCLConsensus.cpp
- Principle: per-peer/per-tick → trace; state transitions → debug;
  milestones → info
- Reduces testnet log volume by ~93%
2026-03-06 18:36:43 +07:00
Nicholas Dudfield
fca5cad470 fix(log): catch tzdb exception in local-time formatter
date::current_zone() can throw if the timezone database is unavailable
or misconfigured (e.g. minimal container images). Fall back to UTC
formatting so enhanced logging does not make startup fatal.
2026-03-06 18:36:22 +07:00
Nicholas Dudfield
bb77c2090b consensus: gate RNG substates by amendment state 2026-03-06 14:09:06 +07:00
Nicholas Dudfield
90a94294e4 protocol: split export and consensus entropy amendments 2026-03-06 14:08:15 +07:00
Nicholas Dudfield
c2209b4472 docs(consensus): explain why seq=3 may mirror seq=2
Clarify inline that seq=3 publish can carry unchanged txSetHash while still providing extra entropySetHash delivery/fetch opportunity under packet loss or reordering.
2026-03-03 17:41:55 +07:00
Nicholas Dudfield
8fcb2ed336 docs(consensus): annotate implicit entropy injection rationale
Document why synthetic entropy pseudo-tx is canonically injected at onAccept/buildLCL and why explicit-final proposal remains experimental/default-off.
2026-03-03 17:31:04 +07:00
Nicholas Dudfield
fd1567d1ba consensus: document explicit-final tradeoffs and tighten rng diagnostics
Keep explicit final proposal as an opt-in experimental path with implicit mode as default.

Add inline rationale/TBD notes, extend stall diagnostics, and cover runtime-config + CSF txn-path behavior with tests.
2026-03-03 17:08:38 +07:00
Nicholas Dudfield
d32f34d3bf build(levelization): add fast python generator with CI parity check
Add Builds/levelization/levelization.py for fast local iteration and semantic comparison against canonical shell output via --compare-to.

Keep Builds/levelization/levelization.sh as canonical path, and update levelization workflow to fail if python output diverges from shell-generated results.

Also harden interactive-shell detection in levelization.sh for portability and document local usage in README.
2026-03-03 10:17:46 +07:00
Nicholas Dudfield
c491c5c82f refactor(consensus): reduce header fanout for faster iteration
Decouple RCLConsensus.h from Consensus.h by forward-declaring Consensus and storing Consensus<Adaptor> behind std::unique_ptr, moving thin wrappers out-of-line into RCLConsensus.cpp.

Also remove direct RCLConsensus.h include from NetworkOPs.h (forward declare), and add explicit includes in DatagramMonitor.h and ServerDefinitions.cpp to replace transitive dependencies.

Keep RNG fast-path behavior unchanged in Consensus.h; build and ripple.consensus.Consensus remain green.
2026-03-03 09:49:59 +07:00
Nicholas Dudfield
74817765ae consensus: restore full entropySet broadcast and document fanout tradeoffs 2026-03-03 08:32:09 +07:00
Nicholas Dudfield
fc23fa8535 consensus: reduce entropy-set proposal fanout
Keep entropy-set recovery path but elect a deterministic single broadcaster (lowest NodeID among tx-converged participants) instead of every proposer broadcasting entropySetHash.

This lowers steady-state proposal chatter while preserving liveness for lagging peers that need entropy-set fetch/merge.
2026-03-03 07:42:27 +07:00
Nicholas Dudfield
34c0f17b6b runtimeconfig: add rng_claim_drop_pct testing control
Expose rng_claim_drop_pct in runtime config (RPC + env) as a clamped 0-100 percentage used by RNG claim-drop testing.

Include RuntimeConfig RPC tests for round-trip and clamping behavior.
2026-03-03 07:20:32 +07:00
Nicholas Dudfield
765ad6a278 consensus: harden RNG set convergence under dropped claims
Track active RNG round sequence for fetched set validation so lagging observers can merge current-round commit sets instead of rejecting them as closed+1 out-of-round.

Refresh/re-publish commitSetHash after fetch-merge conflicts and publish entropySetHash in ConvergingReveal so peers can recover reveal sets.

Add inline tradeoff notes: extra proposal traffic is accepted to preserve consensus safety/liveness under packet loss or drop injection.
2026-03-03 07:14:46 +07:00
Nicholas Dudfield
f623ca89b9 chore(levelization): update loops result after format/merge 2026-03-02 17:01:47 +07:00
Nicholas Dudfield
e4865f09f9 Merge remote-tracking branch 'origin/dev' into feature-export-rng 2026-03-02 16:59:57 +07:00
Nicholas Dudfield
4c182e4738 consensus: guard commit-set conflicts and extend RNG CSF coverage 2026-03-02 16:59:41 +07:00
Nicholas Dudfield
d0c869c8a6 fix(consensus): tighten RNG acquired-set validation and observer quorum
Harden acquired RNG merge paths with strict entry typing, trusted key/node binding, round-sequence gating, reveal-to-commit linkage checks, and stale reveal/proof invalidation on commitment changes.

Adjust proposer expectation logic so non-proposing observers are not counted as expected committers, and add a CSF regression test covering observer self-commit exclusion.
2026-03-02 16:36:03 +07:00
Nicholas Dudfield
cac5efcd3c fix(consensus): harden acquired RNG set ingestion
Reject mixed commit/reveal maps, enforce per-entry type checks, bind node identity to trusted validator keys, and gate acquired entries to the active round.

Also verify acquired reveals against stored commitments and clear stale reveal/proof state when commitments change.
2026-03-02 16:18:55 +07:00
Nicholas Dudfield
514e60b71c fix(export): age and validate stashed tx data for signature checks 2026-03-02 15:54:53 +07:00
Nicholas Dudfield
2a34e32e05 fix(export): harden addSignature validation and verification 2026-03-02 15:46:07 +07:00
Nicholas Dudfield
b969024a25 fix(export): update duplicates and prevent phantom pending entries 2026-03-02 15:39:43 +07:00
Nicholas Dudfield
f30b9a4c3a fix(export): avoid stale-age poisoning from rejected signatures 2026-03-02 15:35:36 +07:00
Nicholas Dudfield
0e019fec4e fix(export): prune invalid early signatures when stashing tx data 2026-03-02 15:29:42 +07:00
Nicholas Dudfield
7e0c72fd22 fix(export): run stale signature cleanup during TxQ processing 2026-03-02 15:27:30 +07:00
Nicholas Dudfield
07d741cdd7 fix(export): harden collector duplicate and identity handling 2026-03-02 15:25:19 +07:00
Nicholas Dudfield
b99c38c09d test(consensus): add asymmetric delay reveal-timeout scenario 2026-03-02 15:11:01 +07:00
Nicholas Dudfield
64e50209ff fix(consensus): invalidate stale reveals when commitment changes
Add RNG regression tests for non-UNL data, reveal-without-commit, invalid reveal, and commitment-change stale-reveal handling in CSF consensus tests.
2026-03-02 15:04:35 +07:00
Nicholas Dudfield
b1ce2103ad test(csf): add RNG consensus hooks and edge-case tests 2026-03-02 14:28:34 +07:00
Nicholas Dudfield
50c4cf1df3 refactor: move xport_reserve and xport logic into HookAPI class
Move core xport_reserve and xport implementations from applyHook.cpp
DEFINE_HOOK_FUNCTION wrappers into the decoupled HookAPI class, following
the same pattern used for etxn_reserve and emit.
2026-03-02 14:10:03 +07:00
Nicholas Dudfield
6fc14f398d feat(rpc): add disconnect by ip:port [TESTNET] 2026-03-02 12:06:00 +07:00
Nicholas Dudfield
592a8600c7 fix: add missing <mutex> include for GCC compatibility 2026-02-27 16:42:10 +07:00
Nicholas Dudfield
e71768700a chore: update levelization after RuntimeConfig overlay dependency 2026-02-27 16:40:00 +07:00
Nicholas Dudfield
e598e405bd fix: harden RuntimeConfig validation and add startup diagnostics
- Error on unknown message_types instead of silently widening scope
- Make messageCategories optional so per-peer can override global filter
  to "all categories" (nullopt=inherit, empty set=explicitly all)
- Clamp send_drop_pct to 0-100% range
- Add STARTDIAG: logging for consensus startup diagnostics
- Add 3 test cases (11 total, 58 assertions)
2026-02-27 13:38:26 +07:00
Nicholas Dudfield
8af3ce2f5b fix: allow extended proposals in PeerImp and add message type filtering
- Fix convergence regression caused by 2.4.0 merge: replace
  stringIsUint256Sized(currenttxhash) with size() < uint256::size()
  to accept extended proposals (>32 bytes) containing RNG fields
- Add message_types filter to RuntimeConfig for targeting specific
  protocol message categories (proposal, validation, transaction, etc.)
- Add appliesTo() method and messageCategories set to ConfigVals
- Add category name mapping helpers in RPC handler
- Add 2 test cases for message type filtering (8 total)
2026-02-27 13:10:49 +07:00
Nicholas Dudfield
b67cb78b97 feat: add RuntimeConfig service with overlay artificial delays
Add a generic RuntimeConfig service for runtime-configurable parameters,
initially supporting artificial send delays and packet drops for testing
consensus behavior on local testnets.

- RuntimeConfig class with atomic fast-path gate (zero cost when inactive)
- Per-peer targeting via "*" (global) and "ip:port" keys with inheritance
- Pre-merged caching at write time for single-lookup read path
- Admin RPC handler `runtime_config` (set/clear/clear_all/get)
- Env var support: XAHAU_RUNTIME_CONFIG (JSON) or XAHAU_SEND_* vars
- PeerImp::send() integration with delay timer and probabilistic drops
- RPC handler test covering all operations and merge behavior
2026-02-27 09:46:19 +07:00
Nicholas Dudfield
0b1b82282e fix: reject single-signed exports and fix test hook SigningPubKey
Add single-sign rejection check in Change::applyExport() matching
rippled's multi-sign validation: SigningPubKey must be present but
empty, TxnSignature must not be present.

Fix Export_test.cpp hook to encode an empty VL blob for SigningPubKey
instead of 33 zero bytes (AI slop from export-uvtxn branch).
2026-02-25 14:55:55 +07:00
Nicholas Dudfield
d4c5a7e8ab fix: update copyright headers to 2026 XRPL Labs for new files 2026-02-25 14:38:40 +07:00
Nicholas Dudfield
82837864fa fix: extract calculateQuorumThreshold() and revert Import.cpp quorum change
Extract duplicated (n * 80 + 99) / 100 ceiling quorum formula into shared
calculateQuorumThreshold() in ConsensusParms.h, matching the standard
ValidatorList::calculateQuorum(). Used by ExportSignatureCollector,
Change.cpp, and RCLConsensus.cpp.

Revert Import.cpp quorum from ceiling back to original truncating formula
(totalValidatorCount * 0.8) since Import handles XPOP imports, not the
new Export feature. Added TODO for future upgrade.
2026-02-25 14:22:43 +07:00
Nicholas Dudfield
e1caee6459 fix: regenerate hook/sfcodes.h after sfHookExportCount field code change 2026-02-25 13:40:25 +07:00
Nicholas Dudfield
3206b4a4e1 fix: address @tequdev review comments (cbak, render, Change.cpp, markers)
- Remove unnecessary cbak() stubs from ConsensusEntropy test hooks and
  recompile WASM (cbak is optional per Guard.h validator)
- Restore RCLCxPeerPos::render() lost during merge (delegates to
  ConsensusProposal::render())
- Fix Change.cpp applyAmendment() fixInnerObjTemplate2 reversion:
  use STObject::makeInnerObject() and bracket assignment (fbcff932)
- Restore txq-export-quorum-check documentation marker in TxQ.cpp
2026-02-25 13:25:41 +07:00
Nicholas Dudfield
0c2e09050e fix: move sfHookExportCount to Xahau-reserved field code range
sfHookExportCount was at field code 23, colliding with the mainline
rippled UINT16 range. Move to 98 in the Xahau-reserved range.

Also reorder sfExportedTxn (90) before sfAmountEntry (91) for
consistency.
2026-02-25 12:05:28 +07:00
Nicholas Dudfield
83922d5c20 fix: restore XRPL_ASSERT and UNREACHABLE macros reverted during merge
The merge with origin/dev accidentally reverted 19 XRPL_ASSERT() calls
back to plain assert() and 1 UNREACHABLE() back to assert(0). These
macros provide descriptive diagnostic messages on failure and are the
project convention since the rippled 2.4.0 migration.

Files fixed:
- Consensus.h: 9 XRPL_ASSERT reversions
- RCLConsensus.cpp: 5 XRPL_ASSERT reversions
- BuildLedger.cpp: 3 XRPL_ASSERT reversions
- Change.cpp: 1 UNREACHABLE + 1 XRPL_ASSERT reversion
2026-02-25 11:55:07 +07:00
Nicholas Dudfield
6bae42ff01 fix: restore CLOG consensus logging removed during merge
The merge with origin/dev accidentally stripped all CLOG diagnostic
statements from the consensus code path. This restores the clog
parameter to internal Consensus.h functions (checkLedger, phaseOpen,
closeLedger, updateOurPositions, handleWrongLedger, leaveConsensus,
createDisputes) and re-adds all 46 CLOG statements that provide
per-round diagnostic detail for phase transitions, convergence
progress, dispute tracking, and pause decisions.

Also restores the origin/dev structure of Consensus.cpp by removing
the anonymous-namespace wrapper and forwarding overloads that were
merge artifacts.
2026-02-25 11:53:27 +07:00
Nicholas Dudfield
35e86d926e fix(consensus-entropy): align pseudo tx/sle formats and hook handling
Add missing ttEXPORT/ttCONSENSUS_ENTROPY pseudo transaction fields required by runtime logic and ensure corresponding ledger entries carry threading/sequence fields.

Handle ttEXPORT and ttCONSENSUS_ENTROPY in hook stakeholder routing to avoid Unknown transaction type assertion during ledger close.
2026-02-24 18:44:00 +07:00
Nicholas Dudfield
9c4ee9315d chore: update levelization results after merge 2026-02-24 15:56:36 +07:00
Nicholas Dudfield
0f17cf02aa chore: clang-format 2026-02-24 15:51:55 +07:00
Nicholas Dudfield
7753dc3cbe fix(invariants): exempt export and entropy pseudo-ledger entries
Handle ltEXPORTED_TXN and ltCONSENSUS_ENTROPY in LedgerEntryTypesMatch so creating/destroying these pseudo-ledger entries does not trigger XRP balance invariant violations.
2026-02-24 15:48:32 +07:00
Nicholas Dudfield
cc7f3c59ae merge: port export-rng onto post-2.4.0 tree restructure
Resolve the origin/dev post-2.4.0 sync conflicts across the xrpld path migration and macro-based protocol registration changes.

Re-apply export/RNG integration on top of the new structure, including consensus/build plumbing, tx/apply paths, peer ingest, and tests.

Regenerate hook headers and restore a green build via x-run-tests (Export_test build path).
2026-02-24 15:32:45 +07:00
Nicholas Dudfield
e8c1b25ab4 fix: harden export signature trust model and quorum verification
- unify validator trust checks into isExportValidatorTrusted() preferring
  UNLReport with local trust fallback
- add last-line-of-defense sig verification in Change::applyExport()
  requiring 80% (ceil) verified trusted UNL signatures
- filter untrusted export signatures at ingestion in PeerImp
- fix Import quorum from floor(n*0.8) to ceil(n*80%) matching export side
2026-02-24 12:52:23 +07:00
Nicholas Dudfield
b9dd854595 refactor: unify featureExport + featureConsensusEntropy into featureExportRNG
Single amendment flag for both features. numFeatures 94 → 93.
Exclude featureExportRNG from default test set to prevent
ConsensusEntropy pseudo-tx injection from breaking existing tests.
2026-02-21 17:46:46 +07:00
Nicholas Dudfield
3bead8dcb6 merge: integrate origin/export-uvtxn into consensus-phase-entropy
Resolve 14 conflicts keeping both sides. Renumber TOO_LITTLE_ENTROPY
from -46 to -48 to avoid collision with export error codes.
Fix sfHookExportCount to soeOPTIONAL in InnerObjectFormats (only set
when featureExportRNG is enabled).
2026-02-21 17:41:37 +07:00
Nicholas Dudfield
908a78a1d9 fix: regenerate hook/extern.h to match hook_api.macro ordering 2026-02-20 10:11:37 +07:00
Nicholas Dudfield
a9e3dc41d4 fix: add featureExport stub for standalone guard_checker build 2026-02-20 10:05:58 +07:00
Nicholas Dudfield
02990eb4ee Merge remote-tracking branch 'origin/dev' into consensus-phase-entropy
# Conflicts:
#	hook/extern.h
#	src/ripple/app/hook/hook_api.macro
#	src/ripple/protocol/Feature.h
#	src/ripple/protocol/impl/Feature.cpp
2026-02-19 10:57:40 +07:00
Nicholas Dudfield
ce76632322 Merge remote-tracking branch 'origin/dev' into export-uvtxn
# Conflicts:
#	Builds/CMake/RippledCore.cmake
#	hook/extern.h
#	src/ripple/app/hook/Guard.h
#	src/ripple/app/hook/applyHook.h
#	src/ripple/app/hook/guard_checker.cpp
#	src/ripple/app/tx/impl/Change.cpp
#	src/ripple/app/tx/impl/SetHook.cpp
#	src/ripple/protocol/Feature.h
#	src/ripple/protocol/impl/Feature.cpp
#	src/ripple/protocol/jss.h
2026-02-19 10:12:55 +07:00
Nicholas Dudfield
9eac54d690 Merge remote-tracking branch 'origin/dev' into consensus-phase-entropy
# Conflicts:
#	src/ripple/app/hook/Guard.h
#	src/ripple/app/hook/applyHook.h
#	src/ripple/app/tx/impl/SetHook.cpp
2026-02-17 10:12:46 +07:00
Nicholas Dudfield
24e4ac16ad docs(consensus): add extraction markers for remaining RNG sections
Add @@start/@@end comment markers to pseudo-tx submission filtering,
fast-polling, local testnet resource bucketing, and test environment
gating. No logic changes.
2026-02-13 12:52:14 +07:00
Nicholas Dudfield
94ce15d233 docs(consensus): add extraction markers for guided code review
Add @@start/@@end comment markers to key RNG pipeline sections for
automated documentation extraction. No logic changes.
2026-02-13 12:47:22 +07:00
Nicholas Dudfield
8f331a538e fix(consensus): harden proposal parser and guard dice(0) UB
Address findings from code review:

- dice(0): add early return with INVALID_ARGUMENT before modulo
  operation to prevent undefined behavior
- fromSerialIter: return std::optional to safely reject malformed
  payloads (truncated, unknown flag bits, trailing bytes) instead
  of throwing
- Update all callers (PeerImp, RCLConsensus, tests) for optional
- Add unit tests for dice(0) error code and 7 malformed wire cases
2026-02-12 16:18:30 +07:00
Nicholas Dudfield
7425ab0a39 fix(consensus): avoid structured bindings in lambda captures
clang-14 (CI) does not implement P2036R3 — structured bindings cannot
be captured by lambdas. Use explicit .first/.second instead.
2026-02-10 19:02:42 +07:00
Nicholas Dudfield
c5292bfe0d fix(test): use large dice range to avoid deterministic collision
Standalone synthetic entropy produces identical dice(6) results for
consecutive calls due to hash collision mod 6. Switch to dice(1000000)
and add diagnostic output for return code debugging.
2026-02-10 18:53:24 +07:00
Nicholas Dudfield
79b2f9f410 feat(hooks): add consensus entropy definitions to hook headers
Add dice/random externs, TOO_LITTLE_ENTROPY error code, sfEntropyCount
field code, and ttCONSENSUS_ENTROPY transaction type to hook SDK headers.
2026-02-10 18:53:16 +07:00
Nicholas Dudfield
e8358a82b1 feat(hooks): register dice/random with WasmEdge and add hook API tests
- ADD_HOOK_FUNCTION for dice/random (was defined+declared but not registered)
- Relax fairRng() seq check to allow previous ledger entropy (open ledger)
- Add hook tests: dice range, random fill, consecutive calls differ
- TODO: open-ledger entropy semantics need further thought
2026-02-10 17:58:52 +07:00
Nicholas Dudfield
d850e740e1 feat(consensus): standalone synthetic entropy and ConsensusEntropy test
Generate deterministic entropy in standalone mode so Hook APIs (dice/random)
work for testing. Add test suite verifying SLE creation on ledger close.
2026-02-10 17:27:57 +07:00
Nicholas Dudfield
61a166bcb0 feat(hooks): add dice() and random() hook APIs for consensus entropy
Port the Hook API surface from the tt-rng branch, adapted to use our
commit-reveal consensus entropy (ltCONSENSUS_ENTROPY / sfDigest).

Hook APIs:
- dice(sides): returns random int [0, sides) from consensus entropy
- random(write_ptr, write_len): fills buffer with 1-512 random bytes

Internal fairRng() derives per-execution entropy by hashing: ledger
seq + tx ID + hook hash + account + chain position + execution phase
+ consensus entropy + incrementing call counter. This ensures each
call within a single hook execution returns different values.

Quality gate: fairRng returns empty (TOO_LITTLE_ENTROPY) if fewer
than 5 validators contributed, preventing weak entropy from being
consumed by hooks.

Also adds sfEntropyCount and sfLedgerSequence to the consensus
entropy SLE and pseudo-tx, enabling the freshness and quality
checks needed by the Hook API.
2026-02-10 17:12:27 +07:00
Nicholas Dudfield
41a41ec625 feat(consensus): intersect expected proposers with UNL Report and adaptive quorum
setExpectedProposers() now filters incoming proposers against the
on-chain UNL Report, preventing non-UNL nodes from inflating the
expected set and causing unnecessary timeouts.

quorumThreshold() uses expectedProposers_.size() (recent proposers ∩
UNL) when available, falling back to full UNL Report count on cold
boot. This adapts to actual network conditions rather than relying
on a potentially stale UNL Report that over-counts offline validators.

Renamed activeUNLNodeIds_/cacheActiveUNL/isActiveUNLMember to
unlReportNodeIds_/cacheUNLReport/isUNLReportMember to make the
on-chain data source explicit.
2026-02-10 16:14:47 +07:00
Nicholas Dudfield
bc98c589b7 docs(consensus): fix stale quorum comment in phaseEstablish
Update inline comment to reflect that hasQuorumOfCommits() checks
expected proposers first, with 80% of active UNL as fallback.
2026-02-06 16:56:01 +07:00
Nicholas Dudfield
4f009e4698 fix(consensus): proceed with partial commitSet on timeout instead of zero entropy
When expected proposers don't all arrive before rngPIPELINE_TIMEOUT,
check if we still have quorum (80% of UNL). If so, build the commitSet
with available commits and continue to reveals. Only fall back to zero
entropy when truly below quorum.

Previously any missing expected proposer caused a full timeout with zero
entropy for that round. Now: kill 3 of 20 nodes → one 3s timeout round
per kill but entropy preserved (17/16 quorum met).
2026-02-06 16:40:33 +07:00
Nicholas Dudfield
b6811a6f59 feat(consensus): deterministic commitSets via expected proposers and seq=0 proofs
Wait for commits from last round's proposers (falling back to activeUNL
on cold boot) instead of 80% of UNL. This ensures all nodes build the
commitSet at the same moment with the same entries.

Split proof storage: commitProofs_ (seq=0 only, deterministic) and
proposalProofs_ (latest with reveal, for entropySet). Previously the
proof blob contained whichever proposeSeq was last seen, causing
identical commits to produce different SHAMap hashes across nodes.

20-node testnet: all nodes now produce identical commitSet hashes.
2026-02-06 16:27:10 +07:00
Nicholas Dudfield
ae88fd3d24 feat(consensus): add dedicated reveal-phase timeout measured from phase entry
Previously rngPIPELINE_TIMEOUT (3s) was measured from round start,
meaning txSet convergence could eat into the reveal budget. Now reveals
get their own rngREVEAL_TIMEOUT (1.5s) measured from the moment we
enter ConvergingReveal, ensuring consistent time for reveal collection
regardless of how long txSet convergence took.
2026-02-06 16:00:40 +07:00
Nicholas Dudfield
db3ed0c2eb fix(consensus): wait for all committers' reveals and fix local testnet resource charging
- Change hasMinimumReveals() to wait for reveals from ALL committers
  (pendingCommits_.size()) instead of 80% quorum. The commit set is
  deterministic, so we know exactly which reveals to expect.
  rngPIPELINE_TIMEOUT remains the safety valve for crash/partition.
  Fixes reveal-set non-determinism causing entropy divergence on
  15-node testnets.

- Resource manager: preserve port for loopback addresses so local
  testnet nodes each get their own resource bucket instead of all
  sharing one on 127.0.0.1 (causing rate-limit disconnections).

- Make RNG fast-poll interval configurable via XAHAU_RNG_POLL_MS
  env var (default 250ms) for testnet tuning.
2026-02-06 15:22:42 +07:00
Nicholas Dudfield
960808b172 fix(consensus): skip RNG wait when quorum is impossible and base threshold on active UNL
When fewer participants are present than the quorum threshold, skip the
RNG commit wait immediately instead of waiting the full pipeline timeout.
Also base the quorum on activeUNLNodeIds_ (UNL Report with fallback)
instead of the full trusted key set, so the denominator reflects who is
actually active on the network.
2026-02-06 14:34:28 +07:00
Nicholas Dudfield
a9dffd38ff fix(consensus): shorten RNG pipeline timeout to 3s for faster recovery
Add rngPIPELINE_TIMEOUT (3s) to replace ledgerMAX_CONSENSUS (10s) in
the commit/reveal quorum gates. Late-joining nodes enter as
proposing=false and cannot contribute commitments until promoted, so
waiting beyond a few seconds just delays the ZERO-entropy fallback and
penalizes recovery. Add inline comments documenting the late-joiner
constraint and SHAMap sync's role as a dropped-proposal safety net.
2026-02-06 14:04:53 +07:00
Nicholas Dudfield
382e6fa673 fix(consensus): verify reveals match commitments and cache UNL for observers
Prevent grinding attacks by verifying sha512Half(reveal, pubKey, seq)
matches the stored commitment before accepting a reveal. Also move
cacheActiveUNL() into startRound so non-proposing nodes (exchanges,
block explorers) correctly accept RNG data instead of diverging with
zero entropy.
2026-02-06 13:28:55 +07:00
Nicholas Dudfield
2905b0509c perf(consensus): gate RNG SHAMap fetches on sub-state
During ConvergingTx all RNG data arrives via proposal leaves, so
fetching a peer's commitSet before we've built our own just generates
unnecessary traffic. Only fetch commitSetHash once in ConvergingCommit+,
and entropySetHash once in ConvergingReveal.
2026-02-06 13:18:53 +07:00
Nicholas Dudfield
4911c1bf52 feat(consensus): embed proposal signature proofs in RNG SHAMap entries
Prevents spoofed SHAMap entries by embedding verifiable proof blobs
(proposal signature + metadata) in each commit/reveal entry via sfBlob.

- Store ProposalProof in harvestRngData (peers) and propose() (self)
- serializeProof: pack proposeSeq/closeTime/prevLedger/position/sig
- verifyProof: reconstruct signingHash, verify against public key
- Embed proofs in buildCommitSet/buildEntropySet via sfBlob field
- Verify proofs in handleAcquiredRngSet (both diff and visitLeaves paths)
- Add stall fix: gate ConvergingTx on timeout when commits unavailable
- Clear proposalProofs_ in clearRngState
2026-02-06 11:47:42 +07:00
Nicholas Dudfield
1744d21410 docs(consensus): explain union convergence model for RNG sets 2026-02-06 11:17:52 +07:00
Nicholas Dudfield
34ff53f65d feat(consensus): add UNL enforcement for RNG commit-reveal pipeline
Cache active UNL NodeIDs per round from UNL Report (in-ledger),
falling back to getTrustedMasterKeys() on fresh testnets.
Reject non-UNL validators at all entry points: harvestRngData,
buildCommitSet, buildEntropySet, and handleAcquiredRngSet.
2026-02-06 11:12:03 +07:00
Nicholas Dudfield
893f8d5a10 feat(consensus): replace fake hashes with real SHAMap-backed commitSet/entropySet
Build real ephemeral (unbacked) SHAMaps for commitSet and entropySet using
ttCONSENSUS_ENTROPY entries with tfEntropyCommit/tfEntropyReveal flags.
Reuse InboundTransactions pipeline for peer fetch/diff/merge with no new
classes. Encode NodeID in sfAccount to avoid master-vs-signing key mismatch.
Add isPseudoTx guard in ConsensusTransSetSF to prevent pseudo-tx submission.
Route acquired RNG sets via isRngSet/gotRngSet in NetworkOPs mapComplete.
2026-02-06 10:38:06 +07:00
Nicholas Dudfield
3e5389d652 feat(consensus): add 250ms fast-poll for RNG sub-state transitions
During ConvergingCommit and ConvergingReveal sub-states, poll at 250ms
instead of the default 1s ledgerGRANULARITY. This reduces total RNG
pipeline overhead from ~3s to ~500ms while keeping the normal heartbeat
interval unchanged for all other consensus phases.
2026-02-06 09:21:42 +07:00
Nicholas Dudfield
c44dea3acf fix(consensus): resolve commit-reveal pipeline bugs enabling non-zero entropy
Three critical fixes that unblock the RNG commit-reveal pipeline:

- Remove entropy secret regeneration in ConvergingTx->ConvergingCommit
  transition that was overwriting the onClose() secret, breaking reveal
  verification against the original commitment
- Change ExtendedPosition operator== to compare txSetHash only, preventing
  deadlock where nodes transitioning sub-states at different times would
  break haveConsensus() for all peers
- Self-seed own commitment and reveal into pending collections so the
  node counts toward its own quorum checks

Also adds ExtendedPosition_test with signing, suppression, serialization
round-trip and equality tests, iterator safety fix in BuildLedger, wire
compatibility early-return, and RNG debug logging throughout the pipeline.
2026-02-06 09:03:26 +07:00
Nicholas Dudfield
a6dd54fa48 feat(consensus): add featureConsensusEntropy amendment gating
- Register ConsensusEntropy amendment (Supported::yes, DefaultNo)
- Gate entropy pseudo-tx injection behind amendment in doAccept()
- Gate preflight with temDISABLED when amendment not enabled
- Bump numFeatures 90 -> 91
- Exclude featureConsensusEntropy from default test environment to
  avoid breaking existing test transaction count assumptions
2026-02-06 07:29:48 +07:00
Nicholas Dudfield
28bd0a22d3 feat(consensus): add entropy injection, tx ordering, and dispatch registration
- Implement injectEntropyPseudoTx() to combine reveals into final
  entropy hash and inject as pseudo-tx into CanonicalTXSet in doAccept()
- Modify BuildLedger applyTransactions() to apply entropy tx FIRST
  before all other transactions to prevent front-running
- Remove redundant explicit threading in applyConsensusEntropy() as
  sfPreviousTxnID/sfPreviousTxnLgrSeq are set automatically by
  ApplyStateTable::threadItem()
- Register ttCONSENSUS_ENTROPY in applySteps.cpp dispatch tables
  (preflight, preclaim, calculateBaseFee, apply)
- Add ltCONSENSUS_ENTROPY to InvariantCheck.cpp valid type whitelist
2026-02-06 05:43:19 +07:00
Nicholas Dudfield
960fffcf82 feat(consensus): add ttCONSENSUS_ENTROPY pseudo-transaction protocol layer
Add protocol definitions for consensus-derived entropy pseudo-transaction:
- ttCONSENSUS_ENTROPY = 105 transaction type
- ltCONSENSUS_ENTROPY = 0x0058 ledger entry type
- keylet::consensusEntropy() singleton keylet (namespace 'X')
- applyConsensusEntropy() handler in Change.cpp
- Added to isPseudoTx() in STTx.cpp

The entropy value is stored in sfDigest field of the singleton ledger object.
This provides the protocol foundation for same-ledger entropy injection.
2026-02-05 17:26:31 +07:00
Nicholas Dudfield
e7867c07a1 feat(consensus): add RNG sub-state gating logic in phaseEstablish
- Add clearRngState() call in startRoundInternal
- Reset estState_ in closeLedger when entering establish phase
- Implement three-phase RNG checkpoint gating:
  - ConvergingTx: wait for quorum commits, build commitSet
  - ConvergingCommit: reveal entropy, transition immediately
  - ConvergingReveal: wait for reveals or timeout, build entropySet
- Use if constexpr for test framework compatibility
2026-02-05 16:53:00 +07:00
Nicholas Dudfield
a828e8a44d feat(consensus): add RNG wire protocol and harvest logic
- Serialize full ExtendedPosition in share() and propose()
- Deserialize ExtendedPosition in PeerImp using fromSerialIter()
- Add harvestRngData() to collect commits/reveals from peer proposals
- Conditionally call harvest via if constexpr for test compatibility
2026-02-05 16:41:13 +07:00
Nicholas Dudfield
bb33e7cf64 feat(consensus): add ExtendedPosition for RNG entropy support
Introduce data structures for consensus-derived randomness using
commit-reveal scheme:

- Add ExtendedPosition struct with consensus targets (txSetHash,
  commitSetHash, entropySetHash) and pipelined leaves (myCommitment,
  myReveal)
- operator== excludes leaves to allow convergence with unique leaves
- add() includes ALL fields to prevent signature stripping attacks
- Add EstablishState enum for sub-phases: ConvergingTx, ConvergingCommit,
  ConvergingReveal
- Update Consensus template to use Adaptor::Position_t
- Add Position_t typedef to RCLConsensus::Adaptor and test CSF Peer

This is the foundational data structure work for the RNG implementation.
The gating logic and entropy computation will follow.
2026-02-05 16:20:54 +07:00
Nicholas Dudfield
7e8e0654cd chore: add documentation markers for pr-description-outline 2026-01-28 15:00:14 +07:00
Nicholas Dudfield
38af0626e0 chore: add documentation markers for pr-description 2026-01-28 14:50:56 +07:00
Nicholas Dudfield
8500e86f57 chore: remove projected-source documentation markers 2026-01-28 11:30:47 +07:00
Nicholas Dudfield
1fc4fd9bfd chore: regenerate hook headers for export feature 2026-01-28 11:27:13 +07:00
Nicholas Dudfield
e4875e5398 refactor: remove ttEXPORT_SIGN and UVTx infrastructure
- Delete ExportSign.cpp/h transactor (ttEXPORT_SIGN no longer used)
- Remove isUVTx() function and all UVTx checks from Transactor/TxQ
- Remove ttEXPORT_SIGN from TxFormats enum and format definition
- Remove jss::ExportSign
- Move signPendingExports() to ExportSignatureCollector

Export signatures are now collected ephemerally via TMValidation
messages, not via ttEXPORT_SIGN transactions.
2026-01-28 10:59:45 +07:00
Nicholas Dudfield
5b1b142be0 chore: remove stray iostream includes 2026-01-28 10:34:11 +07:00
Nicholas Dudfield
5ba832204a test: remove unused scaffolding from Export_test
- Remove accept_wasm and emit_wasm hooks (not export-related)
- Remove testBasicSetup, testEmitPayment, testXportPayment
- Keep only testXportPaymentWithValidator which tests the export flow
2026-01-28 10:31:49 +07:00
Nicholas Dudfield
1257b3a65c Merge remote-tracking branch 'origin/dev' into export-uvtxn 2026-01-28 10:21:37 +07:00
Nicholas Dudfield
6013ed2cb6 refactor: remove vestigial on-ledger export signature code
- Remove makeExportSignTxns() function (signatures now via TMValidation)
- Simplify ExportSign::doApply() to no-op (ttEXPORT_SIGN kept for protocol)
- Remove sfSigners from ltEXPORTED_TXN format (collected in memory now)
- Remove unused OpenView include and forward declaration
- Remove vestigial comment in TxQ about makeExportSignTxns
2026-01-28 10:19:14 +07:00
Nicholas Dudfield
034010716e feat: add signature verification cache for export signatures
Add cryptographic verification of export signatures as they arrive:
- stashTxnData() caches serialized txn for verification
- verifyAndAddSignature() verifies against cached data, rejects invalid
- isSignatureVerified() / verifySignature() for Transactor fallback
- Cleanup methods updated to clear verification cache

Also removes leftover debug std::cerr from OpenView, STObject, and tests.
2026-01-27 18:03:55 +07:00
Nicholas Dudfield
b28793b0fa chore: clean up export debug logging
- remove DBG_EXPORT macros and all usages
- remove [EXPORT-TRACE] and [EXPORT-TIMING] debug prefixes
- adjust log levels (verbose logs to trace, summaries to debug)
- upgrade "quorum reached" to info level (important event)
- standardize log prefixes to use "Export:"
- re-enable relay loop in OpenLedger.cpp
- remove reentrant call detection debug code
2026-01-27 16:21:02 +07:00
Nicholas Dudfield
4bce392c31 feat: continuous signature broadcasting for export robustness
Validators now sign ALL pending ltEXPORTED_TXN entries every ledger
(not just those from the current ledger). Signatures are cached in
ExportSignatureCollector and re-broadcast until the export is finalized.

Changes:
- Add hasSignatureFrom() and getSignatureFrom() to collector for
  checking/retrieving cached signatures
- signPendingExports() now iterates ALL pending exports, uses cached
  signature if available, otherwise signs fresh
- Signatures keep broadcasting until ltEXPORTED_TXN is deleted

This ensures:
- Late validators can contribute (sign when they come online)
- Network partitions self-heal (signatures propagate on reconnect)
- Node restarts recover (re-sign from ledger state)

The ltEXPORTED_TXN acts as a "ticket" - signatures only valid while it
exists. No explicit expiry check needed; ledger state is the gatekeeper.
2026-01-26 18:25:24 +07:00
Nicholas Dudfield
244a28b981 feat: implement ephemeral export signature collection
Replace on-ledger ttEXPORT_SIGN transactions with ephemeral signature
collection via TMValidation messages. This eliminates O(n²) metadata
bloat from accumulating signatures on-ledger.

Changes:
- Add ExportSignatureCollector for in-memory signature storage with
  quorum tracking (80% UNL threshold)
- Extend TMValidation protobuf with exportSignatures field
- Sign pending exports during validate() and broadcast via validation
- Extract signatures from received TMValidation in PeerImp
- TxQ checks quorum from memory instead of ledger
- Inject ttEXPORT when quorum reached (can be ledger N+1 or N+2)
- Clean up collector after ttEXPORT processed

Includes [EXPORT-TIMING] debug logging for timing analysis.
2026-01-26 17:54:17 +07:00
Nicholas Dudfield
f2838351c9 chore: add [EXPORT-TRACE] debug logging for export flow tracing
adds step-by-step trace logging with [EXPORT-TRACE] prefix to track
the complete export transaction lifecycle:
- STEP-1: xport() creates ltEXPORTED_TXN
- STEP-2a: rawTxInsert ttEXPORT_SIGN in callback
- STEP-2b: doApply ttEXPORT_SIGN
- STEP-3a: rawTxInsert ttEXPORT
- STEP-4: doApply ttEXPORT (cleanup)

filter with: grep '\[EXPORT-TRACE\]'
2026-01-23 08:10:33 +07:00
Nicholas Dudfield
dae082d6a5 chore: format files with clang-format 2026-01-22 16:42:05 +07:00
Nicholas Dudfield
619a4a68f7 fix: resolve export feature bugs and add comprehensive tests
- fix Guard.h: add import_whitelist_2 to signature lookup chain
  (was causing "Function type is inconsistent" errors for xport APIs)
- fix InvariantCheck.cpp: add ltEXPORTED_TXN to valid ledger entry types
  (was causing "invalid ledger entry type added" invariant failures)
- add SetHook.cpp: TODO comment documenting API version confusion

- add Export_test.cpp: comprehensive test suite for export feature
  - testBasicSetup: verify hook installation works
  - testEmitPayment: verify emit() flow works
  - testXportPayment: verify xport() creates ltEXPORTED_TXN
  - includes DebugLogs helper for per-partition log levels
  - parameterized runXportTest helper for future validator tests

Note: validator signing flow (ttEXPORT_SIGN) still needs debugging -
causes internal error on env.close() when validator config enabled.
2026-01-22 09:51:50 +07:00
Nicholas Dudfield
4a6db8bb05 Merge remote-tracking branch 'origin/dev' into export-uvtxn 2026-01-22 08:07:58 +07:00
Nicholas Dudfield
c86479bc58 fix: correct xport api signature and sfExportedTxn type usage
- Fix xport hook API whitelist to declare 4 args (I32, I32, I32, I32)
  instead of 2, matching the actual implementation signature
- Fix TxQ.cpp to use emplace_back with STObject for sfExportedTxn
  instead of setFieldVL, since sfExportedTxn is OBJECT type not VL.
  The previous code would throw "Wrong field type" at runtime.
2026-01-22 07:41:12 +07:00
Nicholas Dudfield
dc6a2dc6ff refactor: separate ExportSign transactor from Change
Move ttEXPORT_SIGN handling to dedicated ExportSign transactor class,
following the same pattern as ttENTROPY/Entropy from the RNG feature.
UVTxns (signed validator transactions) should not be mixed with
pseudo-transactions in the Change transactor.

- Create ExportSign.h/cpp with preflight, preclaim, doApply
- Route ttEXPORT_SIGN through ExportSign in applySteps.cpp
- Remove UVTx branches from Change transactor
- Add documentation markers to View.h for inUNLReport functions
2026-01-22 07:41:12 +07:00
Nicholas Dudfield
c01b9a657b feat: implement uvtxn pattern for ttEXPORT_SIGN
Port the UNL Validator Transaction (UVTxn) pattern from the RNG feature
to allow validators to submit signed ttEXPORT_SIGN transactions without
requiring a funded account.

Changes:
- Add isUVTx() to identify UVTxn transaction types
- Add inUNLReport() templates to check validator UNLReport membership
- Add getValidationSecretKey() to Application for signing
- Modify Transactor for UVTxn bypasses (fee, seq, signature checks)
- Add makeExportSignTxns() to generate validator signatures
- Hook into RCLConsensus to submit ttEXPORT_SIGN during accept
- Update applySteps.cpp routing for ttEXPORT_SIGN
- Remove direct ttEXPORT_SIGN injection from TxQ::accept

Note: Currently uses Change transactor with UVTx branches.
May refactor to dedicated ExportSign transactor class.
2026-01-20 13:44:38 +07:00
Nicholas Dudfield
652b181b5d chore: clang format 2026-01-20 12:44:14 +07:00
RichardAH
8329d78f32 Update src/ripple/app/tx/impl/Import.cpp
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:46 +10:00
RichardAH
bf4579c1d1 Update src/ripple/app/tx/impl/Change.cpp
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:37 +10:00
RichardAH
73e099eb23 Update src/ripple/app/hook/impl/applyHook.cpp
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:29 +10:00
RichardAH
2e311b4259 Update src/ripple/app/hook/applyHook.h
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:20 +10:00
RichardAH
7c8e940091 Merge branch 'dev' into export 2025-12-19 13:27:02 +10:00
Richard Holland
9b90c50789 featureExport compiling, untested 2025-12-19 14:19:17 +11:00
Richard Holland
a18e2cb2c6 remainder of the export feature... untested uncompiled 2025-12-14 19:04:37 +11:00
Richard Holland
be5f425122 change symbol name to xport 2025-12-14 13:27:44 +11:00
Richard Holland
fc6f4762da export hook apis, untested 2025-12-13 15:46:08 +11:00
1294 changed files with 34702 additions and 40720 deletions

View File

@@ -44,7 +44,6 @@ DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
IncludeBlocks: Regroup
IncludeCategories:
- Regex: '^<(test)/'
Priority: 0
@@ -54,12 +53,8 @@ IncludeCategories:
Priority: 2
- Regex: '^<(boost)/'
Priority: 3
- Regex: '^.*/'
Priority: 4
- Regex: '^.*\.h'
Priority: 5
- Regex: '.*'
Priority: 6
Priority: 4
IncludeIsMainRegex: '$'
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
@@ -94,4 +89,3 @@ SpacesInSquareBrackets: false
Standard: Cpp11
TabWidth: 8
UseTab: Never
QualifierAlignment: Right

View File

@@ -8,15 +8,6 @@ jobs:
env:
CLANG_VERSION: 18
steps:
# For jobs running in containers, $GITHUB_WORKSPACE and ${{ github.workspace }} might not be the
# same directory. The actions/checkout step is *supposed* to checkout into $GITHUB_WORKSPACE and
# then add it to safe.directory (see instructions at https://github.com/actions/checkout)
# but that's apparently not happening for some container images. We can't be sure what is actually
# happening, so let's pre-emptively add both directories to safe.directory. There's a
# Github issue opened in 2022 and not resolved in 2025 https://github.com/actions/runner/issues/2058 ¯\_(ツ)_/¯
- run: |
git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory ${{ github.workspace }}
- uses: actions/checkout@v4
- name: Install clang-format
run: |
@@ -29,7 +20,7 @@ jobs:
sudo apt-get update
sudo apt-get install clang-format-${CLANG_VERSION}
- name: Format first-party sources
run: find include src tests -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -not -path "src/magic/magic_enum.h" -exec clang-format-${CLANG_VERSION} -i {} +
run: find include src -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -not -path "src/magic/magic_enum.h" -exec clang-format-${CLANG_VERSION} -i {} +
- name: Check for differences
id: assert
run: |

View File

@@ -11,6 +11,11 @@ jobs:
- uses: actions/checkout@v3
- name: Check levelization
run: Builds/levelization/levelization.sh
- name: Verify Python Generator Matches Canonical Script
run: |
python3 Builds/levelization/levelization.py \
--results-dir /tmp/levelization-py-results \
--compare-to Builds/levelization/results
- name: Check for differences
id: assert
run: |

View File

@@ -122,6 +122,4 @@ jobs:
- name: Test
run: |
cd ${{ env.build_dir }}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)

View File

@@ -370,9 +370,7 @@ jobs:
run: |
# Ensure the binary exists before trying to run
if [ -f "${{ env.build_dir }}/rippled" ]; then
cd ${{ env.build_dir }}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)
else
echo "Error: rippled executable not found in ${{ env.build_dir }}"
exit 1

3
.gitignore vendored
View File

@@ -124,5 +124,8 @@ bld.rippled/
generated
.vscode
# AI docs (local working documents)
.ai-docs/
# Suggested in-tree build directory
/.build/

View File

@@ -1,6 +1,6 @@
# .pre-commit-config.yaml
repos:
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v18.1.8
rev: v18.1.3
hooks:
- id: clang-format

4
.testnet/.gitignore vendored Normal file
View File

@@ -0,0 +1,4 @@
output/
__pycache__/
scenarios/odd-cases/
scenarios/suite-experiments.yml

View File

@@ -0,0 +1,27 @@
"""Scenario: ConsensusEntropy amendment crashes non-supporting node.
Votes ConsensusEntropy accept on all nodes except n4, then waits for n4
to crash as the amendment activates without its support.
x-testnet run --scenario-script consensus_entropy_crash.py
"""
async def scenario(ctx, log):
await ctx.wait_for_ledger_close()
ctx.feature("ConsensusEntropy", vetoed=False, exclude_nodes=[4])
log("Waiting for ConsensusEntropy to be voted for...")
await ctx.wait_for_feature(
"ConsensusEntropy",
check=lambda s: not s.get("vetoed"),
exclude_nodes=[4],
timeout=60,
)
log("Waiting for n4 to crash...")
op = await ctx.wait_for_nodes_down(nodes=[4], timeout=600)
ctx.assert_log("unsupported amendments activated", since=op.started, nodes=[4])
ctx.assert_exit_status(0, nodes=[4])
log("PASS: n4 shut down due to unsupported amendment")

View File

@@ -0,0 +1,52 @@
""":descr: entropy stays valid under transaction load"""
from __future__ import annotations
from helpers import require_entropy, get_entropy_tx, assert_valid_entropy
variants = [
{"label": "light", "min_txns": 5, "max_txns": 10},
{"label": "heavy", "min_txns": 50, "max_txns": 60},
{"label": "super_heavy", "min_txns": 90, "max_txns": 120},
]
async def scenario(ctx, log, *, min_txns=5, max_txns=10, **_):
await require_entropy(ctx, log)
gen = ctx.txn_generator(min_txns=min_txns, max_txns=max_txns)
await gen.start()
await gen.wait_until_ready()
log(f"Transaction generator ready ({min_txns}-{max_txns} txns/ledger)")
# Wait for pipeline warmup + a few txn-bearing ledgers.
await ctx.wait_for_ledgers(3, node_id=0, timeout=60)
start_seq = ctx.validated_ledger_index(0)
await ctx.wait_for_ledgers(10, node_id=0, timeout=120)
end_seq = ctx.validated_ledger_index(0)
log(f"Inspecting ledgers {start_seq + 1}{end_seq}")
digests = set()
total_user_txns = 0
for seq in range(start_seq + 1, end_seq + 1):
ce, user_txns = get_entropy_tx(ctx, seq)
digest, count = assert_valid_entropy(ce, seq, seen_digests=digests)
total_user_txns += len(user_txns)
log(
f" Ledger {seq}: EntropyCount={count} "
f"user_txns={len(user_txns)} Digest={digest[:16]}..."
)
await gen.stop()
log(
f"Verified {end_seq - start_seq} ledgers: {total_user_txns} user txns, "
f"all entropy valid and unique"
)
if total_user_txns == 0:
raise AssertionError("No user transactions were included in any ledger")
log("PASS")

View File

@@ -0,0 +1,117 @@
""":descr: 4/5 liveness, 3/5 zero-entropy fallback, recovery"""
from __future__ import annotations
from helpers import require_entropy, get_entropy_tx, entropy_fields
async def scenario(ctx, log):
await require_entropy(ctx, log)
# Baseline: wait 1 ledger to confirm network is healthy.
await ctx.wait_for_ledgers(1, node_id=0, timeout=30)
# --- 4/5 liveness ---
ctx.stop_node(4)
await ctx.wait_for_nodes_down(nodes=[4], timeout=30)
await ctx.wait_for_ledgers(1, node_id=0, timeout=30)
log("4/5: liveness OK")
# Snapshot validated seq before dropping to 3/5.
val_before = ctx.validated_ledger_index(0)
# --- 3/5 degraded window ---
ctx.stop_node(3)
await ctx.wait_for_nodes_down(nodes=[3], timeout=30)
# 10s ≈ 3 rounds at 3s cadence.
await ctx.sleep(10)
val_after = ctx.validated_ledger_index(0)
log(f"3/5: validated ledger {val_before}{val_after}")
# Accepted/built ledgers may still later appear as validated once the full
# network rejoins. For ConsensusEntropy the key invariant is that every
# ledger created during this sub-quorum window carries ZERO entropy.
degraded_zero = 0
degraded_end = val_after or val_before
if val_before and degraded_end and degraded_end > val_before:
for seq in range(val_before + 1, degraded_end + 1):
ce, _ = get_entropy_tx(ctx, seq)
digest, entropy_count, is_zero = entropy_fields(ce)
if not is_zero:
raise AssertionError(
f"Ledger {seq}: expected ZERO entropy during 3/5 window, "
f"got Digest={digest[:16]}... EntropyCount={entropy_count}"
)
degraded_zero += 1
log(f" Degraded ledger {seq}: EntropyCount={entropy_count} ZERO")
log(f"3/5 entropy summary: {degraded_zero} zero")
# Log checks tied to actual transition mechanics:
# - seq=1 proposals are emitted once commit-set phase is entered
# - ConvergingCommit transition is the gateway out of seq=0-only behavior
# - establish gate blocked indicates tx-consensus/pause prevented accept
ctx.log_level("LedgerConsensus", "trace")
op = await ctx.sleep(6, name="stall_window")
ctx.assert_not_log(
r"RNG: transitioned to ConvergingCommit", within=op.window, nodes=[0, 1, 2]
)
ctx.assert_not_log(r"RNG: propose seq=1", within=op.window, nodes=[0, 1, 2])
gate_blocked = ctx.search_logs(
r"STALLDIAG: establish gate blocked reason=(pause|no-tx-consensus)",
within=op.window,
nodes=[0, 1, 2],
)
log(f"3/5: establish gate-blocked logs in 6s: {gate_blocked.count}")
skips = ctx.search_logs(r"RNG: bootstrap skip", within=op.window, nodes=[0, 1, 2])
log(f"3/5: RNG bootstrap skips in 6s: {skips.count}")
# --- Recovery: restart nodes, verify ledger advancement ---
ctx.start_node(3)
ctx.start_node(4)
await ctx.wait_for_ledgers(1, node_id=0, timeout=120)
val_recovered = ctx.validated_ledger_index(0)
pre_recovery = max(v for v in [val_before, val_after] if v is not None)
log(f"Recovered: validated seq {pre_recovery}{val_recovered}")
if not val_recovered or val_recovered <= pre_recovery:
raise AssertionError(
f"Validated ledger did not advance after recovery "
f"({pre_recovery}{val_recovered})"
)
# Inspect post-recovery ledgers separately from the degraded window above.
# Once the network is back at quorum, non-zero entropy is valid again but
# must still be quorum-met.
zero_count = 0
nonzero_count = 0
for seq in range(pre_recovery + 1, val_recovered + 1):
ce, _ = get_entropy_tx(ctx, seq)
digest, entropy_count, is_zero = entropy_fields(ce)
if is_zero:
zero_count += 1
else:
nonzero_count += 1
if entropy_count < 4:
raise AssertionError(
f"Ledger {seq}: non-zero entropy with sub-quorum "
f"EntropyCount={entropy_count} (need >= 4)"
)
log(
f" Ledger {seq}: EntropyCount={entropy_count} "
f"{'ZERO' if is_zero else 'REAL'}"
)
log(f"Entropy summary: {zero_count} zero, {nonzero_count} non-zero")
log("PASS")

View File

@@ -0,0 +1,46 @@
""":descr: drop 2 nodes (3/5 stall), restart both, verify recovery"""
from __future__ import annotations
async def scenario(ctx, log):
await ctx.wait_for_ledger_close(timeout=120)
feature = ctx.feature_check("ConsensusEntropy", node_id=0)
if not feature or not feature.get("enabled", False):
raise AssertionError(f"ConsensusEntropy not enabled: {feature}")
await ctx.wait_for_ledgers(1, node_id=0, timeout=60)
log("Baseline OK")
# Drop 2 nodes → validation stall.
ctx.stop_node(3)
ctx.stop_node(4)
await ctx.wait_for_nodes_down(nodes=[3, 4], timeout=30)
info = ctx.rpc.server_info(node_id=0)
val_before = info.get("info", {}).get("validated_ledger", {}).get("seq", 0)
log(f"Stalled at validated seq {val_before}")
# Let it sit for a few rounds in degraded state.
await ctx.sleep(6)
# Bring both nodes back.
ctx.start_node(3)
ctx.start_node(4)
log("Restarted n3 and n4, waiting for recovery...")
# Recovery: wait for ANY validated ledger advance on n0.
await ctx.wait_for_ledger_close(node_id=0, timeout=60)
info = ctx.rpc.server_info(node_id=0)
val_after = info.get("info", {}).get("validated_ledger", {}).get("seq", 0)
log(f"Recovered: validated seq {val_before}{val_after}")
if val_after <= val_before:
raise AssertionError(
f"Validated ledger did not advance after recovery "
f"({val_before}{val_after})"
)
log("PASS")

View File

@@ -0,0 +1,27 @@
""":descr: all 5 nodes healthy, every ledger has valid unique quorum-met entropy"""
from __future__ import annotations
from helpers import require_entropy, get_entropy_tx, assert_valid_entropy
async def scenario(ctx, log):
await require_entropy(ctx, log)
# Wait for RNG pipeline to warm up past bootstrap skip.
await ctx.wait_for_ledgers(3, node_id=0, timeout=60)
log("Pipeline warmed up")
start_seq = ctx.validated_ledger_index(0)
await ctx.wait_for_ledgers(10, node_id=0, timeout=120)
end_seq = ctx.validated_ledger_index(0)
log(f"Inspecting ledgers {start_seq + 1}{end_seq}")
digests = set()
for seq in range(start_seq + 1, end_seq + 1):
ce, _ = get_entropy_tx(ctx, seq)
digest, count = assert_valid_entropy(ce, seq, seen_digests=digests)
log(f" Ledger {seq}: EntropyCount={count} Digest={digest[:16]}...")
log(f"Verified {end_seq - start_seq} ledgers: all quorum entropy, all unique")
log("PASS")

View File

@@ -0,0 +1,71 @@
defaults:
network:
node_count: 5
launcher: tmux
slave_delay: 0.2
features:
- ConsensusEntropy
- Export
track_features:
- ConsensusEntropy
- Export
log_levels:
TxQ: info
Protocol: debug
Peer: debug
LedgerConsensus: debug
NetworkOPs: info
env:
XAHAU_RESOURCE_PER_PORT: "1"
XAHAU_RNG_POLL_MS: "333"
tests:
# --- CE + Export (80% quorum, SHAMap convergence) ---
- name: steady_state_export_ce
script: .testnet/scenarios/export/steady_state_export.py
- name: retriable_export_ce
script: .testnet/scenarios/export/retriable_export.py
- name: export_degradation_ce
script: .testnet/scenarios/export/export_degradation.py
network:
node_env:
3:
XAHAUD_NO_EXPORT_SIG: "1"
4:
XAHAUD_NO_EXPORT_SIG: "1"
# CE + Export: 1 node suppressed, 4/5 = 80% quorum, should succeed
- name: export_ce_one_node_down
script: .testnet/scenarios/export/export_unanimity.py
params:
expect_success: true
network:
node_env:
4:
XAHAUD_NO_EXPORT_SIG: "1"
# --- Export only, no CE (100% quorum, unanimity) ---
- name: export_unanimity_all_up
script: .testnet/scenarios/export/export_unanimity.py
params:
expect_success: true
network:
features:
- Export
track_features:
- Export
- name: export_unanimity_node_down
script: .testnet/scenarios/export/export_unanimity.py
params:
expect_success: false
network:
features:
- Export
track_features:
- Export
node_env:
4:
XAHAUD_NO_EXPORT_SIG: "1"

View File

@@ -0,0 +1,102 @@
""":descr: Submit ttEXPORT with 2 nodes suppressing export sigs, verify it
retries via terRETRY_EXPORT until LLS expiry (not enough sigs for quorum).
Nodes 3 and 4 have XAHAUD_NO_EXPORT_SIG=1, so only 3/5 nodes provide
export signatures. With 80% quorum = ceil(5*0.8) = 4 required, the
export cannot reach quorum and should expire via tecEXPORT_EXPIRED.
Flow:
1. Fund alice and bob
2. alice submits ttEXPORT with tight LLS
3. Export retries (only 3/5 sigs available, need 4)
4. Verify export expires with tecEXPORT_EXPIRED
5. Verify subsequent payment still works (sequence not permanently blocked)
"""
from __future__ import annotations
from export_helpers import require_export, assert_shadow_ticket
async def scenario(ctx, log):
await require_export(ctx, log)
# --- Setup ---
await ctx.fund_accounts({"alice": 10000, "bob": 1000})
log("Accounts funded")
alice = ctx.account("alice")
bob = ctx.account("bob")
current_seq = ctx.validated_ledger_index(0)
log(f"Current ledger: {current_seq}")
log("Nodes 3,4 have XAHAUD_NO_EXPORT_SIG=1 (3/5 sigs, need 4)")
# --- Submit ttEXPORT (should retry then expire -- only 3/5 sigs) ---
result = await ctx.submit_and_wait(
{
"TransactionType": "Export",
"LastLedgerSequence": current_seq + 8,
"Fee": "1000000",
"ExportedTxn": {
"TransactionType": "Payment",
"Account": alice.address,
"Destination": bob.address,
"Amount": "1000000",
"Fee": "10",
"Sequence": 0,
"TicketSequence": 1,
"FirstLedgerSequence": current_seq + 1,
"LastLedgerSequence": current_seq + 6,
"Flags": 2147483648,
"SigningPubKey": "",
},
},
alice.wallet,
timeout=60,
)
final_seq = ctx.validated_ledger_index(0)
engine_result = result.get("engine_result", "")
log(f"Export completed at ledger {final_seq}, result: {engine_result}")
# With only 3/5 sigs and 80% quorum (4 required), export MUST fail
if engine_result == "tesSUCCESS":
raise AssertionError(
"Export should NOT have succeeded with only 3/5 sigs "
"(need 4 for 80% quorum) -- check XAHAUD_NO_EXPORT_SIG config"
)
# Should be tecEXPORT_EXPIRED (LLS reached without quorum)
if engine_result != "tecEXPORT_EXPIRED":
log(f"WARNING: expected tecEXPORT_EXPIRED, got {engine_result}")
log(f"Export failed as expected ({engine_result})")
# No shadow ticket should exist (export never reached quorum)
assert_shadow_ticket(ctx, alice.address, log, expect_exists=False)
# --- Verify subsequent payment works regardless ---
log("Submitting payment from alice to bob...")
pay_result = await ctx.submit_and_wait(
{
"TransactionType": "Payment",
"Destination": bob.address,
"Amount": "1000000",
"Fee": "12",
},
alice.wallet,
timeout=30,
)
pay_engine = pay_result.get("engine_result", "")
log(f"Payment result: {pay_engine}")
if pay_engine != "tesSUCCESS":
raise AssertionError(
f"Payment failed after expired export: {pay_engine} "
f"-- sequence may be blocked"
)
log("Payment succeeded -- account not permanently blocked")
log("PASS")

View File

@@ -0,0 +1,144 @@
"""Shared helpers for Export scenario tests."""
from __future__ import annotations
async def require_export(ctx, log):
"""Wait for first ledger and assert Export amendment is enabled."""
await ctx.wait_for_ledger_close(timeout=120)
feature = ctx.feature_check("Export", node_id=0)
if not feature or not feature.get("enabled", False):
raise AssertionError(f"Export not enabled: {feature}")
log("Export amendment enabled")
def find_export_txns(ctx, seq):
"""Find Export transactions in a ledger.
Returns list of Export transaction dicts.
"""
result = ctx.ledger(seq, transactions=True)
if not result:
return []
txns = result.get("ledger", {}).get("transactions", [])
return [tx for tx in txns if tx.get("TransactionType") == "Export"]
def dst_param(address):
"""Encode an address as a HookParameter entry for the DST param."""
from xrpl.core.addresscodec import decode_classic_address
dst_hex = decode_classic_address(address).hex().upper()
return {
"HookParameter": {
"HookParameterName": "445354", # "DST"
"HookParameterValue": dst_hex,
}
}
def assert_hook_accepted(meta, log, *, expected_emits=1):
"""Assert hook executed with ACCEPT and the expected emit count.
Checks sfHookExecutions in transaction metadata.
Returns the hook execution entry for further inspection.
"""
hook_execs = meta.get("HookExecutions", [])
if not hook_execs:
raise AssertionError("No HookExecutions in metadata")
exec_entry = hook_execs[0].get("HookExecution", {})
hook_result = exec_entry.get("HookResult", -1)
emit_count = exec_entry.get("HookEmitCount", -1)
return_code = exec_entry.get("HookReturnCode", "")
log(f" HookResult={hook_result} EmitCount={emit_count} ReturnCode={return_code}")
# HookResult 3 = ExitType::ACCEPT
if hook_result != 3:
raise AssertionError(
f"Hook did not ACCEPT: HookResult={hook_result} "
f"ReturnCode={return_code}"
)
if emit_count != expected_emits:
raise AssertionError(
f"Expected {expected_emits} emits, got {emit_count}"
)
# ReturnCode 0 = success; non-zero = ASSERT line number in hook
if return_code and str(return_code) != "0":
raise AssertionError(
f"Hook returned error code {return_code} "
f"(likely ASSERT failure at that line)"
)
return exec_entry
def assert_export_result(meta, log, *, require_signers=True):
"""Assert ExportResult is present and well-formed in metadata.
Returns the ExportResult dict.
"""
export_result = meta.get("ExportResult", {})
if not export_result:
raise AssertionError("ExportResult not found in metadata")
# Must have LedgerSequence and TransactionHash
if "LedgerSequence" not in export_result:
raise AssertionError("ExportResult missing LedgerSequence")
if "TransactionHash" not in export_result:
raise AssertionError("ExportResult missing TransactionHash")
# Must have the inner ExportedTxn object
inner = export_result.get("ExportedTxn", {})
if not inner:
raise AssertionError("ExportResult missing ExportedTxn (multisigned blob)")
log(f" ExportResult: seq={export_result['LedgerSequence']} "
f"hash={export_result['TransactionHash'][:16]}...")
# Inner tx should have Account, Destination, TransactionType
if "Account" not in inner:
raise AssertionError("ExportedTxn missing Account")
if "TransactionType" not in inner:
raise AssertionError("ExportedTxn missing TransactionType")
# Should have empty SigningPubKey (multisigned)
if inner.get("SigningPubKey", "NOT_EMPTY") != "":
raise AssertionError(
f"ExportedTxn SigningPubKey should be empty, "
f"got '{inner.get('SigningPubKey')}'"
)
if require_signers:
signers = inner.get("Signers", [])
if not signers:
raise AssertionError("ExportedTxn has no Signers (multisig not applied)")
log(f" Signers: {len(signers)} validator(s)")
return export_result
def assert_shadow_ticket(ctx, account_address, log, *, expect_exists=True):
"""Assert shadow ticket exists (or doesn't) for the account."""
obj_result = ctx.rpc.request(
0, "account_objects", {"account": account_address}
)
all_objects = (obj_result or {}).get("account_objects", [])
shadow_tickets = [
obj for obj in all_objects
if obj.get("LedgerEntryType") == "ShadowTicket"
]
log(f" Shadow tickets: {len(shadow_tickets)}")
if expect_exists and not shadow_tickets:
raise AssertionError("Expected shadow ticket but none found")
if not expect_exists and shadow_tickets:
raise AssertionError(
f"Expected no shadow tickets but found {len(shadow_tickets)}"
)
return shadow_tickets

View File

@@ -0,0 +1,111 @@
""":descr: Test Export without CE (unanimity mode). When all 5 nodes are up,
100% quorum is reachable and the export should succeed. When 1 node
suppresses sigs (4/5), unanimity fails and the export should expire.
Parameterized via `expect_success` kwarg from suite.yml.
Flow:
1. Fund alice and bob
2. alice submits ttEXPORT
3. Verify result matches expectation (tesSUCCESS or tecEXPORT_EXPIRED)
4. Verify ExportResult + shadow ticket on success, absence on failure
5. Verify subsequent payment works regardless
"""
from __future__ import annotations
from export_helpers import (
require_export,
assert_export_result,
assert_shadow_ticket,
)
async def scenario(ctx, log, expect_success=True):
await require_export(ctx, log)
# --- Setup ---
await ctx.fund_accounts({"alice": 10000, "bob": 1000})
log("Accounts funded")
alice = ctx.account("alice")
bob = ctx.account("bob")
current_seq = ctx.validated_ledger_index(0)
log(f"Current ledger: {current_seq}")
log(f"Expecting export {'success' if expect_success else 'failure (unanimity)'}")
# --- Submit ttEXPORT ---
result = await ctx.submit_and_wait(
{
"TransactionType": "Export",
"LastLedgerSequence": current_seq + 10,
"Fee": "1000000",
"ExportedTxn": {
"TransactionType": "Payment",
"Account": alice.address,
"Destination": bob.address,
"Amount": "1000000",
"Fee": "10",
"Sequence": 0,
"TicketSequence": 1,
"FirstLedgerSequence": current_seq + 1,
"LastLedgerSequence": current_seq + 8,
"Flags": 2147483648,
"SigningPubKey": "",
},
},
alice.wallet,
timeout=60,
)
final_seq = ctx.validated_ledger_index(0)
engine_result = result.get("engine_result", "")
meta = result.get("meta", {})
log(f"Export at ledger {final_seq}, result: {engine_result}")
if expect_success:
if engine_result != "tesSUCCESS":
raise AssertionError(
f"Expected tesSUCCESS, got {engine_result}"
)
# Assert ExportResult is well-formed with signers
assert_export_result(meta, log, require_signers=True)
# Assert shadow ticket was created
assert_shadow_ticket(ctx, alice.address, log, expect_exists=True)
log("Export succeeded as expected (all validators signed)")
else:
if engine_result == "tesSUCCESS":
raise AssertionError(
"Export should NOT have succeeded with sub-unanimity"
)
log(f"Export failed as expected ({engine_result})")
# No shadow ticket should exist
assert_shadow_ticket(ctx, alice.address, log, expect_exists=False)
# --- Verify subsequent payment works ---
log("Submitting payment from alice to bob...")
pay_result = await ctx.submit_and_wait(
{
"TransactionType": "Payment",
"Destination": bob.address,
"Amount": "1000000",
"Fee": "12",
},
alice.wallet,
timeout=30,
)
pay_engine = pay_result.get("engine_result", "")
log(f"Payment result: {pay_engine}")
if pay_engine != "tesSUCCESS":
raise AssertionError(f"Payment failed: {pay_engine}")
log("Payment succeeded -- account not blocked")
log("PASS")

View File

@@ -0,0 +1,94 @@
""":descr: Submit ttEXPORT directly (no hook), verify it succeeds with
ExportResult in metadata. Then submit a payment from the same account
to verify sequence handling doesn't block subsequent transactions.
Flow:
1. Fund alice and bob
2. alice submits ttEXPORT with inner payment -> tesSUCCESS (provisional)
3. Validators attach sigs via proposals -> quorum -> ExportResult in metadata
4. alice submits a Payment to bob -> should succeed (sequence not blocked)
"""
from __future__ import annotations
from export_helpers import require_export, assert_export_result, assert_shadow_ticket
async def scenario(ctx, log):
await require_export(ctx, log)
# --- Setup ---
await ctx.fund_accounts({"alice": 10000, "bob": 1000})
log("Accounts funded")
alice = ctx.account("alice")
bob = ctx.account("bob")
current_seq = ctx.validated_ledger_index(0)
log(f"Current ledger: {current_seq}")
# --- 1. Submit ttEXPORT ---
result = await ctx.submit_and_wait(
{
"TransactionType": "Export",
"LastLedgerSequence": current_seq + 15,
"Fee": "1000000",
"ExportedTxn": {
"TransactionType": "Payment",
"Account": alice.address,
"Destination": bob.address,
"Amount": "1000000",
"Fee": "10",
"Sequence": 0,
"TicketSequence": 1,
"FirstLedgerSequence": current_seq + 1,
"LastLedgerSequence": current_seq + 10,
"Flags": 2147483648,
"SigningPubKey": "",
},
},
alice.wallet,
timeout=60,
)
export_seq = ctx.validated_ledger_index(0)
engine_result = result.get("engine_result", "")
log(f"Export completed at ledger {export_seq}, result: {engine_result}")
if engine_result != "tesSUCCESS":
raise AssertionError(
f"Expected tesSUCCESS for export, got {engine_result}"
)
# Assert ExportResult is well-formed with signers
meta = result.get("meta", {})
assert_export_result(meta, log, require_signers=True)
# Assert shadow ticket was created
assert_shadow_ticket(ctx, alice.address, log, expect_exists=True)
# --- 2. Submit Payment from same account ---
log("Submitting payment from alice to bob...")
pay_result = await ctx.submit_and_wait(
{
"TransactionType": "Payment",
"Destination": bob.address,
"Amount": "1000000",
"Fee": "12",
},
alice.wallet,
timeout=30,
)
pay_engine = pay_result.get("engine_result", "")
log(f"Payment result: {pay_engine}")
if pay_engine != "tesSUCCESS":
raise AssertionError(f"Payment failed: {pay_engine}")
log(
f"Both transactions succeeded: "
f"Export at ledger {export_seq}, Payment at ledger {ctx.validated_ledger_index(0)}"
)
log("Sequence handling OK - export didn't block subsequent txns")
log("PASS")

View File

@@ -0,0 +1,211 @@
""":descr: install xport hook, trigger export, verify emitted ttEXPORT lifecycle
1. Fund alice (hook holder), bob (trigger), carol (export destination)
2. Install xport hook on alice
3. bob pays alice with DST=carol → hook calls xport() → emits ttEXPORT
4. Emitted ttEXPORT enters open ledger, validators attach sigs via proposals
5. Verify Export transaction appears in a subsequent ledger
"""
from __future__ import annotations
from export_helpers import (
require_export,
find_export_txns,
dst_param,
assert_hook_accepted,
assert_export_result,
assert_shadow_ticket,
)
# C source for the xport hook — verbatim from src/test/app/Export_test_hooks.h
# On Payment to the hook account, exports a 1 XAH payment to the DST param.
XPORT_HOOK_C = r"""
#include <stdint.h>
extern int32_t _g(uint32_t id, uint32_t maxiter);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t xport(uint32_t write_ptr, uint32_t write_len, uint32_t read_ptr, uint32_t read_len);
extern int64_t xport_reserve(uint32_t count);
extern int64_t hook_account(uint32_t write_ptr, uint32_t write_len);
extern int64_t otxn_param(uint32_t write_ptr, uint32_t write_len, uint32_t name_ptr, uint32_t name_len);
extern int64_t otxn_type(void);
extern int64_t ledger_seq(void);
#define SBUF(x) (uint32_t)(x), sizeof(x)
#define ASSERT(x) if (!(x)) rollback((uint32_t)#x, sizeof(#x), __LINE__)
#define ttPAYMENT 0
#define tfCANONICAL 0x80000000UL
#define amAMOUNT 1
#define amFEE 8
#define atACCOUNT 1
#define atDESTINATION 3
#define ENCODE_TT(buf_out, tt) \
buf_out[0] = 0x12U; buf_out[1] = (tt >> 8) & 0xFFU; buf_out[2] = tt & 0xFFU; buf_out += 3;
#define ENCODE_FLAGS(buf_out, flags) \
buf_out[0] = 0x22U; buf_out[1] = (flags >> 24) & 0xFFU; buf_out[2] = (flags >> 16) & 0xFFU; \
buf_out[3] = (flags >> 8) & 0xFFU; buf_out[4] = flags & 0xFFU; buf_out += 5;
#define ENCODE_SEQUENCE(buf_out, seq) \
buf_out[0] = 0x24U; buf_out[1] = (seq >> 24) & 0xFFU; buf_out[2] = (seq >> 16) & 0xFFU; \
buf_out[3] = (seq >> 8) & 0xFFU; buf_out[4] = seq & 0xFFU; buf_out += 5;
#define ENCODE_FLS(buf_out, fls) \
buf_out[0] = 0x20U; buf_out[1] = 0x1AU; buf_out[2] = (fls >> 24) & 0xFFU; \
buf_out[3] = (fls >> 16) & 0xFFU; buf_out[4] = (fls >> 8) & 0xFFU; \
buf_out[5] = fls & 0xFFU; buf_out += 6;
#define ENCODE_LLS(buf_out, lls) \
buf_out[0] = 0x20U; buf_out[1] = 0x1BU; buf_out[2] = (lls >> 24) & 0xFFU; \
buf_out[3] = (lls >> 16) & 0xFFU; buf_out[4] = (lls >> 8) & 0xFFU; \
buf_out[5] = lls & 0xFFU; buf_out += 6;
#define ENCODE_DROPS(buf_out, drops, amt_type) \
buf_out[0] = 0x60U + amt_type; buf_out[1] = 0x40U + ((drops >> 56) & 0x3FU); \
buf_out[2] = (drops >> 48) & 0xFFU; buf_out[3] = (drops >> 40) & 0xFFU; \
buf_out[4] = (drops >> 32) & 0xFFU; buf_out[5] = (drops >> 24) & 0xFFU; \
buf_out[6] = (drops >> 16) & 0xFFU; buf_out[7] = (drops >> 8) & 0xFFU; \
buf_out[8] = drops & 0xFFU; buf_out += 9;
#define ENCODE_SIGNING_PUBKEY_EMPTY(buf_out) \
buf_out[0] = 0x73U; buf_out[1] = 0x00U; buf_out += 2;
#define ENCODE_ACCOUNT(buf_out, acc, acc_type) \
buf_out[0] = 0x80U + acc_type; buf_out[1] = 0x14U; \
for (int i = 0; i < 20; ++i) buf_out[2+i] = acc[i]; buf_out += 22;
#define PREPARE_PAYMENT_SIMPLE_SIZE 270U
int64_t hook(uint32_t reserved) {
_g(1, 1);
if (otxn_type() != ttPAYMENT)
return accept(0, 0, 0);
ASSERT(xport_reserve(1) == 1);
uint8_t dst[20];
int64_t dst_len = otxn_param(SBUF(dst), "DST", 3);
ASSERT(dst_len == 20);
uint8_t acc[20];
ASSERT(hook_account(SBUF(acc)) == 20);
uint32_t cls = (uint32_t)ledger_seq();
uint8_t tx[PREPARE_PAYMENT_SIMPLE_SIZE];
uint8_t* buf = tx;
ENCODE_TT(buf, ttPAYMENT);
ENCODE_FLAGS(buf, tfCANONICAL);
ENCODE_SEQUENCE(buf, 0);
ENCODE_FLS(buf, cls + 1);
ENCODE_LLS(buf, cls + 5);
// sfTicketSequence = UINT32 field 41 = 0x20 0x29
buf[0] = 0x20U; buf[1] = 0x29U;
buf[2] = 0; buf[3] = 0; buf[4] = 0; buf[5] = 1;
buf += 6;
uint64_t drops = 1000000;
ENCODE_DROPS(buf, drops, amAMOUNT);
ENCODE_DROPS(buf, 10, amFEE);
ENCODE_SIGNING_PUBKEY_EMPTY(buf);
ENCODE_ACCOUNT(buf, acc, atACCOUNT);
ENCODE_ACCOUNT(buf, dst, atDESTINATION);
uint8_t hash[32];
int64_t xport_result = xport(SBUF(hash), (uint32_t)tx, buf - tx);
ASSERT(xport_result == 32);
return accept(0, 0, 0);
}
"""
async def scenario(ctx, log):
# Wait for network to start and amendments to activate
await require_export(ctx, log)
# --- Setup ---
await ctx.fund_accounts({"alice": 10000, "bob": 10000, "carol": 1000})
log("Accounts funded")
alice = ctx.account("alice")
carol = ctx.account("carol")
# Compile and install xport hook on alice
wasm = ctx.compile_hook(XPORT_HOOK_C, label="xport")
await ctx.submit_and_wait(
{
"TransactionType": "SetHook",
"Hooks": [
{
"Hook": {
"CreateCode": wasm.hex().upper(),
"HookOn": "0" * 64,
"HookNamespace": "0" * 64,
"HookApiVersion": 0,
"Flags": 1, # hsfOVERRIDE
}
}
],
"Fee": "100000000",
},
alice.wallet,
)
log(
f"Hook installed on alice ({alice.address[:12]}...) "
f"ledger {ctx.validated_ledger_index(0)}"
)
# --- Trigger ---
# bob pays alice → hook calls xport() → emits ttEXPORT
trigger_result = await ctx.submit_and_wait(
{
"TransactionType": "Payment",
"Destination": alice.address,
"Amount": "100000000",
"Fee": "1000000",
"HookParameters": [dst_param(carol.address)],
},
ctx.account("bob").wallet,
)
trigger_seq = ctx.validated_ledger_index(0)
log(f"Export triggered at ledger {trigger_seq}")
# Assert hook fired with ACCEPT and emitted 1 tx
trigger_meta = trigger_result.get("meta", {})
assert_hook_accepted(trigger_meta, log, expected_emits=1)
# --- Verify: check each ledger close for the Export transaction ---
max_ledgers = 10
for i in range(max_ledgers):
await ctx.wait_for_ledgers(1, node_id=0, timeout=30)
seq = ctx.validated_ledger_index(0)
exports = find_export_txns(ctx, seq)
if exports:
export_tx = exports[0]
meta = export_tx.get("meta", export_tx.get("metaData", {}))
result = meta.get("TransactionResult", "")
log(f"Ledger {seq}: Export txn found, result={result}")
if result != "tesSUCCESS":
raise AssertionError(f"Export did not succeed: {result}")
# Assert ExportResult is well-formed with signers and inner tx
assert_export_result(meta, log, require_signers=True)
# Assert shadow ticket was created
assert_shadow_ticket(ctx, alice.address, log, expect_exists=True)
log("PASS")
return
log(f"Ledger {seq}: no Export txn yet")
raise AssertionError(
f"No Export transaction found after {max_ledgers} ledger closes"
)

View File

@@ -0,0 +1,60 @@
"""Shared helpers for ConsensusEntropy scenario tests."""
from __future__ import annotations
ZERO_DIGEST = "0" * 64
async def require_entropy(ctx, log):
"""Wait for first ledger and assert ConsensusEntropy is enabled."""
await ctx.wait_for_ledger_close(timeout=120)
feature = ctx.feature_check("ConsensusEntropy", node_id=0)
if not feature or not feature.get("enabled", False):
raise AssertionError(f"ConsensusEntropy not enabled: {feature}")
log("ConsensusEntropy enabled")
def get_entropy_tx(ctx, seq):
"""Fetch ledger and return (ce_tx, user_txns) or raise."""
result = ctx.ledger(seq, transactions=True)
if not result:
raise AssertionError(f"Ledger {seq}: fetch failed")
txns = result.get("ledger", {}).get("transactions", [])
ce = [tx for tx in txns if tx.get("TransactionType") == "ConsensusEntropy"]
user = [tx for tx in txns if tx.get("TransactionType") != "ConsensusEntropy"]
if len(ce) != 1:
raise AssertionError(
f"Ledger {seq}: expected 1 ConsensusEntropy txn, got {len(ce)}"
)
return ce[0], user
def entropy_fields(ce_tx):
"""Return (digest, entropy_count, is_zero) from a ConsensusEntropy tx."""
digest = ce_tx.get("Digest", "")
entropy_count = ce_tx.get("EntropyCount", -1)
is_zero = digest == ZERO_DIGEST and entropy_count == 0
return digest, entropy_count, is_zero
def assert_valid_entropy(ce_tx, seq, seen_digests=None):
"""Assert non-zero quorum-met entropy. Optionally check uniqueness."""
digest, entropy_count, is_zero = entropy_fields(ce_tx)
if is_zero or not digest:
raise AssertionError(f"Ledger {seq}: zero/empty Digest")
if entropy_count < 4:
raise AssertionError(
f"Ledger {seq}: EntropyCount={entropy_count} < 4 (sub-quorum)"
)
if seen_digests is not None:
if digest in seen_digests:
raise AssertionError(f"Ledger {seq}: duplicate Digest {digest[:16]}...")
seen_digests.add(digest)
return digest, entropy_count

View File

@@ -0,0 +1,42 @@
defaults:
network:
node_count: 5
launcher: tmux
slave_delay: 0.2
features:
- ConsensusEntropy
track_features:
- ConsensusEntropy
log_levels:
TxQ: info
Protocol: debug
Peer: debug
LedgerConsensus: debug
NetworkOPs: info
env:
XAHAU_RESOURCE_PER_PORT: "1"
XAHAU_RNG_POLL_MS: "333"
tests:
- name: steady_state_entropy
script: .testnet/scenarios/entropy/steady_state_entropy.py
- name: steady_state_entropy_fast_start
script: .testnet/scenarios/entropy/steady_state_entropy.py
network:
env:
XAHAUD_BOOTSTRAP_FAST_START: "1"
- name: entropy_with_transactions
script: .testnet/scenarios/entropy/entropy_with_transactions.py
- name: quorum_recovery_smoke
script: .testnet/scenarios/entropy/quorum_recovery_smoke.py
- name: quorum_degradation_smoke
script: .testnet/scenarios/entropy/quorum_degradation_smoke.py
network:
log_levels:
LedgerConsensus: trace
# Export scenarios: see export-suite.yml

View File

@@ -83,17 +83,9 @@ The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conve
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.5.0
As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.5.0
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled.
## XRP Ledger server version 2.4.0
[Version 2.4.0](https://github.com/XRPLF/rippled/releases/tag/2.4.0) was released on March 4, 2025.
As of 2025-01-28, version 2.4.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.4.0

View File

@@ -182,6 +182,16 @@ which allows you to statically link it with GCC, if you want.
conan export external/snappy --version 1.1.10 --user xahaud --channel stable
```
Export our [Conan recipe for RocksDB](./external/rocksdb).
It does not override paths to dependencies when building with Visual Studio.
```
# Conan 1.x
conan export external/rocksdb rocksdb/6.29.5@
# Conan 2.x
conan export --version 6.29.5 external/rocksdb
```
Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
@@ -195,6 +205,17 @@ It patches their CMake to correctly import its dependencies.
conan export external/wasmedge --version 0.11.2 --user xahaud --channel stable
```
Export our [Conan recipe for NuDB](./external/nudb).
It fixes some source files to add missing `#include`s.
```
# Conan 1.x
conan export external/nudb nudb/2.0.8@
# Conan 2.x
conan export --version 2.0.8 external/nudb
```
### Build and Test
1. Create a build directory and move into it.
@@ -280,7 +301,7 @@ It patches their CMake to correctly import its dependencies.
Single-config generators:
```
cmake --build . -j $(nproc)
cmake --build .
```
Multi-config generators:

View File

@@ -59,6 +59,10 @@ the rippled source. The only caveat is that it runs much slower
under Windows than in Linux. It hasn't yet been tested under MacOS.
It generates many files of [results](results):
For local iteration speed there is also
[levelization.py](levelization.py), which generates the same artifact set much
faster. The shell script remains canonical for CI/auditing.
* `rawincludes.txt`: The raw dump of the `#includes`
* `paths.txt`: A second dump grouping the source module
to the destination module, deduped, and with frequency counts.
@@ -109,6 +113,9 @@ prevent false alarms and merging issues, and because it's easy to
get those details locally.
1. Run `levelization.sh`
* Faster local loop: `python3 Builds/levelization/levelization.py`
* Optional parity check against canonical shell output:
`python3 Builds/levelization/levelization.py --results-dir /tmp/levelization-py-results --compare-to Builds/levelization/results`
2. Grep the modules in `paths.txt`.
* For example, if a cycle is found `A ~= B`, simply `grep -w
A Builds/levelization/results/paths.txt | grep -w B`

View File

@@ -0,0 +1,409 @@
#!/usr/bin/env python3
"""
Development-oriented levelization generator.
This script produces the same result artifact set as levelization.sh, but is
much faster by doing parsing/counting in-process instead of spawning many
external tools in tight loops.
The shell script remains the canonical CI path. Use this script for local
iteration speed, then run levelization.sh before committing if you need strict
parity with existing workflow.
"""
from __future__ import annotations
import argparse
import concurrent.futures
import os
import posixpath
import re
import shutil
import time
from collections import Counter, defaultdict
from pathlib import Path
INCLUDE_PATTERN = re.compile(r"^\s*#include.*/.*\.h")
INCLUDE_TARGET_PATTERN = re.compile(r'.*["<]([^">]+)[">].*')
PATHS_LINE_PATTERN = re.compile(r"^\s*(\d+)\s+(\S+)\s+(\S+)\s*$")
def dictionary_sort_key(value: str) -> str:
"""Approximate `sort -d` behavior used by the shell script."""
return "".join(ch for ch in value if ch.isalnum() or ch.isspace())
def normalize_level(value: str) -> str:
# Match shell behavior: if level includes a file component (contains "."),
# replace with dirname + "/toplevel".
if "." in value:
parent = posixpath.dirname(value) or "."
value = f"{parent}/toplevel"
return value.replace("/", ".")
def source_level(rel_path: str) -> str:
parts = rel_path.split("/")
return normalize_level("/".join(parts[1:3]))
def include_level(include_line: str) -> str | None:
match = INCLUDE_TARGET_PATTERN.match(include_line)
if not match:
return None
include_path = match.group(1)
parts = include_path.split("/")
return normalize_level("/".join(parts[:2]))
def scan_file(path: Path, repo_root: Path) -> tuple[list[str], list[tuple[str, str]]]:
rel = path.relative_to(repo_root).as_posix()
src_level = source_level(rel)
raw_lines: list[str] = []
paths: list[tuple[str, str]] = []
with path.open("r", encoding="utf-8", errors="ignore") as handle:
for line in handle:
if "boost" in line:
continue
if not INCLUDE_PATTERN.match(line):
continue
line = line.rstrip("\n")
raw_lines.append(f"{rel}:{line}")
dst_level = include_level(line)
if dst_level is None:
continue
if src_level != dst_level:
paths.append((src_level, dst_level))
return raw_lines, paths
def iter_source_files(repo_root: Path) -> list[Path]:
files: list[Path] = []
for top in ("include", "src"):
root = repo_root / top
files.extend(path for path in root.rglob("*") if path.is_file())
files.sort(key=lambda p: p.relative_to(repo_root).as_posix())
return files
def write_relation_db(
results_dir: Path,
edge_counts: list[tuple[tuple[str, str], int]],
) -> tuple[dict[str, list[tuple[str, int]]], dict[str, list[tuple[str, int]]]]:
includes_dir = results_dir / "includes"
includedby_dir = results_dir / "includedby"
includes_dir.mkdir(parents=True, exist_ok=True)
includedby_dir.mkdir(parents=True, exist_ok=True)
includes: dict[str, list[tuple[str, int]]] = defaultdict(list)
includedby: dict[str, list[tuple[str, int]]] = defaultdict(list)
with (results_dir / "paths.txt").open("w", encoding="utf-8") as out:
for (src, dst), count in edge_counts:
out.write(f"{count:7d} {src} {dst}\n")
includes[src].append((dst, count))
includedby[dst].append((src, count))
for src, entries in includes.items():
with (includes_dir / src).open("w", encoding="utf-8") as out:
for dst, count in entries:
out.write(f"{dst} {count}\n")
for dst, entries in includedby.items():
with (includedby_dir / dst).open("w", encoding="utf-8") as out:
for src, count in entries:
out.write(f"{src} {count}\n")
return includes, includedby
def build_loops_and_ordering(
includes: dict[str, list[tuple[str, int]]],
) -> tuple[list[str], list[str]]:
include_map = {
src: {dst: count for dst, count in entries}
for src, entries in includes.items()
}
ordering_lines: list[str] = []
loops_lines: list[str] = []
seen_pairs: set[tuple[str, str]] = set()
for source in sorted(includes.keys()):
for include, includefreq in includes[source]:
if include not in include_map:
continue
sourcefreq = include_map[include].get(source)
if sourcefreq is None:
ordering_lines.append(f"{source} > {include}\n")
continue
if (include, source) in seen_pairs:
continue
seen_pairs.add((source, include))
loops_lines.append(f"Loop: {source} {include}\n")
if includefreq - sourcefreq > 3:
loops_lines.append(f" {source} > {include}\n\n")
elif sourcefreq - includefreq > 3:
loops_lines.append(f" {include} > {source}\n\n")
elif sourcefreq == includefreq:
loops_lines.append(f" {include} == {source}\n\n")
else:
loops_lines.append(f" {include} ~= {source}\n\n")
return ordering_lines, loops_lines
def parse_paths(path: Path) -> dict[tuple[str, str], int]:
out: dict[tuple[str, str], int] = {}
for line in path.read_text(encoding="utf-8").splitlines():
if not line.strip():
continue
match = PATHS_LINE_PATTERN.match(line)
if not match:
raise ValueError(f"Cannot parse paths line: {line!r}")
count = int(match.group(1))
src = match.group(2)
dst = match.group(3)
out[(src, dst)] = count
return out
def parse_relation_dir(path: Path) -> dict[str, Counter[str]]:
out: dict[str, Counter[str]] = {}
if not path.exists():
return out
for file in sorted(p for p in path.iterdir() if p.is_file()):
lines = [line for line in file.read_text(encoding="utf-8").splitlines() if line]
out[file.name] = Counter(lines)
return out
def compare_results(generated: Path, canonical: Path) -> list[str]:
mismatches: list[str] = []
# rawincludes: compare as sets/multisets of lines to ignore traversal order.
gen_raw = Counter(generated.joinpath("rawincludes.txt").read_text(encoding="utf-8").splitlines())
can_raw = Counter(canonical.joinpath("rawincludes.txt").read_text(encoding="utf-8").splitlines())
if gen_raw != can_raw:
mismatches.append("rawincludes.txt differs (line multiset mismatch)")
# paths: compare parsed edge->count map, ignoring ordering/whitespace.
gen_paths = parse_paths(generated / "paths.txt")
can_paths = parse_paths(canonical / "paths.txt")
if gen_paths != can_paths:
mismatches.append("paths.txt differs (edge count mismatch)")
# includes / includedby: compare per-file line multisets.
for rel in ("includes", "includedby"):
gen_rel = parse_relation_dir(generated / rel)
can_rel = parse_relation_dir(canonical / rel)
if gen_rel != can_rel:
mismatches.append(f"{rel}/ differs (file set or content mismatch)")
# ordering and loops are canonical artifacts; require exact bytes.
for name in ("ordering.txt", "loops.txt"):
gen_text = generated.joinpath(name).read_text(encoding="utf-8")
can_text = canonical.joinpath(name).read_text(encoding="utf-8")
if gen_text != can_text:
mismatches.append(f"{name} differs")
return mismatches
def generate(results_dir: Path, repo_root: Path, workers: int) -> None:
if results_dir.exists():
shutil.rmtree(results_dir)
results_dir.mkdir(parents=True)
files = iter_source_files(repo_root)
raw_by_file: dict[str, list[str]] = {}
paths_by_file: dict[str, list[tuple[str, str]]] = {}
start = time.perf_counter()
if workers <= 1:
for file in files:
rel = file.relative_to(repo_root).as_posix()
raw, paths = scan_file(file, repo_root)
raw_by_file[rel] = raw
paths_by_file[rel] = paths
else:
with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
futures = {
file.relative_to(repo_root).as_posix(): pool.submit(
scan_file, file, repo_root
)
for file in files
}
for rel in sorted(futures.keys()):
raw, paths = futures[rel].result()
raw_by_file[rel] = raw
paths_by_file[rel] = paths
raw_lines: list[str] = []
raw_lines.extend(
line
for rel in sorted(raw_by_file.keys())
for line in raw_by_file[rel]
)
with (results_dir / "rawincludes.txt").open("w", encoding="utf-8") as out:
out.write("\n".join(raw_lines))
if raw_lines:
out.write("\n")
path_pairs: list[tuple[str, str]] = []
path_pairs.extend(
pair
for rel in sorted(paths_by_file.keys())
for pair in paths_by_file[rel]
)
counts = Counter(path_pairs)
edge_counts = sorted(
counts.items(),
key=lambda item: (
dictionary_sort_key(f"{item[0][0]} {item[0][1]}"),
item[0][0],
item[0][1],
),
)
includes, _ = write_relation_db(results_dir, edge_counts)
ordering, loops = build_loops_and_ordering(includes)
with (results_dir / "ordering.txt").open("w", encoding="utf-8") as out:
out.writelines(ordering)
with (results_dir / "loops.txt").open("w", encoding="utf-8") as out:
out.writelines(loops)
elapsed = time.perf_counter() - start
print(
f"levelization.py: scanned {len(files)} files, "
f"{len(raw_lines)} includes, {len(edge_counts)} unique paths in "
f"{elapsed:.2f}s"
)
print((results_dir / "ordering.txt").read_text(encoding="utf-8"), end="")
print((results_dir / "loops.txt").read_text(encoding="utf-8"), end="")
def explain_edge(src_level: str, dst_level: str, repo_root: Path, workers: int) -> None:
"""Show every source file and #include line that creates a dependency edge."""
files = iter_source_files(repo_root)
matches: list[tuple[str, str]] = [] # (source_file, include_line)
def check_file(path: Path) -> list[tuple[str, str]]:
rel = path.relative_to(repo_root).as_posix()
sl = source_level(rel)
if sl != src_level:
return []
hits: list[tuple[str, str]] = []
with path.open("r", encoding="utf-8", errors="ignore") as handle:
for line in handle:
if "boost" in line:
continue
if not INCLUDE_PATTERN.match(line):
continue
line = line.rstrip("\n")
dl = include_level(line)
if dl == dst_level:
hits.append((rel, line.strip()))
return hits
if workers <= 1:
for file in files:
matches.extend(check_file(file))
else:
with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
for result in pool.map(check_file, files):
matches.extend(result)
if not matches:
print(f"No includes found from {src_level} -> {dst_level}")
return
print(f"{src_level} > {dst_level} ({len(matches)} include(s)):\n")
for source_file, include_line in matches:
print(f" {source_file}")
print(f" {include_line}")
print()
def main() -> int:
script_dir = Path(__file__).resolve().parent
repo_root = script_dir.parents[1]
parser = argparse.ArgumentParser()
parser.add_argument(
"--repo-root",
type=Path,
default=repo_root,
help="Repository root (defaults based on script location).",
)
parser.add_argument(
"--results-dir",
type=Path,
default=script_dir / "results",
help="Output results directory.",
)
parser.add_argument(
"--workers",
type=int,
default=min(32, (os.cpu_count() or 1)),
help="Thread count for source scanning (default: CPU count, max 32).",
)
parser.add_argument(
"--compare-to",
type=Path,
default=None,
help=(
"Compare generated results against this directory and exit non-zero "
"on mismatch (semantic comparison for rawincludes/paths/includes)."
),
)
parser.add_argument(
"--explain",
nargs=2,
metavar=("SRC", "DST"),
default=None,
help=(
"Show which source files cause a dependency edge. "
"Example: --explain test.csf xrpld.app"
),
)
args = parser.parse_args()
if args.explain is not None:
explain_edge(args.explain[0], args.explain[1], args.repo_root.resolve(), max(1, args.workers))
return 0
generated_dir = args.results_dir.resolve()
generate(
results_dir=generated_dir,
repo_root=args.repo_root.resolve(),
workers=max(1, args.workers),
)
if args.compare_to is not None:
canonical_dir = args.compare_to.resolve()
mismatches = compare_results(generated_dir, canonical_dir)
if mismatches:
print("levelization.py: mismatch against canonical results:")
for mismatch in mismatches:
print(f" - {mismatch}")
return 1
print("levelization.py: matches canonical results")
return 0
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -7,7 +7,7 @@
pushd $( dirname $0 )
if [ -v PS1 ]
if [[ -n "${PS1-}" ]]
then
# if the shell is interactive, clean up any flotsam before analyzing
git clean -ix

View File

@@ -26,10 +26,10 @@ Loop: xrpld.app xrpld.nodestore
xrpld.app > xrpld.nodestore
Loop: xrpld.app xrpld.overlay
xrpld.overlay > xrpld.app
xrpld.overlay ~= xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.peerfinder ~= xrpld.app
xrpld.app > xrpld.peerfinder
Loop: xrpld.app xrpld.rpc
xrpld.rpc > xrpld.app
@@ -44,10 +44,10 @@ Loop: xrpld.core xrpld.perflog
xrpld.perflog == xrpld.core
Loop: xrpld.net xrpld.rpc
xrpld.rpc ~= xrpld.net
xrpld.rpc > xrpld.net
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay
xrpld.rpc > xrpld.overlay
Loop: xrpld.perflog xrpld.rpc
xrpld.rpc ~= xrpld.perflog

View File

@@ -7,7 +7,6 @@ libxrpl.protocol > xrpl.hook
libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json
@@ -62,6 +61,7 @@ test.json > test.jtx
test.json > xrpl.json
test.jtx > xrpl.basics
test.jtx > xrpld.app
test.jtx > xrpld.consensus
test.jtx > xrpld.core
test.jtx > xrpld.ledger
test.jtx > xrpld.net
@@ -140,7 +140,6 @@ test.shamap > xrpl.protocol
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
tests.libxrpl > xrpl.basics
xrpl.hook > xrpl.basics
xrpl.hook > xrpl.protocol
xrpl.json > xrpl.basics
@@ -169,6 +168,7 @@ xrpld.core > xrpl.basics
xrpld.core > xrpl.json
xrpld.core > xrpl.protocol
xrpld.ledger > xrpl.basics
xrpld.ledger > xrpld.core
xrpld.ledger > xrpl.json
xrpld.ledger > xrpl.protocol
xrpld.net > xrpl.basics
@@ -193,6 +193,7 @@ xrpld.peerfinder > xrpld.core
xrpld.peerfinder > xrpl.protocol
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.json
xrpld.perflog > xrpl.protocol
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.ledger

View File

@@ -21,18 +21,6 @@ endif()
project (xrpl)
set(Boost_NO_BOOST_CMAKE ON)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif(MSVC)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif()
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
@@ -109,11 +97,6 @@ set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
set(SECP256K1_BUILD_BENCHMARK FALSE)
set(SECP256K1_BUILD_TESTS FALSE)
set(SECP256K1_BUILD_EXHAUSTIVE_TESTS FALSE)
set(SECP256K1_BUILD_CTIME_TESTS FALSE)
set(SECP256K1_BUILD_EXAMPLES FALSE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
@@ -170,8 +153,3 @@ include(RippledCore)
include(RippledInstall)
include(RippledDocs)
include(RippledValidatorKeys)
if(tests)
include(CTest)
add_subdirectory(src/tests/libxrpl)
endif()

4716
RELEASENOTES.md Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -68,6 +68,13 @@ target_link_libraries(xrpl.imports.main
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
# Enhanced logging can use date::make_zoned/current_zone when location-rich log
# formatting is compiled in. Link date-tz when available so Debug builds and
# explicit -DBEAST_ENHANCED_LOGGING=ON builds resolve those symbols.
if(TARGET date::date-tz)
target_link_libraries(xrpl.imports.main INTERFACE date::date-tz)
endif()
include(add_module)
include(target_link_modules)
@@ -152,9 +159,6 @@ if(xrpld)
add_executable(rippled)
if(tests)
target_compile_definitions(rippled PUBLIC ENABLE_TESTS)
target_compile_definitions(rippled PRIVATE
UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE}
)
endif()
target_include_directories(rippled
PRIVATE

View File

@@ -22,6 +22,9 @@ target_compile_definitions (opts
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>
# Enhanced logging is enabled for Debug builds, or explicitly via
# -DBEAST_ENHANCED_LOGGING=ON for other build types.
$<$<OR:$<CONFIG:Debug>,$<BOOL:${BEAST_ENHANCED_LOGGING}>>:BEAST_ENHANCED_LOGGING=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
target_compile_options (opts
INTERFACE

View File

@@ -2,7 +2,22 @@
convenience variables and sanity checks
#]===================================================================]
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
include(ProcessorCount)
if (NOT ep_procs)
ProcessorCount(ep_procs)
if (ep_procs GREATER 1)
# never use more than half of cores for EP builds
math (EXPR ep_procs "${ep_procs} / 2")
message (STATUS "Using ${ep_procs} cores for ExternalProject builds.")
endif ()
endif ()
get_property (is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if (is_multiconfig STREQUAL "NOTFOUND")
if (${CMAKE_GENERATOR} STREQUAL "Xcode" OR ${CMAKE_GENERATOR} MATCHES "^Visual Studio")
set (is_multiconfig TRUE)
endif ()
endif ()
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
if (NOT is_multiconfig)

View File

@@ -11,14 +11,8 @@ option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
if(tests)
# This setting allows making a separate workflow to test fees other than default 10
if(NOT UNIT_TEST_REFERENCE_FEE)
set(UNIT_TEST_REFERENCE_FEE "10" CACHE STRING "")
endif()
endif()
option(unity "Creates a build using UNITY support in cmake." OFF)
option(unity "Creates a build using UNITY support in cmake. This is the default" ON)
if(unity)
if(NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")

View File

@@ -2,6 +2,7 @@ find_package(Boost 1.86 REQUIRED
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
@@ -24,7 +25,7 @@ endif()
target_link_libraries(ripple_boost
INTERFACE
Boost::headers
Boost::boost
Boost::chrono
Boost::container
Boost::coroutine

View File

@@ -1,41 +0,0 @@
include(isolate_headers)
function(xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name})
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
add_executable(${target} EXCLUDE_FROM_ALL ${ARGN} ${sources})
isolate_headers(
${target}
"${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/tests/${name}"
PRIVATE
)
# Make sure the test isn't optimized away in unity builds
set_target_properties(${target} PROPERTIES
UNITY_BUILD_MODE GROUP
UNITY_BUILD_BATCH_SIZE 0) # Adjust as needed
add_test(NAME ${target} COMMAND ${target})
set_tests_properties(
${target} PROPERTIES
FIXTURES_REQUIRED ${target}_fixture
)
add_test(
NAME ${target}.build
COMMAND
${CMAKE_COMMAND}
--build ${CMAKE_BINARY_DIR}
--config $<CONFIG>
--target ${target}
)
set_tests_properties(${target}.build PROPERTIES
FIXTURES_SETUP ${target}_fixture
)
endfunction()

View File

@@ -1,37 +0,0 @@
{% set os = detect_api.detect_os() %}
{% set arch = detect_api.detect_arch() %}
{% set compiler, version, compiler_exe = detect_api.detect_default_compiler() %}
{% set compiler_version = version %}
{% if os == "Linux" %}
{% set compiler_version = detect_api.default_compiler_version(compiler, version) %}
{% endif %}
[settings]
os={{ os }}
arch={{ arch }}
build_type=Debug
compiler={{compiler}}
compiler.version={{ compiler_version }}
compiler.cppstd=20
{% if os == "Windows" %}
compiler.runtime=static
{% else %}
compiler.libcxx={{detect_api.detect_libcxx(compiler, version, compiler_exe)}}
{% endif %}
[conf]
{% if compiler == "clang" and compiler_version >= 19 %}
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
{% endif %}
{% if compiler == "apple-clang" and compiler_version >= 17 %}
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
{% endif %}
{% if compiler == "clang" and compiler_version == 16 %}
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
{% endif %}
{% if compiler == "gcc" and compiler_version < 13 %}
tools.build:cxxflags=['-Wno-restrict']
{% endif %}
[tool_requires]
!cmake/*: cmake/[>=3 <4]

View File

@@ -1,4 +1,4 @@
from conan import ConanFile, __version__ as conan_version
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
@@ -26,18 +26,16 @@ class Xrpl(ConanFile):
}
requires = [
'date/3.0.3',
'grpc/1.50.1',
'libarchive/3.8.1',
'nudb/2.0.9',
'libarchive/3.7.6',
'nudb/2.0.8',
'openssl/3.6.0',
'soci/4.0.3@xahaud/stable',
'xxhash/0.8.2',
'zlib/1.3.1',
]
test_requires = [
'doctest/2.4.11',
]
tool_requires = [
'protobuf/3.21.12',
]
@@ -93,13 +91,12 @@ class Xrpl(ConanFile):
}
def set_version(self):
if self.version is None:
path = f'{self.recipe_folder}/src/libxrpl/protocol/BuildInfo.cpp'
regex = r'versionString\s?=\s?\"(.*)\"'
with open(path, encoding='utf-8') as file:
matches = (re.search(regex, line) for line in file)
match = next(m for m in matches if m)
self.version = match.group(1)
path = f'{self.recipe_folder}/src/libxrpl/protocol/BuildInfo.cpp'
regex = r'versionString\s?=\s?\"(.*)\"'
with open(path, 'r') as file:
matches = (re.search(regex, line) for line in file)
match = next(m for m in matches if m)
self.version = match.group(1)
def build_requirements(self):
# These provide build tools (protoc, grpc plugins) that run during build
@@ -113,24 +110,20 @@ class Xrpl(ConanFile):
self.options['boost/*'].visibility = 'global'
def requirements(self):
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = {'transitive_headers': True} if conan_version.split('.')[0] == '2' else {}
# Force sqlite3 version to avoid conflicts with soci
self.requires('sqlite3/3.49.1', override=True)
# Force our custom snappy build to avoid Conan CMakeDeps stdc++ heuristic bug
self.requires('sqlite3/3.47.0', override=True)
# Force our custom snappy build for all dependencies
self.requires('snappy/1.1.10@xahaud/stable', override=True)
# Force boost version for all dependencies to avoid conflicts
self.requires('boost/1.86.0', force=True, **transitive_headers_opt)
self.requires('date/3.0.4', **transitive_headers_opt)
self.requires('boost/1.86.0', override=True)
self.requires('lz4/1.10.0', force=True)
if self.options.with_wasmedge:
self.requires('wasmedge/0.11.2@xahaud/stable', **transitive_headers_opt)
self.requires('wasmedge/0.11.2@xahaud/stable')
if self.options.jemalloc:
self.requires('jemalloc/5.3.0', **transitive_headers_opt)
self.requires('jemalloc/5.3.0')
if self.options.rocksdb:
self.requires('rocksdb/10.0.1')
self.requires('xxhash/0.8.3', **transitive_headers_opt)
self.requires('rocksdb/6.29.5')
exports_sources = (
'CMakeLists.txt',
@@ -185,17 +178,7 @@ class Xrpl(ConanFile):
# `include/`, not `include/ripple/proto/`.
libxrpl.includedirs = ['include', 'include/ripple/proto']
libxrpl.requires = [
'boost::headers',
'boost::chrono',
'boost::container',
'boost::coroutine',
'boost::date_time',
'boost::filesystem',
'boost::json',
'boost::program_options',
'boost::regex',
'boost::system',
'boost::thread',
'boost::boost',
'date::date',
'grpc::grpc++',
'libarchive::libarchive',

View File

@@ -23,7 +23,7 @@ direction.
```
apt update
apt install --yes curl git libssl-dev pipx python3.10-dev python3-pip make g++-11 libprotobuf-dev protobuf-compiler
apt install --yes curl git libssl-dev python3.10-dev python3-pip make g++-11 libprotobuf-dev protobuf-compiler
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
@@ -35,8 +35,7 @@ make --jobs $(nproc)
make install
cd ..
pipx install 'conan<2'
pipx ensurepath
pip3 install 'conan<2'
```
[1]: https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh

View File

@@ -9,7 +9,7 @@ project(
LANGUAGES CXX
)
find_package(xrpl CONFIG REQUIRED)
find_package(xrpl REQUIRED)
add_executable(example)
target_sources(example PRIVATE src/example.cpp)

View File

@@ -0,0 +1,59 @@
from conan import ConanFile, conan_version
from conan.tools.cmake import CMake, cmake_layout
class Example(ConanFile):
def set_name(self):
if self.name is None:
self.name = 'example'
def set_version(self):
if self.version is None:
self.version = '0.1.0'
license = 'ISC'
author = 'John Freeman <jfreeman08@gmail.com>'
settings = 'os', 'compiler', 'build_type', 'arch'
options = {'shared': [True, False], 'fPIC': [True, False]}
default_options = {
'shared': False,
'fPIC': True,
'xrpl:xrpld': False,
}
requires = ['xrpl/2.2.0-rc1@jfreeman/nodestore']
generators = ['CMakeDeps', 'CMakeToolchain']
exports_sources = [
'CMakeLists.txt',
'cmake/*',
'external/*',
'include/*',
'src/*',
]
# For out-of-source build.
# https://docs.conan.io/en/latest/reference/build_helpers/cmake.html#configure
no_copy_source = True
def layout(self):
cmake_layout(self)
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def build(self):
cmake = CMake(self)
cmake.configure(variables={'BUILD_TESTING': 'NO'})
cmake.build()
def package(self):
cmake = CMake(self)
cmake.install()
def package_info(self):
path = f'{self.package_folder}/share/{self.name}/cpp_info.py'
with open(path, 'r') as file:
exec(file.read(), {}, {'self': self.cpp_info})

View File

@@ -1,10 +1,8 @@
#include <xrpl/protocol/BuildInfo.h>
#include <cstdio>
int
main(int argc, char const** argv)
{
#include <xrpl/protocol/BuildInfo.h>
int main(int argc, char const** argv) {
std::printf("%s\n", ripple::BuildInfo::getVersionString().c_str());
return 0;
}

View File

@@ -1,4 +1,4 @@
cmake_minimum_required(VERSION 3.18)
cmake_minimum_required(VERSION 3.25)
# Note, version set explicitly by rippled project
project(antithesis-sdk-cpp VERSION 0.4.4 LANGUAGES CXX)

10
external/nudb/conandata.yml vendored Normal file
View File

@@ -0,0 +1,10 @@
sources:
"2.0.8":
url: "https://github.com/CPPAlliance/NuDB/archive/2.0.8.tar.gz"
sha256: "9b71903d8ba111cd893ab064b9a8b6ac4124ed8bd6b4f67250205bc43c7f13a8"
patches:
"2.0.8":
- patch_file: "patches/2.0.8-0001-add-include-stdexcept-for-msvc.patch"
patch_description: "Fix build for MSVC by including stdexcept"
patch_type: "portability"
patch_source: "https://github.com/cppalliance/NuDB/pull/100/files"

72
external/nudb/conanfile.py vendored Normal file
View File

@@ -0,0 +1,72 @@
import os
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get
from conan.tools.layout import basic_layout
required_conan_version = ">=1.52.0"
class NudbConan(ConanFile):
name = "nudb"
description = "A fast key/value insert-only database for SSD drives in C++11"
license = "BSL-1.0"
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/CPPAlliance/NuDB"
topics = ("header-only", "KVS", "insert-only")
package_type = "header-library"
settings = "os", "arch", "compiler", "build_type"
no_copy_source = True
@property
def _min_cppstd(self):
return 11
def export_sources(self):
export_conandata_patches(self)
def layout(self):
basic_layout(self, src_folder="src")
def requirements(self):
self.requires("boost/1.83.0")
def package_id(self):
self.info.clear()
def validate(self):
if self.settings.compiler.cppstd:
check_min_cppstd(self, self._min_cppstd)
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def build(self):
apply_conandata_patches(self)
def package(self):
copy(self, "LICENSE*",
dst=os.path.join(self.package_folder, "licenses"),
src=self.source_folder)
copy(self, "*",
dst=os.path.join(self.package_folder, "include"),
src=os.path.join(self.source_folder, "include"))
def package_info(self):
self.cpp_info.bindirs = []
self.cpp_info.libdirs = []
self.cpp_info.set_property("cmake_target_name", "NuDB")
self.cpp_info.set_property("cmake_target_aliases", ["NuDB::nudb"])
self.cpp_info.set_property("cmake_find_mode", "both")
self.cpp_info.components["core"].set_property("cmake_target_name", "nudb")
self.cpp_info.components["core"].names["cmake_find_package"] = "nudb"
self.cpp_info.components["core"].names["cmake_find_package_multi"] = "nudb"
self.cpp_info.components["core"].requires = ["boost::thread", "boost::system"]
# TODO: to remove in conan v2 once cmake_find_package_* generators removed
self.cpp_info.names["cmake_find_package"] = "NuDB"
self.cpp_info.names["cmake_find_package_multi"] = "NuDB"

View File

@@ -0,0 +1,24 @@
diff --git a/include/nudb/detail/stream.hpp b/include/nudb/detail/stream.hpp
index 6c07bf1..e0ce8ed 100644
--- a/include/nudb/detail/stream.hpp
+++ b/include/nudb/detail/stream.hpp
@@ -14,6 +14,7 @@
#include <cstdint>
#include <cstring>
#include <memory>
+#include <stdexcept>
namespace nudb {
namespace detail {
diff --git a/include/nudb/impl/context.ipp b/include/nudb/impl/context.ipp
index beb7058..ffde0b3 100644
--- a/include/nudb/impl/context.ipp
+++ b/include/nudb/impl/context.ipp
@@ -9,6 +9,7 @@
#define NUDB_IMPL_CONTEXT_IPP
#include <nudb/detail/store_base.hpp>
+#include <stdexcept>
namespace nudb {

27
external/rocksdb/conandata.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
sources:
"6.29.5":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v6.29.5.tar.gz"
sha256: "ddbf84791f0980c0bbce3902feb93a2c7006f6f53bfd798926143e31d4d756f0"
"6.27.3":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v6.27.3.tar.gz"
sha256: "ee29901749b9132692b26f0a6c1d693f47d1a9ed8e3771e60556afe80282bf58"
"6.20.3":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v6.20.3.tar.gz"
sha256: "c6502c7aae641b7e20fafa6c2b92273d935d2b7b2707135ebd9a67b092169dca"
"8.8.1":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v8.8.1.tar.gz"
sha256: "056c7e21ad8ae36b026ac3b94b9d6e0fcc60e1d937fc80330921e4181be5c36e"
patches:
"6.29.5":
- patch_file: "patches/6.29.5-0001-add-include-cstdint-for-gcc-13.patch"
patch_description: "Fix build with gcc 13 by including cstdint"
patch_type: "portability"
patch_source: "https://github.com/facebook/rocksdb/pull/11118"
- patch_file: "patches/6.29.5-0002-exclude-thirdparty.patch"
patch_description: "Do not include thirdparty.inc"
patch_type: "portability"
"6.27.3":
- patch_file: "patches/6.27.3-0001-add-include-cstdint-for-gcc-13.patch"
patch_description: "Fix build with gcc 13 by including cstdint"
patch_type: "portability"
patch_source: "https://github.com/facebook/rocksdb/pull/11118"

233
external/rocksdb/conanfile.py vendored Normal file
View File

@@ -0,0 +1,233 @@
import os
import glob
import shutil
from conan import ConanFile
from conan.errors import ConanInvalidConfiguration
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, collect_libs, copy, export_conandata_patches, get, rm, rmdir
from conan.tools.microsoft import check_min_vs, is_msvc, is_msvc_static_runtime
from conan.tools.scm import Version
required_conan_version = ">=1.53.0"
class RocksDBConan(ConanFile):
name = "rocksdb"
homepage = "https://github.com/facebook/rocksdb"
license = ("GPL-2.0-only", "Apache-2.0")
url = "https://github.com/conan-io/conan-center-index"
description = "A library that provides an embeddable, persistent key-value store for fast storage"
topics = ("database", "leveldb", "facebook", "key-value")
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"lite": [True, False],
"with_gflags": [True, False],
"with_snappy": [True, False],
"with_lz4": [True, False],
"with_zlib": [True, False],
"with_zstd": [True, False],
"with_tbb": [True, False],
"with_jemalloc": [True, False],
"enable_sse": [False, "sse42", "avx2"],
"use_rtti": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"lite": False,
"with_snappy": False,
"with_lz4": False,
"with_zlib": False,
"with_zstd": False,
"with_gflags": False,
"with_tbb": False,
"with_jemalloc": False,
"enable_sse": False,
"use_rtti": False,
}
@property
def _min_cppstd(self):
return "11" if Version(self.version) < "8.8.1" else "17"
@property
def _compilers_minimum_version(self):
return {} if self._min_cppstd == "11" else {
"apple-clang": "10",
"clang": "7",
"gcc": "7",
"msvc": "191",
"Visual Studio": "15",
}
def export_sources(self):
export_conandata_patches(self)
def config_options(self):
if self.settings.os == "Windows":
del self.options.fPIC
if self.settings.arch != "x86_64":
del self.options.with_tbb
if self.settings.build_type == "Debug":
self.options.use_rtti = True # Rtti are used in asserts for debug mode...
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
def requirements(self):
if self.options.with_gflags:
self.requires("gflags/2.2.2")
if self.options.with_snappy:
self.requires("snappy/1.1.10")
if self.options.with_lz4:
self.requires("lz4/1.10.0")
if self.options.with_zlib:
self.requires("zlib/[>=1.2.11 <2]")
if self.options.with_zstd:
self.requires("zstd/1.5.6")
if self.options.get_safe("with_tbb"):
self.requires("onetbb/2021.12.0")
if self.options.with_jemalloc:
self.requires("jemalloc/5.3.0")
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, self._min_cppstd)
minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)
if minimum_version and Version(self.settings.compiler.version) < minimum_version:
raise ConanInvalidConfiguration(
f"{self.ref} requires C++{self._min_cppstd}, which your compiler does not support."
)
if self.settings.arch not in ["x86_64", "ppc64le", "ppc64", "mips64", "armv8"]:
raise ConanInvalidConfiguration("Rocksdb requires 64 bits")
check_min_vs(self, "191")
if self.version == "6.20.3" and \
self.settings.os == "Linux" and \
self.settings.compiler == "gcc" and \
Version(self.settings.compiler.version) < "5":
raise ConanInvalidConfiguration("Rocksdb 6.20.3 is not compilable with gcc <5.") # See https://github.com/facebook/rocksdb/issues/3522
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["FAIL_ON_WARNINGS"] = False
tc.variables["WITH_TESTS"] = False
tc.variables["WITH_TOOLS"] = False
tc.variables["WITH_CORE_TOOLS"] = False
tc.variables["WITH_BENCHMARK_TOOLS"] = False
tc.variables["WITH_FOLLY_DISTRIBUTED_MUTEX"] = False
if is_msvc(self):
tc.variables["WITH_MD_LIBRARY"] = not is_msvc_static_runtime(self)
tc.variables["ROCKSDB_INSTALL_ON_WINDOWS"] = self.settings.os == "Windows"
tc.variables["ROCKSDB_LITE"] = self.options.lite
tc.variables["WITH_GFLAGS"] = self.options.with_gflags
tc.variables["WITH_SNAPPY"] = self.options.with_snappy
tc.variables["WITH_LZ4"] = self.options.with_lz4
tc.variables["WITH_ZLIB"] = self.options.with_zlib
tc.variables["WITH_ZSTD"] = self.options.with_zstd
tc.variables["WITH_TBB"] = self.options.get_safe("with_tbb", False)
tc.variables["WITH_JEMALLOC"] = self.options.with_jemalloc
tc.variables["ROCKSDB_BUILD_SHARED"] = self.options.shared
tc.variables["ROCKSDB_LIBRARY_EXPORTS"] = self.settings.os == "Windows" and self.options.shared
tc.variables["ROCKSDB_DLL" ] = self.settings.os == "Windows" and self.options.shared
tc.variables["USE_RTTI"] = self.options.use_rtti
if not bool(self.options.enable_sse):
tc.variables["PORTABLE"] = True
tc.variables["FORCE_SSE42"] = False
elif self.options.enable_sse == "sse42":
tc.variables["PORTABLE"] = True
tc.variables["FORCE_SSE42"] = True
elif self.options.enable_sse == "avx2":
tc.variables["PORTABLE"] = False
tc.variables["FORCE_SSE42"] = False
# not available yet in CCI
tc.variables["WITH_NUMA"] = False
tc.generate()
deps = CMakeDeps(self)
if self.options.with_jemalloc:
deps.set_property("jemalloc", "cmake_file_name", "JeMalloc")
deps.set_property("jemalloc", "cmake_target_name", "JeMalloc::JeMalloc")
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def _remove_static_libraries(self):
rm(self, "rocksdb.lib", os.path.join(self.package_folder, "lib"))
for lib in glob.glob(os.path.join(self.package_folder, "lib", "*.a")):
if not lib.endswith(".dll.a"):
os.remove(lib)
def _remove_cpp_headers(self):
for path in glob.glob(os.path.join(self.package_folder, "include", "rocksdb", "*")):
if path != os.path.join(self.package_folder, "include", "rocksdb", "c.h"):
if os.path.isfile(path):
os.remove(path)
else:
shutil.rmtree(path)
def package(self):
copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
copy(self, "LICENSE*", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
cmake = CMake(self)
cmake.install()
if self.options.shared:
self._remove_static_libraries()
self._remove_cpp_headers() # Force stable ABI for shared libraries
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
rmdir(self, os.path.join(self.package_folder, "lib", "pkgconfig"))
def package_info(self):
cmake_target = "rocksdb-shared" if self.options.shared else "rocksdb"
self.cpp_info.set_property("cmake_file_name", "RocksDB")
self.cpp_info.set_property("cmake_target_name", f"RocksDB::{cmake_target}")
# TODO: back to global scope in conan v2 once cmake_find_package* generators removed
self.cpp_info.components["librocksdb"].libs = collect_libs(self)
if self.settings.os == "Windows":
self.cpp_info.components["librocksdb"].system_libs = ["shlwapi", "rpcrt4"]
if self.options.shared:
self.cpp_info.components["librocksdb"].defines = ["ROCKSDB_DLL"]
elif self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["librocksdb"].system_libs = ["pthread", "m"]
if self.options.lite:
self.cpp_info.components["librocksdb"].defines.append("ROCKSDB_LITE")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "RocksDB"
self.cpp_info.names["cmake_find_package_multi"] = "RocksDB"
self.cpp_info.components["librocksdb"].names["cmake_find_package"] = cmake_target
self.cpp_info.components["librocksdb"].names["cmake_find_package_multi"] = cmake_target
self.cpp_info.components["librocksdb"].set_property("cmake_target_name", f"RocksDB::{cmake_target}")
if self.options.with_gflags:
self.cpp_info.components["librocksdb"].requires.append("gflags::gflags")
if self.options.with_snappy:
self.cpp_info.components["librocksdb"].requires.append("snappy::snappy")
if self.options.with_lz4:
self.cpp_info.components["librocksdb"].requires.append("lz4::lz4")
if self.options.with_zlib:
self.cpp_info.components["librocksdb"].requires.append("zlib::zlib")
if self.options.with_zstd:
self.cpp_info.components["librocksdb"].requires.append("zstd::zstd")
if self.options.get_safe("with_tbb"):
self.cpp_info.components["librocksdb"].requires.append("onetbb::onetbb")
if self.options.with_jemalloc:
self.cpp_info.components["librocksdb"].requires.append("jemalloc::jemalloc")

View File

@@ -0,0 +1,30 @@
--- a/include/rocksdb/utilities/checkpoint.h
+++ b/include/rocksdb/utilities/checkpoint.h
@@ -8,6 +8,7 @@
#pragma once
#ifndef ROCKSDB_LITE
+#include <cstdint>
#include <string>
#include <vector>
#include "rocksdb/status.h"
--- a/table/block_based/data_block_hash_index.h
+++ b/table/block_based/data_block_hash_index.h
@@ -5,6 +5,7 @@
#pragma once
+#include <cstdint>
#include <string>
#include <vector>
--- a/util/string_util.h
+++ b/util/string_util.h
@@ -6,6 +6,7 @@
#pragma once
+#include <cstdint>
#include <sstream>
#include <string>
#include <unordered_map>

View File

@@ -0,0 +1,16 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index ec59d4491..35577c998 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -101 +100,0 @@ if(MSVC)
- option(WITH_GFLAGS "build with GFlags" OFF)
@@ -103,2 +102,2 @@ if(MSVC)
- include(${CMAKE_CURRENT_SOURCE_DIR}/thirdparty.inc)
-else()
+endif()
+
@@ -117 +116 @@ else()
- if(MINGW)
+ if(MINGW OR MSVC)
@@ -183 +181,0 @@ else()
-endif()

View File

@@ -47,5 +47,8 @@
#define MEM_OVERLAP -43
#define TOO_MANY_STATE_MODIFICATIONS -44
#define TOO_MANY_NAMESPACES -45
#define EXPORT_FAILURE -46
#define TOO_MANY_EXPORTED_TXN -47
#define TOO_LITTLE_ENTROPY -48
#define HOOK_ERROR_CODES
#endif //HOOK_ERROR_CODES

View File

@@ -336,5 +336,24 @@ prepare(
uint32_t read_ptr,
uint32_t read_len);
extern int64_t
xport_reserve(uint32_t count);
extern int64_t
xport(
uint32_t write_ptr,
uint32_t write_len,
uint32_t read_ptr,
uint32_t read_len);
extern int64_t
xport_cancel(uint32_t ticket_seq);
extern int64_t
dice(uint32_t sides);
extern int64_t
random(uint32_t write_ptr, uint32_t write_len);
#define HOOK_EXTERN
#endif // HOOK_EXTERN

View File

@@ -45,8 +45,8 @@
#include "error.h"
#include "extern.h"
#include "macro.h"
#include "sfcodes.h"
#include "macro.h"
#include "tts.h"
#endif

View File

@@ -9,7 +9,7 @@
#define sfUNLModifyDisabling ((16U << 16U) + 17U)
#define sfHookResult ((16U << 16U) + 18U)
#define sfWasLockingChainSend ((16U << 16U) + 19U)
#define sfWithdrawalPolicy ((16U << 16U) + 20U)
#define sfSidecarType ((16U << 16U) + 20U)
#define sfLedgerEntryType ((1U << 16U) + 1U)
#define sfTransactionType ((1U << 16U) + 2U)
#define sfSignerWeight ((1U << 16U) + 3U)
@@ -23,6 +23,8 @@
#define sfHookApiVersion ((1U << 16U) + 20U)
#define sfHookStateScale ((1U << 16U) + 21U)
#define sfLedgerFixType ((1U << 16U) + 22U)
#define sfHookExportCount ((1U << 16U) + 98U)
#define sfEntropyCount ((1U << 16U) + 99U)
#define sfNetworkID ((2U << 16U) + 1U)
#define sfFlags ((2U << 16U) + 2U)
#define sfSourceTag ((2U << 16U) + 3U)
@@ -73,7 +75,6 @@
#define sfLockCount ((2U << 16U) + 49U)
#define sfFirstNFTokenSequence ((2U << 16U) + 50U)
#define sfOracleDocumentID ((2U << 16U) + 51U)
#define sfPermissionValue ((2U << 16U) + 52U)
#define sfStartTime ((2U << 16U) + 93U)
#define sfRepeatCount ((2U << 16U) + 94U)
#define sfDelaySeconds ((2U << 16U) + 95U)
@@ -82,6 +83,7 @@
#define sfRewardTime ((2U << 16U) + 98U)
#define sfRewardLgrFirst ((2U << 16U) + 99U)
#define sfRewardLgrLast ((2U << 16U) + 100U)
#define sfCancelTicketSequence ((2U << 16U) + 101U)
#define sfIndexNext ((3U << 16U) + 1U)
#define sfIndexPrevious ((3U << 16U) + 2U)
#define sfBookNode ((3U << 16U) + 3U)
@@ -117,7 +119,6 @@
#define sfTakerGetsCurrency ((17U << 16U) + 3U)
#define sfTakerGetsIssuer ((17U << 16U) + 4U)
#define sfMPTokenIssuanceID ((21U << 16U) + 1U)
#define sfShareMPTID ((21U << 16U) + 2U)
#define sfLedgerHash ((5U << 16U) + 1U)
#define sfParentHash ((5U << 16U) + 2U)
#define sfTransactionHash ((5U << 16U) + 3U)
@@ -155,8 +156,6 @@
#define sfEscrowID ((5U << 16U) + 35U)
#define sfURITokenID ((5U << 16U) + 36U)
#define sfDomainID ((5U << 16U) + 37U)
#define sfVaultID ((5U << 16U) + 38U)
#define sfParentBatchID ((5U << 16U) + 39U)
#define sfHookOnOutgoing ((5U << 16U) + 93U)
#define sfHookOnIncoming ((5U << 16U) + 94U)
#define sfCron ((5U << 16U) + 95U)
@@ -164,11 +163,8 @@
#define sfEmittedTxnID ((5U << 16U) + 97U)
#define sfGovernanceMarks ((5U << 16U) + 98U)
#define sfGovernanceFlags ((5U << 16U) + 99U)
#define sfEntropyDigest ((5U << 16U) + 100U)
#define sfNumber ((9U << 16U) + 1U)
#define sfAssetsAvailable ((9U << 16U) + 2U)
#define sfAssetsMaximum ((9U << 16U) + 3U)
#define sfAssetsTotal ((9U << 16U) + 4U)
#define sfLossUnrealized ((9U << 16U) + 5U)
#define sfAmount ((6U << 16U) + 1U)
#define sfBalance ((6U << 16U) + 2U)
#define sfLimitAmount ((6U << 16U) + 3U)
@@ -241,7 +237,6 @@
#define sfNFTokenMinter ((8U << 16U) + 9U)
#define sfEmitCallback ((8U << 16U) + 10U)
#define sfHolder ((8U << 16U) + 11U)
#define sfDelegate ((8U << 16U) + 12U)
#define sfHookAccount ((8U << 16U) + 16U)
#define sfOtherChainSource ((8U << 16U) + 18U)
#define sfOtherChainDestination ((8U << 16U) + 19U)
@@ -279,7 +274,6 @@
#define sfNFToken ((14U << 16U) + 12U)
#define sfEmitDetails ((14U << 16U) + 13U)
#define sfHook ((14U << 16U) + 14U)
#define sfPermission ((14U << 16U) + 15U)
#define sfSigner ((14U << 16U) + 16U)
#define sfMajority ((14U << 16U) + 18U)
#define sfDisabledValidator ((14U << 16U) + 19U)
@@ -297,9 +291,7 @@
#define sfXChainCreateAccountAttestationCollectionElement ((14U << 16U) + 31U)
#define sfPriceData ((14U << 16U) + 32U)
#define sfCredential ((14U << 16U) + 33U)
#define sfRawTransaction ((14U << 16U) + 34U)
#define sfBatchSigner ((14U << 16U) + 35U)
#define sfBook ((14U << 16U) + 36U)
#define sfExportedTxn ((14U << 16U) + 90U)
#define sfAmountEntry ((14U << 16U) + 91U)
#define sfMintURIToken ((14U << 16U) + 92U)
#define sfHookEmission ((14U << 16U) + 93U)
@@ -307,6 +299,7 @@
#define sfActiveValidator ((14U << 16U) + 95U)
#define sfGenesisMint ((14U << 16U) + 96U)
#define sfRemark ((14U << 16U) + 97U)
#define sfExportResult ((14U << 16U) + 98U)
#define sfSigners ((15U << 16U) + 3U)
#define sfSignerEntries ((15U << 16U) + 4U)
#define sfTemplate ((15U << 16U) + 5U)
@@ -317,7 +310,6 @@
#define sfNFTokens ((15U << 16U) + 10U)
#define sfHooks ((15U << 16U) + 11U)
#define sfVoteSlots ((15U << 16U) + 12U)
#define sfAdditionalBooks ((15U << 16U) + 13U)
#define sfMajorities ((15U << 16U) + 16U)
#define sfDisabledValidators ((15U << 16U) + 17U)
#define sfHookExecutions ((15U << 16U) + 18U)
@@ -330,9 +322,6 @@
#define sfAuthorizeCredentials ((15U << 16U) + 26U)
#define sfUnauthorizeCredentials ((15U << 16U) + 27U)
#define sfAcceptedCredentials ((15U << 16U) + 28U)
#define sfPermissions ((15U << 16U) + 29U)
#define sfRawTransactions ((15U << 16U) + 30U)
#define sfBatchSigners ((15U << 16U) + 31U)
#define sfAmounts ((15U << 16U) + 92U)
#define sfHookEmissions ((15U << 16U) + 93U)
#define sfImportVLKeys ((15U << 16U) + 94U)

View File

@@ -61,14 +61,7 @@
#define ttNFTOKEN_MODIFY 70
#define ttPERMISSIONED_DOMAIN_SET 71
#define ttPERMISSIONED_DOMAIN_DELETE 72
#define ttDELEGATE_SET 73
#define ttVAULT_CREATE 74
#define ttVAULT_SET 75
#define ttVAULT_DELETE 76
#define ttVAULT_DEPOSIT 77
#define ttVAULT_WITHDRAW 78
#define ttVAULT_CLAWBACK 79
#define ttBATCH 80
#define ttEXPORT 91
#define ttCRON 92
#define ttCRON_SET 93
#define ttREMARKS_SET 94
@@ -82,3 +75,4 @@
#define ttUNL_MODIFY 102
#define ttEMIT_FAILURE 103
#define ttUNL_REPORT 104
#define ttCONSENSUS_ENTROPY 105

View File

@@ -27,6 +27,7 @@
#include <algorithm>
#include <optional>
#include <ostream>
#include <string>
#include <unordered_map>
#include <vector>
@@ -367,7 +368,7 @@ get(Section const& section,
}
inline std::string
get(Section const& section, std::string const& name, char const* defaultValue)
get(Section const& section, std::string const& name, const char* defaultValue)
{
try
{

View File

@@ -22,10 +22,10 @@
#include <xrpl/basics/Slice.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <cstdint>
#include <cstring>
#include <memory>
#include <utility>
namespace ripple {

View File

@@ -21,11 +21,9 @@
#define RIPPLED_COMPRESSIONALGORITHMS_H_INCLUDED
#include <xrpl/basics/contract.h>
#include <lz4.h>
#include <algorithm>
#include <cstdint>
#include <lz4.h>
#include <stdexcept>
#include <vector>
@@ -55,7 +53,7 @@ lz4Compress(void const* in, std::size_t inSize, BufferFactory&& bf)
auto compressed = bf(outCapacity);
auto compressedSize = LZ4_compress_default(
reinterpret_cast<char const*>(in),
reinterpret_cast<const char*>(in),
reinterpret_cast<char*>(compressed),
inSize,
outCapacity);
@@ -89,7 +87,7 @@ lz4Decompress(
Throw<std::runtime_error>("lz4Decompress: integer overflow (output)");
if (LZ4_decompress_safe(
reinterpret_cast<char const*>(in),
reinterpret_cast<const char*>(in),
reinterpret_cast<char*>(decompressed),
inSize,
decompressedSize) != decompressedSize)

View File

@@ -21,7 +21,6 @@
#define RIPPLE_BASICS_COUNTEDOBJECT_H_INCLUDED
#include <xrpl/beast/type_name.h>
#include <atomic>
#include <string>
#include <utility>

View File

@@ -22,19 +22,11 @@
#include <xrpl/basics/contract.h>
#if defined(__clang__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated"
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
#endif
#include <boost/outcome.hpp>
#if defined(__clang__)
#pragma clang diagnostic pop
#endif
#include <concepts>
#include <stdexcept>
#include <type_traits>
namespace ripple {
@@ -103,7 +95,7 @@ public:
{
}
constexpr E const&
constexpr const E&
value() const&
{
return val_;
@@ -121,7 +113,7 @@ public:
return std::move(val_);
}
constexpr E const&&
constexpr const E&&
value() const&&
{
return std::move(val_);

View File

@@ -1,515 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_INTRUSIVEPOINTER_H_INCLUDED
#define RIPPLE_BASICS_INTRUSIVEPOINTER_H_INCLUDED
#include <concepts>
#include <cstdint>
#include <type_traits>
#include <utility>
namespace ripple {
//------------------------------------------------------------------------------
/** Tag to create an intrusive pointer from another intrusive pointer by using a
static cast. This is useful to create an intrusive pointer to a derived
class from an intrusive pointer to a base class.
*/
struct StaticCastTagSharedIntrusive
{
};
/** Tag to create an intrusive pointer from another intrusive pointer by using a
dynamic cast. This is useful to create an intrusive pointer to a derived
class from an intrusive pointer to a base class. If the cast fails an empty
(null) intrusive pointer is created.
*/
struct DynamicCastTagSharedIntrusive
{
};
/** When creating or adopting a raw pointer, controls whether the strong count
is incremented or not. Use this tag to increment the strong count.
*/
struct SharedIntrusiveAdoptIncrementStrongTag
{
};
/** When creating or adopting a raw pointer, controls whether the strong count
is incremented or not. Use this tag to leave the strong count unchanged.
*/
struct SharedIntrusiveAdoptNoIncrementTag
{
};
//------------------------------------------------------------------------------
//
template <class T>
concept CAdoptTag = std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> ||
std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
//------------------------------------------------------------------------------
/** A shared intrusive pointer class that supports weak pointers.
This is meant to be used for SHAMapInnerNodes, but may be useful for other
cases. Since the reference counts are stored on the pointee, the pointee is
not destroyed until both the strong _and_ weak pointer counts go to zero.
When the strong pointer count goes to zero, the "partialDestructor" is
called. This can be used to destroy as much of the object as possible while
still retaining the reference counts. For example, for SHAMapInnerNodes the
children may be reset in that function. Note that std::shared_poiner WILL
run the destructor when the strong count reaches zero, but may not free the
memory used by the object until the weak count reaches zero. In rippled, we
typically allocate shared pointers with the `make_shared` function. When
that is used, the memory is not reclaimed until the weak count reaches zero.
*/
template <class T>
class SharedIntrusive
{
public:
SharedIntrusive() = default;
template <CAdoptTag TAdoptTag>
SharedIntrusive(T* p, TAdoptTag) noexcept;
SharedIntrusive(SharedIntrusive const& rhs);
template <class TT>
// TODO: convertible_to isn't quite right. That include a static castable.
// Find the right concept.
requires std::convertible_to<TT*, T*>
SharedIntrusive(SharedIntrusive<TT> const& rhs);
SharedIntrusive(SharedIntrusive&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive(SharedIntrusive<TT>&& rhs);
SharedIntrusive&
operator=(SharedIntrusive const& rhs);
bool
operator!=(std::nullptr_t) const;
bool
operator==(std::nullptr_t) const;
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive&
operator=(SharedIntrusive<TT> const& rhs);
SharedIntrusive&
operator=(SharedIntrusive&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive&
operator=(SharedIntrusive<TT>&& rhs);
/** Adopt the raw pointer. The strong reference may or may not be
incremented, depending on the TAdoptTag
*/
template <CAdoptTag TAdoptTag = SharedIntrusiveAdoptIncrementStrongTag>
void
adopt(T* p);
~SharedIntrusive();
/** Create a new SharedIntrusive by statically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by statically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs);
/** Create a new SharedIntrusive by dynamically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by dynamically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs);
T&
operator*() const noexcept;
T*
operator->() const noexcept;
explicit
operator bool() const noexcept;
/** Set the pointer to null, decrement the strong count, and run the
appropriate release action.
*/
void
reset();
/** Get the raw pointer */
T*
get() const;
/** Return the strong count */
std::size_t
use_count() const;
template <class TT, class... Args>
friend SharedIntrusive<TT>
make_SharedIntrusive(Args&&... args);
template <class TT>
friend class SharedIntrusive;
template <class TT>
friend class SharedWeakUnion;
template <class TT>
friend class WeakIntrusive;
private:
/** Return the raw pointer held by this object. */
T*
unsafeGetRawPtr() const;
/** Exchange the current raw pointer held by this object with the given
pointer. Decrement the strong count of the raw pointer previously held
by this object and run the appropriate release action.
*/
void
unsafeReleaseAndStore(T* next);
/** Set the raw pointer directly. This is wrapped in a function so the class
can support both atomic and non-atomic pointers in a future patch.
*/
void
unsafeSetRawPtr(T* p);
/** Exchange the raw pointer directly.
This sets the raw pointer to the given value and returns the previous
value. This is wrapped in a function so the class can support both
atomic and non-atomic pointers in a future patch.
*/
T*
unsafeExchange(T* p);
/** pointer to the type with an intrusive count */
T* ptr_{nullptr};
};
//------------------------------------------------------------------------------
/** A weak intrusive pointer class for the SharedIntrusive pointer class.
Note that this weak pointer class asks differently from normal weak pointer
classes. When the strong pointer count goes to zero, the "partialDestructor"
is called. See the comment on SharedIntrusive for a fuller explanation.
*/
template <class T>
class WeakIntrusive
{
public:
WeakIntrusive() = default;
WeakIntrusive(WeakIntrusive const& rhs);
WeakIntrusive(WeakIntrusive&& rhs);
WeakIntrusive(SharedIntrusive<T> const& rhs);
// There is no move constructor from a strong intrusive ptr because
// moving would be move expensive than copying in this case (the strong
// ref would need to be decremented)
WeakIntrusive(SharedIntrusive<T> const&& rhs) = delete;
// Since there are no current use cases for copy assignment in
// WeakIntrusive, we delete this operator to simplify the implementation. If
// a need arises in the future, we can reintroduce it with proper
// consideration."
WeakIntrusive&
operator=(WeakIntrusive const&) = delete;
template <class TT>
requires std::convertible_to<TT*, T*>
WeakIntrusive&
operator=(SharedIntrusive<TT> const& rhs);
/** Adopt the raw pointer and increment the weak count. */
void
adopt(T* ptr);
~WeakIntrusive();
/** Get a strong pointer from the weak pointer, if possible. This will
only return a seated pointer if the strong count on the raw pointer
is non-zero before locking.
*/
SharedIntrusive<T>
lock() const;
/** Return true if the strong count is zero. */
bool
expired() const;
/** Set the pointer to null and decrement the weak count.
Note: This may run the destructor if the strong count is zero.
*/
void
reset();
private:
T* ptr_ = nullptr;
/** Decrement the weak count. This does _not_ set the raw pointer to
null.
Note: This may run the destructor if the strong count is zero.
*/
void
unsafeReleaseNoStore();
};
//------------------------------------------------------------------------------
/** A combination of a strong and a weak intrusive pointer stored in the
space of a single pointer.
This class is similar to a `std::variant<SharedIntrusive,WeakIntrusive>`
with some optimizations. In particular, it uses a low-order bit to
determine if the raw pointer represents a strong pointer or a weak
pointer. It can also be quickly switched between its strong pointer and
weak pointer representations. This class is useful for storing intrusive
pointers in tagged caches.
*/
template <class T>
class SharedWeakUnion
{
// Tagged pointer. Low bit determines if this is a strong or a weak
// pointer. The low bit must be masked to zero when converting back to a
// pointer. If the low bit is '1', this is a weak pointer.
static_assert(
alignof(T) >= 2,
"Bad alignment: Combo pointer requires low bit to be zero");
public:
SharedWeakUnion() = default;
SharedWeakUnion(SharedWeakUnion const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion(SharedIntrusive<TT> const& rhs);
SharedWeakUnion(SharedWeakUnion&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion(SharedIntrusive<TT>&& rhs);
SharedWeakUnion&
operator=(SharedWeakUnion const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion&
operator=(SharedIntrusive<TT> const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion&
operator=(SharedIntrusive<TT>&& rhs);
~SharedWeakUnion();
/** Return a strong pointer if this is already a strong pointer (i.e.
don't lock the weak pointer. Use the `lock` method if that's what's
needed)
*/
SharedIntrusive<T>
getStrong() const;
/** Return true if this is a strong pointer and the strong pointer is
seated.
*/
explicit
operator bool() const noexcept;
/** Set the pointer to null, decrement the appropriate ref count, and
run the appropriate release action.
*/
void
reset();
/** If this is a strong pointer, return the raw pointer. Otherwise
return null.
*/
T*
get() const;
/** If this is a strong pointer, return the strong count. Otherwise
* return 0
*/
std::size_t
use_count() const;
/** Return true if there is a non-zero strong count. */
bool
expired() const;
/** If this is a strong pointer, return the strong pointer. Otherwise
attempt to lock the weak pointer.
*/
SharedIntrusive<T>
lock() const;
/** Return true is this represents a strong pointer. */
bool
isStrong() const;
/** Return true is this represents a weak pointer. */
bool
isWeak() const;
/** If this is a weak pointer, attempt to convert it to a strong
pointer.
@return true if successfully converted to a strong pointer (or was
already a strong pointer). Otherwise false.
*/
bool
convertToStrong();
/** If this is a strong pointer, attempt to convert it to a weak
pointer.
@return false if the pointer is null. Otherwise return true.
*/
bool
convertToWeak();
private:
// Tagged pointer. Low bit determines if this is a strong or a weak
// pointer. The low bit must be masked to zero when converting back to a
// pointer. If the low bit is '1', this is a weak pointer.
std::uintptr_t tp_{0};
static constexpr std::uintptr_t tagMask = 1;
static constexpr std::uintptr_t ptrMask = ~tagMask;
private:
/** Return the raw pointer held by this object.
*/
T*
unsafeGetRawPtr() const;
enum class RefStrength { strong, weak };
/** Set the raw pointer and tag bit directly.
*/
void
unsafeSetRawPtr(T* p, RefStrength rs);
/** Set the raw pointer and tag bit to all zeros (strong null pointer).
*/
void unsafeSetRawPtr(std::nullptr_t);
/** Decrement the appropriate ref count, and run the appropriate release
action. Note: this does _not_ set the raw pointer to null.
*/
void
unsafeReleaseNoStore();
};
//------------------------------------------------------------------------------
/** Create a shared intrusive pointer.
Note: unlike std::shared_ptr, where there is an advantage of allocating
the pointer and control block together, there is no benefit for intrusive
pointers.
*/
template <class TT, class... Args>
SharedIntrusive<TT>
make_SharedIntrusive(Args&&... args)
{
auto p = new TT(std::forward<Args>(args)...);
static_assert(
noexcept(SharedIntrusive<TT>(
std::declval<TT*>(),
std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
"SharedIntrusive constructor should not throw or this can leak "
"memory");
return SharedIntrusive<TT>(p, SharedIntrusiveAdoptNoIncrementTag{});
}
//------------------------------------------------------------------------------
namespace intr_ptr {
template <class T>
using SharedPtr = SharedIntrusive<T>;
template <class T>
using WeakPtr = WeakIntrusive<T>;
template <class T>
using SharedWeakUnionPtr = SharedWeakUnion<T>;
template <class T, class... A>
SharedPtr<T>
make_shared(A&&... args)
{
return make_SharedIntrusive<T>(std::forward<A>(args)...);
}
template <class T, class TT>
SharedPtr<T>
static_pointer_cast(TT const& v)
{
return SharedPtr<T>(StaticCastTagSharedIntrusive{}, v);
}
template <class T, class TT>
SharedPtr<T>
dynamic_pointer_cast(TT const& v)
{
return SharedPtr<T>(DynamicCastTagSharedIntrusive{}, v);
}
} // namespace intr_ptr
} // namespace ripple
#endif

View File

@@ -1,740 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_INTRUSIVEPOINTER_IPP_INCLUDED
#define RIPPLE_BASICS_INTRUSIVEPOINTER_IPP_INCLUDED
#include <xrpl/basics/IntrusivePointer.h>
#include <xrpl/basics/IntrusiveRefCounts.h>
#include <utility>
namespace ripple {
template <class T>
template <CAdoptTag TAdoptTag>
SharedIntrusive<T>::SharedIntrusive(T* p, TAdoptTag) noexcept : ptr_{p}
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
}
}
template <class T>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive const& rhs)
: ptr_{[&] {
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive const& rhs)
{
if (this == &rhs)
return *this;
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeReleaseAndStore(p);
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive<TT> const& rhs)
{
if constexpr (std::is_same_v<T, TT>)
{
// This case should never be hit. The operator above will run instead.
// (The normal operator= is needed or it will be marked `deleted`)
if (this == &rhs)
return *this;
}
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeReleaseAndStore(p);
return *this;
}
template <class T>
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive&& rhs)
{
if (this == &rhs)
return *this;
unsafeReleaseAndStore(rhs.unsafeExchange(nullptr));
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive<TT>&& rhs)
{
static_assert(
!std::is_same_v<T, TT>,
"This overload should not be instantiated for T == TT");
unsafeReleaseAndStore(rhs.unsafeExchange(nullptr));
return *this;
}
template <class T>
bool
SharedIntrusive<T>::operator!=(std::nullptr_t) const
{
return this->get() != nullptr;
}
template <class T>
bool
SharedIntrusive<T>::operator==(std::nullptr_t) const
{
return this->get() == nullptr;
}
template <class T>
template <CAdoptTag TAdoptTag>
void
SharedIntrusive<T>::adopt(T* p)
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
}
unsafeReleaseAndStore(p);
}
template <class T>
SharedIntrusive<T>::~SharedIntrusive()
{
unsafeReleaseAndStore(nullptr);
};
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = static_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
: ptr_{static_cast<T*>(rhs.unsafeExchange(nullptr))}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = dynamic_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
{
// This can be simplified without the `exchange`, but the `exchange` is kept
// in anticipation of supporting atomic operations.
auto toSet = rhs.unsafeExchange(nullptr);
if (toSet)
{
ptr_ = dynamic_cast<T*>(toSet);
if (!ptr_)
// need to set the pointer back or will leak
rhs.unsafeExchange(toSet);
}
}
template <class T>
T&
SharedIntrusive<T>::operator*() const noexcept
{
return *unsafeGetRawPtr();
}
template <class T>
T*
SharedIntrusive<T>::operator->() const noexcept
{
return unsafeGetRawPtr();
}
template <class T>
SharedIntrusive<T>::operator bool() const noexcept
{
return bool(unsafeGetRawPtr());
}
template <class T>
void
SharedIntrusive<T>::reset()
{
unsafeReleaseAndStore(nullptr);
}
template <class T>
T*
SharedIntrusive<T>::get() const
{
return unsafeGetRawPtr();
}
template <class T>
std::size_t
SharedIntrusive<T>::use_count() const
{
if (auto p = unsafeGetRawPtr())
return p->use_count();
return 0;
}
template <class T>
T*
SharedIntrusive<T>::unsafeGetRawPtr() const
{
return ptr_;
}
template <class T>
void
SharedIntrusive<T>::unsafeSetRawPtr(T* p)
{
ptr_ = p;
}
template <class T>
T*
SharedIntrusive<T>::unsafeExchange(T* p)
{
return std::exchange(ptr_, p);
}
template <class T>
void
SharedIntrusive<T>::unsafeReleaseAndStore(T* next)
{
auto prev = unsafeExchange(next);
if (!prev)
return;
using enum ReleaseStrongRefAction;
auto action = prev->releaseStrongRef();
switch (action)
{
case noop:
break;
case destroy:
delete prev;
break;
case partialDestroy:
prev->partialDestructor();
partialDestructorFinished(&prev);
// prev is null and may no longer be used
break;
}
}
//------------------------------------------------------------------------------
template <class T>
WeakIntrusive<T>::WeakIntrusive(WeakIntrusive const& rhs) : ptr_{rhs.ptr_}
{
if (ptr_)
ptr_->addWeakRef();
}
template <class T>
WeakIntrusive<T>::WeakIntrusive(WeakIntrusive&& rhs) : ptr_{rhs.ptr_}
{
rhs.ptr_ = nullptr;
}
template <class T>
WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs)
: ptr_{rhs.unsafeGetRawPtr()}
{
if (ptr_)
ptr_->addWeakRef();
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
WeakIntrusive<T>&
WeakIntrusive<T>::operator=(SharedIntrusive<TT> const& rhs)
{
unsafeReleaseNoStore();
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addWeakRef();
return *this;
}
template <class T>
void
WeakIntrusive<T>::adopt(T* ptr)
{
unsafeReleaseNoStore();
if (ptr)
ptr->addWeakRef();
ptr_ = ptr;
}
template <class T>
WeakIntrusive<T>::~WeakIntrusive()
{
unsafeReleaseNoStore();
}
template <class T>
SharedIntrusive<T>
WeakIntrusive<T>::lock() const
{
if (ptr_ && ptr_->checkoutStrongRefFromWeak())
{
return SharedIntrusive<T>{ptr_, SharedIntrusiveAdoptNoIncrementTag{}};
}
return {};
}
template <class T>
bool
WeakIntrusive<T>::expired() const
{
return (!ptr_ || ptr_->expired());
}
template <class T>
void
WeakIntrusive<T>::reset()
{
unsafeReleaseNoStore();
ptr_ = nullptr;
}
template <class T>
void
WeakIntrusive<T>::unsafeReleaseNoStore()
{
if (!ptr_)
return;
using enum ReleaseWeakRefAction;
auto action = ptr_->releaseWeakRef();
switch (action)
{
case noop:
break;
case destroy:
delete ptr_;
break;
}
}
//------------------------------------------------------------------------------
template <class T>
SharedWeakUnion<T>::SharedWeakUnion(SharedWeakUnion const& rhs) : tp_{rhs.tp_}
{
auto p = rhs.unsafeGetRawPtr();
if (!p)
return;
if (rhs.isStrong())
p->addStrongRef();
else
p->addWeakRef();
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion<T>::SharedWeakUnion(SharedIntrusive<TT> const& rhs)
{
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeSetRawPtr(p, RefStrength::strong);
}
template <class T>
SharedWeakUnion<T>::SharedWeakUnion(SharedWeakUnion&& rhs) : tp_{rhs.tp_}
{
rhs.unsafeSetRawPtr(nullptr);
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion<T>::SharedWeakUnion(SharedIntrusive<TT>&& rhs)
{
auto p = rhs.unsafeGetRawPtr();
if (p)
unsafeSetRawPtr(p, RefStrength::strong);
rhs.unsafeSetRawPtr(nullptr);
}
template <class T>
SharedWeakUnion<T>&
SharedWeakUnion<T>::operator=(SharedWeakUnion const& rhs)
{
if (this == &rhs)
return *this;
unsafeReleaseNoStore();
if (auto p = rhs.unsafeGetRawPtr())
{
if (rhs.isStrong())
{
p->addStrongRef();
unsafeSetRawPtr(p, RefStrength::strong);
}
else
{
p->addWeakRef();
unsafeSetRawPtr(p, RefStrength::weak);
}
}
else
{
unsafeSetRawPtr(nullptr);
}
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedWeakUnion<T>&
SharedWeakUnion<T>::operator=(SharedIntrusive<TT> const& rhs)
{
unsafeReleaseNoStore();
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeSetRawPtr(p, RefStrength::strong);
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedWeakUnion<T>&
SharedWeakUnion<T>::operator=(SharedIntrusive<TT>&& rhs)
{
unsafeReleaseNoStore();
unsafeSetRawPtr(rhs.unsafeGetRawPtr(), RefStrength::strong);
rhs.unsafeSetRawPtr(nullptr);
return *this;
}
template <class T>
SharedWeakUnion<T>::~SharedWeakUnion()
{
unsafeReleaseNoStore();
};
// Return a strong pointer if this is already a strong pointer (i.e. don't
// lock the weak pointer. Use the `lock` method if that's what's needed)
template <class T>
SharedIntrusive<T>
SharedWeakUnion<T>::getStrong() const
{
SharedIntrusive<T> result;
auto p = unsafeGetRawPtr();
if (p && isStrong())
{
result.template adopt<SharedIntrusiveAdoptIncrementStrongTag>(p);
}
return result;
}
template <class T>
SharedWeakUnion<T>::operator bool() const noexcept
{
return bool(get());
}
template <class T>
void
SharedWeakUnion<T>::reset()
{
unsafeReleaseNoStore();
unsafeSetRawPtr(nullptr);
}
template <class T>
T*
SharedWeakUnion<T>::get() const
{
return isStrong() ? unsafeGetRawPtr() : nullptr;
}
template <class T>
std::size_t
SharedWeakUnion<T>::use_count() const
{
if (auto p = get())
return p->use_count();
return 0;
}
template <class T>
bool
SharedWeakUnion<T>::expired() const
{
auto p = unsafeGetRawPtr();
return (!p || p->expired());
}
template <class T>
SharedIntrusive<T>
SharedWeakUnion<T>::lock() const
{
SharedIntrusive<T> result;
auto p = unsafeGetRawPtr();
if (!p)
return result;
if (isStrong())
{
result.template adopt<SharedIntrusiveAdoptIncrementStrongTag>(p);
return result;
}
if (p->checkoutStrongRefFromWeak())
{
result.template adopt<SharedIntrusiveAdoptNoIncrementTag>(p);
return result;
}
return result;
}
template <class T>
bool
SharedWeakUnion<T>::isStrong() const
{
return !(tp_ & tagMask);
}
template <class T>
bool
SharedWeakUnion<T>::isWeak() const
{
return tp_ & tagMask;
}
template <class T>
bool
SharedWeakUnion<T>::convertToStrong()
{
if (isStrong())
return true;
auto p = unsafeGetRawPtr();
if (p && p->checkoutStrongRefFromWeak())
{
[[maybe_unused]] auto action = p->releaseWeakRef();
XRPL_ASSERT(
(action == ReleaseWeakRefAction::noop),
"ripple::SharedWeakUnion::convertToStrong : "
"action is noop");
unsafeSetRawPtr(p, RefStrength::strong);
return true;
}
return false;
}
template <class T>
bool
SharedWeakUnion<T>::convertToWeak()
{
if (isWeak())
return true;
auto p = unsafeGetRawPtr();
if (!p)
return false;
using enum ReleaseStrongRefAction;
auto action = p->addWeakReleaseStrongRef();
switch (action)
{
case noop:
break;
case destroy:
// We just added a weak ref. How could we destroy?
UNREACHABLE(
"ripple::SharedWeakUnion::convertToWeak : destroying freshly "
"added ref");
delete p;
unsafeSetRawPtr(nullptr);
return true; // Should never happen
case partialDestroy:
// This is a weird case. We just converted the last strong
// pointer to a weak pointer.
p->partialDestructor();
partialDestructorFinished(&p);
// p is null and may no longer be used
break;
}
unsafeSetRawPtr(p, RefStrength::weak);
return true;
}
template <class T>
T*
SharedWeakUnion<T>::unsafeGetRawPtr() const
{
return reinterpret_cast<T*>(tp_ & ptrMask);
}
template <class T>
void
SharedWeakUnion<T>::unsafeSetRawPtr(T* p, RefStrength rs)
{
tp_ = reinterpret_cast<std::uintptr_t>(p);
if (tp_ && rs == RefStrength::weak)
tp_ |= tagMask;
}
template <class T>
void
SharedWeakUnion<T>::unsafeSetRawPtr(std::nullptr_t)
{
tp_ = 0;
}
template <class T>
void
SharedWeakUnion<T>::unsafeReleaseNoStore()
{
auto p = unsafeGetRawPtr();
if (!p)
return;
if (isStrong())
{
using enum ReleaseStrongRefAction;
auto strongAction = p->releaseStrongRef();
switch (strongAction)
{
case noop:
break;
case destroy:
delete p;
break;
case partialDestroy:
p->partialDestructor();
partialDestructorFinished(&p);
// p is null and may no longer be used
break;
}
}
else
{
using enum ReleaseWeakRefAction;
auto weakAction = p->releaseWeakRef();
switch (weakAction)
{
case noop:
break;
case destroy:
delete p;
break;
}
}
}
} // namespace ripple
#endif

View File

@@ -1,502 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_INTRUSIVEREFCOUNTS_H_INCLUDED
#define RIPPLE_BASICS_INTRUSIVEREFCOUNTS_H_INCLUDED
#include <xrpl/beast/utility/instrumentation.h>
#include <atomic>
#include <cstdint>
namespace ripple {
/** Action to perform when releasing a strong pointer.
noop: Do nothing. For example, a `noop` action will occur when a count is
decremented to a non-zero value.
partialDestroy: Run the `partialDestructor`. This action will happen when a
strong count is decremented to zero and the weak count is non-zero.
destroy: Run the destructor. This action will occur when either the strong
count or weak count is decremented and the other count is also zero.
*/
enum class ReleaseStrongRefAction { noop, partialDestroy, destroy };
/** Action to perform when releasing a weak pointer.
noop: Do nothing. For example, a `noop` action will occur when a count is
decremented to a non-zero value.
destroy: Run the destructor. This action will occur when either the strong
count or weak count is decremented and the other count is also zero.
*/
enum class ReleaseWeakRefAction { noop, destroy };
/** Implement the strong count, weak count, and bit flags for an intrusive
pointer.
A class can satisfy the requirements of a ripple::IntrusivePointer by
inheriting from this class.
*/
struct IntrusiveRefCounts
{
virtual ~IntrusiveRefCounts() noexcept;
// This must be `noexcept` or the make_SharedIntrusive function could leak
// memory.
void
addStrongRef() const noexcept;
void
addWeakRef() const noexcept;
ReleaseStrongRefAction
releaseStrongRef() const;
// Same as:
// {
// addWeakRef();
// return releaseStrongRef;
// }
// done as one atomic operation
ReleaseStrongRefAction
addWeakReleaseStrongRef() const;
ReleaseWeakRefAction
releaseWeakRef() const;
// Returns true is able to checkout a strong ref. False otherwise
bool
checkoutStrongRefFromWeak() const noexcept;
bool
expired() const noexcept;
std::size_t
use_count() const noexcept;
// This function MUST be called after a partial destructor finishes running.
// Calling this function may cause other threads to delete the object
// pointed to by `o`, so `o` should never be used after calling this
// function. The parameter will be set to a `nullptr` after calling this
// function to emphasize that it should not be used.
// Note: This is intentionally NOT called at the end of `partialDestructor`.
// The reason for this is if new classes are written to support this smart
// pointer class, they need to write their own `partialDestructor` function
// and ensure `partialDestructorFinished` is called at the end. Putting this
// call inside the smart pointer class itself is expected to be less error
// prone.
// Note: The "two-star" programming is intentional. It emphasizes that `o`
// may be deleted and the unergonomic API is meant to signal the special
// nature of this function call to callers.
// Note: This is a template to support incompletely defined classes.
template <class T>
friend void
partialDestructorFinished(T** o);
private:
// TODO: We may need to use a uint64_t for both counts. This will reduce the
// memory savings. We need to audit the code to make sure 16 bit counts are
// enough for strong pointers and 14 bit counts are enough for weak
// pointers. Use type aliases to make it easy to switch types.
using CountType = std::uint16_t;
static constexpr size_t StrongCountNumBits = sizeof(CountType) * 8;
static constexpr size_t WeakCountNumBits = StrongCountNumBits - 2;
using FieldType = std::uint32_t;
static constexpr size_t FieldTypeBits = sizeof(FieldType) * 8;
static constexpr FieldType one = 1;
/** `refCounts` consists of four fields that are treated atomically:
1. Strong count. This is a count of the number of shared pointers that
hold a reference to this object. When the strong counts goes to zero,
if the weak count is zero, the destructor is run. If the weak count is
non-zero when the strong count goes to zero then the partialDestructor
is run.
2. Weak count. This is a count of the number of weak pointer that hold
a reference to this object. When the weak count goes to zero and the
strong count is also zero, then the destructor is run.
3. Partial destroy started bit. This bit is set if the
`partialDestructor` function has been started (or is about to be
started). This is used to prevent the destructor from running
concurrently with the partial destructor. This can easily happen when
the last strong pointer release its reference in one thread and starts
the partialDestructor, while in another thread the last weak pointer
goes out of scope and starts the destructor while the partialDestructor
is still running. Both a start and finished bit is needed to handle a
corner-case where the last strong pointer goes out of scope, then then
last `weakPointer` goes out of scope, but this happens before the
`partialDestructor` bit is set. It would be possible to use a single
bit if it could also be set atomically when the strong count goes to
zero and the weak count is non-zero, but that would add complexity (and
likely slow down common cases as well).
4. Partial destroy finished bit. This bit is set when the
`partialDestructor` has finished running. See (3) above for more
information.
*/
mutable std::atomic<FieldType> refCounts{strongDelta};
/** Amount to change the strong count when adding or releasing a reference
Note: The strong count is stored in the low `StrongCountNumBits` bits
of refCounts
*/
static constexpr FieldType strongDelta = 1;
/** Amount to change the weak count when adding or releasing a reference
Note: The weak count is stored in the high `WeakCountNumBits` bits of
refCounts
*/
static constexpr FieldType weakDelta = (one << StrongCountNumBits);
/** Flag that is set when the partialDestroy function has started running
(or is about to start running).
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyStartedMask =
(one << (FieldTypeBits - 1));
/** Flag that is set when the partialDestroy function has finished running
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyFinishedMask =
(one << (FieldTypeBits - 2));
/** Mask that will zero out all the `count` bits and leave the tag bits
unchanged.
*/
static constexpr FieldType tagMask =
partialDestroyStartedMask | partialDestroyFinishedMask;
/** Mask that will zero out the `tag` bits and leave the count bits
unchanged.
*/
static constexpr FieldType valueMask = ~tagMask;
/** Mask that will zero out everything except the strong count.
*/
static constexpr FieldType strongMask =
((one << StrongCountNumBits) - 1) & valueMask;
/** Mask that will zero out everything except the weak count.
*/
static constexpr FieldType weakMask =
(((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
/** Unpack the count and tag fields from the packed atomic integer form. */
struct RefCountPair
{
CountType strong;
CountType weak;
/** The `partialDestroyStartedBit` is set to on when the partial
destroy function is started. It is not a boolean; it is a uint32
with all bits zero with the possible exception of the
`partialDestroyStartedMask` bit. This is done so it can be directly
masked into the `combinedValue`.
*/
FieldType partialDestroyStartedBit{0};
/** The `partialDestroyFinishedBit` is set to on when the partial
destroy function has finished.
*/
FieldType partialDestroyFinishedBit{0};
RefCountPair(FieldType v) noexcept;
RefCountPair(CountType s, CountType w) noexcept;
/** Convert back to the packed integer form. */
FieldType
combinedValue() const noexcept;
static constexpr CountType maxStrongValue =
static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue =
static_cast<CountType>((one << WeakCountNumBits) - 1);
/** Put an extra margin to detect when running up against limits.
This is only used in debug code, and is useful if we reduce the
number of bits in the strong and weak counts (to 16 and 14 bits).
*/
static constexpr CountType checkStrongMaxValue = maxStrongValue - 32;
static constexpr CountType checkWeakMaxValue = maxWeakValue - 32;
};
};
inline void
IntrusiveRefCounts::addStrongRef() const noexcept
{
refCounts.fetch_add(strongDelta, std::memory_order_acq_rel);
}
inline void
IntrusiveRefCounts::addWeakRef() const noexcept
{
refCounts.fetch_add(weakDelta, std::memory_order_acq_rel);
}
inline ReleaseStrongRefAction
IntrusiveRefCounts::releaseStrongRef() const
{
// Subtract `strongDelta` from refCounts. If this releases the last strong
// ref, set the `partialDestroyStarted` bit. It is important that the ref
// count and the `partialDestroyStartedBit` are changed atomically (hence
// the loop and `compare_exchange` op). If this didn't need to be done
// atomically, the loop could be replaced with a `fetch_sub` and a
// conditional `fetch_or`. This loop will almost always run once.
using enum ReleaseStrongRefAction;
auto prevIntVal = refCounts.load(std::memory_order_acquire);
while (1)
{
RefCountPair const prevVal{prevIntVal};
XRPL_ASSERT(
(prevVal.strong >= strongDelta),
"ripple::IntrusiveRefCounts::releaseStrongRef : previous ref "
"higher than new");
auto nextIntVal = prevIntVal - strongDelta;
ReleaseStrongRefAction action = noop;
if (prevVal.strong == 1)
{
if (prevVal.weak == 0)
{
action = destroy;
}
else
{
nextIntVal |= partialDestroyStartedMask;
action = partialDestroy;
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
// Can't be in partial destroy because only decrementing the strong
// count to zero can start a partial destroy, and that can't happen
// twice.
XRPL_ASSERT(
(action == noop) || !(prevIntVal & partialDestroyStartedMask),
"ripple::IntrusiveRefCounts::releaseStrongRef : not in partial "
"destroy");
return action;
}
}
}
inline ReleaseStrongRefAction
IntrusiveRefCounts::addWeakReleaseStrongRef() const
{
using enum ReleaseStrongRefAction;
static_assert(weakDelta > strongDelta);
auto constexpr delta = weakDelta - strongDelta;
auto prevIntVal = refCounts.load(std::memory_order_acquire);
// This loop will almost always run once. The loop is needed to atomically
// change the counts and flags (the count could be atomically changed, but
// the flags depend on the current value of the counts).
//
// Note: If this becomes a perf bottleneck, the `partialDestoryStartedMask`
// may be able to be set non-atomically. But it is easier to reason about
// the code if the flag is set atomically.
while (1)
{
RefCountPair const prevVal{prevIntVal};
// Converted the last strong pointer to a weak pointer.
//
// Can't be in partial destroy because only decrementing the
// strong count to zero can start a partial destroy, and that
// can't happen twice.
XRPL_ASSERT(
(!prevVal.partialDestroyStartedBit),
"ripple::IntrusiveRefCounts::addWeakReleaseStrongRef : not in "
"partial destroy");
auto nextIntVal = prevIntVal + delta;
ReleaseStrongRefAction action = noop;
if (prevVal.strong == 1)
{
if (prevVal.weak == 0)
{
action = noop;
}
else
{
nextIntVal |= partialDestroyStartedMask;
action = partialDestroy;
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
XRPL_ASSERT(
(!(prevIntVal & partialDestroyStartedMask)),
"ripple::IntrusiveRefCounts::addWeakReleaseStrongRef : not "
"started partial destroy");
return action;
}
}
}
inline ReleaseWeakRefAction
IntrusiveRefCounts::releaseWeakRef() const
{
auto prevIntVal = refCounts.fetch_sub(weakDelta, std::memory_order_acq_rel);
RefCountPair prev = prevIntVal;
if (prev.weak == 1 && prev.strong == 0)
{
if (!prev.partialDestroyStartedBit)
{
// This case should only be hit if the partialDestroyStartedBit is
// set non-atomically (and even then very rarely). The code is kept
// in case we need to set the flag non-atomically for perf reasons.
refCounts.wait(prevIntVal, std::memory_order_acquire);
prevIntVal = refCounts.load(std::memory_order_acquire);
prev = RefCountPair{prevIntVal};
}
if (!prev.partialDestroyFinishedBit)
{
// partial destroy MUST finish before running a full destroy (when
// using weak pointers)
refCounts.wait(prevIntVal - weakDelta, std::memory_order_acquire);
}
return ReleaseWeakRefAction::destroy;
}
return ReleaseWeakRefAction::noop;
}
inline bool
IntrusiveRefCounts::checkoutStrongRefFromWeak() const noexcept
{
auto curValue = RefCountPair{1, 1}.combinedValue();
auto desiredValue = RefCountPair{2, 1}.combinedValue();
while (!refCounts.compare_exchange_weak(
curValue, desiredValue, std::memory_order_acq_rel))
{
RefCountPair const prev{curValue};
if (!prev.strong)
return false;
desiredValue = curValue + strongDelta;
}
return true;
}
inline bool
IntrusiveRefCounts::expired() const noexcept
{
RefCountPair const val = refCounts.load(std::memory_order_acquire);
return val.strong == 0;
}
inline std::size_t
IntrusiveRefCounts::use_count() const noexcept
{
RefCountPair const val = refCounts.load(std::memory_order_acquire);
return val.strong;
}
inline IntrusiveRefCounts::~IntrusiveRefCounts() noexcept
{
#ifndef NDEBUG
auto v = refCounts.load(std::memory_order_acquire);
XRPL_ASSERT(
(!(v & valueMask)),
"ripple::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
auto t = v & tagMask;
XRPL_ASSERT(
(!t || t == tagMask),
"ripple::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
#endif
}
//------------------------------------------------------------------------------
inline IntrusiveRefCounts::RefCountPair::RefCountPair(
IntrusiveRefCounts::FieldType v) noexcept
: strong{static_cast<CountType>(v & strongMask)}
, weak{static_cast<CountType>((v & weakMask) >> StrongCountNumBits)}
, partialDestroyStartedBit{v & partialDestroyStartedMask}
, partialDestroyFinishedBit{v & partialDestroyFinishedMask}
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair(FieldType) : inputs inside "
"range");
}
inline IntrusiveRefCounts::RefCountPair::RefCountPair(
IntrusiveRefCounts::CountType s,
IntrusiveRefCounts::CountType w) noexcept
: strong{s}, weak{w}
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair(CountType, CountType) : "
"inputs inside range");
}
inline IntrusiveRefCounts::FieldType
IntrusiveRefCounts::RefCountPair::combinedValue() const noexcept
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"inside range");
return (static_cast<IntrusiveRefCounts::FieldType>(weak)
<< IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) |
partialDestroyStartedBit | partialDestroyFinishedBit;
}
template <class T>
inline void
partialDestructorFinished(T** o)
{
T& self = **o;
IntrusiveRefCounts::RefCountPair p =
self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
XRPL_ASSERT(
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit &&
!p.strong),
"ripple::partialDestructorFinished : not a weak ref");
if (!p.weak)
{
// There was a weak count before the partial destructor ran (or we would
// have run the full destructor) and now there isn't a weak count. Some
// thread is waiting to run the destructor.
self.refCounts.notify_one();
}
// Set the pointer to null to emphasize that the object shouldn't be used
// after calling this function as it may be destroyed in another thread.
*o = nullptr;
}
//------------------------------------------------------------------------------
} // namespace ripple
#endif

View File

@@ -21,7 +21,6 @@
#define RIPPLE_BASICS_LOCALVALUE_H_INCLUDED
#include <boost/thread/tss.hpp>
#include <memory>
#include <unordered_map>

View File

@@ -22,10 +22,8 @@
#include <xrpl/basics/UnorderedContainers.h>
#include <xrpl/beast/utility/Journal.h>
#include <boost/beast/core/string.hpp>
#include <boost/filesystem.hpp>
#include <fstream>
#include <map>
#include <memory>

View File

@@ -20,11 +20,11 @@
#ifndef RIPPLE_BASICS_RESOLVER_H_INCLUDED
#define RIPPLE_BASICS_RESOLVER_H_INCLUDED
#include <xrpl/beast/net/IPEndpoint.h>
#include <functional>
#include <vector>
#include <xrpl/beast/net/IPEndpoint.h>
namespace ripple {
class Resolver

View File

@@ -22,7 +22,6 @@
#include <xrpl/basics/Resolver.h>
#include <xrpl/beast/utility/Journal.h>
#include <boost/asio/io_service.hpp>
namespace ripple {

View File

@@ -1,135 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_H_INCLUDED
#define RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_H_INCLUDED
#include <memory>
#include <variant>
namespace ripple {
/** A combination of a std::shared_ptr and a std::weak_pointer.
This class is a wrapper to a `std::variant<std::shared_ptr,std::weak_ptr>`
This class is useful for storing intrusive pointers in tagged caches using less
memory than storing both pointers directly.
*/
template <class T>
class SharedWeakCachePointer
{
public:
SharedWeakCachePointer() = default;
SharedWeakCachePointer(SharedWeakCachePointer const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer(std::shared_ptr<TT> const& rhs);
SharedWeakCachePointer(SharedWeakCachePointer&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer(std::shared_ptr<TT>&& rhs);
SharedWeakCachePointer&
operator=(SharedWeakCachePointer const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer&
operator=(std::shared_ptr<TT> const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer&
operator=(std::shared_ptr<TT>&& rhs);
~SharedWeakCachePointer();
/** Return a strong pointer if this is already a strong pointer (i.e. don't
lock the weak pointer. Use the `lock` method if that's what's needed)
*/
std::shared_ptr<T> const&
getStrong() const;
/** Return true if this is a strong pointer and the strong pointer is
seated.
*/
explicit
operator bool() const noexcept;
/** Set the pointer to null, decrement the appropriate ref count, and run
the appropriate release action.
*/
void
reset();
/** If this is a strong pointer, return the raw pointer. Otherwise return
null.
*/
T*
get() const;
/** If this is a strong pointer, return the strong count. Otherwise return 0
*/
std::size_t
use_count() const;
/** Return true if there is a non-zero strong count. */
bool
expired() const;
/** If this is a strong pointer, return the strong pointer. Otherwise
attempt to lock the weak pointer.
*/
std::shared_ptr<T>
lock() const;
/** Return true is this represents a strong pointer. */
bool
isStrong() const;
/** Return true is this represents a weak pointer. */
bool
isWeak() const;
/** If this is a weak pointer, attempt to convert it to a strong pointer.
@return true if successfully converted to a strong pointer (or was
already a strong pointer). Otherwise false.
*/
bool
convertToStrong();
/** If this is a strong pointer, attempt to convert it to a weak pointer.
@return false if the pointer is null. Otherwise return true.
*/
bool
convertToWeak();
private:
std::variant<std::shared_ptr<T>, std::weak_ptr<T>> combo_;
};
} // namespace ripple
#endif

View File

@@ -1,192 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_IPP_INCLUDED
#define RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_IPP_INCLUDED
#include <xrpl/basics/SharedWeakCachePointer.h>
namespace ripple {
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer const& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
std::shared_ptr<TT> const& rhs)
: combo_{rhs}
{
}
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer&& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs)
: combo_{std::move(rhs)}
{
}
template <class T>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) =
default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(std::shared_ptr<TT> const& rhs)
{
combo_ = rhs;
return *this;
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(std::shared_ptr<TT>&& rhs)
{
combo_ = std::move(rhs);
return *this;
}
template <class T>
SharedWeakCachePointer<T>::~SharedWeakCachePointer() = default;
// Return a strong pointer if this is already a strong pointer (i.e. don't
// lock the weak pointer. Use the `lock` method if that's what's needed)
template <class T>
std::shared_ptr<T> const&
SharedWeakCachePointer<T>::getStrong() const
{
static std::shared_ptr<T> const empty;
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return *p;
return empty;
}
template <class T>
SharedWeakCachePointer<T>::operator bool() const noexcept
{
return !!std::get_if<std::shared_ptr<T>>(&combo_);
}
template <class T>
void
SharedWeakCachePointer<T>::reset()
{
combo_ = std::shared_ptr<T>{};
}
template <class T>
T*
SharedWeakCachePointer<T>::get() const
{
return std::get_if<std::shared_ptr<T>>(&combo_).get();
}
template <class T>
std::size_t
SharedWeakCachePointer<T>::use_count() const
{
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return p->use_count();
return 0;
}
template <class T>
bool
SharedWeakCachePointer<T>::expired() const
{
if (auto p = std::get_if<std::weak_ptr<T>>(&combo_))
return p->expired();
return !std::get_if<std::shared_ptr<T>>(&combo_);
}
template <class T>
std::shared_ptr<T>
SharedWeakCachePointer<T>::lock() const
{
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return *p;
if (auto p = std::get_if<std::weak_ptr<T>>(&combo_))
return p->lock();
return {};
}
template <class T>
bool
SharedWeakCachePointer<T>::isStrong() const
{
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return !!p->get();
return false;
}
template <class T>
bool
SharedWeakCachePointer<T>::isWeak() const
{
return !isStrong();
}
template <class T>
bool
SharedWeakCachePointer<T>::convertToStrong()
{
if (isStrong())
return true;
if (auto p = std::get_if<std::weak_ptr<T>>(&combo_))
{
if (auto s = p->lock())
{
combo_ = std::move(s);
return true;
}
}
return false;
}
template <class T>
bool
SharedWeakCachePointer<T>::convertToWeak()
{
if (isWeak())
return true;
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
{
combo_ = std::weak_ptr<T>(*p);
return true;
}
return false;
}
} // namespace ripple
#endif

View File

@@ -23,7 +23,6 @@
#include <xrpl/basics/contract.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <algorithm>
#include <array>
#include <cstdint>

View File

@@ -29,6 +29,7 @@
#include <array>
#include <cstdint>
#include <optional>
#include <sstream>
#include <string>
namespace ripple {

View File

@@ -20,14 +20,11 @@
#ifndef RIPPLE_BASICS_TAGGEDCACHE_H_INCLUDED
#define RIPPLE_BASICS_TAGGEDCACHE_H_INCLUDED
#include <xrpl/basics/IntrusivePointer.h>
#include <xrpl/basics/Log.h>
#include <xrpl/basics/SharedWeakCachePointer.ipp>
#include <xrpl/basics/UnorderedContainers.h>
#include <xrpl/basics/hardened_hash.h>
#include <xrpl/beast/clock/abstract_clock.h>
#include <xrpl/beast/insight/Insight.h>
#include <atomic>
#include <functional>
#include <mutex>
@@ -53,8 +50,6 @@ template <
class Key,
class T,
bool IsKeyCache = false,
class SharedWeakUnionPointerType = SharedWeakCachePointer<T>,
class SharedPointerType = std::shared_ptr<T>,
class Hash = hardened_hash<>,
class KeyEqual = std::equal_to<Key>,
class Mutex = std::recursive_mutex>
@@ -65,8 +60,6 @@ public:
using key_type = Key;
using mapped_type = T;
using clock_type = beast::abstract_clock<std::chrono::steady_clock>;
using shared_weak_combo_pointer_type = SharedWeakUnionPointerType;
using shared_pointer_type = SharedPointerType;
public:
TaggedCache(
@@ -76,48 +69,231 @@ public:
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector =
beast::insight::NullCollector::New());
beast::insight::NullCollector::New())
: m_journal(journal)
, m_clock(clock)
, m_stats(
name,
std::bind(&TaggedCache::collect_metrics, this),
collector)
, m_name(name)
, m_target_size(size)
, m_target_age(expiration)
, m_cache_count(0)
, m_hits(0)
, m_misses(0)
{
}
public:
/** Return the clock associated with the cache. */
clock_type&
clock();
clock()
{
return m_clock;
}
/** Returns the number of items in the container. */
std::size_t
size() const;
size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
}
void
setTargetSize(int s)
{
std::lock_guard lock(m_mutex);
m_target_size = s;
if (s > 0)
{
for (auto& partition : m_cache.map())
{
partition.rehash(static_cast<std::size_t>(
(s + (s >> 2)) /
(partition.max_load_factor() * m_cache.partitions()) +
1));
}
}
JLOG(m_journal.debug()) << m_name << " target size set to " << s;
}
clock_type::duration
getTargetAge() const
{
std::lock_guard lock(m_mutex);
return m_target_age;
}
void
setTargetAge(clock_type::duration s)
{
std::lock_guard lock(m_mutex);
m_target_age = s;
JLOG(m_journal.debug())
<< m_name << " target age set to " << m_target_age.count();
}
int
getCacheSize() const;
getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
}
int
getTrackSize() const;
getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
}
float
getHitRate();
getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
return m_hits * (100.0f / std::max(1.0f, total));
}
void
clear();
clear()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
m_cache_count = 0;
}
void
reset();
reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
m_cache_count = 0;
m_hits = 0;
m_misses = 0;
}
/** Refresh the last access time on a key if present.
@return `true` If the key was found.
*/
template <class KeyComparable>
bool
touch_if_exists(KeyComparable const& key);
touch_if_exists(KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
auto const iter(m_cache.find(key));
if (iter == m_cache.end())
{
++m_stats.misses;
return false;
}
iter->second.touch(m_clock.now());
++m_stats.hits;
return true;
}
using SweptPointersVector = std::vector<SharedWeakUnionPointerType>;
using SweptPointersVector = std::pair<
std::vector<std::shared_ptr<mapped_type>>,
std::vector<std::weak_ptr<mapped_type>>>;
void
sweep();
sweep()
{
// Keep references to all the stuff we sweep
// For performance, each worker thread should exit before the swept data
// is destroyed but still within the main cache lock.
std::vector<SweptPointersVector> allStuffToSweep(m_cache.partitions());
clock_type::time_point const now(m_clock.now());
clock_type::time_point when_expire;
auto const start = std::chrono::steady_clock::now();
{
std::lock_guard lock(m_mutex);
if (m_target_size == 0 ||
(static_cast<int>(m_cache.size()) <= m_target_size))
{
when_expire = now - m_target_age;
}
else
{
when_expire =
now - m_target_age * m_target_size / m_cache.size();
clock_type::duration const minimumAge(std::chrono::seconds(1));
if (when_expire > (now - minimumAge))
when_expire = now - minimumAge;
JLOG(m_journal.trace())
<< m_name << " is growing fast " << m_cache.size() << " of "
<< m_target_size << " aging at "
<< (now - when_expire).count() << " of "
<< m_target_age.count();
}
std::vector<std::thread> workers;
workers.reserve(m_cache.partitions());
std::atomic<int> allRemovals = 0;
for (std::size_t p = 0; p < m_cache.partitions(); ++p)
{
workers.push_back(sweepHelper(
when_expire,
now,
m_cache.map()[p],
allStuffToSweep[p],
allRemovals,
lock));
}
for (std::thread& worker : workers)
worker.join();
m_cache_count -= allRemovals;
}
// At this point allStuffToSweep will go out of scope outside the lock
// and decrement the reference count on each strong pointer.
JLOG(m_journal.debug())
<< m_name << " TaggedCache sweep lock duration "
<< std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start)
.count()
<< "ms";
}
bool
del(key_type const& key, bool valid);
del(const key_type& key, bool valid)
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
std::lock_guard lock(m_mutex);
auto cit = m_cache.find(key);
if (cit == m_cache.end())
return false;
Entry& entry = cit->second;
bool ret = false;
if (entry.isCached())
{
--m_cache_count;
entry.ptr.reset();
ret = true;
}
if (!valid || entry.isExpired())
m_cache.erase(cit);
return ret;
}
public:
/** Replace aliased objects with originals.
Due to concurrency it is possible for two separate objects with
@@ -131,23 +307,100 @@ public:
@return `true` If the key already existed.
*/
template <class R>
public:
bool
canonicalize(
key_type const& key,
SharedPointerType& data,
R&& replaceCallback);
const key_type& key,
std::shared_ptr<T>& data,
std::function<bool(std::shared_ptr<T> const&)>&& replace)
{
// Return canonical value, store if needed, refresh in cache
// Return values: true=we had the data already
std::lock_guard lock(m_mutex);
auto cit = m_cache.find(key);
if (cit == m_cache.end())
{
m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
return false;
}
Entry& entry = cit->second;
entry.touch(m_clock.now());
if (entry.isCached())
{
if (replace(entry.ptr))
{
entry.ptr = data;
entry.weak_ptr = data;
}
else
{
data = entry.ptr;
}
return true;
}
auto cachedData = entry.lock();
if (cachedData)
{
if (replace(entry.ptr))
{
entry.ptr = data;
entry.weak_ptr = data;
}
else
{
entry.ptr = cachedData;
data = cachedData;
}
++m_cache_count;
return true;
}
entry.ptr = data;
entry.weak_ptr = data;
++m_cache_count;
return false;
}
bool
canonicalize_replace_cache(
key_type const& key,
SharedPointerType const& data);
const key_type& key,
std::shared_ptr<T> const& data)
{
return canonicalize(
key,
const_cast<std::shared_ptr<T>&>(data),
[](std::shared_ptr<T> const&) { return true; });
}
bool
canonicalize_replace_client(key_type const& key, SharedPointerType& data);
canonicalize_replace_client(const key_type& key, std::shared_ptr<T>& data)
{
return canonicalize(
key, data, [](std::shared_ptr<T> const&) { return false; });
}
SharedPointerType
fetch(key_type const& key);
std::shared_ptr<T>
fetch(const key_type& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
if (!ret)
++m_misses;
return ret;
}
/** Insert the element into the container.
If the key already exists, nothing happens.
@@ -156,11 +409,26 @@ public:
template <class ReturnType = bool>
auto
insert(key_type const& key, T const& value)
-> std::enable_if_t<!IsKeyCache, ReturnType>;
-> std::enable_if_t<!IsKeyCache, ReturnType>
{
auto p = std::make_shared<T>(std::cref(value));
return canonicalize_replace_client(key, p);
}
template <class ReturnType = bool>
auto
insert(key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>;
insert(key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
auto [it, inserted] = m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(now));
if (!inserted)
it->second.last_access = now;
return inserted;
}
// VFALCO NOTE It looks like this returns a copy of the data in
// the output parameter 'data'. This could be expensive.
@@ -168,18 +436,50 @@ public:
// simply return an iterator.
//
bool
retrieve(key_type const& key, T& data);
retrieve(const key_type& key, T& data)
{
// retrieve the value of the stored data
auto entry = fetch(key);
if (!entry)
return false;
data = *entry;
return true;
}
mutex_type&
peekMutex();
peekMutex()
{
return m_mutex;
}
std::vector<key_type>
getKeys() const;
getKeys() const
{
std::vector<key_type> v;
{
std::lock_guard lock(m_mutex);
v.reserve(m_cache.size());
for (auto const& _ : m_cache)
v.push_back(_.first);
}
return v;
}
// CachedSLEs functions.
/** Returns the fraction of cache hits. */
double
rate() const;
rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
if (tot == 0)
return 0;
return double(m_hits) / tot;
}
/** Fetch an item from the cache.
If the digest was not found, Handler
@@ -187,16 +487,73 @@ public:
std::shared_ptr<SLE const>(void)
*/
template <class Handler>
SharedPointerType
fetch(key_type const& digest, Handler const& h);
std::shared_ptr<T>
fetch(key_type const& digest, Handler const& h)
{
{
std::lock_guard l(m_mutex);
if (auto ret = initialFetch(digest, l))
return ret;
}
auto sle = h();
if (!sle)
return {};
std::lock_guard l(m_mutex);
++m_misses;
auto const [it, inserted] =
m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
if (!inserted)
it->second.touch(m_clock.now());
return it->second.ptr;
}
// End CachedSLEs functions.
private:
SharedPointerType
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l);
std::shared_ptr<T>
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l)
{
auto cit = m_cache.find(key);
if (cit == m_cache.end())
return {};
Entry& entry = cit->second;
if (entry.isCached())
{
++m_hits;
entry.touch(m_clock.now());
return entry.ptr;
}
entry.ptr = entry.lock();
if (entry.isCached())
{
// independent of cache size, so not counted as a hit
++m_cache_count;
entry.touch(m_clock.now());
return entry.ptr;
}
m_cache.erase(cit);
return {};
}
void
collect_metrics();
collect_metrics()
{
m_stats.size.set(getCacheSize());
{
beast::insight::Gauge::value_type hit_rate(0);
{
std::lock_guard lock(m_mutex);
auto const total(m_hits + m_misses);
if (total != 0)
hit_rate = (m_hits * 100) / total;
}
m_stats.hit_rate.set(hit_rate);
}
}
private:
struct Stats
@@ -242,37 +599,36 @@ private:
class ValueEntry
{
public:
shared_weak_combo_pointer_type ptr;
std::shared_ptr<mapped_type> ptr;
std::weak_ptr<mapped_type> weak_ptr;
clock_type::time_point last_access;
ValueEntry(
clock_type::time_point const& last_access_,
shared_pointer_type const& ptr_)
: ptr(ptr_), last_access(last_access_)
std::shared_ptr<mapped_type> const& ptr_)
: ptr(ptr_), weak_ptr(ptr_), last_access(last_access_)
{
}
bool
isWeak() const
{
if (!ptr)
return true;
return ptr.isWeak();
return ptr == nullptr;
}
bool
isCached() const
{
return ptr && ptr.isStrong();
return ptr != nullptr;
}
bool
isExpired() const
{
return ptr.expired();
return weak_ptr.expired();
}
SharedPointerType
std::shared_ptr<mapped_type>
lock()
{
return ptr.lock();
return weak_ptr.lock();
}
void
touch(clock_type::time_point const& now)
@@ -301,7 +657,72 @@ private:
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&);
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
int mapRemovals = 0;
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
stuffToSweep.first.reserve(partition.size());
stuffToSweep.second.reserve(partition.size());
{
auto cit = partition.begin();
while (cit != partition.end())
{
if (cit->second.isWeak())
{
// weak
if (cit->second.isExpired())
{
stuffToSweep.second.push_back(
std::move(cit->second.weak_ptr));
++mapRemovals;
cit = partition.erase(cit);
}
else
{
++cit;
}
}
else if (cit->second.last_access <= when_expire)
{
// strong, expired
++cacheRemovals;
if (cit->second.ptr.use_count() == 1)
{
stuffToSweep.first.push_back(
std::move(cit->second.ptr));
++mapRemovals;
cit = partition.erase(cit);
}
else
{
// remains weakly cached
cit->second.ptr.reset();
++cit;
}
}
else
{
// strong, not expired
++cit;
}
}
}
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
});
}
[[nodiscard]] std::thread
sweepHelper(
@@ -310,7 +731,45 @@ private:
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&);
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
int mapRemovals = 0;
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
{
auto cit = partition.begin();
while (cit != partition.end())
{
if (cit->second.last_access > now)
{
cit->second.last_access = now;
++cit;
}
else if (cit->second.last_access <= when_expire)
{
cit = partition.erase(cit);
}
else
{
++cit;
}
}
}
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
});
};
beast::Journal m_journal;
clock_type& m_clock;
@@ -322,10 +781,10 @@ private:
std::string m_name;
// Desired number of cache entries (0 = ignore)
int const m_target_size;
int m_target_size;
// Desired maximum cache age
clock_type::duration const m_target_age;
clock_type::duration m_target_age;
// Number of items cached
int m_cache_count;

File diff suppressed because it is too large Load Diff

View File

@@ -25,7 +25,6 @@
#include <xrpl/beast/hash/hash_append.h>
#include <xrpl/beast/hash/uhash.h>
#include <xrpl/beast/hash/xxhasher.h>
#include <unordered_map>
#include <unordered_set>

View File

@@ -20,6 +20,7 @@
#ifndef RIPPLE_ALGORITHM_H_INCLUDED
#define RIPPLE_ALGORITHM_H_INCLUDED
#include <iterator>
#include <utility>
namespace ripple {

View File

@@ -33,13 +33,12 @@
#include <xrpl/basics/strHex.h>
#include <xrpl/beast/utility/Zero.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <boost/endian/conversion.hpp>
#include <boost/functional/hash.hpp>
#include <algorithm>
#include <array>
#include <cstring>
#include <functional>
#include <type_traits>
namespace ripple {
@@ -386,7 +385,7 @@ public:
}
base_uint&
operator^=(base_uint const& b)
operator^=(const base_uint& b)
{
for (int i = 0; i < WIDTH; i++)
data_[i] ^= b.data_[i];
@@ -395,7 +394,7 @@ public:
}
base_uint&
operator&=(base_uint const& b)
operator&=(const base_uint& b)
{
for (int i = 0; i < WIDTH; i++)
data_[i] &= b.data_[i];
@@ -404,7 +403,7 @@ public:
}
base_uint&
operator|=(base_uint const& b)
operator|=(const base_uint& b)
{
for (int i = 0; i < WIDTH; i++)
data_[i] |= b.data_[i];
@@ -427,11 +426,11 @@ public:
return *this;
}
base_uint const
const base_uint
operator++(int)
{
// postfix operator
base_uint const ret = *this;
const base_uint ret = *this;
++(*this);
return ret;
@@ -453,11 +452,11 @@ public:
return *this;
}
base_uint const
const base_uint
operator--(int)
{
// postfix operator
base_uint const ret = *this;
const base_uint ret = *this;
--(*this);
return ret;
@@ -478,7 +477,7 @@ public:
}
base_uint&
operator+=(base_uint const& b)
operator+=(const base_uint& b)
{
std::uint64_t carry = 0;
@@ -523,7 +522,7 @@ public:
}
[[nodiscard]] constexpr bool
parseHex(char const* str)
parseHex(const char* str)
{
return parseHex(std::string_view{str});
}

View File

@@ -20,16 +20,17 @@
#ifndef RIPPLE_BASICS_CHRONO_H_INCLUDED
#define RIPPLE_BASICS_CHRONO_H_INCLUDED
#include <date/date.h>
#include <xrpl/beast/clock/abstract_clock.h>
#include <xrpl/beast/clock/basic_seconds_clock.h>
#include <xrpl/beast/clock/manual_clock.h>
#include <date/date.h>
#include <chrono>
#include <cstdint>
#include <ratio>
#include <string>
#include <type_traits>
namespace ripple {

View File

@@ -43,7 +43,7 @@ struct less
using result_type = bool;
constexpr bool
operator()(T const& left, T const& right) const
operator()(const T& left, const T& right) const
{
return std::less<T>()(left, right);
}
@@ -55,7 +55,7 @@ struct equal_to
using result_type = bool;
constexpr bool
operator()(T const& left, T const& right) const
operator()(const T& left, const T& right) const
{
return std::equal_to<T>()(left, right);
}

View File

@@ -21,9 +21,9 @@
#define RIPPLE_BASICS_CONTRACT_H_INCLUDED
#include <xrpl/beast/type_name.h>
#include <exception>
#include <string>
#include <typeinfo>
#include <utility>
namespace ripple {

View File

@@ -24,8 +24,12 @@
#include <xrpl/beast/hash/xxhasher.h>
#include <cstdint>
#include <functional>
#include <mutex>
#include <random>
#include <type_traits>
#include <unordered_map>
#include <unordered_set>
#include <utility>
namespace ripple {

View File

@@ -21,7 +21,6 @@
#define RIPPLE_BASICS_MAKE_SSLCONTEXT_H_INCLUDED
#include <boost/asio/ssl/context.hpp>
#include <string>
namespace ripple {

View File

@@ -23,6 +23,7 @@
#include <cstdint>
#include <limits>
#include <optional>
#include <utility>
namespace ripple {
auto constexpr muldiv_max = std::numeric_limits<std::uint64_t>::max();

View File

@@ -22,7 +22,6 @@
#include <xrpl/beast/hash/uhash.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <functional>
#include <optional>
#include <string>
@@ -52,7 +51,7 @@ template <
typename Value,
typename Hash,
typename Pred = std::equal_to<Key>,
typename Alloc = std::allocator<std::pair<Key const, Value>>>
typename Alloc = std::allocator<std::pair<const Key, Value>>>
class partitioned_unordered_map
{
std::size_t partitions_;

View File

@@ -22,9 +22,9 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/beast/xor_shift_engine.h>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <limits>
#include <mutex>
#include <random>

View File

@@ -19,7 +19,6 @@
#define RIPPLE_BASICS_SPINLOCK_H_INCLUDED
#include <xrpl/beast/utility/instrumentation.h>
#include <atomic>
#include <limits>
#include <type_traits>

View File

@@ -21,11 +21,11 @@
#define BEAST_UTILITY_TAGGED_INTEGER_H_INCLUDED
#include <xrpl/beast/hash/hash_append.h>
#include <boost/operators.hpp>
#include <functional>
#include <iostream>
#include <type_traits>
#include <utility>
namespace ripple {
@@ -74,13 +74,13 @@ public:
}
bool
operator<(tagged_integer const& rhs) const noexcept
operator<(const tagged_integer& rhs) const noexcept
{
return m_value < rhs.m_value;
}
bool
operator==(tagged_integer const& rhs) const noexcept
operator==(const tagged_integer& rhs) const noexcept
{
return m_value == rhs.m_value;
}
@@ -142,14 +142,14 @@ public:
}
tagged_integer&
operator<<=(tagged_integer const& rhs) noexcept
operator<<=(const tagged_integer& rhs) noexcept
{
m_value <<= rhs.m_value;
return *this;
}
tagged_integer&
operator>>=(tagged_integer const& rhs) noexcept
operator>>=(const tagged_integer& rhs) noexcept
{
m_value >>= rhs.m_value;
return *this;

View File

@@ -21,7 +21,6 @@
#define BEAST_ASIO_IO_LATENCY_PROBE_H_INCLUDED
#include <xrpl/beast/utility/instrumentation.h>
#include <boost/asio/basic_waitable_timer.hpp>
#include <boost/asio/io_service.hpp>

View File

@@ -20,6 +20,9 @@
#ifndef BEAST_CHRONO_ABSTRACT_CLOCK_H_INCLUDED
#define BEAST_CHRONO_ABSTRACT_CLOCK_H_INCLUDED
#include <chrono>
#include <string>
namespace beast {
/** Abstract interface to a clock.

View File

@@ -23,8 +23,6 @@
#include <xrpl/beast/clock/abstract_clock.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <chrono>
namespace beast {
/** Manual clock implementation.

View File

@@ -22,7 +22,6 @@
#include <xrpl/beast/container/aged_container.h>
#include <chrono>
#include <type_traits>
namespace beast {

View File

@@ -20,6 +20,8 @@
#ifndef BEAST_CONTAINER_DETAIL_AGED_ASSOCIATIVE_CONTAINER_H_INCLUDED
#define BEAST_CONTAINER_DETAIL_AGED_ASSOCIATIVE_CONTAINER_H_INCLUDED
#include <type_traits>
namespace beast {
namespace detail {

View File

@@ -25,14 +25,13 @@
#include <xrpl/beast/container/detail/aged_associative_container.h>
#include <xrpl/beast/container/detail/aged_container_iterator.h>
#include <xrpl/beast/container/detail/empty_base_optimization.h>
#include <boost/intrusive/list.hpp>
#include <boost/intrusive/set.hpp>
#include <boost/version.hpp>
#include <algorithm>
#include <functional>
#include <initializer_list>
#include <iterator>
#include <memory>
#include <type_traits>
#include <utility>

View File

@@ -25,10 +25,8 @@
#include <xrpl/beast/container/detail/aged_associative_container.h>
#include <xrpl/beast/container/detail/aged_container_iterator.h>
#include <xrpl/beast/container/detail/empty_base_optimization.h>
#include <boost/intrusive/list.hpp>
#include <boost/intrusive/unordered_set.hpp>
#include <algorithm>
#include <cmath>
#include <functional>

View File

@@ -11,7 +11,6 @@
#define BEAST_CONTAINER_DETAIL_EMPTY_BASE_OPTIMIZATION_H_INCLUDED
#include <boost/type_traits/is_final.hpp>
#include <type_traits>
#include <utility>

View File

@@ -23,15 +23,16 @@
#include <xrpl/beast/utility/instrumentation.h>
#include <boost/core/detail/string_view.hpp>
#include <algorithm>
#include <cerrno>
#include <charconv>
#include <cstdlib>
#include <iterator>
#include <limits>
#include <string>
#include <type_traits>
#include <typeinfo>
#include <utility>
namespace beast {

View File

@@ -21,6 +21,7 @@
#define BEAST_INTRUSIVE_LIST_H_INCLUDED
#include <iterator>
#include <type_traits>
namespace beast {

View File

@@ -23,37 +23,14 @@
#include <boost/container/flat_set.hpp>
#include <boost/endian/conversion.hpp>
/*
Workaround for overzealous clang warning, which trips on libstdc++ headers
In file included from
/usr/lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_algo.h:61:
/usr/lib/gcc/x86_64-linux-gnu/12/../../../../include/c++/12/bits/stl_tempbuf.h:263:8:
error: 'get_temporary_buffer<std::pair<ripple::Quality, const
std::vector<std::unique_ptr<ripple::Step>> *>>' is deprecated
[-Werror,-Wdeprecated-declarations] 263 |
std::get_temporary_buffer<value_type>(_M_original_len));
^
*/
#if defined(__clang__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated"
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
#endif
#include <functional>
#if defined(__clang__)
#pragma clang diagnostic pop
#endif
#include <array>
#include <chrono>
#include <cstdint>
#include <cstring>
#include <functional>
#include <map>
#include <memory>
#include <set>
#include <string>
#include <system_error>
#include <tuple>

View File

@@ -30,7 +30,7 @@ namespace beast {
template <class Hasher = xxhasher>
struct uhash
{
uhash() = default;
explicit uhash() = default;
using result_type = typename Hasher::result_type;

View File

@@ -21,7 +21,6 @@
#define BEAST_HASH_XXHASHER_H_INCLUDED
#include <boost/endian/conversion.hpp>
#include <xxhash.h>
#include <cstddef>

View File

@@ -20,10 +20,10 @@
#ifndef BEAST_INSIGHT_METER_H_INCLUDED
#define BEAST_INSIGHT_METER_H_INCLUDED
#include <xrpl/beast/insight/MeterImpl.h>
#include <memory>
#include <xrpl/beast/insight/MeterImpl.h>
namespace beast {
namespace insight {

Some files were not shown because too many files have changed in this diff Show More