Compare commits


232 Commits

Nicholas Dudfield
b12cee5d47 fix(consensus): wait for sidecar observation
Require tx-converged peers to advertise sidecar hashes before accepting RNG entropy or export-signature success on the strength of local quorum alignment alone.

The RNG reveal fast path now publishes the entropy set and waits for peer observation instead of accepting in the same tick. On timeout, RNG clears the advertised entropy hash and falls back to deterministic zero.

Add unit and CSF regression coverage for asymmetric peer observation.
2026-04-28 12:20:42 +07:00
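
A minimal sketch of the publish-then-observe rule described in this commit, assuming hypothetical names (PeerView, observeEntropy) and a toy 64-bit hash in place of the real set hash; it only illustrates the decision shape: accept non-zero entropy once a matching tx-converged peer is seen, otherwise keep waiting, and fall back to zero entropy at the deadline.

    // Hypothetical sketch of the publish-then-observe gate; none of these
    // names come from the codebase, and timings are illustrative.
    #include <chrono>
    #include <cstdint>
    #include <optional>
    #include <vector>

    using Hash = std::uint64_t;  // stand-in for a 256-bit set hash
    using Clock = std::chrono::steady_clock;

    struct PeerView {
        bool txConverged = false;            // peer agrees on the tx set
        std::optional<Hash> entropySetHash;  // advertised sidecar hash, if any
    };

    enum class EntropyDecision { Wait, Accept, ZeroFallback };

    // After publishing our own entropySetHash we require at least one
    // tx-converged peer to advertise the same hash, or fall back to zero
    // entropy at the deadline.
    EntropyDecision
    observeEntropy(Hash ours,
                   std::vector<PeerView> const& peers,
                   Clock::time_point publishedAt,
                   Clock::duration deadline)
    {
        for (auto const& p : peers)
            if (p.txConverged && p.entropySetHash == ours)
                return EntropyDecision::Accept;    // positive observation

        if (Clock::now() - publishedAt >= deadline)
            return EntropyDecision::ZeroFallback;  // clear hash, zero entropy

        return EntropyDecision::Wait;              // keep waiting this tick
    }
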
Nicholas Dudfield
a3b1e45f4d fix(consensus): bound export sidecar observation
When a candidate set contains ttEXPORT but a node has no local verified export sig material yet, give tx-converged peers one bounded opportunity to advertise an exportSigSetHash before closed-ledger apply.

This is a safety coordination window, not a wait-for-Export-success mechanism. If no advertised sidecar arrives or fetched material cannot be merged by the deadline, Export convergence is marked failed and the transaction retries or expires through normal rules.

Add CSF coverage for a peer that can only succeed by fetching peer-advertised export sidecars, plus a direct ConsensusExtensionsTick test for the pre-advertisement observation window. Document the consensus-extension priority order: safe, fast, works.
2026-04-28 11:15:03 +07:00
Nicholas Dudfield
3938ba7af4 docs(consensus): record export sidecar material flow
Clarify that export sidecar publication is local verified material only, and fetched sidecar leaves must be active-view checked, candidate-tx verified, and promoted into ExportSigCollector before closed-ledger apply can use them.
2026-04-28 10:59:00 +07:00
Nicholas Dudfield
96b1104646 fix(consensus): use quorum for export-only sidecars
Export-only originally used unanimity as a conservative substitute for the CE/RNG sidecar machinery. That made sense before Export had its own signed ExtendedPosition field and exportSigSetHash convergence gate.

Now Export sidecars are signed and converged independently of RNG, so a quorum-aligned exportSigSetHash plus verified active-view signature quorum is deterministic enough for Export-only mode. Keeping unanimity would let one active validator veto an otherwise converged export round.

Update CSF and testnet coverage to treat Export-only the same way: one missing/conflicting signer in a 5-validator network succeeds at 4/5, while below-quorum still retries or expires.
2026-04-28 10:54:04 +07:00
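
The quorum arithmetic behind the 4/5 example is simple enough to state directly. The sketch below uses a hypothetical quorumThreshold helper (the project's calculateQuorumThreshold is only named in these commits, not reproduced) and assumes the 80% threshold rounds up.

    // Illustrative quorum arithmetic only.
    #include <cassert>
    #include <cstddef>

    // 80% of the active UNL, rounded up, when sidecar convergence is
    // available; unanimity otherwise.
    std::size_t
    quorumThreshold(std::size_t unlSize, bool sidecarConvergence)
    {
        if (!sidecarConvergence)
            return unlSize;             // 100%: one absent signer vetoes
        return (unlSize * 4 + 4) / 5;   // ceil(0.8 * unlSize), an assumption
    }

    int main()
    {
        // 5-validator network: 4/5 verified signers meets the 80% threshold,
        // so one missing or conflicting signer no longer vetoes the round.
        assert(quorumThreshold(5, true) == 4);
        assert(quorumThreshold(5, false) == 5);
        // Below quorum (3/5) still retries or expires under either rule.
    }
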
Nicholas Dudfield
92bdd2ed9f fix(consensus): harvest replayed export signatures 2026-04-28 10:16:52 +07:00
Nicholas Dudfield
d87cfdc604 fix(consensus): clear export sigs when export disabled 2026-04-28 09:17:17 +07:00
Nicholas Dudfield
a956abb2d1 docs(consensus): clarify export sig quorum gates 2026-04-28 08:56:53 +07:00
Nicholas Dudfield
aa36a80ab7 docs(consensus): document sidecar acquire rationale 2026-04-28 08:35:45 +07:00
Nicholas Dudfield
e729aa11eb fix(hooks): preserve finalization semantics
Keep hook result/state finalization non-fatal while enforcing the hook-export backlog cap through the transaction-level ApplyContext guard. This avoids resetting non-success tec metadata and preserves hook_again weak execution behavior.
2026-04-28 07:47:20 +07:00
Nicholas Dudfield
c58da3da58 fix(export): cap hook export backlog
Enforce the pending export cap for hook-emitted ttEXPORT work before commit. Replace the non-present sfEmittedTxn template field when building ltEMITTED_TXN entries so in-flight ledger checks see the emitted wrapper.

Overflowing xport emission now returns tecDIR_FULL and leaves the emitted backlog capped at ExportLimits::maxPendingExports.
2026-04-27 22:55:23 +07:00
Nicholas Dudfield
0c2c59d258 fix(export): enforce pending export limits
Cap pending ttEXPORT work in open/apply ledgers, including hook-emitted exports when TxQ drains the emitted directory into the open ledger. Enforce the same bound for per-account shadow tickets so durable pending imports cannot grow unbounded.
2026-04-27 21:30:24 +07:00
Nicholas Dudfield
15662eb1b1 fix(consensus): cap export proposal signatures
Limit outbound TMProposeSet export signature attachments to ExportLimits::maxPendingExports so honest proposals stay within the same bound enforced by inbound proposal validation. Extra exports remain unsigned for that proposal and rely on the existing retry/expiry path.
2026-04-27 21:03:00 +07:00
Nicholas Dudfield
492fe90643 fix(consensus): expire stale export signatures
Stamp export signatures learned from proposals, sidecar sets, and candidate tx-set upgrades with a ledger sequence so cleanupStale can age them out. Remove invalid unverified signatures after tx-local verification fails, with a buffer match check to avoid deleting newer replacements.
2026-04-27 20:56:54 +07:00
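
A hedged sketch of the sequence-stamped aging this commit describes, with an illustrative entry type rather than the real ExportSigCollector; the 256-ledger retention figure comes from another commit in this range (the stale-cleanup fix), and the buffer-match check mirrors the "avoid deleting newer replacements" rule.

    // Illustrative types and constants, not the collector API.
    #include <cstdint>
    #include <map>
    #include <vector>

    struct SigEntry {
        std::vector<std::uint8_t> buffer;  // signature blob as last seen
        std::uint32_t firstSeenSeq = 0;    // ledger seq stamped at ingestion
        bool verified = false;
    };

    constexpr std::uint32_t staleAfterLedgers = 256;  // per the cleanup commit

    // Drop entries whose stamp has aged out (assumes currentSeq >= stamp).
    void
    cleanupStale(std::map<std::uint64_t, SigEntry>& sigs, std::uint32_t currentSeq)
    {
        for (auto it = sigs.begin(); it != sigs.end();)
        {
            if (currentSeq - it->second.firstSeenSeq > staleAfterLedgers)
                it = sigs.erase(it);
            else
                ++it;
        }
    }

    // Remove an unverified entry that failed tx-local verification only if
    // its buffer still matches the one we verified, so a newer replacement
    // is not deleted by mistake.
    void
    removeIfStillInvalid(std::map<std::uint64_t, SigEntry>& sigs,
                         std::uint64_t key,
                         std::vector<std::uint8_t> const& failedBuffer)
    {
        auto it = sigs.find(key);
        if (it != sigs.end() && !it->second.verified &&
            it->second.buffer == failedBuffer)
            sigs.erase(it);
    }
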
Nicholas Dudfield
ea413873b2 fix(consensus): preserve export state without rng 2026-04-27 20:34:58 +07:00
Nicholas Dudfield
625419eab7 fix(consensus): verify export sigs against tx set 2026-04-27 18:07:09 +07:00
Nicholas Dudfield
2218bdd7f3 fix(consensus): require export sigset quorum alignment 2026-04-27 17:36:06 +07:00
Nicholas Dudfield
f13233b00a docs(consensus): clarify validation sidecar signing rule
Remove the stale TMValidation exportSignatures field from the draft proto path now that export signatures ride signed proposal sidecars. Document that any future validation-carried ConsensusExtensions data must be covered by the signed validation payload and duplicate/replay identity, not an unsigned wrapper field.
2026-04-27 15:45:27 +07:00
Nicholas Dudfield
a61f334ca2 docs(consensus): capture extension design principles
Document the consensus-extension invariants for RNG, sidecars, export sig convergence, validator quorum, zero-entropy fallback, and proposal signing. Link the note from the RCL consensus README so future changes have a durable checklist.
2026-04-27 15:33:15 +07:00
Nicholas Dudfield
53a119ce30 fix(consensus): require rng entropy quorum alignment
Count the local proposer when deciding whether the previous round had enough participants for RNG, since prevProposers only tracks peers. This avoids a 4/5 honest quorum being treated as below quorum after one validator diverges.

Allow an already quorum-aligned entropySetHash to proceed despite below-quorum conflicting hashes, while retaining zero-entropy fallback when no entropy hash reaches quorum alignment. Add CSF coverage for a persistent single bogus entropy hash and for conflicting bogus hashes without quorum.
2026-04-27 15:29:36 +07:00
Nicholas Dudfield
63d1197345 fix(consensus): zero rng on unresolved entropy hash conflict 2026-04-27 15:10:39 +07:00
Nicholas Dudfield
aafd5b940b test(consensus): avoid brittle rng lcl quorum check 2026-04-27 14:47:18 +07:00
Nicholas Dudfield
efc497cf23 chore(levelization): refresh app overlay loop summary
This does not introduce a new levelization cycle; the existing xrpld.app <-> xrpld.overlay loop now has equal aggregate include counts after the consensus-extension work. Treat this as essentially the same architectural situation, not a meaningful worsening by itself.

TODO: if we want to fix the boundary properly, extract a small shared consensus-extension wire/interface layer below both app and overlay instead of shaving includes to change the generated ratio.
2026-04-27 14:01:54 +07:00
Nicholas Dudfield
f4e78c9a24 fix(consensus): apply negative unl to sidecar validator view 2026-04-27 12:50:43 +07:00
Nicholas Dudfield
7b5865c69c fix(consensus): sign export proposal attachments 2026-04-27 11:57:29 +07:00
Nicholas Dudfield
9f1ad521e1 fix(consensus): use active validator snapshots for sidecars 2026-04-27 10:59:33 +07:00
Nicholas Dudfield
26bbef8efd fix(consensus): harden sidecar quorum inputs 2026-04-27 10:14:12 +07:00
Nicholas Dudfield
6e71f84867 refactor: add typed sidecar SHAMap sync 2026-04-27 09:58:34 +07:00
Nicholas Dudfield
ab9b48f67a Merge remote-tracking branch 'origin/dev' into feature-export-rng
# Conflicts:
#	.github/workflows/levelization.yml
#	Builds/levelization/README.md
#	Builds/levelization/levelization.py
#	Builds/levelization/levelization.sh
#	cmake/RippledCore.cmake
2026-04-27 09:14:59 +07:00
Nicholas Dudfield
04077c1a55 test(testnet): assert zero entropy in degraded ledgers 2026-04-10 12:04:46 +07:00
Nicholas Dudfield
d94079d762 test(rng): relax PartialReveals sync assertion 2026-04-10 11:18:52 +07:00
Nicholas Dudfield
92ec07a1be chore: regenerate hook/sfcodes.h + format fix
Regenerate sfcodes.h to include the new sfSidecarType field
(UINT8, code 20).  Also apply clang-format to ConsensusExtensions.cpp.
2026-04-10 10:36:50 +07:00
Nicholas Dudfield
664db62588 fix: sidecar kind lost on cache hit + harden export sig parse
1. Record SidecarKind in pendingRngFetches_ before calling
   onAcquiredSidecarSet on local-cache-hit path. Without this,
   cached reveal/exportSig sets silently fell back to commit kind
   and were rejected by the sfSidecarType check.

2. Wrap export sig visitLeaves callback in try/catch (matching the
   RNG path) and enforce sfSidecarType == sidecarExportSig before
   processing — closes the shape-only acceptance gap.
2026-04-10 10:22:58 +07:00
Nicholas Dudfield
03a436d918 refactor: convert sidecar SHAMap entries from STTx to STObject
Replace STTx-based sidecar entries with plain STObject(sfGeneric)
using sfSidecarType (UINT8) discriminator. Eliminates unnecessary
transaction envelope overhead (sfSequence, sfFee, sfFlags) and
content-sniffing heuristics from the parse path.

Build: STObject with sidecarRngCommit/sidecarRngReveal/sidecarExportSig
Parse: sfSidecarType dispatch + typed field accessors
2026-04-10 10:14:06 +07:00
Nicholas Dudfield
7474048295 refactor: typed sidecar dispatch — eliminate content-sniffing heuristic
Replace the content-sniffing heuristic in onAcquiredSidecarSet with
typed dispatch based on SidecarKind.

The type is already known at fetch time:
- commitSetHash → SidecarKind::commit
- entropySetHash → SidecarKind::reveal
- exportSigSetHash → SidecarKind::exportSig

pendingRngFetches_ changes from hash_set<uint256> to
hash_map<uint256, SidecarKind>.  When the set arrives,
look up the kind by hash and dispatch — no leaf inspection.

This is the set-classification fix (Option E from the design doc):
no new SField, no STTx changes, no protocol additions, no RNG
proof-chain churn.
2026-04-10 09:18:43 +07:00
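
A small sketch of the hash-to-kind dispatch: SidecarKind and the three hash names come from the commit text, while the container and function names are illustrative. The cache-hit fix above is the reason the same lookup must run on both the network-fetch and the local-cache paths.

    #include <cstdint>
    #include <unordered_map>

    using SetHash = std::uint64_t;  // toy stand-in for the 256-bit set hash

    enum class SidecarKind { commit, reveal, exportSig };

    // The kind is known at fetch time, keyed by the advertised hash...
    std::unordered_map<SetHash, SidecarKind> pendingFetches;

    void requestSet(SetHash h, SidecarKind kind)
    {
        // commitSetHash -> commit, entropySetHash -> reveal,
        // exportSigSetHash -> exportSig
        pendingFetches.emplace(h, kind);
    }

    // ...and dispatch on arrival is a lookup, with no leaf inspection.
    bool onAcquiredSet(SetHash h /*, acquired map */)
    {
        auto it = pendingFetches.find(h);
        if (it == pendingFetches.end())
            return false;                  // unknown set: ignore
        SidecarKind kind = it->second;
        pendingFetches.erase(it);
        switch (kind)
        {
            case SidecarKind::commit:    /* merge into pending commits */ break;
            case SidecarKind::reveal:    /* merge into pending reveals */ break;
            case SidecarKind::exportSig: /* merge into sig collector   */ break;
        }
        return true;
    }
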
Nicholas Dudfield
1ee660529e fix: RPC handler sync, unused local, idiomatic Buffer comparison
- Add rng_poll_ms, no_export_sig, bootstrap_fast_start to the
  runtime_config RPC handler (SET and GET paths) so all ConfigVals
  fields are configurable live via admin RPC.
- Remove unused `added` counter in CSF fetchRngSetIfNeeded (was
  causing compiler warnings after debug logging removal).
- Use Buffer::operator== instead of std::memcmp in upgradeSignature,
  drop <cstring> include.
2026-04-10 08:56:16 +07:00
Nicholas Dudfield
311dfa1c23 chore: add TODO for RuntimeConfig activation gating
Both runtime_config and disconnect RPC handlers are already
Role::ADMIN.  Add a TODO to consider gating the entire
RuntimeConfig system on a config flag or compile-time define
for production nodes.
2026-04-10 08:31:54 +07:00
Nicholas Dudfield
f27cd2c567 refactor: consolidate env vars into RuntimeConfig
Move XAHAU_RNG_POLL_MS and XAHAUD_NO_EXPORT_SIG into RuntimeConfig
as rngPollMs and noExportSig fields.  Both are now configurable via
the XAHAU_RUNTIME_CONFIG JSON blob or individual env vars, and
controllable at runtime via the runtime_config RPC.

rngPollMs is clamped to minimum 50ms (prevents tight-loop polling).
Default remains 250ms.

This removes the last loose std::getenv calls from production code
outside of RuntimeConfig.  All env-var-based configuration now flows
through a single system.
2026-04-10 08:24:20 +07:00
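
A minimal sketch, assuming a trivial RuntimeConfig struct, of funnelling the two env vars through one loader with the 50ms floor; the real system also accepts a JSON blob and an admin RPC, both omitted here.

    #include <algorithm>
    #include <chrono>
    #include <cstdlib>
    #include <string>

    struct RuntimeConfig
    {
        std::chrono::milliseconds rngPollMs{250};  // default per commit text
        bool noExportSig = false;
    };

    RuntimeConfig loadFromEnv()
    {
        RuntimeConfig cfg;
        if (char const* v = std::getenv("XAHAU_RNG_POLL_MS"))
        {
            long ms = std::strtol(v, nullptr, 10);
            // Clamp to a 50ms floor so a bad value cannot cause
            // tight-loop polling.
            cfg.rngPollMs = std::chrono::milliseconds(std::max(ms, 50L));
        }
        if (char const* v = std::getenv("XAHAUD_NO_EXPORT_SIG"))
            cfg.noExportSig = std::string(v) != "0";
        return cfg;
    }
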
Nicholas Dudfield
f34fdc297c fix(export): close upgradeSignature TOCTOU with buffer comparison
upgradeSignature now takes the verified buffer and compares it against
the currently stored buffer before promoting to verified.  This guards
against concurrent overlay threads overwriting the buffer between the
caller's unverifiedSignatures() snapshot and the upgrade call.

If the stored buffer was overwritten (different size or content), the
upgrade is silently skipped — the new buffer will be verified on its
next encounter.
2026-04-10 08:19:45 +07:00
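
The TOCTOU guard reduces to a compare-before-promote under the collector lock; the sketch below uses a toy entry and mutex, not the real collector.

    #include <cstdint>
    #include <mutex>
    #include <vector>

    struct Entry
    {
        std::vector<std::uint8_t> buffer;
        bool verified = false;
    };

    std::mutex mtx;

    bool upgradeSignature(Entry& e, std::vector<std::uint8_t> const& verifiedBuffer)
    {
        std::lock_guard lock(mtx);
        // Another thread may have replaced the buffer since the caller
        // snapshotted it for verification; if so, skip silently and let the
        // new buffer be verified on its next encounter.
        if (e.buffer != verifiedBuffer)
            return false;
        e.verified = true;
        return true;
    }
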
Nicholas Dudfield
65fa63883d chore: remove CSF debug logging that floods CI output
Strip JLOG(j_.debug()) calls from buildEntropySet, fetchRngSetIfNeeded,
and finalizeRoundEntropy in CSF Peer.h.  These were added for local
debugging and caused CI failures due to output size limits.
2026-04-09 20:21:37 +07:00
Nicholas Dudfield
d8c683fb4c test(rng): fix AlignmentRequired test to run 1 round not 3
Running 3 rounds caused peer 0 to desync on round 2, dropping
prevProposers for the majority on round 3, triggering bootstrap
skip → zero entropy on the last round.  The gate works correctly
(logs show aligned=3, peersSeen=3) but the test was checking the
LAST round's entropy, not the round where the gate was exercised.

Run 1 round after warmup — sufficient to exercise the gate.
2026-04-09 18:09:17 +07:00
Nicholas Dudfield
fd53af304b fix(rng): measure entropy deadline from publish time, not reveal start
The entropy convergence deadline was measured from revealPhaseStart_,
which is set when entering ConvergingReveal.  By the time the entropy
set is published (after reveal timeout + observation tick), most of
the deadline budget was already spent — leaving insufficient time
for peer alignment.

Add entropyPublishStart_ timestamp set when the entropy set is first
published.  All convergence gate deadlines now measure from this
point, giving the full 2x rngREVEAL_TIMEOUT window for peer
proposals to propagate and alignment to be observed.
2026-04-09 18:06:18 +07:00
Nicholas Dudfield
2a3f0ec923 fix(rng): bounded wait for alignment instead of immediate fallback
When peers have published entropySetHash but none match ours yet
(e.g. a subset peer is the only one seen so far), wait for the
bounded deadline instead of immediately falling back to zero.
Other aligned peers may not have published yet — give them time.

Only fall back to zero if no alignment is observed within the
deadline (2x rngREVEAL_TIMEOUT).
2026-04-09 17:58:41 +07:00
Nicholas Dudfield
00f1f7ba30 fix(rng): subset-aware conflict detection in entropy convergence gate
After fetch/merge, if our entropy set hash didn't change, the
conflicting peer had a subset of our data — not a real threat.
Clear the conflict flag so we don't fall back to zero when a peer
simply has fewer reveals than us.

If the hash DID change (merge added data), re-count alignment
with the updated hash before treating it as a real conflict.

This prevents the majority from falling back to zero just because
one peer (e.g. isolated) has a smaller reveal set.
2026-04-09 17:53:58 +07:00
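
A sketch of the subset test, assuming toy reveal and hash types: an unchanged hash after union merge means the conflicting peer held a subset of our data, a changed hash means re-publish and re-count alignment before calling it a real conflict.

    #include <cstdint>
    #include <set>

    using Reveal = std::uint64_t;

    // Toy FNV-1a-style fold standing in for the real set hash.
    std::uint64_t hashOf(std::set<Reveal> const& s)
    {
        std::uint64_t h = 1469598103934665603ull;
        for (auto r : s) { h ^= r; h *= 1099511628211ull; }
        return h;
    }

    // Returns false when the peer only held a subset (clear the conflict
    // flag); true when the merge changed our set, so alignment must be
    // re-counted against the new hash before treating it as a real conflict.
    bool needsRealignment(std::set<Reveal>& ours, std::set<Reveal> const& peers)
    {
        auto const before = hashOf(ours);
        ours.insert(peers.begin(), peers.end());   // union merge
        if (hashOf(ours) == before)
            return false;                          // subset: not a real threat
        return true;                               // rebuild, re-publish, re-count
    }
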
Nicholas Dudfield
49f05e4e47 fix(rng): require positive peer alignment for non-zero entropy
The observation tick alone was insufficient — a node could pass the
gate without any peer confirming its entropySetHash.  Now the gate
requires at least one tx-converged peer with a matching hash before
accepting non-zero entropy.

Four cases after the observation tick:
1. aligned > 0: peers confirm our hash → proceed with entropy
2. conflict: fetch/merge/rebuild → bounded wait → zero fallback
3. aligned=0, peersSeen=0: no peers published yet → bounded wait →
   zero fallback if still no peers at deadline
4. aligned=0, peersSeen>0: peers published but none match → zero

Also:
- CSF finalizeRoundEntropy now uses shouldZeroEntropy() (quorum check)
- Two new TDD tests:
  - testRngNoEntropyWithoutPeerAlignment: healthy network must agree
  - testRngAlignmentRequiredForNonZeroEntropy: isolated peer must not
    produce non-zero entropy that differs from majority
2026-04-09 17:51:51 +07:00
Nicholas Dudfield
1f51b9c594 fix(csf): quorum threshold in shouldZeroEntropy + test adjustments
CSF shouldZeroEntropy() now checks reveals < quorumThreshold (80% of
UNL), matching production.  MajorRevealLoss test adjusted to verify
majority group agreement rather than requiring full synchronization
(peer 0 may desync when it misses most reveals).

All 15 ConsensusRng tests now pass.
2026-04-09 17:40:05 +07:00
Nicholas Dudfield
88a548a8ef fix(rng): observation tick + CSF quorum threshold in shouldZeroEntropy
Two fixes addressing the asymmetric-view problem:

1. Convergence gate now forces one observation tick after first
   publishing the entropySet before accepting.  Previously a node
   could publish + accept in the same tick, never seeing a peer's
   different hash.  The entropySetPublished_ flag ensures at least
   one round-trip for proposal propagation.

2. CSF shouldZeroEntropy() now checks quorum threshold (80% of UNL),
   matching production behavior.  Previously it only checked empty().

Result: PartialReveals test now passes — all 6 peers converge on
the same entropy (count=6) via union merge after the observation tick.
14/15 ConsensusRng tests pass.
2026-04-09 17:31:36 +07:00
Nicholas Dudfield
db302a0f78 fix(rng): add selfSeedReveal to fix CSF reveal counting
The CSF never self-seeded its own reveal into pendingReveals_ because
harvestRngData only processes peer proposals, not self.  The real code
handles this in decorateMessage, but the CSF has no equivalent.

Add selfSeedReveal() called from the tick at reveal transition.
Both the real ConsensusExtensions and the CSF Extensions implement it.
The real code now has belt-and-suspenders: tick + decorateMessage.

This fixes CSF peers having N-1 reveals instead of N, which caused
every peer to compute entropy from a different subset.
2026-04-09 17:23:53 +07:00
Nicholas Dudfield
383d9ec2e7 feat(csf): add SidecarStore for sidecar set fetch/merge simulation
Add a content-addressed SidecarStore to the CSF, simulating the
InboundTransactions SHAMap fetch pipeline.  Tagged entries (commit
or reveal) are published by hash during buildCommitSet/buildEntropySet
and fetched by hash during fetchRngSetIfNeeded, with type-aware
union merge into the correct local pending set.

Also adds debug logging to CSF Extensions for entropy pipeline
troubleshooting.
2026-04-09 17:17:53 +07:00
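
A toy content-addressed store in the spirit of the CSF SidecarStore described here: publish a tagged set under its hash, fetch it back by hash. Names and the stand-in leaf type are illustrative.

    #include <cstdint>
    #include <optional>
    #include <unordered_map>
    #include <vector>

    enum class Kind { commit, reveal, exportSig };

    struct TaggedSet
    {
        Kind kind;
        std::vector<std::uint64_t> entries;   // stand-in for SHAMap leaves
    };

    class SidecarStore
    {
    public:
        void publish(std::uint64_t hash, TaggedSet set)
        {
            store_.emplace(hash, std::move(set));
        }

        // Simulates the InboundTransactions fetch pipeline: a peer that saw
        // the hash in a proposal asks for the set and union-merges it into
        // the matching local pending set.
        std::optional<TaggedSet> fetch(std::uint64_t hash) const
        {
            auto it = store_.find(hash);
            if (it == store_.end())
                return std::nullopt;
            return it->second;
        }

    private:
        std::unordered_map<std::uint64_t, TaggedSet> store_;
    };
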
Nicholas Dudfield
52671bfc99 test(rng): add XAHAU_RNG_TEST env var filter for focused test runs
Set XAHAU_RNG_TEST=<substring> to run only matching test methods.
e.g. XAHAU_RNG_TEST=SingleByzantine runs only that test.
2026-04-09 16:51:26 +07:00
Nicholas Dudfield
8307fca3b9 fix(rng): add entropySetHash convergence gate before accept
Add a bounded pre-accept convergence check for entropySetHash,
closing the gap where two honest validators could accept with
different reveal subsets and compute different entropy (ledger fork).

After publishing the entropy set, the gate:
1. Inspects tx-converged peer positions for conflicting entropySetHash
2. Fetches differing sets via fetchRngSetIfNeeded (union merge)
3. Rebuilds and re-publishes the local entropy set after merge
4. Waits within a bounded window (2x rngREVEAL_TIMEOUT)
5. Falls back to zero entropy if conflict persists past deadline

This follows the same pattern as the existing commitSetHash conflict
handling and exportSigSetHash convergence gate.  Union merge ensures
monotonic set growth — honest timing skew resolves quickly, and
hostile hash spam hits the hard deadline and falls back safely.

The "one bad actor shouldn't deny entropy" optimization (supermajority
vote) is deferred to a follow-up patch per codex recommendation.
2026-04-09 16:30:02 +07:00
Nicholas Dudfield
6526621c16 test(rng): add TDD tests for entropySetHash convergence gate
Three new CSF tests that document expected behavior for the
entropySetHash convergence gate (not yet implemented):

1. testRngEntropyConvergesWithPartialReveals: two groups each drop
   one peer's reveal, creating different quorate subsets.  Must not
   fork — either converge via SHAMap merge or both fall back to zero.

2. testRngEntropyFallbackOnMajorRevealLoss: one peer drops most
   reveals (below quorum locally).  Network must still agree.

3. testRngSingleByzantineCannotDenyEntropy: one Byzantine peer
   (future: forced garbage entropySetHash) should not prevent the
   other 80% from producing valid entropy.

Also adds dropRevealFrom_ test knob to CSF Peer::Extensions for
simulating asymmetric reveal delivery.
2026-04-09 16:26:30 +07:00
Nicholas Dudfield
2a9b1c9c22 fix(export): guard against empty verified sigs + add invariant asserts
- Skip addVerifiedSignature in decorateMessage when sigBuf is empty
  (sign() threw — don't mark a failed sign as "verified")
- Add XRPL_ASSERT in addVerifiedSignature and addUnverifiedSignature
  requiring non-empty signature buffers
- Add XRPL_ASSERT in checkQuorumAndSnapshot verifying that every
  entry in the verified set exists in the signatures map with a
  non-empty buffer
2026-04-09 16:02:35 +07:00
Nicholas Dudfield
54ca21b604 fix(export): verified-only quorum, SHAMap, and transactor upgrade pass
Enforce the contract: source chain finalizes an export only when it
has a quorum of cryptographically verified multisignatures.

ExportSigCollector changes:
- signatureCount() now counts verified entries only
- checkQuorumAndSnapshot() returns verified-only snapshot
- snapshot() and snapshotWithSigs() return verified-only data
- buildExportSigSet (via snapshot) publishes verified-only entries
- unverifiedSignatures() returns sigs needing verification
- upgradeSignature() promotes unverified to verified
- addStandaloneSignature() marks as verified (no consensus to check)
- All add methods now set firstSeenSeq (fixes stale cleanup bug)

Export::doApply changes:
- Upgrade pass before quorum check: deserializes the inner tx (which
  is always available as ctx_.tx), verifies any unverified sigs via
  buildMultiSigningData + verify(), upgrades them in the collector
- Then checks quorum on verified-only count
- Assembles blob from verified-only snapshot

This means:
- Unverified sigs (relay ordering) are local cache only
- They don't count toward quorum until upgraded
- SHAMap convergence operates on verified sigs only
- Destination chain verification remains defense-in-depth
2026-04-09 15:54:41 +07:00
Nicholas Dudfield
462db6004c fix(rng): replace nonexistent leafCount() with std::distance
SHAMap has no leafCount() method — it was a local variable in
SHAMap.cpp, not a public API.  Use std::distance(begin(), end())
on the SHAMap's ForwardRange iterators instead.  Cost is O(n) but
the set is bounded by UNL size (~20-35 entries).
2026-04-09 15:42:04 +07:00
Nicholas Dudfield
cfca708aae fix(rng): remove pendingReveals fallback from entropy output path
shouldZeroEntropy() and sfEntropyCount no longer fall back to
pendingReveals_.  If entropySetMap_ is null, entropy failed — the
pipeline didn't complete, and the map is the only canonical source.

pendingReveals_ is now strictly an internal staging area for the
commit/reveal pipeline.  All final entropy decisions flow through
entropySetMap_, which is the consensus-agreed set.
2026-04-09 15:40:22 +07:00
Nicholas Dudfield
5f70e5259c fix(rng): use entropySetMap for shouldZeroEntropy and sfEntropyCount
The H2 entropy fix switched the digest computation to entropySetMap_
but shouldZeroEntropy() and sfEntropyCount still used pendingReveals_.
Since pendingReveals_ can diverge from the published entropySetMap_
(late reveals mutate it after the map hash is published), two nodes
agreeing on the same entropySetHash could still build different
ttCONSENSUS_ENTROPY pseudo-transactions.

Now shouldZeroEntropy() checks entropySetMap_ leaf count when the map
is available, and sfEntropyCount uses the map's leaf count.  Both
fall back to pendingReveals_ only during pipeline stages before the
map is built.
2026-04-09 15:35:00 +07:00
Nicholas Dudfield
8697c5d821 refactor(export): explicit verified/unverified sig API in collector
Replace the ambiguous addSignature/hasSignature API with clearly
named methods that make verification state explicit:

  addVerifiedSignature()   — sig passed buildMultiSigningData + verify()
  addUnverifiedSignature() — trusted source but no multisign check yet
  addStandaloneSignature() — pubkey-only for standalone/test mode
  hasVerifiedSignature()   — only returns true for verified sigs

Unverified sigs (relay ordering fallback) are no longer treated as
verified by the cache.  When the same sig is encountered again via a
path that CAN verify (e.g. SHAMap merge after the tx arrives), the
verification runs and upgrades it to verified.

addUnverifiedSignature() won't overwrite a verified sig, preventing
downgrade.  SigEntry tracks verified validators in a separate set.
2026-04-09 15:34:13 +07:00
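
A sketch of the promotion rules, assuming a stripped-down collector: the method names follow the commit text, but quorum wiring, the upgrade path, and standalone mode are omitted. The key property is that unverified entries are cached without ever downgrading a verified one or counting toward quorum.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <vector>

    using Sig = std::vector<std::uint8_t>;

    class CollectorSketch
    {
    public:
        void addVerifiedSignature(std::uint64_t validator, Sig sig)
        {
            auto& e = sigs_[validator];
            e.buffer = std::move(sig);
            e.verified = true;
        }

        // Trusted source but no multisign check yet: cache it, never
        // downgrade an entry that is already verified.
        void addUnverifiedSignature(std::uint64_t validator, Sig sig)
        {
            auto& e = sigs_[validator];
            if (e.verified)
                return;
            e.buffer = std::move(sig);
        }

        // Quorum, snapshots, and set publication see verified entries only.
        std::size_t signatureCount() const
        {
            std::size_t n = 0;
            for (auto const& kv : sigs_)
                if (kv.second.verified)
                    ++n;
            return n;
        }

    private:
        struct Entry { Sig buffer; bool verified = false; };
        std::map<std::uint64_t, Entry> sigs_;
    };
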
Nicholas Dudfield
9436e5868e fix(export): soften hard reject to best-effort verify for relay ordering
Revert the hard reject when ttEXPORT is not in the open ledger.
Under relay ordering, a node can receive a proposal with export sigs
before the ttEXPORT tx itself arrives.  Dropping these sigs loses a
valid validator contribution for the entire round with no recovery
path until terRETRY_EXPORT on the next round.

Post C1+C2, the proposal-level authentication is sufficient trust:
checkSign() verified the sender holds the private key, and sender
binding verified the embedded pubkey matches.  Store the sig and
let the multisign content be verified on the destination chain.
The collector's stale cleanup (256 ledgers) bounds retention.

When the tx IS in the open ledger (common case), the multisign sig
is still fully verified via buildMultiSigningData + verify().
2026-04-09 15:22:20 +07:00
Nicholas Dudfield
c6fa973cf6 fix(rng): compute entropy from entropySetMap instead of pendingReveals
H2: Compute final entropy from the agreed-upon entropySetMap_ SHAMap
rather than from the local pendingReveals_ in-memory map.

Previously, two nodes with different reveal subsets at timeout would
compute different entropy from their local pendingReveals_ maps,
despite both passing haveConsensus() (which only checks txSetHash).
This could cause a ledger fork.

Now the entropy computation reads directly from the entropySetMap_
whose hash was published in proposals and converged via SHAMap
fetch/merge.  Nodes that agree on entropySetHash deterministically
produce the same entropy regardless of local pendingReveals_ state.

If entropySetMap_ is null (bootstrap skip, pipeline failure), the
existing shouldZeroEntropy() fallback handles it.
2026-04-09 15:18:45 +07:00
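
Why reading the converged set makes entropy deterministic: two nodes that agree on the set fold the same leaves in the same order and get the same digest, whatever their local staging maps held at timeout. The fold below is a toy 64-bit illustration, not the real digest over SHAMap leaves.

    #include <cstdint>
    #include <map>

    using NodeId = std::uint64_t;
    using Reveal = std::uint64_t;

    std::uint64_t
    entropyFromConvergedSet(std::map<NodeId, Reveal> const& entropySet)
    {
        std::uint64_t digest = 1469598103934665603ull;
        for (auto const& [node, reveal] : entropySet)   // deterministic order
        {
            digest ^= node;   digest *= 1099511628211ull;
            digest ^= reveal; digest *= 1099511628211ull;
        }
        return digest;
    }
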
Nicholas Dudfield
939e03714c fix(export): cap exportSignatures count per proposal
Reject proposals with more than ExportLimits::maxPendingExports (8)
export sig entries.  Honest validators attach at most one sig per
pending export, bounded by the same limit.  Prevents DoS via
proposals with millions of entries triggering lock contention on
the validator list and collector mutexes.
2026-04-09 15:13:15 +07:00
Nicholas Dudfield
969f98f57e perf(export): skip redundant sig verification via collector lookup
Add hasSignature() to ExportSigCollector — checks if a verified sig
already exists for a given (txHash, validator) pair.  Both the
proposal ingestion path and the SHAMap merge path now check this
before calling verify(), avoiding redundant ed25519 verification
when the same sig arrives via multiple paths.

No external sig cache exists in rippled, so the collector itself
serves as the verification cache: once a sig is stored (always
post-verify), subsequent encounters skip the crypto work.
2026-04-09 15:03:57 +07:00
Nicholas Dudfield
435deb0e78 fix(export): close remaining sig verification gaps
Three fixes from codex review:

1. Remove unsafe fallback in proposal ingestion path: reject export
   sigs when the ttEXPORT tx is not in the open ledger instead of
   storing them unverified.  The tx must be in the open ledger for
   validators to have signed it, so this is not a legitimate case.

2. Add full sig verification to the SHAMap merge path
   (onAcquiredSidecarSet): verify each export sig entry against
   buildMultiSigningData + verify() before storing in the collector.
   Previously this path only checked trusted() on the pubkey,
   allowing a malicious UNL validator to publish a sidecar set with
   forged sigs for other validators.

3. Close cluster mode bypass: always call checkSign() and gate export
   sig harvesting on sigValid, even when cluster() is true.  Cluster
   trust is for relay/resource charging, not for accepting on-chain
   cryptographic artifacts.
2026-04-09 14:59:20 +07:00
Nicholas Dudfield
b80352e512 fix(export): verify multisign signatures at ingestion time
C3: Cryptographically verify each export signature blob against the
inner transaction's signing data before storing in the collector.
Looks up the ttEXPORT tx from the open ledger, reconstructs the
signing data via buildMultiSigningData, and calls verify().

If the tx isn't in our open ledger yet (timing/relay), the sig is
stored unverified as a fallback — it can be verified later at the
SHAMap merge path or will be rejected at Export::doApply if invalid.

This runs on the jtPROPOSAL_t job queue thread (not the IO strand
or transactor), so the verify() cost has no impact on consensus
critical path performance.
2026-04-09 14:43:30 +07:00
Nicholas Dudfield
57c46c61fc fix(export): two-pass sender validation and atomic quorum+snapshot
C2 hardening: validate all export sig blobs before committing any,
preventing partial state if a later blob fails the sender binding
check. Also moves the trusted() check before the loop since senderPK
is constant.

H1: Add checkQuorumAndSnapshot() to ExportSigCollector that performs
the quorum threshold check and signature snapshot under a single lock
acquisition. Export::doApply now uses this instead of separate
signatureCount() + snapshotWithSigs() calls, eliminating the TOCTOU
window where concurrent overlay threads could mutate the collector
between the two operations.
2026-04-09 14:39:35 +07:00
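
The H1 fix is the classic check-and-copy-under-one-lock pattern; a minimal sketch with illustrative types follows. The separate count-then-snapshot calls it replaces left a window for another thread to mutate the collector between the two operations.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <optional>
    #include <vector>

    using Sig = std::vector<std::uint8_t>;

    class CollectorSketch
    {
    public:
        // Returns the verified signatures only if quorum is met, with both
        // the check and the copy performed under one lock acquisition.
        std::optional<std::map<std::uint64_t, Sig>>
        checkQuorumAndSnapshot(std::size_t quorum) const
        {
            std::lock_guard lock(mtx_);
            if (verified_.size() < quorum)
                return std::nullopt;
            return verified_;          // copied while still holding the lock
        }

        void add(std::uint64_t validator, Sig sig)
        {
            std::lock_guard lock(mtx_);
            verified_[validator] = std::move(sig);
        }

    private:
        mutable std::mutex mtx_;
        std::map<std::uint64_t, Sig> verified_;
    };
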
Nicholas Dudfield
37ff13df50 fix(export): move sig harvesting after checkSign and bind pubkey to sender
C1: Move onTrustedPeerMessage() from the synchronous onMessage(TMProposeSet)
handler into checkPropose(), after checkSign() verifies the proposal's
cryptographic signature. Previously, export sigs were ingested before
signature verification, allowing any peer to inject forged sigs by
spoofing nodepubkey to a trusted validator's key.

C2: Add sender binding in onTrustedPeerMessage() — each export sig
blob's embedded validator pubkey must match the proposal sender's
nodepubkey. Reject the entire proposal's export sigs on any mismatch,
preventing a compromised validator from impersonating other validators
to single-handedly forge quorum.
2026-04-09 14:32:38 +07:00
Nicholas Dudfield
1b363b7eac fix: correct stale function name in ConsensusExtensionsTick comment 2026-04-09 14:03:07 +07:00
Nicholas Dudfield
9562b457cf chore: remove stale Peer-level RNG forwarders
All callsites now go through ce() consistently.
2026-03-23 09:57:53 +07:00
Nicholas Dudfield
724633ceb5 refactor(consensus): decouple CSF tests from xrpld.app via PeerTick.h
Move ConsensusExtensionsTick.h from xrpld/app/consensus/ to
xrpld/consensus/ — it's a pure template with no app-layer deps.
Extract Peer::Extensions::onTick() definition into test/csf/PeerTick.h
so Peer.h no longer includes from xrpld/app/.

Eliminates the test.csf > xrpld.app levelization edge.

Add --explain flag to levelization.py for tracing dependency edges.
2026-03-23 09:36:59 +07:00
Nicholas Dudfield
152d82e798 refactor(consensus): extract RNG/Export into ConsensusExtensions
Extract all Xahau consensus extension logic (RNG commit-reveal entropy
and Export validator multisign collection) from Consensus.h and
RCLCxAdaptor into a dedicated ConsensusExtensions class owned by
Application.

Implements all 10 lifecycle hooks from the design doc:
  onRoundStart, onTrustedPeerMessage, onTrustedPeerProposal,
  decoratePosition, decorateMessage, onTick, onPreBuild,
  onAcceptComplete, isSidecarSet, onAcquiredSidecarSet

Key design decisions:
- ConsensusTick<> template in ConsensusTypes.h keeps dependency
  direction clean (generic consensus defines contract, extensions
  implement it)
- extensionsTick<> shared template in ConsensusExtensionsTick.h
  ensures CSF test framework runs the same state machine as production
- ExportSigCollector ownership moved from global singleton to CE
- Sidecar acquisition routed through RCLConsensus mutex for thread
  safety (isExtensionSet + gotExtensionSet)
- RCLCxAdaptor reduced to thin ce() accessor + generic consensus
  interface methods

Files:
  new: ConsensusExtensions.h/.cpp, ConsensusExtensionsTick.h
  reduced: Consensus.h (-1060 lines), RCLConsensus.cpp (-1400 lines)
  updated: ConsensusTypes.h, Application.h/.cpp, NetworkOPs.cpp,
           PeerImp.cpp, Export.cpp, ExportSigCollector.h, Peer.h

7/7 testnet scenarios + 1463 consensus + 260 Export unit tests pass.
2026-03-19 20:23:19 +07:00
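
The seam itself is small: the ten hook names below come from this commit message, the signatures are placeholders, and the one-line role notes are inferred from the surrounding commits rather than taken from documentation.

    // Shape of the extension seam only; not the real interface.
    struct ConsensusExtensionsSketch
    {
        void onRoundStart();            // reset per-round RNG/export state
        void onTrustedPeerMessage();    // harvest sidecar hashes / export sigs
        void onTrustedPeerProposal();   // track peer positions for alignment
        void decoratePosition();        // attach commit/entropy/exportSig hashes
        void decorateMessage();         // attach local signatures to proposals
        void onTick();                  // drive the commit/reveal/converge machine
        void onPreBuild();              // supply entropy pseudo-tx material
        void onAcceptComplete();        // finalize round, clean stale state
        bool isSidecarSet();            // classify an acquired SHAMap
        void onAcquiredSidecarSet();    // merge fetched sidecar leaves
    };
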
Nicholas Dudfield
0bb31ce7ce chore: add projected-source markers for consensus extension docs
Non-functional comment markers (//@@start, //@@end) for projected-source
documentation extraction.
2026-03-19 12:14:11 +07:00
Nicholas Dudfield
4cb3de0497 refactor(export): build xport wrapper via STObject then serialise to STTx
Replace the duplicated throwaway-STTx + real-STTx pattern with a
single STObject: set all fields including fee=0, serialise to compute
the fee, patch the fee, then serialise once into the final STTx.

20 lines shorter, no duplication.
2026-03-18 17:36:30 +07:00
Nicholas Dudfield
c6b315412d test(export): harden scenario tests with proper assertions
Replace warnings and logs with hard assertions across all export
scenario tests:

- export_helpers: add assert_hook_accepted(), assert_export_result(),
  assert_shadow_ticket() shared assertion helpers
- steady_state_export: assert hook ACCEPT + emitCount, ExportResult
  contents (signers, inner tx fields), shadow ticket exists
- retriable_export: assert ExportResult well-formed, shadow ticket
  created, payment not blocked
- export_degradation: assert export FAILS (not just log), no shadow
  ticket, payment still works
- export_unanimity: assert ExportResult + shadow ticket on success,
  absence on failure
2026-03-18 17:18:12 +07:00
Nicholas Dudfield
72395bec75 chore: clang-format 2026-03-18 17:12:41 +07:00
Nicholas Dudfield
8ed4d86f0f test(export): verify emitted ttEXPORT lifecycle end-to-end
testXportPayment now asserts the full emitted tx lifecycle:
- hook fires with ACCEPT, emitCount=1, returnCode=0
- sfHookEmissions present with 1 entry
- ltEMITTED_TXN in AffectedNodes
- emitted dir is not empty
- after close, emitted ttEXPORT appears in closed ledger

Also add FOCUSED_TEST env var gate for fast iteration during
development (set FOCUSED_TEST=1 to run only focused_test()).
2026-03-18 17:11:19 +07:00
Nicholas Dudfield
419fd16b9a chore: move export scenarios to export-suite.yml, add SuiteLogsWithOverrides
Move export scenario tests from suite.yml into their own
export-suite.yml file. The defaults already set CE+Export features
so individual test entries no longer need to repeat them.

Add SuiteLogsWithOverrides test utility: a Logs subclass that routes
specified journal partitions to stderr (always visible) while keeping
others on suite_.log (only on failure). Useful for debugging specific
subsystems during test development.
2026-03-18 17:09:47 +07:00
Nicholas Dudfield
a8097cd9a6 fix(export): compute emit fee before STTx construction
Mutating the fee via const_cast after STTx construction left a stale
cached getTransactionID(). When the emitted ttEXPORT was serialised
into the emitted directory and later deserialised, the round-tripped
txid differed from the original, causing tefNONDIR_EMIT in
Transactor::preclaim (the emitted dir entry was keyed with the stale
hash).

Build a throwaway STTx with fee=0 to calculate the fee size, then
construct the real STTx with the correct fee from the start.
2026-03-18 17:04:46 +07:00
Nicholas Dudfield
02a0552325 docs(export): clarify LLS semantics for retriable exports
Add comment explaining the three-state outcome table for exports
relative to LastLedgerSequence:
  ledger < LLS:  tesSUCCESS or terRETRY_EXPORT
  ledger == LLS: tesSUCCESS or tecEXPORT_EXPIRED
  ledger > LLS:  tefMAX_LEDGER (never reaches doApply)
2026-03-18 14:43:50 +07:00
Nicholas Dudfield
3698193b0a chore: clang-format 2026-03-18 14:21:00 +07:00
Nicholas Dudfield
de43ca2385 refactor(export): store multisigned tx as sfExportedTxn object in metadata
Use sfExportedTxn (OBJECT) instead of sfBlob (VL) for the multisigned
transaction in sfExportResult metadata. This renders as readable JSON
with all fields visible (Account, Signers, etc.) instead of an opaque
hex blob.

Also compute the tx hash directly via getHash(HashPrefix::transactionID)
on the STObject instead of serializing/deserializing through STTx.
2026-03-18 14:16:25 +07:00
Nicholas Dudfield
8c747a1916 feat(export): produce multisigned blob in export metadata
Export::doApply now builds the fully multisigned inner tx and stores
it as sfBlob in sfExportResult metadata. In standalone mode, the node
signs directly with its own validator keys (no consensus needed).

Key changes:
- ExportSigCollector stores actual multisign signatures, not just pubkeys
- RCLConsensus proposal attachment computes real multisign sigs over inner tx
- PeerImp harvests variable-length sig entries from proposals
- Export::doApply assembles Signers array, builds multisigned blob first,
  then uses its hash for the shadow ticket (getTransactionID includes all
  fields including Signers)
- Import skips OperationLimit and signing key checks for export callback
  path (sfTicketSequence present) — shadow ticket proves the relationship
- Full Export→XRPL→Import round-trip test: export on Xahau, submit
  multisigned blob to XRPL (with matching SignerList), build XPOP,
  import back, verify shadow ticket consumed
2026-03-18 13:54:29 +07:00
Nicholas Dudfield
cea110f29a feat: add XPOP test helper and XPOP_test suite
- src/test/jtx/xpop.h: test utilities for building XPOPs from Env ledgers
  (TestValidator, TestVLPublisher, TestXPOPContext, buildTestXPOP)
- src/test/app/XPOP_test.cpp: 4 tests (173 assertions)
  - LedgerProof construction from payment tx
  - XPOP v1 JSON structure verification
  - Merkle proof verification for multi-tx ledgers
  - Full Import round-trip: source Env payment → XPOP → dest Env Import → tesSUCCESS
2026-03-18 11:59:34 +07:00
Nicholas Dudfield
3ca056a94b feat: add XPOP test helper and XPOP_test suite
- src/test/jtx/xpop.h: test utilities for building XPOPs from Env ledgers
  (TestValidator, TestVLPublisher, buildTestXPOP)
- src/test/app/XPOP_test.cpp: 3 tests (133 assertions)
  - LedgerProof construction from payment tx
  - XPOP v1 JSON structure verification
  - Merkle proof verification for multi-tx ledgers
2026-03-18 11:41:16 +07:00
Nicholas Dudfield
705d8400db feat: add proof module for XPOP construction
New module at src/xrpld/app/proof/ with layered design:

- ProofBuilder: SHAMap merkle proof extraction (extractProofV1)
  Binary trie proof with 16-way branching, root hash verification.
- LedgerProof: proof-of-ledger (header fields + tx blob/meta + merkle proof)
  buildLedgerProof() extracts everything from a closed Ledger.
- XPOPv1: JSON format builder matching Import.cpp expectations
  buildXPOPv1() creates complete XPOP with validation signatures.

Designed for versioning: v1 JSON (current Import compat), future v2
binary proofs and account state proofs layer on the same core.
2026-03-18 11:29:29 +07:00
Nicholas Dudfield
655b751698 chore: regenerate hook/sfcodes.h for new sfields
Adds sfCancelTicketSequence (UINT32 101) and sfExportResult (OBJECT 98).
2026-03-18 11:06:04 +07:00
Nicholas Dudfield
f324081277 fix(import): verify XPOP tx hash matches shadow ticket
Shadow tickets store the exported transaction hash. Import now verifies
the XPOP's inner tx hash matches, preventing use of a different XPOP
with the same TicketSequence.
2026-03-18 10:52:19 +07:00
Nicholas Dudfield
24a284180a fix(export): address review findings from code audit
- Validate incoming export sig pubkeys against trusted UNL (PeerImp)
- Fix handleAcquiredRngSet misclassifying export sets when local map is null
- Add stale-entry cleanup (cleanupStale with 256-ledger timeout)
- Gate sig attachment on featureExport amendment (RCLConsensus)
- Fail with tefINTERNAL if ExportResult metadata can't be written
- Make export and cancel mutually exclusive (reject both in preflight)
- Remove dead ExportLimits.h include
2026-03-18 10:30:41 +07:00
Nicholas Dudfield
6f003cc983 feat(export): add SHAMap-based export sig convergence
Add deterministic export sig set convergence using CE infrastructure:
- exportSigSetHash in ExtendedPosition (flag 0x10)
- buildExportSigSet() builds SHAMap from collected sigs
- hasPendingExportSigs() gates convergence in phaseEstablish
- Parallel convergence gate alongside RNG sub-states
- handleAcquiredRngSet extended to merge export sig sets
- Tiered quorum: 80% with CE, 100% unanimity without CE

Scenario tests for all 3 feature combos:
- CE+Export, 1 node down: 4/5=80% → tesSUCCESS
- Export only, all up: 5/5=100% → tesSUCCESS
- Export only, 1 node down: 4/5≠100% → tecEXPORT_EXPIRED
2026-03-18 10:17:28 +07:00
Nicholas Dudfield
3a58020388 fix(export): tiered quorum threshold based on CE availability
Without CE: require unanimity (100% UNL) to avoid non-deterministic
quorum disagreement. With CE: use standard 80% quorum via
calculateQuorumThreshold (SHAMap convergence will ensure agreement).
Standalone/unit tests: require 1 sig.
2026-03-18 09:46:36 +07:00
Nicholas Dudfield
829441b52e fix(export): deduplicate export sigs across proposals within a round 2026-03-18 09:32:27 +07:00
Nicholas Dudfield
3a055663cc chore: add export-sig-attachment marker for projected-source 2026-03-18 09:26:33 +07:00
Nicholas Dudfield
985a194bdc feat(export): migrate to retriable ttEXPORT with proposal-based sigs
Replace the old ltEXPORTED_TXN + ttEXPORT_FINALIZE (validation-based
sigs, TxQ injection) approach with a retriable ttEXPORT that collects
validator signatures via TMProposeSet during consensus.

Added:
- terRETRY_EXPORT: keeps tx in retry set across ledger boundaries
- tecEXPORT_EXPIRED (200): LLS expiry frees sequence cleanly
- sfExportResult (OBJECT 98): signed export result in tx metadata
- ExportSigCollector: minimal thread-safe sig tracker
- Proposal sig attachment (RCLConsensus) + harvesting (PeerImp)
- exportSignatures field in TMProposeSet (ripple.proto)
- Metadata plumbing (TxMeta, ApplyViewImpl, ApplyStateTable)
- Hook xport() now emits ttEXPORT via normal emitted txn path

Removed:
- ttEXPORT_FINALIZE (type 90) pseudo-tx and Change::applyExportFinalize
- ltEXPORTED_TXN ledger entry and exportedDir/exportedTxn keylets
- ExportSignatureCollector (replaced by ExportSigCollector)
- TxQ export injection (quorum check + rawTxInsert)
- Validation-based export signing in RCLConsensus
- Application::getExportSignatureCollector

Verified on 5-node testnet: golden path (same-ledger finalization with
ExportResult in metadata), degraded path (tecEXPORT_EXPIRED on sub-quorum),
and hook xport() path (emitted ttEXPORT with shadow ticket creation).
2026-03-18 09:23:52 +07:00
Nicholas Dudfield
869f366d8a feat(export): add sfCancelTicketSequence for shadow ticket cancellation
Add sfCancelTicketSequence (UINT32 field 101) to ttEXPORT, allowing
users to cancel shadow tickets via a transaction. Both sfExportedTxn
and sfCancelTicketSequence are optional — at least one must be present.
This allows export, cancel, or both in a single transaction.

Test: create shadow ticket via export, cancel via sfCancelTicketSequence,
verify ticket is gone and owner reserve is freed.
2026-03-17 14:13:57 +07:00
Nicholas Dudfield
03936aa928 fix(export): require TicketSequence on exported transactions
Exported transactions must use TicketSequence (with Sequence=0)
because a bounced tx on the destination chain would jam sequential
sequence numbers. This is enforced in both the hook xport() API
and the Export transactor via ExportLedgerOps::validateTicketSequence().

Adds test: ttEXPORT rejects export without TicketSequence.
Updates existing test hooks to include TicketSequence in exported txns.
2026-03-17 12:30:55 +07:00
Nicholas Dudfield
6d180307ad feat(export): split Import into B2M and export callback paths
When the inner XPOP transaction has sfTicketSequence, Import now
takes the export callback path: consume the shadow ticket via
ExportLedgerOps::cancelShadowTicket() and return. No B2M balance
crediting, no account creation. Hooks fire normally and can inspect
the result via xpop_slot().

The B2M path is unchanged for non-ticket imports.

Also migrates the shadow ticket check in preclaim from the old
hookState namespace approach to keylet::shadowTicket().

Removes the unused shadowTicketNamespace constant.
2026-03-17 12:23:27 +07:00
Nicholas Dudfield
f2ca499c97 feat(export): add ltSHADOW_TICKET and xport_cancel hook API
Introduce shadow tickets for export replay protection:

- ltSHADOW_TICKET ledger entry: account-owned, keyed by
  account + ticket sequence. Fields: sfAccount, sfTicketSequence,
  sfTransactionHash, sfLedgerSequence, sfOwnerNode.

- ExportLedgerOps::createShadowTicket(): creates shadow ticket
  when exported tx has sfTicketSequence. Charges owner reserve.
  Called from both hook xport() path and Export transactor.

- ExportLedgerOps::cancelShadowTicket(): deletes shadow ticket,
  frees reserve. Used by xport_cancel hook API.

- xport_cancel(ticket_seq) hook API: allows hooks to cancel
  shadow tickets for exports that will never get a callback.

- InvariantCheck: add ltSHADOW_TICKET to valid entry types.

- Test: verify shadow ticket creation with correct fields and
  owner count bump via ttEXPORT with TicketSequence.
2026-03-17 12:13:41 +07:00
Nicholas Dudfield
bd68364f25 feat(export): add ttEXPORT user transaction and extract ExportLedgerOps
Rename the existing ttEXPORT pseudo-tx to ttEXPORT_FINALIZE (type 90)
to make room for a user-submittable ttEXPORT (type 91).

ttEXPORT allows non-hook users to submit export transactions directly,
creating the same ltEXPORTED_TXN entries that the hook xport() API
creates inline.

Extract shared logic into ExportLedgerOps.h:
- createExportedTxn(): creates ltEXPORTED_TXN, enforces directory cap
- validateNetworkID(): self-target and unconfigured guards
- validateExportAccount(): account ownership check

Both the hook API (HookAPI.cpp) and the Export transactor now call
into ExportLedgerOps, eliminating duplicated validation and ledger
mutation code.
2026-03-17 11:43:45 +07:00
Nicholas Dudfield
42a6407815 fix(export): reject exports when NETWORK_ID is unconfigured
If the node's NETWORK_ID is 0 (default/unconfigured) and the exported
transaction has no sfNetworkID field, we can't distinguish self-targeting
from legitimate cross-chain export. Reject to be safe.

Also adds exportTestConfig() helper and test for the unconfigured case.
2026-03-17 07:41:36 +07:00
Nicholas Dudfield
a387c853ab test(export): add NetworkID self-target guard test
Verify that xport() rejects exported transactions whose sfNetworkID
matches the local network. The hook builds a Payment with
NetworkID=21337 (matching the test env), and the guard correctly
returns EXPORT_FAILURE causing tecHOOK_REJECTED.

Also fix log level for the guard rejection to warn (not trace).
2026-03-17 07:31:35 +07:00
Nicholas Dudfield
9311e567d3 fix(export): reject exports targeting the local network
Explicitly forbid exported transactions whose sfNetworkID matches the
local network's ID. An exported txn re-executing on its origin chain
could cause exploits or logic issues.

The check is intentionally non-mandatory: XRPL mainnet (the primary
export target) doesn't use NetworkID, so absent = allowed.
2026-03-16 17:30:14 +07:00
Nicholas Dudfield
c26582bdf9 fix(export): move ExportLimits.h to xrpl/protocol
Both xrpld.overlay and xrpl.hook depend on xrpl.protocol, so placing
the header there avoids introducing a new xrpld.overlay > xrpl.hook
levelization dependency.
2026-03-16 15:58:58 +07:00
Nicholas Dudfield
417b999c7f chore(levelization): add xrpld.overlay > xrpl.hook dependency
New include of ExportLimits.h in PeerImp.cpp introduces this
module dependency (from feat(export) commit 89274b538).
2026-03-16 15:47:45 +07:00
Nicholas Dudfield
0205be4500 chore: add testnet scenario scripts
Entropy and export scenario scripts for local testnet validation.
2026-03-16 15:17:32 +07:00
Nicholas Dudfield
89274b5387 feat(export): wip export system limits
- max_export per hook: 4 → 2
- maxPendingExports: cap exported directory at 8 entries (tecDIR_FULL)
- clamp inbound signature processing in PeerImp to directory cap

The directory cap is the root DoS constraint: each pending export
requires every validator to sign and broadcast every round. Inbound
processing and signing throughput are transitively bounded by it.
2026-03-16 13:59:06 +07:00
Nicholas Dudfield
b65d9faf12 docs(consensus): add MERGE NOTE comments for upstream 86ef16dbeb resolution
Extends merge guidance to cover the empty-disputes bugfix (not yet in
sync-2.5.0): !disputes.empty() guard, stalled() j/clog params,
"should be rare" doc wording, debug→warn promotion, and auto-merged
testDisputes duplicate warning.
2026-03-11 10:45:04 +07:00
Nicholas Dudfield
aa1a7e5320 docs(consensus): add MERGE NOTE comments for sync-2.5.0 resolution
Inline comments at all 6 conflict points guiding the maintainer
through the expected merge conflicts when sync-2.5.0 lands:
ledgerMAX_CONSENSUS const, bootstrap params, calculateQuorumThreshold,
effectiveParms+stalled in haveConsensus, DisputedTx::stalled() j/clog
params, and testDisputes placement.
2026-03-11 10:09:02 +07:00
Nicholas Dudfield
6f0f17aad9 fix(consensus): cherry-pick upstream 86ef16dbeb empty-disputes stall fix
Cherry-pick of ripple/rippled@86ef16dbeb ("Fix: Don't flag consensus
as stalled prematurely (#5627)"). Not yet in any xahau sync branch.

Fixes false stall detection when there are no disputed transactions:
std::ranges::all_of on an empty set is vacuously true, so consensus
was incorrectly flagged as stalled. Adds !result_->disputes.empty()
guard.

Also adds diagnostic logging to DisputedTx::stalled() and the
stall detection path in haveConsensus(), and promotes the
"Need validated ledger" log from debug to warn.
2026-03-11 09:47:11 +07:00
Nicholas Dudfield
407bfa1467 feat(consensus): cherry-pick dd085e5d8 (upstream d22a5057b9) anti-stall mechanisms
Cherry-pick of ripple/rippled@d22a5057b9 / xahau dd085e5d8 ("Prevent
consensus from getting stuck in the establish phase (#5277)"), resolved
against our RNG pipeline and bootstrap fast-start changes.

Upstream adds three layered anti-stall mechanisms:
- Stateful per-dispute avalanche state machine (init→mid→late→stuck)
- Stall detection: declares consensus when all disputes individually settled
- Hard expiration: clamp(10× prev round, 15s, 120s) wall-clock safety net

Conflict resolution:
- ConsensusParms.h: kept both avalanche state machine (const members,
  avMIN_ROUNDS, avSTALLED_ROUNDS, getNeededWeight) and our bootstrap
  params (bootstrapRoundTimeSeed, bootstrapStableRoundsRequired).
  ledgerMAX_CONSENSUS left non-const for bootstrap override.
- Consensus.h: pass both stalled flag and effectiveParms to checkConsensus.
  Stall check uses original parms, bootstrap override only affects max
  consensus timeout.
- Consensus_test.cpp: kept all 12 RNG tests and new testDisputes test.
2026-03-11 09:36:38 +07:00
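
The hard-expiration bound quoted above is just a clamp; a worked illustration, purely for arithmetic:

    #include <algorithm>
    #include <chrono>

    std::chrono::seconds
    hardExpiration(std::chrono::seconds prevRoundTime)
    {
        using namespace std::chrono_literals;
        return std::clamp(10 * prevRoundTime, 15s, 120s);
    }
    // e.g. prev=1s -> 15s floor, prev=4s -> 40s, prev=30s -> 120s cap.
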
Nicholas Dudfield
f0dfcf6b81 fix(consensus): cap bootstrap ledgerMAX_CONSENSUS at 5s
Use an explicit 5s cap instead of dividing the default 15s.
5s is the sweet spot: long enough for peers to exchange proposals
and converge naturally, short enough to avoid wasted time.
Shorter values (e.g. 3.75s) cause nodes to hit reachedMax before
peers converge, cascading into slower subsequent rounds.
2026-03-10 14:30:20 +07:00
Nicholas Dudfield
503d2ebf98 feat(consensus): add XAHAUD_BOOTSTRAP_FAST_START for faster cold-start
Seed prevRoundTime_ to 3s instead of 15s on first round, override
idle interval to bypass closeTimeResolution (10-30s on early ledgers),
and halve ledgerMAX_CONSENSUS during bootstrap. Auto-disables after 3
consecutive rounds with UNL quorum participation.

Cuts 5-node testnet cold-start from ~28s to ~13s.

Also adds projected-source markers to TxQ, NetworkOPs, and Submit for
the transaction-submission documentation template.
2026-03-10 12:52:56 +07:00
Nicholas Dudfield
e52bc51384 refactor(consensus): extract shouldZeroEntropy() for quorum-gated entropy
Consolidate the repeated entropy fallback condition
(entropyFailed || no reveals || sub-quorum reveals) into a single
method. Fixes EntropyCount field reporting non-zero when the digest
was correctly zeroed due to sub-quorum reveals.
2026-03-10 08:42:10 +07:00
Nicholas Dudfield
91860db578 fix(consensus): require quorum-many reveals for non-zero entropy
Sub-quorum reveals (e.g. 3/4 threshold) were producing real entropy,
allowing a minority of validators to disproportionately influence the
output. Both injectEntropyPseudoTx and buildExplicitFinalProposalTxSet
now fall back to zero entropy when reveals < quorumThreshold().
2026-03-09 17:13:02 +07:00
Nicholas Dudfield
0b317a8e7a fix(consensus): skip rng pipeline during bootstrap convergence
When prevProposers < quorumThreshold, the network is still converging
and RNG can only produce zero entropy. Skip the commit/reveal pipeline
to avoid PIPELINE_TIMEOUT and conflict-wait delays that compound across
staggered startup rounds.
2026-03-09 16:27:36 +07:00
Nicholas Dudfield
dbd230b695 feat(rpc): add rng state to consensus_info response 2026-03-09 16:05:42 +07:00
Nicholas Dudfield
30cefcba85 chore: clang-format alignment fixes 2026-03-06 18:39:37 +07:00
Nicholas Dudfield
94edb5759d fix(export): gate pre-quorum on verified signature count
hasQuorum() and getExportsWithQuorum() were using raw signerMap.size()
which includes unverified signatures. TxQ could inject a ttEXPORT
pseudo-tx that then fails the stricter verified-signature check in
Change::applyExport(). Use verifiedSignatureCount() instead so TxQ
only injects when cryptographically verified quorum is actually met.

Also add cmake plumbing for enhanced logging: link date::date-tz when
available and enable BEAST_ENHANCED_LOGGING for Debug builds.
2026-03-06 18:38:54 +07:00
Nicholas Dudfield
ce57b6a3a0 fix(consensus): fix rng quorum to active UNL and demote rng log noise
Quorum fix:
- Rename expectedProposers_ → likelyParticipants_ to clarify role
- Fix commit quorum to 80% of active UNL snapshot (not shrinkable by
  recent proposer count, which was allowing 2/3 to pass as quorum)
- hasQuorumOfCommits() now uses simple threshold check only
- Add CSF test: persistent loss does not shrink quorum

Log level cleanup:
- Demote ~30 RNG/STALLDIAG per-peer/per-tick lines from info/debug to
  debug/trace across Consensus.h and RCLConsensus.cpp
- Principle: per-peer/per-tick → trace; state transitions → debug;
  milestones → info
- Reduces testnet log volume by ~93%
2026-03-06 18:36:43 +07:00
Nicholas Dudfield
fca5cad470 fix(log): catch tzdb exception in local-time formatter
date::current_zone() can throw if the timezone database is unavailable
or misconfigured (e.g. minimal container images). Fall back to UTC
formatting so enhanced logging does not make startup fatal.
2026-03-06 18:36:22 +07:00
Nicholas Dudfield
bb77c2090b consensus: gate RNG substates by amendment state 2026-03-06 14:09:06 +07:00
Nicholas Dudfield
90a94294e4 protocol: split export and consensus entropy amendments 2026-03-06 14:08:15 +07:00
Nicholas Dudfield
c2209b4472 docs(consensus): explain why seq=3 may mirror seq=2
Clarify inline that seq=3 publish can carry unchanged txSetHash while still providing extra entropySetHash delivery/fetch opportunity under packet loss or reordering.
2026-03-03 17:41:55 +07:00
Nicholas Dudfield
8fcb2ed336 docs(consensus): annotate implicit entropy injection rationale
Document why synthetic entropy pseudo-tx is canonically injected at onAccept/buildLCL and why explicit-final proposal remains experimental/default-off.
2026-03-03 17:31:04 +07:00
Nicholas Dudfield
fd1567d1ba consensus: document explicit-final tradeoffs and tighten rng diagnostics
Keep explicit final proposal as an opt-in experimental path with implicit mode as default.

Add inline rationale/TBD notes, extend stall diagnostics, and cover runtime-config + CSF txn-path behavior with tests.
2026-03-03 17:08:38 +07:00
Nicholas Dudfield
d32f34d3bf build(levelization): add fast python generator with CI parity check
Add Builds/levelization/levelization.py for fast local iteration and semantic comparison against canonical shell output via --compare-to.

Keep Builds/levelization/levelization.sh as canonical path, and update levelization workflow to fail if python output diverges from shell-generated results.

Also harden interactive-shell detection in levelization.sh for portability and document local usage in README.
2026-03-03 10:17:46 +07:00
Nicholas Dudfield
c491c5c82f refactor(consensus): reduce header fanout for faster iteration
Decouple RCLConsensus.h from Consensus.h by forward-declaring Consensus and storing Consensus<Adaptor> behind std::unique_ptr, moving thin wrappers out-of-line into RCLConsensus.cpp.

Also remove direct RCLConsensus.h include from NetworkOPs.h (forward declare), and add explicit includes in DatagramMonitor.h and ServerDefinitions.cpp to replace transitive dependencies.

Keep RNG fast-path behavior unchanged in Consensus.h; build and ripple.consensus.Consensus remain green.
2026-03-03 09:49:59 +07:00
Nicholas Dudfield
74817765ae consensus: restore full entropySet broadcast and document fanout tradeoffs 2026-03-03 08:32:09 +07:00
Nicholas Dudfield
fc23fa8535 consensus: reduce entropy-set proposal fanout
Keep entropy-set recovery path but elect a deterministic single broadcaster (lowest NodeID among tx-converged participants) instead of every proposer broadcasting entropySetHash.

This lowers steady-state proposal chatter while preserving liveness for lagging peers that need entropy-set fetch/merge.
2026-03-03 07:42:27 +07:00
Nicholas Dudfield
34c0f17b6b runtimeconfig: add rng_claim_drop_pct testing control
Expose rng_claim_drop_pct in runtime config (RPC + env) as a clamped 0-100 percentage used by RNG claim-drop testing.

Include RuntimeConfig RPC tests for round-trip and clamping behavior.
2026-03-03 07:20:32 +07:00
Nicholas Dudfield
765ad6a278 consensus: harden RNG set convergence under dropped claims
Track active RNG round sequence for fetched set validation so lagging observers can merge current-round commit sets instead of rejecting them as closed+1 out-of-round.

Refresh/re-publish commitSetHash after fetch-merge conflicts and publish entropySetHash in ConvergingReveal so peers can recover reveal sets.

Add inline tradeoff notes: extra proposal traffic is accepted to preserve consensus safety/liveness under packet loss or drop injection.
2026-03-03 07:14:46 +07:00
Nicholas Dudfield
f623ca89b9 chore(levelization): update loops result after format/merge 2026-03-02 17:01:47 +07:00
Nicholas Dudfield
e4865f09f9 Merge remote-tracking branch 'origin/dev' into feature-export-rng 2026-03-02 16:59:57 +07:00
Nicholas Dudfield
4c182e4738 consensus: guard commit-set conflicts and extend RNG CSF coverage 2026-03-02 16:59:41 +07:00
Nicholas Dudfield
d0c869c8a6 fix(consensus): tighten RNG acquired-set validation and observer quorum
Harden acquired RNG merge paths with strict entry typing, trusted key/node binding, round-sequence gating, reveal-to-commit linkage checks, and stale reveal/proof invalidation on commitment changes.

Adjust proposer expectation logic so non-proposing observers are not counted as expected committers, and add a CSF regression test covering observer self-commit exclusion.
2026-03-02 16:36:03 +07:00
Nicholas Dudfield
cac5efcd3c fix(consensus): harden acquired RNG set ingestion
Reject mixed commit/reveal maps, enforce per-entry type checks, bind node identity to trusted validator keys, and gate acquired entries to the active round.

Also verify acquired reveals against stored commitments and clear stale reveal/proof state when commitments change.
2026-03-02 16:18:55 +07:00
Nicholas Dudfield
514e60b71c fix(export): age and validate stashed tx data for signature checks 2026-03-02 15:54:53 +07:00
Nicholas Dudfield
2a34e32e05 fix(export): harden addSignature validation and verification 2026-03-02 15:46:07 +07:00
Nicholas Dudfield
b969024a25 fix(export): update duplicates and prevent phantom pending entries 2026-03-02 15:39:43 +07:00
Nicholas Dudfield
f30b9a4c3a fix(export): avoid stale-age poisoning from rejected signatures 2026-03-02 15:35:36 +07:00
Nicholas Dudfield
0e019fec4e fix(export): prune invalid early signatures when stashing tx data 2026-03-02 15:29:42 +07:00
Nicholas Dudfield
7e0c72fd22 fix(export): run stale signature cleanup during TxQ processing 2026-03-02 15:27:30 +07:00
Nicholas Dudfield
07d741cdd7 fix(export): harden collector duplicate and identity handling 2026-03-02 15:25:19 +07:00
Nicholas Dudfield
b99c38c09d test(consensus): add asymmetric delay reveal-timeout scenario 2026-03-02 15:11:01 +07:00
Nicholas Dudfield
64e50209ff fix(consensus): invalidate stale reveals when commitment changes
Add RNG regression tests for non-UNL data, reveal-without-commit, invalid reveal, and commitment-change stale-reveal handling in CSF consensus tests.
2026-03-02 15:04:35 +07:00
Nicholas Dudfield
b1ce2103ad test(csf): add RNG consensus hooks and edge-case tests 2026-03-02 14:28:34 +07:00
Nicholas Dudfield
50c4cf1df3 refactor: move xport_reserve and xport logic into HookAPI class
Move core xport_reserve and xport implementations from applyHook.cpp
DEFINE_HOOK_FUNCTION wrappers into the decoupled HookAPI class, following
the same pattern used for etxn_reserve and emit.
2026-03-02 14:10:03 +07:00
Nicholas Dudfield
6fc14f398d feat(rpc): add disconnect by ip:port [TESTNET] 2026-03-02 12:06:00 +07:00
Nicholas Dudfield
592a8600c7 fix: add missing <mutex> include for GCC compatibility 2026-02-27 16:42:10 +07:00
Nicholas Dudfield
e71768700a chore: update levelization after RuntimeConfig overlay dependency 2026-02-27 16:40:00 +07:00
Nicholas Dudfield
e598e405bd fix: harden RuntimeConfig validation and add startup diagnostics
- Error on unknown message_types instead of silently widening scope
- Make messageCategories optional so per-peer can override global filter
  to "all categories" (nullopt=inherit, empty set=explicitly all)
- Clamp send_drop_pct to 0-100% range
- Add STARTDIAG: logging for consensus startup diagnostics
- Add 3 test cases (11 total, 58 assertions)
2026-02-27 13:38:26 +07:00
Nicholas Dudfield
8af3ce2f5b fix: allow extended proposals in PeerImp and add message type filtering
- Fix convergence regression caused by 2.4.0 merge: replace the exact-size
  stringIsUint256Sized(currenttxhash) check with a size() < uint256::size()
  rejection check so extended proposals (>32 bytes) carrying RNG fields
  are accepted
- Add message_types filter to RuntimeConfig for targeting specific
  protocol message categories (proposal, validation, transaction, etc.)
- Add appliesTo() method and messageCategories set to ConfigVals
- Add category name mapping helpers in RPC handler
- Add 2 test cases for message type filtering (8 total)
2026-02-27 13:10:49 +07:00
Nicholas Dudfield
b67cb78b97 feat: add RuntimeConfig service with overlay artificial delays
Add a generic RuntimeConfig service for runtime-configurable parameters,
initially supporting artificial send delays and packet drops for testing
consensus behavior on local testnets.

- RuntimeConfig class with atomic fast-path gate (zero cost when inactive)
- Per-peer targeting via "*" (global) and "ip:port" keys with inheritance
- Pre-merged caching at write time for single-lookup read path
- Admin RPC handler `runtime_config` (set/clear/clear_all/get)
- Env var support: XAHAU_RUNTIME_CONFIG (JSON) or XAHAU_SEND_* vars
- PeerImp::send() integration with delay timer and probabilistic drops
- RPC handler test covering all operations and merge behavior
2026-02-27 09:46:19 +07:00
Nicholas Dudfield
0b1b82282e fix: reject single-signed exports and fix test hook SigningPubKey
Add single-sign rejection check in Change::applyExport() matching
rippled's multi-sign validation: SigningPubKey must be present but
empty, TxnSignature must not be present.

Fix Export_test.cpp hook to encode an empty VL blob for SigningPubKey
instead of 33 zero bytes (AI slop from export-uvtxn branch).
2026-02-25 14:55:55 +07:00
Nicholas Dudfield
d4c5a7e8ab fix: update copyright headers to 2026 XRPL Labs for new files 2026-02-25 14:38:40 +07:00
Nicholas Dudfield
82837864fa fix: extract calculateQuorumThreshold() and revert Import.cpp quorum change
Extract duplicated (n * 80 + 99) / 100 ceiling quorum formula into shared
calculateQuorumThreshold() in ConsensusParms.h, matching the standard
ValidatorList::calculateQuorum(). Used by ExportSignatureCollector,
Change.cpp, and RCLConsensus.cpp.

Revert Import.cpp quorum from ceiling back to original truncating formula
(totalValidatorCount * 0.8) since Import handles XPOP imports, not the
new Export feature. Added TODO for future upgrade.
2026-02-25 14:22:43 +07:00
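For reference, the two formulas involved, written as they behave rather than as the exact code; calculateQuorumThreshold is the shared helper named above, while importQuorumLegacy is an illustrative name for Import's retained truncating form:

#include <cstddef>

// Shared ceiling helper (ConsensusParms.h): 80% of n, rounded up.
// e.g. n = 5  -> (400 + 99) / 100  = 4
//      n = 20 -> (1600 + 99) / 100 = 16
constexpr std::size_t
calculateQuorumThreshold(std::size_t n)
{
    return (n * 80 + 99) / 100;
}

// Truncating form Import keeps for XPOP imports: floor of 80% of n,
// e.g. n = 6 -> 4, where the ceiling form gives 5.
constexpr std::size_t
importQuorumLegacy(std::size_t n)
{
    return static_cast<std::size_t>(n * 0.8);
}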
Nicholas Dudfield
e1caee6459 fix: regenerate hook/sfcodes.h after sfHookExportCount field code change 2026-02-25 13:40:25 +07:00
Nicholas Dudfield
3206b4a4e1 fix: address @tequdev review comments (cbak, render, Change.cpp, markers)
- Remove unnecessary cbak() stubs from ConsensusEntropy test hooks and
  recompile WASM (cbak is optional per Guard.h validator)
- Restore RCLCxPeerPos::render() lost during merge (delegates to
  ConsensusProposal::render())
- Fix Change.cpp applyAmendment() fixInnerObjTemplate2 reversion:
  use STObject::makeInnerObject() and bracket assignment (fbcff932)
- Restore txq-export-quorum-check documentation marker in TxQ.cpp
2026-02-25 13:25:41 +07:00
Nicholas Dudfield
0c2e09050e fix: move sfHookExportCount to Xahau-reserved field code range
sfHookExportCount was at field code 23, colliding with the mainline
rippled UINT16 range. Move to 98 in the Xahau-reserved range.

Also reorder sfExportedTxn (90) before sfAmountEntry (91) for
consistency.
2026-02-25 12:05:28 +07:00
Nicholas Dudfield
83922d5c20 fix: restore XRPL_ASSERT and UNREACHABLE macros reverted during merge
The merge with origin/dev accidentally reverted 19 XRPL_ASSERT() calls
back to plain assert() and 1 UNREACHABLE() back to assert(0). These
macros provide descriptive diagnostic messages on failure and are the
project convention since the rippled 2.4.0 migration.

Files fixed:
- Consensus.h: 9 XRPL_ASSERT reversions
- RCLConsensus.cpp: 5 XRPL_ASSERT reversions
- BuildLedger.cpp: 3 XRPL_ASSERT reversions
- Change.cpp: 1 UNREACHABLE + 1 XRPL_ASSERT reversion
2026-02-25 11:55:07 +07:00
Nicholas Dudfield
6bae42ff01 fix: restore CLOG consensus logging removed during merge
The merge with origin/dev accidentally stripped all CLOG diagnostic
statements from the consensus code path. This restores the clog
parameter to internal Consensus.h functions (checkLedger, phaseOpen,
closeLedger, updateOurPositions, handleWrongLedger, leaveConsensus,
createDisputes) and re-adds all 46 CLOG statements that provide
per-round diagnostic detail for phase transitions, convergence
progress, dispute tracking, and pause decisions.

Also restores the origin/dev structure of Consensus.cpp by removing
the anonymous-namespace wrapper and forwarding overloads that were
merge artifacts.
2026-02-25 11:53:27 +07:00
Nicholas Dudfield
35e86d926e fix(consensus-entropy): align pseudo tx/sle formats and hook handling
Add missing ttEXPORT/ttCONSENSUS_ENTROPY pseudo transaction fields required by runtime logic and ensure corresponding ledger entries carry threading/sequence fields.

Handle ttEXPORT and ttCONSENSUS_ENTROPY in hook stakeholder routing to avoid Unknown transaction type assertion during ledger close.
2026-02-24 18:44:00 +07:00
Nicholas Dudfield
9c4ee9315d chore: update levelization results after merge 2026-02-24 15:56:36 +07:00
Nicholas Dudfield
0f17cf02aa chore: clang-format 2026-02-24 15:51:55 +07:00
Nicholas Dudfield
7753dc3cbe fix(invariants): exempt export and entropy pseudo-ledger entries
Handle ltEXPORTED_TXN and ltCONSENSUS_ENTROPY in LedgerEntryTypesMatch so creating/destroying these pseudo-ledger entries does not trigger XRP balance invariant violations.
2026-02-24 15:48:32 +07:00
Nicholas Dudfield
cc7f3c59ae merge: port export-rng onto post-2.4.0 tree restructure
Resolve the origin/dev post-2.4.0 sync conflicts across the xrpld path migration and macro-based protocol registration changes.

Re-apply export/RNG integration on top of the new structure, including consensus/build plumbing, tx/apply paths, peer ingest, and tests.

Regenerate hook headers and restore a green build via x-run-tests (Export_test build path).
2026-02-24 15:32:45 +07:00
Nicholas Dudfield
e8c1b25ab4 fix: harden export signature trust model and quorum verification
- unify validator trust checks into isExportValidatorTrusted() preferring
  UNLReport with local trust fallback
- add last-line-of-defense sig verification in Change::applyExport()
  requiring 80% (ceil) verified trusted UNL signatures
- filter untrusted export signatures at ingestion in PeerImp
- fix Import quorum from floor(n*0.8) to ceil(n*80%) matching export side
2026-02-24 12:52:23 +07:00
Nicholas Dudfield
b9dd854595 refactor: unify featureExport + featureConsensusEntropy into featureExportRNG
Single amendment flag for both features. numFeatures 94 → 93.
Exclude featureExportRNG from default test set to prevent
ConsensusEntropy pseudo-tx injection from breaking existing tests.
2026-02-21 17:46:46 +07:00
Nicholas Dudfield
3bead8dcb6 merge: integrate origin/export-uvtxn into consensus-phase-entropy
Resolve 14 conflicts keeping both sides. Renumber TOO_LITTLE_ENTROPY
from -46 to -48 to avoid collision with export error codes.
Fix sfHookExportCount to soeOPTIONAL in InnerObjectFormats (only set
when featureExportRNG is enabled).
2026-02-21 17:41:37 +07:00
Nicholas Dudfield
908a78a1d9 fix: regenerate hook/extern.h to match hook_api.macro ordering 2026-02-20 10:11:37 +07:00
Nicholas Dudfield
a9e3dc41d4 fix: add featureExport stub for standalone guard_checker build 2026-02-20 10:05:58 +07:00
Nicholas Dudfield
02990eb4ee Merge remote-tracking branch 'origin/dev' into consensus-phase-entropy
# Conflicts:
#	hook/extern.h
#	src/ripple/app/hook/hook_api.macro
#	src/ripple/protocol/Feature.h
#	src/ripple/protocol/impl/Feature.cpp
2026-02-19 10:57:40 +07:00
Nicholas Dudfield
ce76632322 Merge remote-tracking branch 'origin/dev' into export-uvtxn
# Conflicts:
#	Builds/CMake/RippledCore.cmake
#	hook/extern.h
#	src/ripple/app/hook/Guard.h
#	src/ripple/app/hook/applyHook.h
#	src/ripple/app/hook/guard_checker.cpp
#	src/ripple/app/tx/impl/Change.cpp
#	src/ripple/app/tx/impl/SetHook.cpp
#	src/ripple/protocol/Feature.h
#	src/ripple/protocol/impl/Feature.cpp
#	src/ripple/protocol/jss.h
2026-02-19 10:12:55 +07:00
Nicholas Dudfield
9eac54d690 Merge remote-tracking branch 'origin/dev' into consensus-phase-entropy
# Conflicts:
#	src/ripple/app/hook/Guard.h
#	src/ripple/app/hook/applyHook.h
#	src/ripple/app/tx/impl/SetHook.cpp
2026-02-17 10:12:46 +07:00
Nicholas Dudfield
24e4ac16ad docs(consensus): add extraction markers for remaining RNG sections
Add @@start/@@end comment markers to pseudo-tx submission filtering,
fast-polling, local testnet resource bucketing, and test environment
gating. No logic changes.
2026-02-13 12:52:14 +07:00
Nicholas Dudfield
94ce15d233 docs(consensus): add extraction markers for guided code review
Add @@start/@@end comment markers to key RNG pipeline sections for
automated documentation extraction. No logic changes.
2026-02-13 12:47:22 +07:00
Nicholas Dudfield
8f331a538e fix(consensus): harden proposal parser and guard dice(0) UB
Address findings from code review:

- dice(0): add early return with INVALID_ARGUMENT before modulo
  operation to prevent undefined behavior
- fromSerialIter: return std::optional to safely reject malformed
  payloads (truncated, unknown flag bits, trailing bytes) instead
  of throwing
- Update all callers (PeerImp, RCLConsensus, tests) for optional
- Add unit tests for dice(0) error code and 7 malformed wire cases
2026-02-12 16:18:30 +07:00
Nicholas Dudfield
7425ab0a39 fix(consensus): avoid structured bindings in lambda captures
clang-14 (CI) does not implement P2036R3 — structured bindings cannot
be captured by lambdas. Use explicit .first/.second instead.
2026-02-10 19:02:42 +07:00
Nicholas Dudfield
c5292bfe0d fix(test): use large dice range to avoid deterministic collision
Standalone synthetic entropy produces identical dice(6) results for
consecutive calls due to hash collision mod 6. Switch to dice(1000000)
and add diagnostic output for return code debugging.
2026-02-10 18:53:24 +07:00
Nicholas Dudfield
79b2f9f410 feat(hooks): add consensus entropy definitions to hook headers
Add dice/random externs, TOO_LITTLE_ENTROPY error code, sfEntropyCount
field code, and ttCONSENSUS_ENTROPY transaction type to hook SDK headers.
2026-02-10 18:53:16 +07:00
Nicholas Dudfield
e8358a82b1 feat(hooks): register dice/random with WasmEdge and add hook API tests
- ADD_HOOK_FUNCTION for dice/random (was defined+declared but not registered)
- Relax fairRng() seq check to allow previous ledger entropy (open ledger)
- Add hook tests: dice range, random fill, consecutive calls differ
- TODO: open-ledger entropy semantics need further thought
2026-02-10 17:58:52 +07:00
Nicholas Dudfield
d850e740e1 feat(consensus): standalone synthetic entropy and ConsensusEntropy test
Generate deterministic entropy in standalone mode so Hook APIs (dice/random)
work for testing. Add test suite verifying SLE creation on ledger close.
2026-02-10 17:27:57 +07:00
Nicholas Dudfield
61a166bcb0 feat(hooks): add dice() and random() hook APIs for consensus entropy
Port the Hook API surface from the tt-rng branch, adapted to use our
commit-reveal consensus entropy (ltCONSENSUS_ENTROPY / sfDigest).

Hook APIs:
- dice(sides): returns random int [0, sides) from consensus entropy
- random(write_ptr, write_len): fills buffer with 1-512 random bytes

Internal fairRng() derives per-execution entropy by hashing: ledger
seq + tx ID + hook hash + account + chain position + execution phase
+ consensus entropy + incrementing call counter. This ensures each
call within a single hook execution returns different values.

Quality gate: fairRng returns empty (TOO_LITTLE_ENTROPY) if fewer
than 5 validators contributed, preventing weak entropy from being
consumed by hooks.

Also adds sfEntropyCount and sfLedgerSequence to the consensus
entropy SLE and pseudo-tx, enabling the freshness and quality
checks needed by the Hook API.
2026-02-10 17:12:27 +07:00
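A hedged sketch of that derivation; HookContextSketch and its members are illustrative stand-ins for the real execution context, the argument order is not claimed to match the implementation, and includes are omitted:

// Each call mixes the consensus entropy with per-execution context and an
// incrementing counter, so repeated dice()/random() calls within one hook
// execution produce different values.
std::optional<uint256>
fairRngSketch(HookContextSketch const& ctx, std::uint64_t& callCounter)
{
    // Quality gate: with fewer than 5 contributing validators the entropy
    // is considered too weak and the hook API surfaces TOO_LITTLE_ENTROPY.
    if (ctx.entropyCount < 5)
        return std::nullopt;

    return sha512Half(
        ctx.ledgerSeq,         // freshness
        ctx.txID,              // per-transaction
        ctx.hookHash,          // per-hook
        ctx.account,           // per-account
        ctx.chainPosition,     // position in the hook chain
        ctx.executionPhase,    // execution phase
        ctx.consensusEntropy,  // ltCONSENSUS_ENTROPY sfDigest value
        callCounter++);        // per-call within one execution
}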
Nicholas Dudfield
41a41ec625 feat(consensus): intersect expected proposers with UNL Report and adaptive quorum
setExpectedProposers() now filters incoming proposers against the
on-chain UNL Report, preventing non-UNL nodes from inflating the
expected set and causing unnecessary timeouts.

quorumThreshold() uses expectedProposers_.size() (recent proposers ∩
UNL) when available, falling back to full UNL Report count on cold
boot. This adapts to actual network conditions rather than relying
on a potentially stale UNL Report that over-counts offline validators.

Renamed activeUNLNodeIds_/cacheActiveUNL/isActiveUNLMember to
unlReportNodeIds_/cacheUNLReport/isUNLReportMember to make the
on-chain data source explicit.
2026-02-10 16:14:47 +07:00
Nicholas Dudfield
bc98c589b7 docs(consensus): fix stale quorum comment in phaseEstablish
Update inline comment to reflect that hasQuorumOfCommits() checks
expected proposers first, with 80% of active UNL as fallback.
2026-02-06 16:56:01 +07:00
Nicholas Dudfield
4f009e4698 fix(consensus): proceed with partial commitSet on timeout instead of zero entropy
When expected proposers don't all arrive before rngPIPELINE_TIMEOUT,
check if we still have quorum (80% of UNL). If so, build the commitSet
with available commits and continue to reveals. Only fall back to zero
entropy when truly below quorum.

Previously any missing expected proposer caused a full timeout with zero
entropy for that round. Now: kill 3 of 20 nodes → one 3s timeout round
per kill but entropy preserved (17/16 quorum met).
2026-02-06 16:40:33 +07:00
Nicholas Dudfield
b6811a6f59 feat(consensus): deterministic commitSets via expected proposers and seq=0 proofs
Wait for commits from last round's proposers (falling back to activeUNL
on cold boot) instead of 80% of UNL. This ensures all nodes build the
commitSet at the same moment with the same entries.

Split proof storage: commitProofs_ (seq=0 only, deterministic) and
proposalProofs_ (latest with reveal, for entropySet). Previously the
proof blob contained whichever proposeSeq was last seen, causing
identical commits to produce different SHAMap hashes across nodes.

20-node testnet: all nodes now produce identical commitSet hashes.
2026-02-06 16:27:10 +07:00
Nicholas Dudfield
ae88fd3d24 feat(consensus): add dedicated reveal-phase timeout measured from phase entry
Previously rngPIPELINE_TIMEOUT (3s) was measured from round start,
meaning txSet convergence could eat into the reveal budget. Now reveals
get their own rngREVEAL_TIMEOUT (1.5s) measured from the moment we
enter ConvergingReveal, ensuring consistent time for reveal collection
regardless of how long txSet convergence took.
2026-02-06 16:00:40 +07:00
Nicholas Dudfield
db3ed0c2eb fix(consensus): wait for all committers' reveals and fix local testnet resource charging
- Change hasMinimumReveals() to wait for reveals from ALL committers
  (pendingCommits_.size()) instead of 80% quorum. The commit set is
  deterministic, so we know exactly which reveals to expect.
  rngPIPELINE_TIMEOUT remains the safety valve for crash/partition.
  Fixes reveal-set non-determinism causing entropy divergence on
  15-node testnets.

- Resource manager: preserve port for loopback addresses so local
  testnet nodes each get their own resource bucket instead of all
  sharing one on 127.0.0.1 (causing rate-limit disconnections).

- Make RNG fast-poll interval configurable via XAHAU_RNG_POLL_MS
  env var (default 250ms) for testnet tuning.
2026-02-06 15:22:42 +07:00
Nicholas Dudfield
960808b172 fix(consensus): skip RNG wait when quorum is impossible and base threshold on active UNL
When fewer participants are present than the quorum threshold, skip the
RNG commit wait immediately instead of waiting the full pipeline timeout.
Also base the quorum on activeUNLNodeIds_ (UNL Report with fallback)
instead of the full trusted key set, so the denominator reflects who is
actually active on the network.
2026-02-06 14:34:28 +07:00
Nicholas Dudfield
a9dffd38ff fix(consensus): shorten RNG pipeline timeout to 3s for faster recovery
Add rngPIPELINE_TIMEOUT (3s) to replace ledgerMAX_CONSENSUS (10s) in
the commit/reveal quorum gates. Late-joining nodes enter as
proposing=false and cannot contribute commitments until promoted, so
waiting beyond a few seconds just delays the ZERO-entropy fallback and
penalizes recovery. Add inline comments documenting the late-joiner
constraint and SHAMap sync's role as a dropped-proposal safety net.
2026-02-06 14:04:53 +07:00
Nicholas Dudfield
382e6fa673 fix(consensus): verify reveals match commitments and cache UNL for observers
Prevent grinding attacks by verifying sha512Half(reveal, pubKey, seq)
matches the stored commitment before accepting a reveal. Also move
cacheActiveUNL() into startRound so non-proposing nodes (exchanges,
block explorers) correctly accept RNG data instead of diverging with
zero entropy.
2026-02-06 13:28:55 +07:00
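The reveal gate reads roughly as follows (fragment only, includes omitted; types as used elsewhere in the tree):

// Sketch: a reveal is accepted only if hashing it with the validator's
// public key and propose sequence reproduces the stored commitment, so a
// validator cannot grind an alternative reveal after committing.
bool
revealMatchesCommitment(
    uint256 const& storedCommitment,
    uint256 const& reveal,
    PublicKey const& pubKey,
    std::uint32_t proposeSeq)
{
    return sha512Half(reveal, pubKey, proposeSeq) == storedCommitment;
}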
Nicholas Dudfield
2905b0509c perf(consensus): gate RNG SHAMap fetches on sub-state
During ConvergingTx all RNG data arrives via proposal leaves, so
fetching a peer's commitSet before we've built our own just generates
unnecessary traffic. Only fetch commitSetHash once in ConvergingCommit+,
and entropySetHash once in ConvergingReveal.
2026-02-06 13:18:53 +07:00
Nicholas Dudfield
4911c1bf52 feat(consensus): embed proposal signature proofs in RNG SHAMap entries
Prevents spoofed SHAMap entries by embedding verifiable proof blobs
(proposal signature + metadata) in each commit/reveal entry via sfBlob.

- Store ProposalProof in harvestRngData (peers) and propose() (self)
- serializeProof: pack proposeSeq/closeTime/prevLedger/position/sig
- verifyProof: reconstruct signingHash, verify against public key
- Embed proofs in buildCommitSet/buildEntropySet via sfBlob field
- Verify proofs in handleAcquiredRngSet (both diff and visitLeaves paths)
- Add stall fix: gate ConvergingTx on timeout when commits unavailable
- Clear proposalProofs_ in clearRngState
2026-02-06 11:47:42 +07:00
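A rough sketch of the verification side, assuming the proof blob carries the fields listed above; ProposalProofSketch is an illustrative struct and the exact signing-hash construction (field order, hash prefix) may differ in the implementation:

// Sketch: an acquired commit/reveal leaf is only trusted if its embedded
// proof blob reproduces a signing hash that verifies under the claimed
// validator's public key; otherwise the leaf is treated as spoofed.
bool
verifyProofSketch(ProposalProofSketch const& proof, PublicKey const& pubKey)
{
    auto const signingHash = sha512Half(
        proof.proposeSeq,
        proof.closeTime,
        proof.prevLedger,
        proof.position);

    return verifyDigest(pubKey, signingHash, makeSlice(proof.signature));
}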
Nicholas Dudfield
1744d21410 docs(consensus): explain union convergence model for RNG sets 2026-02-06 11:17:52 +07:00
Nicholas Dudfield
34ff53f65d feat(consensus): add UNL enforcement for RNG commit-reveal pipeline
Cache active UNL NodeIDs per round from UNL Report (in-ledger),
falling back to getTrustedMasterKeys() on fresh testnets.
Reject non-UNL validators at all entry points: harvestRngData,
buildCommitSet, buildEntropySet, and handleAcquiredRngSet.
2026-02-06 11:12:03 +07:00
Nicholas Dudfield
893f8d5a10 feat(consensus): replace fake hashes with real SHAMap-backed commitSet/entropySet
Build real ephemeral (unbacked) SHAMaps for commitSet and entropySet using
ttCONSENSUS_ENTROPY entries with tfEntropyCommit/tfEntropyReveal flags.
Reuse InboundTransactions pipeline for peer fetch/diff/merge with no new
classes. Encode NodeID in sfAccount to avoid master-vs-signing key mismatch.
Add isPseudoTx guard in ConsensusTransSetSF to prevent pseudo-tx submission.
Route acquired RNG sets via isRngSet/gotRngSet in NetworkOPs mapComplete.
2026-02-06 10:38:06 +07:00
Nicholas Dudfield
3e5389d652 feat(consensus): add 250ms fast-poll for RNG sub-state transitions
During ConvergingCommit and ConvergingReveal sub-states, poll at 250ms
instead of the default 1s ledgerGRANULARITY. This reduces total RNG
pipeline overhead from ~3s to ~500ms while keeping the normal heartbeat
interval unchanged for all other consensus phases.
2026-02-06 09:21:42 +07:00
Nicholas Dudfield
c44dea3acf fix(consensus): resolve commit-reveal pipeline bugs enabling non-zero entropy
Three critical fixes that unblock the RNG commit-reveal pipeline:

- Remove entropy secret regeneration in ConvergingTx->ConvergingCommit
  transition that was overwriting the onClose() secret, breaking reveal
  verification against the original commitment
- Change ExtendedPosition operator== to compare txSetHash only, preventing
  deadlock where nodes transitioning sub-states at different times would
  break haveConsensus() for all peers
- Self-seed own commitment and reveal into pending collections so the
  node counts toward its own quorum checks

Also adds ExtendedPosition_test with signing, suppression, serialization
round-trip and equality tests, iterator safety fix in BuildLedger, wire
compatibility early-return, and RNG debug logging throughout the pipeline.
2026-02-06 09:03:26 +07:00
Nicholas Dudfield
a6dd54fa48 feat(consensus): add featureConsensusEntropy amendment gating
- Register ConsensusEntropy amendment (Supported::yes, DefaultNo)
- Gate entropy pseudo-tx injection behind amendment in doAccept()
- Gate preflight with temDISABLED when amendment not enabled
- Bump numFeatures 90 -> 91
- Exclude featureConsensusEntropy from default test environment to
  avoid breaking existing test transaction count assumptions
2026-02-06 07:29:48 +07:00
Nicholas Dudfield
28bd0a22d3 feat(consensus): add entropy injection, tx ordering, and dispatch registration
- Implement injectEntropyPseudoTx() to combine reveals into final
  entropy hash and inject as pseudo-tx into CanonicalTXSet in doAccept()
- Modify BuildLedger applyTransactions() to apply entropy tx FIRST
  before all other transactions to prevent front-running
- Remove redundant explicit threading in applyConsensusEntropy() as
  sfPreviousTxnID/sfPreviousTxnLgrSeq are set automatically by
  ApplyStateTable::threadItem()
- Register ttCONSENSUS_ENTROPY in applySteps.cpp dispatch tables
  (preflight, preclaim, calculateBaseFee, apply)
- Add ltCONSENSUS_ENTROPY to InvariantCheck.cpp valid type whitelist
2026-02-06 05:43:19 +07:00
Nicholas Dudfield
960fffcf82 feat(consensus): add ttCONSENSUS_ENTROPY pseudo-transaction protocol layer
Add protocol definitions for consensus-derived entropy pseudo-transaction:
- ttCONSENSUS_ENTROPY = 105 transaction type
- ltCONSENSUS_ENTROPY = 0x0058 ledger entry type
- keylet::consensusEntropy() singleton keylet (namespace 'X')
- applyConsensusEntropy() handler in Change.cpp
- Added to isPseudoTx() in STTx.cpp

The entropy value is stored in sfDigest field of the singleton ledger object.
This provides the protocol foundation for same-ledger entropy injection.
2026-02-05 17:26:31 +07:00
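For orientation, a minimal consumer-side sketch using the keylet and field named above (error handling and includes omitted):

// Read the latest consensus-derived entropy back out of the singleton entry.
std::optional<uint256>
currentConsensusEntropy(ReadView const& view)
{
    if (auto const sle = view.read(keylet::consensusEntropy()))
        return sle->getFieldH256(sfDigest);
    return std::nullopt;  // amendment not enabled or no entropy written yet
}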
Nicholas Dudfield
e7867c07a1 feat(consensus): add RNG sub-state gating logic in phaseEstablish
- Add clearRngState() call in startRoundInternal
- Reset estState_ in closeLedger when entering establish phase
- Implement three-phase RNG checkpoint gating:
  - ConvergingTx: wait for quorum commits, build commitSet
  - ConvergingCommit: reveal entropy, transition immediately
  - ConvergingReveal: wait for reveals or timeout, build entropySet
- Use if constexpr for test framework compatibility
2026-02-05 16:53:00 +07:00
Nicholas Dudfield
a828e8a44d feat(consensus): add RNG wire protocol and harvest logic
- Serialize full ExtendedPosition in share() and propose()
- Deserialize ExtendedPosition in PeerImp using fromSerialIter()
- Add harvestRngData() to collect commits/reveals from peer proposals
- Conditionally call harvest via if constexpr for test compatibility
2026-02-05 16:41:13 +07:00
Nicholas Dudfield
bb33e7cf64 feat(consensus): add ExtendedPosition for RNG entropy support
Introduce data structures for consensus-derived randomness using
commit-reveal scheme:

- Add ExtendedPosition struct with consensus targets (txSetHash,
  commitSetHash, entropySetHash) and pipelined leaves (myCommitment,
  myReveal)
- operator== excludes leaves to allow convergence with unique leaves
- add() includes ALL fields to prevent signature stripping attacks
- Add EstablishState enum for sub-phases: ConvergingTx, ConvergingCommit,
  ConvergingReveal
- Update Consensus template to use Adaptor::Position_t
- Add Position_t typedef to RCLConsensus::Adaptor and test CSF Peer

This is the foundational data structure work for the RNG implementation.
The gating logic and entropy computation will follow.
2026-02-05 16:20:54 +07:00
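A trimmed sketch of the structure and its asymmetric equality; member names and types are simplified, and a later commit in this log narrows operator== to txSetHash only:

struct ExtendedPositionSketch
{
    // Consensus targets: these must converge across peers.
    uint256 txSetHash;
    uint256 commitSetHash;
    uint256 entropySetHash;

    // Pipelined leaves: unique per validator, never expected to match.
    uint256 myCommitment;
    uint256 myReveal;

    // Equality ignores the leaves so peers with identical targets still
    // count as agreeing.
    friend bool
    operator==(ExtendedPositionSketch const& a, ExtendedPositionSketch const& b)
    {
        return a.txSetHash == b.txSetHash &&
            a.commitSetHash == b.commitSetHash &&
            a.entropySetHash == b.entropySetHash;
    }

    // The signing serialization includes every field, leaves included, so a
    // relay cannot strip or swap the commitment/reveal without breaking the
    // proposal signature.
    void
    add(Serializer& s) const
    {
        s.addBitString(txSetHash);
        s.addBitString(commitSetHash);
        s.addBitString(entropySetHash);
        s.addBitString(myCommitment);
        s.addBitString(myReveal);
    }
};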
Nicholas Dudfield
7e8e0654cd chore: add documentation markers for pr-description-outline 2026-01-28 15:00:14 +07:00
Nicholas Dudfield
38af0626e0 chore: add documentation markers for pr-description 2026-01-28 14:50:56 +07:00
Nicholas Dudfield
8500e86f57 chore: remove projected-source documentation markers 2026-01-28 11:30:47 +07:00
Nicholas Dudfield
1fc4fd9bfd chore: regenerate hook headers for export feature 2026-01-28 11:27:13 +07:00
Nicholas Dudfield
e4875e5398 refactor: remove ttEXPORT_SIGN and UVTx infrastructure
- Delete ExportSign.cpp/h transactor (ttEXPORT_SIGN no longer used)
- Remove isUVTx() function and all UVTx checks from Transactor/TxQ
- Remove ttEXPORT_SIGN from TxFormats enum and format definition
- Remove jss::ExportSign
- Move signPendingExports() to ExportSignatureCollector

Export signatures are now collected ephemerally via TMValidation
messages, not via ttEXPORT_SIGN transactions.
2026-01-28 10:59:45 +07:00
Nicholas Dudfield
5b1b142be0 chore: remove stray iostream includes 2026-01-28 10:34:11 +07:00
Nicholas Dudfield
5ba832204a test: remove unused scaffolding from Export_test
- Remove accept_wasm and emit_wasm hooks (not export-related)
- Remove testBasicSetup, testEmitPayment, testXportPayment
- Keep only testXportPaymentWithValidator which tests the export flow
2026-01-28 10:31:49 +07:00
Nicholas Dudfield
1257b3a65c Merge remote-tracking branch 'origin/dev' into export-uvtxn 2026-01-28 10:21:37 +07:00
Nicholas Dudfield
6013ed2cb6 refactor: remove vestigial on-ledger export signature code
- Remove makeExportSignTxns() function (signatures now via TMValidation)
- Simplify ExportSign::doApply() to no-op (ttEXPORT_SIGN kept for protocol)
- Remove sfSigners from ltEXPORTED_TXN format (collected in memory now)
- Remove unused OpenView include and forward declaration
- Remove vestigial comment in TxQ about makeExportSignTxns
2026-01-28 10:19:14 +07:00
Nicholas Dudfield
034010716e feat: add signature verification cache for export signatures
Add cryptographic verification of export signatures as they arrive:
- stashTxnData() caches serialized txn for verification
- verifyAndAddSignature() verifies against cached data, rejects invalid
- isSignatureVerified() / verifySignature() for Transactor fallback
- Cleanup methods updated to clear verification cache

Also removes leftover debug std::cerr from OpenView, STObject, and tests.
2026-01-27 18:03:55 +07:00
Nicholas Dudfield
b28793b0fa chore: clean up export debug logging
- remove DBG_EXPORT macros and all usages
- remove [EXPORT-TRACE] and [EXPORT-TIMING] debug prefixes
- adjust log levels (verbose logs to trace, summaries to debug)
- upgrade "quorum reached" to info level (important event)
- standardize log prefixes to use "Export:"
- re-enable relay loop in OpenLedger.cpp
- remove reentrant call detection debug code
2026-01-27 16:21:02 +07:00
Nicholas Dudfield
4bce392c31 feat: continuous signature broadcasting for export robustness
Validators now sign ALL pending ltEXPORTED_TXN entries every ledger
(not just those from the current ledger). Signatures are cached in
ExportSignatureCollector and re-broadcast until the export is finalized.

Changes:
- Add hasSignatureFrom() and getSignatureFrom() to collector for
  checking/retrieving cached signatures
- signPendingExports() now iterates ALL pending exports, uses cached
  signature if available, otherwise signs fresh
- Signatures keep broadcasting until ltEXPORTED_TXN is deleted

This ensures:
- Late validators can contribute (sign when they come online)
- Network partitions self-heal (signatures propagate on reconnect)
- Node restarts recover (re-sign from ledger state)

The ltEXPORTED_TXN acts as a "ticket" - signatures only valid while it
exists. No explicit expiry check needed; ledger state is the gatekeeper.
2026-01-26 18:25:24 +07:00
Nicholas Dudfield
244a28b981 feat: implement ephemeral export signature collection
Replace on-ledger ttEXPORT_SIGN transactions with ephemeral signature
collection via TMValidation messages. This eliminates O(n²) metadata
bloat from accumulating signatures on-ledger.

Changes:
- Add ExportSignatureCollector for in-memory signature storage with
  quorum tracking (80% UNL threshold)
- Extend TMValidation protobuf with exportSignatures field
- Sign pending exports during validate() and broadcast via validation
- Extract signatures from received TMValidation in PeerImp
- TxQ checks quorum from memory instead of ledger
- Inject ttEXPORT when quorum reached (can be ledger N+1 or N+2)
- Clean up collector after ttEXPORT processed

Includes [EXPORT-TIMING] debug logging for timing analysis.
2026-01-26 17:54:17 +07:00
Nicholas Dudfield
f2838351c9 chore: add [EXPORT-TRACE] debug logging for export flow tracing
adds step-by-step trace logging with [EXPORT-TRACE] prefix to track
the complete export transaction lifecycle:
- STEP-1: xport() creates ltEXPORTED_TXN
- STEP-2a: rawTxInsert ttEXPORT_SIGN in callback
- STEP-2b: doApply ttEXPORT_SIGN
- STEP-3a: rawTxInsert ttEXPORT
- STEP-4: doApply ttEXPORT (cleanup)

filter with: grep '\[EXPORT-TRACE\]'
2026-01-23 08:10:33 +07:00
Nicholas Dudfield
dae082d6a5 chore: format files with clang-format 2026-01-22 16:42:05 +07:00
Nicholas Dudfield
619a4a68f7 fix: resolve export feature bugs and add comprehensive tests
- fix Guard.h: add import_whitelist_2 to signature lookup chain
  (was causing "Function type is inconsistent" errors for xport APIs)
- fix InvariantCheck.cpp: add ltEXPORTED_TXN to valid ledger entry types
  (was causing "invalid ledger entry type added" invariant failures)
- add SetHook.cpp: TODO comment documenting API version confusion

- add Export_test.cpp: comprehensive test suite for export feature
  - testBasicSetup: verify hook installation works
  - testEmitPayment: verify emit() flow works
  - testXportPayment: verify xport() creates ltEXPORTED_TXN
  - includes DebugLogs helper for per-partition log levels
  - parameterized runXportTest helper for future validator tests

Note: validator signing flow (ttEXPORT_SIGN) still needs debugging -
causes internal error on env.close() when validator config enabled.
2026-01-22 09:51:50 +07:00
Nicholas Dudfield
4a6db8bb05 Merge remote-tracking branch 'origin/dev' into export-uvtxn 2026-01-22 08:07:58 +07:00
Nicholas Dudfield
c86479bc58 fix: correct xport api signature and sfExportedTxn type usage
- Fix xport hook API whitelist to declare 4 args (I32, I32, I32, I32)
  instead of 2, matching the actual implementation signature
- Fix TxQ.cpp to use emplace_back with STObject for sfExportedTxn
  instead of setFieldVL, since sfExportedTxn is OBJECT type not VL.
  The previous code would throw "Wrong field type" at runtime.
2026-01-22 07:41:12 +07:00
Nicholas Dudfield
dc6a2dc6ff refactor: separate ExportSign transactor from Change
Move ttEXPORT_SIGN handling to dedicated ExportSign transactor class,
following the same pattern as ttENTROPY/Entropy from the RNG feature.
UVTxns (signed validator transactions) should not be mixed with
pseudo-transactions in the Change transactor.

- Create ExportSign.h/cpp with preflight, preclaim, doApply
- Route ttEXPORT_SIGN through ExportSign in applySteps.cpp
- Remove UVTx branches from Change transactor
- Add documentation markers to View.h for inUNLReport functions
2026-01-22 07:41:12 +07:00
Nicholas Dudfield
c01b9a657b feat: implement uvtxn pattern for ttEXPORT_SIGN
Port the UNL Validator Transaction (UVTxn) pattern from the RNG feature
to allow validators to submit signed ttEXPORT_SIGN transactions without
requiring a funded account.

Changes:
- Add isUVTx() to identify UVTxn transaction types
- Add inUNLReport() templates to check validator UNLReport membership
- Add getValidationSecretKey() to Application for signing
- Modify Transactor for UVTxn bypasses (fee, seq, signature checks)
- Add makeExportSignTxns() to generate validator signatures
- Hook into RCLConsensus to submit ttEXPORT_SIGN during accept
- Update applySteps.cpp routing for ttEXPORT_SIGN
- Remove direct ttEXPORT_SIGN injection from TxQ::accept

Note: Currently uses Change transactor with UVTx branches.
May refactor to dedicated ExportSign transactor class.
2026-01-20 13:44:38 +07:00
Nicholas Dudfield
652b181b5d chore: clang format 2026-01-20 12:44:14 +07:00
RichardAH
8329d78f32 Update src/ripple/app/tx/impl/Import.cpp
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:46 +10:00
RichardAH
bf4579c1d1 Update src/ripple/app/tx/impl/Change.cpp
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:37 +10:00
RichardAH
73e099eb23 Update src/ripple/app/hook/impl/applyHook.cpp
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:29 +10:00
RichardAH
2e311b4259 Update src/ripple/app/hook/applyHook.h
Co-authored-by: tequ <git@tequ.dev>
2025-12-21 13:42:20 +10:00
RichardAH
7c8e940091 Merge branch 'dev' into export 2025-12-19 13:27:02 +10:00
Richard Holland
9b90c50789 featureExport compiling, untested 2025-12-19 14:19:17 +11:00
Richard Holland
a18e2cb2c6 remainder of the export feature... untested uncompiled 2025-12-14 19:04:37 +11:00
Richard Holland
be5f425122 change symbol name to xport 2025-12-14 13:27:44 +11:00
Richard Holland
fc6f4762da export hook apis, untested 2025-12-13 15:46:08 +11:00
153 changed files with 18193 additions and 2357 deletions

3
.gitignore vendored
View File

@@ -127,5 +127,8 @@ bld.rippled/
generated
.vscode
# AI docs (local working documents)
.ai-docs/
# Suggested in-tree build directory
/.build/

4
.testnet/.gitignore vendored Normal file
View File

@@ -0,0 +1,4 @@
output/
__pycache__/
scenarios/odd-cases/
scenarios/suite-experiments.yml

View File

@@ -0,0 +1,27 @@
"""Scenario: ConsensusEntropy amendment crashes non-supporting node.
Votes ConsensusEntropy accept on all nodes except n4, then waits for n4
to crash as the amendment activates without its support.
x-testnet run --scenario-script consensus_entropy_crash.py
"""
async def scenario(ctx, log):
    await ctx.wait_for_ledger_close()
    ctx.feature("ConsensusEntropy", vetoed=False, exclude_nodes=[4])
    log("Waiting for ConsensusEntropy to be voted for...")
    await ctx.wait_for_feature(
        "ConsensusEntropy",
        check=lambda s: not s.get("vetoed"),
        exclude_nodes=[4],
        timeout=60,
    )
    log("Waiting for n4 to crash...")
    op = await ctx.wait_for_nodes_down(nodes=[4], timeout=600)
    ctx.assert_log("unsupported amendments activated", since=op.started, nodes=[4])
    ctx.assert_exit_status(0, nodes=[4])
    log("PASS: n4 shut down due to unsupported amendment")

View File

@@ -0,0 +1,52 @@
""":descr: entropy stays valid under transaction load"""
from __future__ import annotations
from helpers import require_entropy, get_entropy_tx, assert_valid_entropy
variants = [
    {"label": "light", "min_txns": 5, "max_txns": 10},
    {"label": "heavy", "min_txns": 50, "max_txns": 60},
    {"label": "super_heavy", "min_txns": 90, "max_txns": 120},
]
async def scenario(ctx, log, *, min_txns=5, max_txns=10, **_):
    await require_entropy(ctx, log)
    gen = ctx.txn_generator(min_txns=min_txns, max_txns=max_txns)
    await gen.start()
    await gen.wait_until_ready()
    log(f"Transaction generator ready ({min_txns}-{max_txns} txns/ledger)")
    # Wait for pipeline warmup + a few txn-bearing ledgers.
    await ctx.wait_for_ledgers(3, node_id=0, timeout=60)
    start_seq = ctx.validated_ledger_index(0)
    await ctx.wait_for_ledgers(10, node_id=0, timeout=120)
    end_seq = ctx.validated_ledger_index(0)
    log(f"Inspecting ledgers {start_seq + 1}–{end_seq}")
    digests = set()
    total_user_txns = 0
    for seq in range(start_seq + 1, end_seq + 1):
        ce, user_txns = get_entropy_tx(ctx, seq)
        digest, count = assert_valid_entropy(ce, seq, seen_digests=digests)
        total_user_txns += len(user_txns)
        log(
            f" Ledger {seq}: EntropyCount={count} "
            f"user_txns={len(user_txns)} Digest={digest[:16]}..."
        )
    await gen.stop()
    log(
        f"Verified {end_seq - start_seq} ledgers: {total_user_txns} user txns, "
        f"all entropy valid and unique"
    )
    if total_user_txns == 0:
        raise AssertionError("No user transactions were included in any ledger")
    log("PASS")

View File

@@ -0,0 +1,117 @@
""":descr: 4/5 liveness, 3/5 zero-entropy fallback, recovery"""
from __future__ import annotations
from helpers import require_entropy, get_entropy_tx, entropy_fields
async def scenario(ctx, log):
    await require_entropy(ctx, log)
    # Baseline: wait 1 ledger to confirm network is healthy.
    await ctx.wait_for_ledgers(1, node_id=0, timeout=30)
    # --- 4/5 liveness ---
    ctx.stop_node(4)
    await ctx.wait_for_nodes_down(nodes=[4], timeout=30)
    await ctx.wait_for_ledgers(1, node_id=0, timeout=30)
    log("4/5: liveness OK")
    # Snapshot validated seq before dropping to 3/5.
    val_before = ctx.validated_ledger_index(0)
    # --- 3/5 degraded window ---
    ctx.stop_node(3)
    await ctx.wait_for_nodes_down(nodes=[3], timeout=30)
    # 10s ≈ 3 rounds at 3s cadence.
    await ctx.sleep(10)
    val_after = ctx.validated_ledger_index(0)
    log(f"3/5: validated ledger {val_before} → {val_after}")
    # Accepted/built ledgers may still later appear as validated once the full
    # network rejoins. For ConsensusEntropy the key invariant is that every
    # ledger created during this sub-quorum window carries ZERO entropy.
    degraded_zero = 0
    degraded_end = val_after or val_before
    if val_before and degraded_end and degraded_end > val_before:
        for seq in range(val_before + 1, degraded_end + 1):
            ce, _ = get_entropy_tx(ctx, seq)
            digest, entropy_count, is_zero = entropy_fields(ce)
            if not is_zero:
                raise AssertionError(
                    f"Ledger {seq}: expected ZERO entropy during 3/5 window, "
                    f"got Digest={digest[:16]}... EntropyCount={entropy_count}"
                )
            degraded_zero += 1
            log(f" Degraded ledger {seq}: EntropyCount={entropy_count} ZERO")
    log(f"3/5 entropy summary: {degraded_zero} zero")
    # Log checks tied to actual transition mechanics:
    # - seq=1 proposals are emitted once commit-set phase is entered
    # - ConvergingCommit transition is the gateway out of seq=0-only behavior
    # - establish gate blocked indicates tx-consensus/pause prevented accept
    ctx.log_level("LedgerConsensus", "trace")
    op = await ctx.sleep(6, name="stall_window")
    ctx.assert_not_log(
        r"RNG: transitioned to ConvergingCommit", within=op.window, nodes=[0, 1, 2]
    )
    ctx.assert_not_log(r"RNG: propose seq=1", within=op.window, nodes=[0, 1, 2])
    gate_blocked = ctx.search_logs(
        r"STALLDIAG: establish gate blocked reason=(pause|no-tx-consensus)",
        within=op.window,
        nodes=[0, 1, 2],
    )
    log(f"3/5: establish gate-blocked logs in 6s: {gate_blocked.count}")
    skips = ctx.search_logs(r"RNG: bootstrap skip", within=op.window, nodes=[0, 1, 2])
    log(f"3/5: RNG bootstrap skips in 6s: {skips.count}")
    # --- Recovery: restart nodes, verify ledger advancement ---
    ctx.start_node(3)
    ctx.start_node(4)
    await ctx.wait_for_ledgers(1, node_id=0, timeout=120)
    val_recovered = ctx.validated_ledger_index(0)
    pre_recovery = max(v for v in [val_before, val_after] if v is not None)
    log(f"Recovered: validated seq {pre_recovery} → {val_recovered}")
    if not val_recovered or val_recovered <= pre_recovery:
        raise AssertionError(
            f"Validated ledger did not advance after recovery "
            f"({pre_recovery} → {val_recovered})"
        )
    # Inspect post-recovery ledgers separately from the degraded window above.
    # Once the network is back at quorum, non-zero entropy is valid again but
    # must still be quorum-met.
    zero_count = 0
    nonzero_count = 0
    for seq in range(pre_recovery + 1, val_recovered + 1):
        ce, _ = get_entropy_tx(ctx, seq)
        digest, entropy_count, is_zero = entropy_fields(ce)
        if is_zero:
            zero_count += 1
        else:
            nonzero_count += 1
            if entropy_count < 4:
                raise AssertionError(
                    f"Ledger {seq}: non-zero entropy with sub-quorum "
                    f"EntropyCount={entropy_count} (need >= 4)"
                )
        log(
            f" Ledger {seq}: EntropyCount={entropy_count} "
            f"{'ZERO' if is_zero else 'REAL'}"
        )
    log(f"Entropy summary: {zero_count} zero, {nonzero_count} non-zero")
    log("PASS")

View File

@@ -0,0 +1,46 @@
""":descr: drop 2 nodes (3/5 stall), restart both, verify recovery"""
from __future__ import annotations
async def scenario(ctx, log):
    await ctx.wait_for_ledger_close(timeout=120)
    feature = ctx.feature_check("ConsensusEntropy", node_id=0)
    if not feature or not feature.get("enabled", False):
        raise AssertionError(f"ConsensusEntropy not enabled: {feature}")
    await ctx.wait_for_ledgers(1, node_id=0, timeout=60)
    log("Baseline OK")
    # Drop 2 nodes → validation stall.
    ctx.stop_node(3)
    ctx.stop_node(4)
    await ctx.wait_for_nodes_down(nodes=[3, 4], timeout=30)
    info = ctx.rpc.server_info(node_id=0)
    val_before = info.get("info", {}).get("validated_ledger", {}).get("seq", 0)
    log(f"Stalled at validated seq {val_before}")
    # Let it sit for a few rounds in degraded state.
    await ctx.sleep(6)
    # Bring both nodes back.
    ctx.start_node(3)
    ctx.start_node(4)
    log("Restarted n3 and n4, waiting for recovery...")
    # Recovery: wait for ANY validated ledger advance on n0.
    await ctx.wait_for_ledger_close(node_id=0, timeout=60)
    info = ctx.rpc.server_info(node_id=0)
    val_after = info.get("info", {}).get("validated_ledger", {}).get("seq", 0)
    log(f"Recovered: validated seq {val_before} → {val_after}")
    if val_after <= val_before:
        raise AssertionError(
            f"Validated ledger did not advance after recovery "
            f"({val_before} → {val_after})"
        )
    log("PASS")

View File

@@ -0,0 +1,27 @@
""":descr: all 5 nodes healthy, every ledger has valid unique quorum-met entropy"""
from __future__ import annotations
from helpers import require_entropy, get_entropy_tx, assert_valid_entropy
async def scenario(ctx, log):
    await require_entropy(ctx, log)
    # Wait for RNG pipeline to warm up past bootstrap skip.
    await ctx.wait_for_ledgers(3, node_id=0, timeout=60)
    log("Pipeline warmed up")
    start_seq = ctx.validated_ledger_index(0)
    await ctx.wait_for_ledgers(10, node_id=0, timeout=120)
    end_seq = ctx.validated_ledger_index(0)
    log(f"Inspecting ledgers {start_seq + 1}–{end_seq}")
    digests = set()
    for seq in range(start_seq + 1, end_seq + 1):
        ce, _ = get_entropy_tx(ctx, seq)
        digest, count = assert_valid_entropy(ce, seq, seen_digests=digests)
        log(f" Ledger {seq}: EntropyCount={count} Digest={digest[:16]}...")
    log(f"Verified {end_seq - start_seq} ledgers: all quorum entropy, all unique")
    log("PASS")

View File

@@ -0,0 +1,86 @@
defaults:
  network:
    node_count: 5
    launcher: tmux
    slave_delay: 0.2
    features:
      - ConsensusEntropy
      - Export
    track_features:
      - ConsensusEntropy
      - Export
    log_levels:
      TxQ: info
      Protocol: debug
      Peer: debug
      LedgerConsensus: debug
      NetworkOPs: info
    env:
      XAHAU_RESOURCE_PER_PORT: "1"
      XAHAU_RNG_POLL_MS: "333"
tests:
  # --- CE + Export (80% quorum, SHAMap convergence) ---
  - name: steady_state_export_ce
    script: .testnet/scenarios/export/steady_state_export.py
  - name: retriable_export_ce
    script: .testnet/scenarios/export/retriable_export.py
  - name: export_degradation_ce
    script: .testnet/scenarios/export/export_degradation.py
    network:
      node_env:
        3:
          XAHAUD_NO_EXPORT_SIG: "1"
        4:
          XAHAUD_NO_EXPORT_SIG: "1"
  # CE + Export: 1 node suppressed, 4/5 = 80% quorum, should succeed
  - name: export_ce_one_node_down
    script: .testnet/scenarios/export/export_quorum.py
    params:
      expect_success: true
    network:
      node_env:
        4:
          XAHAUD_NO_EXPORT_SIG: "1"
  # --- Export only, no CE (80% active-view quorum) ---
  - name: export_only_all_up
    script: .testnet/scenarios/export/export_quorum.py
    params:
      expect_success: true
    network:
      features:
        - Export
      track_features:
        - Export
  - name: export_only_one_node_down
    script: .testnet/scenarios/export/export_quorum.py
    params:
      expect_success: true
    network:
      features:
        - Export
      track_features:
        - Export
      node_env:
        4:
          XAHAUD_NO_EXPORT_SIG: "1"
  - name: export_only_two_nodes_down
    script: .testnet/scenarios/export/export_quorum.py
    params:
      expect_success: false
    network:
      features:
        - Export
      track_features:
        - Export
      node_env:
        3:
          XAHAUD_NO_EXPORT_SIG: "1"
        4:
          XAHAUD_NO_EXPORT_SIG: "1"

View File

@@ -0,0 +1,102 @@
""":descr: Submit ttEXPORT with 2 nodes suppressing export sigs, verify it
retries via terRETRY_EXPORT until LLS expiry (not enough sigs for quorum).
Nodes 3 and 4 have XAHAUD_NO_EXPORT_SIG=1, so only 3/5 nodes provide
export signatures. With 80% quorum = ceil(5*0.8) = 4 required, the
export cannot reach quorum and should expire via tecEXPORT_EXPIRED.
Flow:
1. Fund alice and bob
2. alice submits ttEXPORT with tight LLS
3. Export retries (only 3/5 sigs available, need 4)
4. Verify export expires with tecEXPORT_EXPIRED
5. Verify subsequent payment still works (sequence not permanently blocked)
"""
from __future__ import annotations
from export_helpers import require_export, assert_shadow_ticket
async def scenario(ctx, log):
    await require_export(ctx, log)
    # --- Setup ---
    await ctx.fund_accounts({"alice": 10000, "bob": 1000})
    log("Accounts funded")
    alice = ctx.account("alice")
    bob = ctx.account("bob")
    current_seq = ctx.validated_ledger_index(0)
    log(f"Current ledger: {current_seq}")
    log("Nodes 3,4 have XAHAUD_NO_EXPORT_SIG=1 (3/5 sigs, need 4)")
    # --- Submit ttEXPORT (should retry then expire -- only 3/5 sigs) ---
    result = await ctx.submit_and_wait(
        {
            "TransactionType": "Export",
            "LastLedgerSequence": current_seq + 8,
            "Fee": "1000000",
            "ExportedTxn": {
                "TransactionType": "Payment",
                "Account": alice.address,
                "Destination": bob.address,
                "Amount": "1000000",
                "Fee": "10",
                "Sequence": 0,
                "TicketSequence": 1,
                "FirstLedgerSequence": current_seq + 1,
                "LastLedgerSequence": current_seq + 6,
                "Flags": 2147483648,
                "SigningPubKey": "",
            },
        },
        alice.wallet,
        timeout=60,
    )
    final_seq = ctx.validated_ledger_index(0)
    engine_result = result.get("engine_result", "")
    log(f"Export completed at ledger {final_seq}, result: {engine_result}")
    # With only 3/5 sigs and 80% quorum (4 required), export MUST fail
    if engine_result == "tesSUCCESS":
        raise AssertionError(
            "Export should NOT have succeeded with only 3/5 sigs "
            "(need 4 for 80% quorum) -- check XAHAUD_NO_EXPORT_SIG config"
        )
    # Should be tecEXPORT_EXPIRED (LLS reached without quorum)
    if engine_result != "tecEXPORT_EXPIRED":
        log(f"WARNING: expected tecEXPORT_EXPIRED, got {engine_result}")
    log(f"Export failed as expected ({engine_result})")
    # No shadow ticket should exist (export never reached quorum)
    assert_shadow_ticket(ctx, alice.address, log, expect_exists=False)
    # --- Verify subsequent payment works regardless ---
    log("Submitting payment from alice to bob...")
    pay_result = await ctx.submit_and_wait(
        {
            "TransactionType": "Payment",
            "Destination": bob.address,
            "Amount": "1000000",
            "Fee": "12",
        },
        alice.wallet,
        timeout=30,
    )
    pay_engine = pay_result.get("engine_result", "")
    log(f"Payment result: {pay_engine}")
    if pay_engine != "tesSUCCESS":
        raise AssertionError(
            f"Payment failed after expired export: {pay_engine} "
            f"-- sequence may be blocked"
        )
    log("Payment succeeded -- account not permanently blocked")
    log("PASS")

View File

@@ -0,0 +1,144 @@
"""Shared helpers for Export scenario tests."""
from __future__ import annotations
async def require_export(ctx, log):
    """Wait for first ledger and assert Export amendment is enabled."""
    await ctx.wait_for_ledger_close(timeout=120)
    feature = ctx.feature_check("Export", node_id=0)
    if not feature or not feature.get("enabled", False):
        raise AssertionError(f"Export not enabled: {feature}")
    log("Export amendment enabled")


def find_export_txns(ctx, seq):
    """Find Export transactions in a ledger.
    Returns list of Export transaction dicts.
    """
    result = ctx.ledger(seq, transactions=True)
    if not result:
        return []
    txns = result.get("ledger", {}).get("transactions", [])
    return [tx for tx in txns if tx.get("TransactionType") == "Export"]


def dst_param(address):
    """Encode an address as a HookParameter entry for the DST param."""
    from xrpl.core.addresscodec import decode_classic_address

    dst_hex = decode_classic_address(address).hex().upper()
    return {
        "HookParameter": {
            "HookParameterName": "445354",  # "DST"
            "HookParameterValue": dst_hex,
        }
    }


def assert_hook_accepted(meta, log, *, expected_emits=1):
    """Assert hook executed with ACCEPT and the expected emit count.
    Checks sfHookExecutions in transaction metadata.
    Returns the hook execution entry for further inspection.
    """
    hook_execs = meta.get("HookExecutions", [])
    if not hook_execs:
        raise AssertionError("No HookExecutions in metadata")
    exec_entry = hook_execs[0].get("HookExecution", {})
    hook_result = exec_entry.get("HookResult", -1)
    emit_count = exec_entry.get("HookEmitCount", -1)
    return_code = exec_entry.get("HookReturnCode", "")
    log(f" HookResult={hook_result} EmitCount={emit_count} ReturnCode={return_code}")
    # HookResult 3 = ExitType::ACCEPT
    if hook_result != 3:
        raise AssertionError(
            f"Hook did not ACCEPT: HookResult={hook_result} "
            f"ReturnCode={return_code}"
        )
    if emit_count != expected_emits:
        raise AssertionError(
            f"Expected {expected_emits} emits, got {emit_count}"
        )
    # ReturnCode 0 = success; non-zero = ASSERT line number in hook
    if return_code and str(return_code) != "0":
        raise AssertionError(
            f"Hook returned error code {return_code} "
            f"(likely ASSERT failure at that line)"
        )
    return exec_entry


def assert_export_result(meta, log, *, require_signers=True):
    """Assert ExportResult is present and well-formed in metadata.
    Returns the ExportResult dict.
    """
    export_result = meta.get("ExportResult", {})
    if not export_result:
        raise AssertionError("ExportResult not found in metadata")
    # Must have LedgerSequence and TransactionHash
    if "LedgerSequence" not in export_result:
        raise AssertionError("ExportResult missing LedgerSequence")
    if "TransactionHash" not in export_result:
        raise AssertionError("ExportResult missing TransactionHash")
    # Must have the inner ExportedTxn object
    inner = export_result.get("ExportedTxn", {})
    if not inner:
        raise AssertionError("ExportResult missing ExportedTxn (multisigned blob)")
    log(f" ExportResult: seq={export_result['LedgerSequence']} "
        f"hash={export_result['TransactionHash'][:16]}...")
    # Inner tx should have Account, Destination, TransactionType
    if "Account" not in inner:
        raise AssertionError("ExportedTxn missing Account")
    if "TransactionType" not in inner:
        raise AssertionError("ExportedTxn missing TransactionType")
    # Should have empty SigningPubKey (multisigned)
    if inner.get("SigningPubKey", "NOT_EMPTY") != "":
        raise AssertionError(
            f"ExportedTxn SigningPubKey should be empty, "
            f"got '{inner.get('SigningPubKey')}'"
        )
    if require_signers:
        signers = inner.get("Signers", [])
        if not signers:
            raise AssertionError("ExportedTxn has no Signers (multisig not applied)")
        log(f" Signers: {len(signers)} validator(s)")
    return export_result


def assert_shadow_ticket(ctx, account_address, log, *, expect_exists=True):
    """Assert shadow ticket exists (or doesn't) for the account."""
    obj_result = ctx.rpc.request(
        0, "account_objects", {"account": account_address}
    )
    all_objects = (obj_result or {}).get("account_objects", [])
    shadow_tickets = [
        obj for obj in all_objects
        if obj.get("LedgerEntryType") == "ShadowTicket"
    ]
    log(f" Shadow tickets: {len(shadow_tickets)}")
    if expect_exists and not shadow_tickets:
        raise AssertionError("Expected shadow ticket but none found")
    if not expect_exists and shadow_tickets:
        raise AssertionError(
            f"Expected no shadow tickets but found {len(shadow_tickets)}"
        )
    return shadow_tickets

View File

@@ -0,0 +1,112 @@
""":descr: Test Export quorum behavior. When enough active validators sign,
the export should succeed whether or not ConsensusEntropy (CE) is enabled. When fewer than the
active-view quorum sign, the export should expire.
Parameterized via `expect_success` kwarg from suite.yml.
Flow:
1. Fund alice and bob
2. alice submits ttEXPORT
3. Verify result matches expectation (tesSUCCESS or tecEXPORT_EXPIRED)
4. Verify ExportResult + shadow ticket on success, absence on failure
5. Verify subsequent payment works regardless
"""
from __future__ import annotations
from export_helpers import (
require_export,
assert_export_result,
assert_shadow_ticket,
)
async def scenario(ctx, log, expect_success=True):
await require_export(ctx, log)
# --- Setup ---
await ctx.fund_accounts({"alice": 10000, "bob": 1000})
log("Accounts funded")
alice = ctx.account("alice")
bob = ctx.account("bob")
current_seq = ctx.validated_ledger_index(0)
log(f"Current ledger: {current_seq}")
outcome = "success" if expect_success else "failure (below quorum)"
log(f"Expecting export {outcome}")
# --- Submit ttEXPORT ---
result = await ctx.submit_and_wait(
{
"TransactionType": "Export",
"LastLedgerSequence": current_seq + 10,
"Fee": "1000000",
"ExportedTxn": {
"TransactionType": "Payment",
"Account": alice.address,
"Destination": bob.address,
"Amount": "1000000",
"Fee": "10",
"Sequence": 0,
"TicketSequence": 1,
"FirstLedgerSequence": current_seq + 1,
"LastLedgerSequence": current_seq + 8,
"Flags": 2147483648,
"SigningPubKey": "",
},
},
alice.wallet,
timeout=60,
)
final_seq = ctx.validated_ledger_index(0)
engine_result = result.get("engine_result", "")
meta = result.get("meta", {})
log(f"Export at ledger {final_seq}, result: {engine_result}")
if expect_success:
if engine_result != "tesSUCCESS":
raise AssertionError(
f"Expected tesSUCCESS, got {engine_result}"
)
# Assert ExportResult is well-formed with signers
assert_export_result(meta, log, require_signers=True)
# Assert shadow ticket was created
assert_shadow_ticket(ctx, alice.address, log, expect_exists=True)
log("Export succeeded as expected (active-view quorum reached)")
else:
if engine_result == "tesSUCCESS":
raise AssertionError(
"Export should NOT have succeeded below active-view quorum"
)
log(f"Export failed as expected ({engine_result})")
# No shadow ticket should exist
assert_shadow_ticket(ctx, alice.address, log, expect_exists=False)
# --- Verify subsequent payment works ---
log("Submitting payment from alice to bob...")
pay_result = await ctx.submit_and_wait(
{
"TransactionType": "Payment",
"Destination": bob.address,
"Amount": "1000000",
"Fee": "12",
},
alice.wallet,
timeout=30,
)
pay_engine = pay_result.get("engine_result", "")
log(f"Payment result: {pay_engine}")
if pay_engine != "tesSUCCESS":
raise AssertionError(f"Payment failed: {pay_engine}")
log("Payment succeeded -- account not blocked")
log("PASS")

View File

@@ -0,0 +1,94 @@
""":descr: Submit ttEXPORT directly (no hook), verify it succeeds with
ExportResult in metadata. Then submit a payment from the same account
to verify sequence handling doesn't block subsequent transactions.
Flow:
1. Fund alice and bob
2. alice submits ttEXPORT with inner payment -> tesSUCCESS (provisional)
3. Validators attach sigs via proposals -> quorum -> ExportResult in metadata
4. alice submits a Payment to bob -> should succeed (sequence not blocked)
"""
from __future__ import annotations
from export_helpers import require_export, assert_export_result, assert_shadow_ticket
async def scenario(ctx, log):
await require_export(ctx, log)
# --- Setup ---
await ctx.fund_accounts({"alice": 10000, "bob": 1000})
log("Accounts funded")
alice = ctx.account("alice")
bob = ctx.account("bob")
current_seq = ctx.validated_ledger_index(0)
log(f"Current ledger: {current_seq}")
# --- 1. Submit ttEXPORT ---
result = await ctx.submit_and_wait(
{
"TransactionType": "Export",
"LastLedgerSequence": current_seq + 15,
"Fee": "1000000",
"ExportedTxn": {
"TransactionType": "Payment",
"Account": alice.address,
"Destination": bob.address,
"Amount": "1000000",
"Fee": "10",
"Sequence": 0,
"TicketSequence": 1,
"FirstLedgerSequence": current_seq + 1,
"LastLedgerSequence": current_seq + 10,
"Flags": 2147483648,
"SigningPubKey": "",
},
},
alice.wallet,
timeout=60,
)
export_seq = ctx.validated_ledger_index(0)
engine_result = result.get("engine_result", "")
log(f"Export completed at ledger {export_seq}, result: {engine_result}")
if engine_result != "tesSUCCESS":
raise AssertionError(
f"Expected tesSUCCESS for export, got {engine_result}"
)
# Assert ExportResult is well-formed with signers
meta = result.get("meta", {})
assert_export_result(meta, log, require_signers=True)
# Assert shadow ticket was created
assert_shadow_ticket(ctx, alice.address, log, expect_exists=True)
# --- 2. Submit Payment from same account ---
log("Submitting payment from alice to bob...")
pay_result = await ctx.submit_and_wait(
{
"TransactionType": "Payment",
"Destination": bob.address,
"Amount": "1000000",
"Fee": "12",
},
alice.wallet,
timeout=30,
)
pay_engine = pay_result.get("engine_result", "")
log(f"Payment result: {pay_engine}")
if pay_engine != "tesSUCCESS":
raise AssertionError(f"Payment failed: {pay_engine}")
log(
f"Both transactions succeeded: "
f"Export at ledger {export_seq}, Payment at ledger {ctx.validated_ledger_index(0)}"
)
log("Sequence handling OK - export didn't block subsequent txns")
log("PASS")

View File

@@ -0,0 +1,211 @@
""":descr: install xport hook, trigger export, verify emitted ttEXPORT lifecycle
1. Fund alice (hook holder), bob (trigger), carol (export destination)
2. Install xport hook on alice
3. bob pays alice with DST=carol → hook calls xport() → emits ttEXPORT
4. Emitted ttEXPORT enters open ledger, validators attach sigs via proposals
5. Verify Export transaction appears in a subsequent ledger
"""
from __future__ import annotations
from export_helpers import (
require_export,
find_export_txns,
dst_param,
assert_hook_accepted,
assert_export_result,
assert_shadow_ticket,
)
# C source for the xport hook — verbatim from src/test/app/Export_test_hooks.h
# On Payment to the hook account, exports a 1 XAH payment to the DST param.
XPORT_HOOK_C = r"""
#include <stdint.h>
extern int32_t _g(uint32_t id, uint32_t maxiter);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t xport(uint32_t write_ptr, uint32_t write_len, uint32_t read_ptr, uint32_t read_len);
extern int64_t xport_reserve(uint32_t count);
extern int64_t hook_account(uint32_t write_ptr, uint32_t write_len);
extern int64_t otxn_param(uint32_t write_ptr, uint32_t write_len, uint32_t name_ptr, uint32_t name_len);
extern int64_t otxn_type(void);
extern int64_t ledger_seq(void);
#define SBUF(x) (uint32_t)(x), sizeof(x)
#define ASSERT(x) if (!(x)) rollback((uint32_t)#x, sizeof(#x), __LINE__)
#define ttPAYMENT 0
#define tfCANONICAL 0x80000000UL
#define amAMOUNT 1
#define amFEE 8
#define atACCOUNT 1
#define atDESTINATION 3
#define ENCODE_TT(buf_out, tt) \
buf_out[0] = 0x12U; buf_out[1] = (tt >> 8) & 0xFFU; buf_out[2] = tt & 0xFFU; buf_out += 3;
#define ENCODE_FLAGS(buf_out, flags) \
buf_out[0] = 0x22U; buf_out[1] = (flags >> 24) & 0xFFU; buf_out[2] = (flags >> 16) & 0xFFU; \
buf_out[3] = (flags >> 8) & 0xFFU; buf_out[4] = flags & 0xFFU; buf_out += 5;
#define ENCODE_SEQUENCE(buf_out, seq) \
buf_out[0] = 0x24U; buf_out[1] = (seq >> 24) & 0xFFU; buf_out[2] = (seq >> 16) & 0xFFU; \
buf_out[3] = (seq >> 8) & 0xFFU; buf_out[4] = seq & 0xFFU; buf_out += 5;
#define ENCODE_FLS(buf_out, fls) \
buf_out[0] = 0x20U; buf_out[1] = 0x1AU; buf_out[2] = (fls >> 24) & 0xFFU; \
buf_out[3] = (fls >> 16) & 0xFFU; buf_out[4] = (fls >> 8) & 0xFFU; \
buf_out[5] = fls & 0xFFU; buf_out += 6;
#define ENCODE_LLS(buf_out, lls) \
buf_out[0] = 0x20U; buf_out[1] = 0x1BU; buf_out[2] = (lls >> 24) & 0xFFU; \
buf_out[3] = (lls >> 16) & 0xFFU; buf_out[4] = (lls >> 8) & 0xFFU; \
buf_out[5] = lls & 0xFFU; buf_out += 6;
#define ENCODE_DROPS(buf_out, drops, amt_type) \
buf_out[0] = 0x60U + amt_type; buf_out[1] = 0x40U + ((drops >> 56) & 0x3FU); \
buf_out[2] = (drops >> 48) & 0xFFU; buf_out[3] = (drops >> 40) & 0xFFU; \
buf_out[4] = (drops >> 32) & 0xFFU; buf_out[5] = (drops >> 24) & 0xFFU; \
buf_out[6] = (drops >> 16) & 0xFFU; buf_out[7] = (drops >> 8) & 0xFFU; \
buf_out[8] = drops & 0xFFU; buf_out += 9;
#define ENCODE_SIGNING_PUBKEY_EMPTY(buf_out) \
buf_out[0] = 0x73U; buf_out[1] = 0x00U; buf_out += 2;
#define ENCODE_ACCOUNT(buf_out, acc, acc_type) \
buf_out[0] = 0x80U + acc_type; buf_out[1] = 0x14U; \
for (int i = 0; i < 20; ++i) buf_out[2+i] = acc[i]; buf_out += 22;
#define PREPARE_PAYMENT_SIMPLE_SIZE 270U
int64_t hook(uint32_t reserved) {
_g(1, 1);
if (otxn_type() != ttPAYMENT)
return accept(0, 0, 0);
ASSERT(xport_reserve(1) == 1);
uint8_t dst[20];
int64_t dst_len = otxn_param(SBUF(dst), "DST", 3);
ASSERT(dst_len == 20);
uint8_t acc[20];
ASSERT(hook_account(SBUF(acc)) == 20);
uint32_t cls = (uint32_t)ledger_seq();
uint8_t tx[PREPARE_PAYMENT_SIMPLE_SIZE];
uint8_t* buf = tx;
ENCODE_TT(buf, ttPAYMENT);
ENCODE_FLAGS(buf, tfCANONICAL);
ENCODE_SEQUENCE(buf, 0);
ENCODE_FLS(buf, cls + 1);
ENCODE_LLS(buf, cls + 5);
// sfTicketSequence = UINT32 field 41 = 0x20 0x29
buf[0] = 0x20U; buf[1] = 0x29U;
buf[2] = 0; buf[3] = 0; buf[4] = 0; buf[5] = 1;
buf += 6;
uint64_t drops = 1000000;
ENCODE_DROPS(buf, drops, amAMOUNT);
ENCODE_DROPS(buf, 10, amFEE);
ENCODE_SIGNING_PUBKEY_EMPTY(buf);
ENCODE_ACCOUNT(buf, acc, atACCOUNT);
ENCODE_ACCOUNT(buf, dst, atDESTINATION);
uint8_t hash[32];
int64_t xport_result = xport(SBUF(hash), (uint32_t)tx, buf - tx);
ASSERT(xport_result == 32);
return accept(0, 0, 0);
}
"""
async def scenario(ctx, log):
# Wait for network to start and amendments to activate
await require_export(ctx, log)
# --- Setup ---
await ctx.fund_accounts({"alice": 10000, "bob": 10000, "carol": 1000})
log("Accounts funded")
alice = ctx.account("alice")
carol = ctx.account("carol")
# Compile and install xport hook on alice
wasm = ctx.compile_hook(XPORT_HOOK_C, label="xport")
await ctx.submit_and_wait(
{
"TransactionType": "SetHook",
"Hooks": [
{
"Hook": {
"CreateCode": wasm.hex().upper(),
"HookOn": "0" * 64,
"HookNamespace": "0" * 64,
"HookApiVersion": 0,
"Flags": 1, # hsfOVERRIDE
}
}
],
"Fee": "100000000",
},
alice.wallet,
)
log(
f"Hook installed on alice ({alice.address[:12]}...) "
f"ledger {ctx.validated_ledger_index(0)}"
)
# --- Trigger ---
# bob pays alice → hook calls xport() → emits ttEXPORT
trigger_result = await ctx.submit_and_wait(
{
"TransactionType": "Payment",
"Destination": alice.address,
"Amount": "100000000",
"Fee": "1000000",
"HookParameters": [dst_param(carol.address)],
},
ctx.account("bob").wallet,
)
trigger_seq = ctx.validated_ledger_index(0)
log(f"Export triggered at ledger {trigger_seq}")
# Assert hook fired with ACCEPT and emitted 1 tx
trigger_meta = trigger_result.get("meta", {})
assert_hook_accepted(trigger_meta, log, expected_emits=1)
# --- Verify: check each ledger close for the Export transaction ---
max_ledgers = 10
for i in range(max_ledgers):
await ctx.wait_for_ledgers(1, node_id=0, timeout=30)
seq = ctx.validated_ledger_index(0)
exports = find_export_txns(ctx, seq)
if exports:
export_tx = exports[0]
meta = export_tx.get("meta", export_tx.get("metaData", {}))
result = meta.get("TransactionResult", "")
log(f"Ledger {seq}: Export txn found, result={result}")
if result != "tesSUCCESS":
raise AssertionError(f"Export did not succeed: {result}")
# Assert ExportResult is well-formed with signers and inner tx
assert_export_result(meta, log, require_signers=True)
# Assert shadow ticket was created
assert_shadow_ticket(ctx, alice.address, log, expect_exists=True)
log("PASS")
return
log(f"Ledger {seq}: no Export txn yet")
raise AssertionError(
f"No Export transaction found after {max_ledgers} ledger closes"
)

View File

@@ -0,0 +1,60 @@
"""Shared helpers for ConsensusEntropy scenario tests."""
from __future__ import annotations
ZERO_DIGEST = "0" * 64
async def require_entropy(ctx, log):
"""Wait for first ledger and assert ConsensusEntropy is enabled."""
await ctx.wait_for_ledger_close(timeout=120)
feature = ctx.feature_check("ConsensusEntropy", node_id=0)
if not feature or not feature.get("enabled", False):
raise AssertionError(f"ConsensusEntropy not enabled: {feature}")
log("ConsensusEntropy enabled")
def get_entropy_tx(ctx, seq):
"""Fetch ledger and return (ce_tx, user_txns) or raise."""
result = ctx.ledger(seq, transactions=True)
if not result:
raise AssertionError(f"Ledger {seq}: fetch failed")
txns = result.get("ledger", {}).get("transactions", [])
ce = [tx for tx in txns if tx.get("TransactionType") == "ConsensusEntropy"]
user = [tx for tx in txns if tx.get("TransactionType") != "ConsensusEntropy"]
if len(ce) != 1:
raise AssertionError(
f"Ledger {seq}: expected 1 ConsensusEntropy txn, got {len(ce)}"
)
return ce[0], user
def entropy_fields(ce_tx):
"""Return (digest, entropy_count, is_zero) from a ConsensusEntropy tx."""
digest = ce_tx.get("Digest", "")
entropy_count = ce_tx.get("EntropyCount", -1)
is_zero = digest == ZERO_DIGEST and entropy_count == 0
return digest, entropy_count, is_zero
def assert_valid_entropy(ce_tx, seq, seen_digests=None):
"""Assert non-zero quorum-met entropy. Optionally check uniqueness."""
digest, entropy_count, is_zero = entropy_fields(ce_tx)
if is_zero or not digest:
raise AssertionError(f"Ledger {seq}: zero/empty Digest")
if entropy_count < 4:
raise AssertionError(
f"Ledger {seq}: EntropyCount={entropy_count} < 4 (sub-quorum)"
)
if seen_digests is not None:
if digest in seen_digests:
raise AssertionError(f"Ledger {seq}: duplicate Digest {digest[:16]}...")
seen_digests.add(digest)
return digest, entropy_count
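The steady-state scenarios listed in the suite below are not shown in this diff, but they presumably drive these helpers in a loop over ledger closes. The following is a minimal usage sketch only, assuming the module is importable as `entropy_helpers` and using the `ctx` methods seen in the export scenarios above; it is not the actual steady_state_entropy.py script.

# Illustrative sketch, not the real scenario script. Assumes the helpers
# above are importable as `entropy_helpers`.
from entropy_helpers import require_entropy, get_entropy_tx, assert_valid_entropy
async def scenario(ctx, log):
    await require_entropy(ctx, log)
    seen = set()
    for _ in range(5):
        # Advance one validated ledger, then check its ConsensusEntropy txn.
        await ctx.wait_for_ledgers(1, node_id=0, timeout=30)
        seq = ctx.validated_ledger_index(0)
        ce_tx, _user = get_entropy_tx(ctx, seq)
        digest, count = assert_valid_entropy(ce_tx, seq, seen_digests=seen)
        log(f"Ledger {seq}: Digest={digest[:16]}... EntropyCount={count}")
    log("PASS")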

View File

@@ -0,0 +1,42 @@
defaults:
network:
node_count: 5
launcher: tmux
slave_delay: 0.2
features:
- ConsensusEntropy
track_features:
- ConsensusEntropy
log_levels:
TxQ: info
Protocol: debug
Peer: debug
LedgerConsensus: debug
NetworkOPs: info
env:
XAHAU_RESOURCE_PER_PORT: "1"
XAHAU_RNG_POLL_MS: "333"
tests:
- name: steady_state_entropy
script: .testnet/scenarios/entropy/steady_state_entropy.py
- name: steady_state_entropy_fast_start
script: .testnet/scenarios/entropy/steady_state_entropy.py
network:
env:
XAHAUD_BOOTSTRAP_FAST_START: "1"
- name: entropy_with_transactions
script: .testnet/scenarios/entropy/entropy_with_transactions.py
- name: quorum_recovery_smoke
script: .testnet/scenarios/entropy/quorum_recovery_smoke.py
- name: quorum_degradation_smoke
script: .testnet/scenarios/entropy/quorum_degradation_smoke.py
network:
log_levels:
LedgerConsensus: trace
# Export scenarios: see export-suite.yml
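The Export quorum scenario earlier in this diff says it is parameterized via an `expect_success` kwarg from suite.yml, and the comment above points at a separate export-suite.yml that is not included here. The entries below are a hypothetical sketch of how the same script might be registered twice with the expectation flipped, following the structure of the entropy suite above; the `kwargs` key and the script path are assumptions, and how the harness actually induces the below-quorum condition is not shown here.

# Hypothetical export-suite.yml entries -- `kwargs` and the script path are
# assumptions, not taken from this diff.
tests:
  - name: export_quorum_success
    script: .testnet/scenarios/export/export_quorum.py
    kwargs:
      expect_success: true
  - name: export_quorum_below_quorum
    script: .testnet/scenarios/export/export_quorum.py
    kwargs:
      expect_success: false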

View File

@@ -26,7 +26,7 @@ Loop: xrpld.app xrpld.nodestore
xrpld.app > xrpld.nodestore
Loop: xrpld.app xrpld.overlay
xrpld.overlay ~= xrpld.app
xrpld.overlay == xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.app > xrpld.peerfinder
@@ -47,7 +47,7 @@ Loop: xrpld.net xrpld.rpc
xrpld.rpc > xrpld.net
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay
xrpld.rpc > xrpld.overlay
Loop: xrpld.perflog xrpld.rpc
xrpld.rpc ~= xrpld.perflog

View File

@@ -43,6 +43,7 @@ test.consensus > xrpld.app
test.consensus > xrpld.consensus
test.consensus > xrpld.core
test.consensus > xrpld.ledger
test.consensus > xrpl.json
test.consensus > xrpl.protocol
test.core > test.jtx
test.core > test.toplevel

View File

@@ -22,6 +22,9 @@ target_compile_definitions (opts
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>
# Enhanced logging is enabled for Debug builds, or explicitly via
# -DBEAST_ENHANCED_LOGGING=ON for other build types.
$<$<OR:$<CONFIG:Debug>,$<BOOL:${BEAST_ENHANCED_LOGGING}>>:BEAST_ENHANCED_LOGGING=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
target_compile_options (opts
INTERFACE

View File

@@ -47,5 +47,8 @@
#define MEM_OVERLAP -43
#define TOO_MANY_STATE_MODIFICATIONS -44
#define TOO_MANY_NAMESPACES -45
#define EXPORT_FAILURE -46
#define TOO_MANY_EXPORTED_TXN -47
#define TOO_LITTLE_ENTROPY -48
#define HOOK_ERROR_CODES
#endif //HOOK_ERROR_CODES

View File

@@ -336,5 +336,24 @@ prepare(
uint32_t read_ptr,
uint32_t read_len);
extern int64_t
xport_reserve(uint32_t count);
extern int64_t
xport(
uint32_t write_ptr,
uint32_t write_len,
uint32_t read_ptr,
uint32_t read_len);
extern int64_t
xport_cancel(uint32_t ticket_seq);
extern int64_t
dice(uint32_t sides);
extern int64_t
random(uint32_t write_ptr, uint32_t write_len);
#define HOOK_EXTERN
#endif // HOOK_EXTERN

View File

@@ -9,6 +9,7 @@
#define sfUNLModifyDisabling ((16U << 16U) + 17U)
#define sfHookResult ((16U << 16U) + 18U)
#define sfWasLockingChainSend ((16U << 16U) + 19U)
#define sfSidecarType ((16U << 16U) + 20U)
#define sfLedgerEntryType ((1U << 16U) + 1U)
#define sfTransactionType ((1U << 16U) + 2U)
#define sfSignerWeight ((1U << 16U) + 3U)
@@ -22,6 +23,8 @@
#define sfHookApiVersion ((1U << 16U) + 20U)
#define sfHookStateScale ((1U << 16U) + 21U)
#define sfLedgerFixType ((1U << 16U) + 22U)
#define sfHookExportCount ((1U << 16U) + 98U)
#define sfEntropyCount ((1U << 16U) + 99U)
#define sfNetworkID ((2U << 16U) + 1U)
#define sfFlags ((2U << 16U) + 2U)
#define sfSourceTag ((2U << 16U) + 3U)
@@ -80,6 +83,7 @@
#define sfRewardTime ((2U << 16U) + 98U)
#define sfRewardLgrFirst ((2U << 16U) + 99U)
#define sfRewardLgrLast ((2U << 16U) + 100U)
#define sfCancelTicketSequence ((2U << 16U) + 101U)
#define sfIndexNext ((3U << 16U) + 1U)
#define sfIndexPrevious ((3U << 16U) + 2U)
#define sfBookNode ((3U << 16U) + 3U)
@@ -159,6 +163,7 @@
#define sfEmittedTxnID ((5U << 16U) + 97U)
#define sfGovernanceMarks ((5U << 16U) + 98U)
#define sfGovernanceFlags ((5U << 16U) + 99U)
#define sfEntropyDigest ((5U << 16U) + 100U)
#define sfNumber ((9U << 16U) + 1U)
#define sfAmount ((6U << 16U) + 1U)
#define sfBalance ((6U << 16U) + 2U)
@@ -220,7 +225,6 @@
#define sfProvider ((7U << 16U) + 30U)
#define sfMPTokenMetadata ((7U << 16U) + 31U)
#define sfCredentialType ((7U << 16U) + 32U)
#define sfJsonTxBody ((7U << 16U) + 33U)
#define sfRemarkValue ((7U << 16U) + 98U)
#define sfRemarkName ((7U << 16U) + 99U)
#define sfAccount ((8U << 16U) + 1U)
@@ -287,6 +291,7 @@
#define sfXChainCreateAccountAttestationCollectionElement ((14U << 16U) + 31U)
#define sfPriceData ((14U << 16U) + 32U)
#define sfCredential ((14U << 16U) + 33U)
#define sfExportedTxn ((14U << 16U) + 90U)
#define sfAmountEntry ((14U << 16U) + 91U)
#define sfMintURIToken ((14U << 16U) + 92U)
#define sfHookEmission ((14U << 16U) + 93U)
@@ -294,6 +299,7 @@
#define sfActiveValidator ((14U << 16U) + 95U)
#define sfGenesisMint ((14U << 16U) + 96U)
#define sfRemark ((14U << 16U) + 97U)
#define sfExportResult ((14U << 16U) + 98U)
#define sfSigners ((15U << 16U) + 3U)
#define sfSignerEntries ((15U << 16U) + 4U)
#define sfTemplate ((15U << 16U) + 5U)

View File

@@ -61,6 +61,7 @@
#define ttNFTOKEN_MODIFY 70
#define ttPERMISSIONED_DOMAIN_SET 71
#define ttPERMISSIONED_DOMAIN_DELETE 72
#define ttEXPORT 91
#define ttCRON 92
#define ttCRON_SET 93
#define ttREMARKS_SET 94
@@ -74,3 +75,4 @@
#define ttUNL_MODIFY 102
#define ttEMIT_FAILURE 103
#define ttUNL_REPORT 104
#define ttCONSENSUS_ENTROPY 105

View File

@@ -15,6 +15,8 @@
#define uint256 std::string
#define featureHooksUpdate1 "1"
#define featureHooksUpdate2 "1"
#define featureExport "1"
#define featureConsensusEntropy "1"
#define fix20250131 "1"
namespace hook_api {
struct Rules
@@ -383,7 +385,10 @@ enum hook_return_code : int64_t {
MEM_OVERLAP = -43, // one or more specified buffers are the same memory
TOO_MANY_STATE_MODIFICATIONS = -44, // more than 5000 modified state
// entries in the combined hook chains
TOO_MANY_NAMESPACES = -45
TOO_MANY_NAMESPACES = -45,
EXPORT_FAILURE = -46,
TOO_MANY_EXPORTED_TXN = -47,
TOO_LITTLE_ENTROPY = -48,
};
enum ExitType : uint8_t {
@@ -397,6 +402,7 @@ const uint16_t max_state_modifications = 256;
const uint8_t max_slots = 255;
const uint8_t max_nonce = 255;
const uint8_t max_emit = 255;
const uint8_t max_export = 2;
const uint8_t max_params = 16;
const double fee_base_multiplier = 1.1f;
@@ -437,10 +443,6 @@ getImportWhitelist(Rules const& rules)
return whitelist;
}
#undef HOOK_API_DEFINITION
#undef I32
#undef I64
enum GuardRulesVersion : uint64_t {
GuardRuleFix20250131 = 0x00000001,
};

View File

@@ -372,3 +372,28 @@ HOOK_API_DEFINITION(
HOOK_API_DEFINITION(
int64_t, prepare, (uint32_t, uint32_t, uint32_t, uint32_t),
featureHooksUpdate2)
// int64_t xport_reserve(uint32_t count);
HOOK_API_DEFINITION(
int64_t, xport_reserve, (uint32_t),
featureExport)
// int64_t xport(uint32_t write_ptr, uint32_t write_len, uint32_t read_ptr, uint32_t read_len);
HOOK_API_DEFINITION(
int64_t, xport, (uint32_t, uint32_t, uint32_t, uint32_t),
featureExport)
// int64_t xport_cancel(uint32_t ticket_seq);
HOOK_API_DEFINITION(
int64_t, xport_cancel, (uint32_t),
featureExport)
// int64_t dice(uint32_t sides);
HOOK_API_DEFINITION(
int64_t, dice, (uint32_t),
featureConsensusEntropy)
// int64_t random(uint32_t write_ptr, uint32_t write_len);
HOOK_API_DEFINITION(
int64_t, random, (uint32_t, uint32_t),
featureConsensusEntropy)

View File

@@ -0,0 +1,2 @@
---
DisableFormat: true

View File

@@ -166,6 +166,14 @@ message TMProposeSet
// Number of hops traveled
optional uint32 hops = 12 [deprecated=true];
// Export signatures for pending exports seen in the proposal set. The
// proposal's ExtendedPosition includes a digest of this repeated field, so
// these side-channel blobs are covered by the proposal signature.
// Each entry is: txnHash (32 bytes) + validator pubkey (33 bytes)
// + multisign signature (variable length). Validators attach these
// so export quorum can be reached within the same consensus round.
repeated bytes exportSignatures = 13;
}
enum TxSetStatus
@@ -384,4 +392,3 @@ message TMHaveTransactions
{
repeated bytes hashes = 1;
}
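The layout comment on exportSignatures fixes the first two components of each entry at known widths, so an entry can be split without any extra framing. A minimal sketch, assuming an entry laid out exactly as described (32-byte transaction hash, 33-byte compressed validator public key, remainder is the multisign signature); this mirrors the comment, not the node's actual parsing code.

def split_export_signature(entry: bytes) -> tuple[bytes, bytes, bytes]:
    """Split one exportSignatures entry into (txn_hash, pubkey, signature).
    Illustrative only: follows the layout described in the TMProposeSet comment."""
    if len(entry) < 32 + 33 + 1:
        raise ValueError(f"entry too short: {len(entry)} bytes")
    txn_hash = entry[:32]    # 32-byte transaction hash
    pubkey = entry[32:65]    # 33-byte compressed validator public key
    signature = entry[65:]   # variable-length multisign signature
    return txn_hash, pubkey, signature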

View File

@@ -0,0 +1,33 @@
#ifndef RIPPLE_PROTOCOL_EXPORT_LIMITS_H_INCLUDED
#define RIPPLE_PROTOCOL_EXPORT_LIMITS_H_INCLUDED
#include <cstdint>
namespace ripple {
// Export system caps.
//
// These limits bound the DoS surface of the export signature system:
// - Each pending export requires every validator to sign it every round
// (sign-once, attach once via TMProposeSet)
// - Inbound signature processing involves crypto verification per sig
// - The open-ledger cap (maxPendingExports) is the root constraint;
// signing throughput and inbound processing are transitively bounded by it
struct ExportLimits
{
// Maximum exports a single hook execution may produce
// (also enforced by hook_api::max_export in Enum.h)
static constexpr std::uint8_t maxExportsPerHook = 2;
// Maximum pending export transactions in an open/apply ledger.
// Hook-emitted export backlog drains into the open ledger at this cap.
// This transitively caps:
// - signatures per TMProposeSet message (1 per pending export)
// - inbound proposal signature processing (clamped to this)
// - validator signing work per round
static constexpr std::uint8_t maxPendingExports = 8;
};
} // namespace ripple
#endif
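The comment above treats maxPendingExports as the root constraint and the other limits as consequences of it. A rough back-of-the-envelope sketch of those transitive bounds follows (illustrative Python with assumed names, not the node's accounting code); it assumes one attached signature per pending export per proposal, clamped to the cap per peer proposal as described above.

# Illustrative only: mirrors the bounds described in the ExportLimits comment.
MAX_EXPORTS_PER_HOOK = 2   # ExportLimits::maxExportsPerHook
MAX_PENDING_EXPORTS = 8    # ExportLimits::maxPendingExports
def per_round_export_signature_bounds(num_peers: int) -> dict:
    """Upper bounds on one validator's per-round export-signature work."""
    return {
        # signatures this validator attaches to its own proposal
        "attached_per_proposal": MAX_PENDING_EXPORTS,
        # inbound signatures to verify, at most the cap per peer proposal
        "verified_from_peers": MAX_PENDING_EXPORTS * num_peers,
    }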

View File

@@ -80,7 +80,7 @@ namespace detail {
// Feature.cpp. Because it's only used to reserve storage, and determine how
// large to make the FeatureBitset, it MAY be larger. It MUST NOT be less than
// the actual number of amendments. A LogicError on startup will verify this.
static constexpr std::size_t numFeatures = 114;
static constexpr std::size_t numFeatures = 115;
/** Amendments that this server supports and the default voting behavior.
Whether they are enabled depends on the Rules defined in the validated

View File

@@ -96,6 +96,9 @@ enum class HashPrefix : std::uint32_t {
/** Credentials signature */
credential = detail::make_hash_prefix('C', 'R', 'D'),
/** consensus extension sidecar object */
sidecar = detail::make_hash_prefix('S', 'C', 'R'),
};
template <class Hasher>

View File

@@ -62,6 +62,9 @@ emittedDir() noexcept;
Keylet
emittedTxn(uint256 const& id) noexcept;
Keylet
shadowTicket(AccountID const& account, std::uint32_t ticketSeq) noexcept;
Keylet
hookDefinition(uint256 const& hash) noexcept;
@@ -118,6 +121,10 @@ negativeUNL() noexcept;
Keylet const&
UNLReport() noexcept;
/** The (fixed) index of the object containing consensus-derived entropy. */
Keylet const&
consensusEntropy() noexcept;
/** The beginning of an order book */
struct book_t
{

View File

@@ -1,81 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
*/
//==============================================================================
#ifndef RIPPLE_PROTOCOL_JSONTX_H_INCLUDED
#define RIPPLE_PROTOCOL_JSONTX_H_INCLUDED
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/Expected.h>
#include <xrpl/basics/Slice.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/STTx.h>
namespace ripple {
namespace jsonTx {
/** Returns true iff the STObject declares a sfJsonTxBody field.
Used as a routing predicate: a transaction with this field present
is claimed to use the "json-tx" signing scheme and must be validated
through jsonTx::checkSignature / checkStructuralEquivalence. An
empty body field still counts as "claimed json-tx" so the empty
case is reported as a clean failure instead of silently falling
back to the classical sig path.
*/
[[nodiscard]] bool
hasBody(STObject const& obj) noexcept;
/** Borrow a Slice over the ASCII body bytes.
Returns an empty slice if sfJsonTxBody is not present. The slice is
valid for as long as the STObject the field belongs to.
*/
[[nodiscard]] Slice
body(STObject const& obj);
/** SHA-512-Half of the body bytes.
This is the deterministic "ASCII signing digest" used by json-tx:
the bytes the client sees as their message are hashed with SHA-512
and truncated to 256 bits, the same digest convention rippled uses
elsewhere. Returns a zero-valued hash if sfJsonTxBody is absent.
*/
[[nodiscard]] uint256
bodyHash(STObject const& obj);
/** Signature check only: verify sfTxnSignature against the raw bytes of
sfJsonTxBody using sfSigningPubKey.
The classical signing payload is NOT used. This is the json-tx
analogue of STTx::checkSingleSign and is intended to be called from
the same code path (e.g. STTx::checkSign).
Precondition: `stx` carries a non-empty sfJsonTxBody.
*/
[[nodiscard]] Expected<void, std::string>
checkSignature(STTx const& stx);
/** Structural-equivalence check: parse sfJsonTxBody as JSON and confirm
it serialises to the same canonical binary as the other structural
fields of `stx` (excluding sfTxnSignature and sfJsonTxBody).
This is a local-check style rule -- it should run alongside
passesLocalChecks, not inside the signature verification path.
Precondition: `stx` carries a non-empty sfJsonTxBody.
*/
[[nodiscard]] Expected<void, std::string>
checkStructuralEquivalence(STTx const& stx);
} // namespace jsonTx
} // namespace ripple
#endif

View File

@@ -0,0 +1,21 @@
#ifndef RIPPLE_PROTOCOL_SIDECAR_TYPE_H_INCLUDED
#define RIPPLE_PROTOCOL_SIDECAR_TYPE_H_INCLUDED
#include <cstdint>
namespace ripple {
/// Discriminator for sidecar set entries (SHAMap leaves used for
/// consensus extension data: RNG commit/reveal, export signatures).
///
/// Stored in sfSidecarType (UINT8) on each STObject entry.
/// Makes sidecar sets self-describing — no content-sniffing needed.
enum SidecarType : std::uint8_t {
sidecarRngCommit = 1,
sidecarRngReveal = 2,
sidecarExportSig = 3,
};
} // namespace ripple
#endif
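Because every sidecar leaf carries its discriminator in sfSidecarType, a consumer can route entries on that one byte instead of inspecting their contents. A minimal dispatch sketch (illustrative Python; the numeric values match the enum above, the handler labels are hypothetical):

# Values from SidecarType above; handler labels are placeholders.
SIDECAR_RNG_COMMIT = 1
SIDECAR_RNG_REVEAL = 2
SIDECAR_EXPORT_SIG = 3
def route_sidecar(entry_type: int) -> str:
    """Route a sidecar entry by its sfSidecarType byte (no content sniffing)."""
    handlers = {
        SIDECAR_RNG_COMMIT: "rng_commit",
        SIDECAR_RNG_REVEAL: "rng_reveal",
        SIDECAR_EXPORT_SIG: "export_sig",
    }
    kind = handlers.get(entry_type)
    if kind is None:
        raise ValueError(f"unknown sidecar type {entry_type}")
    return kind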

View File

@@ -68,6 +68,7 @@ enum TELcodes : TERUnderlyingType {
telNON_LOCAL_EMITTED_TXN,
telIMPORT_VL_KEY_NOT_RECOGNISED,
telCAN_NOT_QUEUE_IMPORT,
telSHADOW_TICKET_REQUIRED,
telENV_RPC_FAILED,
};
@@ -233,8 +234,10 @@ enum TERcodes : TERUnderlyingType {
terQUEUED, // Transaction is being held in TxQ until fee drops
terPRE_TICKET, // Ticket is not yet in ledger but might be on its way
terNO_AMM, // AMM doesn't exist for the asset pair
terNO_HOOK // Transaction requires a non-existent hook definition
terNO_HOOK, // Transaction requires a non-existent hook definition
// (referenced by sfHookHash)
terRETRY_EXPORT // Export does not yet have enough validator signatures.
// Retained in retriable set for next ledger.
};
//------------------------------------------------------------------------------
@@ -362,6 +365,7 @@ enum TECcodes : TERUnderlyingType {
tecARRAY_TOO_LARGE = 197,
tecLOCKED = 198,
tecBAD_CREDENTIALS = 199,
tecEXPORT_EXPIRED = 200,
tecLAST_POSSIBLE_ENTRY = 255,
};

View File

@@ -274,6 +274,13 @@ enum BridgeModifyFlags : uint32_t {
tfClearAccountCreateAmount = 0x00010000,
};
constexpr std::uint32_t tfBridgeModifyMask = ~(tfUniversal | tfClearAccountCreateAmount);
// ConsensusEntropy flags (used on ttCONSENSUS_ENTROPY SHAMap entries):
enum ConsensusEntropyFlags : uint32_t {
tfEntropyCommit = 0x00000001, // entry is a commitment in commitSet
tfEntropyReveal = 0x00000002, // entry is a reveal in entropySet
};
// flag=0 (no tfEntropyCommit/tfEntropyReveal) = final injected pseudo-tx
// clang-format on
} // namespace ripple
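The flags above make each entropy SHAMap entry's role explicit: commits and reveals are marked, and the final injected pseudo-transaction carries neither flag. A small classification sketch under that reading (illustrative Python, not node code):

TF_ENTROPY_COMMIT = 0x00000001
TF_ENTROPY_REVEAL = 0x00000002
def entropy_entry_role(flags: int) -> str:
    """Classify a ConsensusEntropy SHAMap entry by its Flags field."""
    if flags & TF_ENTROPY_COMMIT:
        return "commit"   # commitment entry in commitSet
    if flags & TF_ENTROPY_REVEAL:
        return "reveal"   # reveal entry in entropySet
    return "final"        # flags == 0: final injected pseudo-tx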

View File

@@ -140,6 +140,12 @@ public:
mHookEmissions = hookEmissions;
}
void
setExportResult(STObject const& exportResult)
{
mExportResult = exportResult;
}
bool
hasHookExecutions() const
{
@@ -152,6 +158,12 @@ public:
return static_cast<bool>(mHookEmissions);
}
bool
hasExportResult() const
{
return static_cast<bool>(mExportResult);
}
STAmount
getDeliveredAmount() const
{
@@ -176,6 +188,7 @@ private:
std::optional<STAmount> mDelivered;
std::optional<STArray> mHookExecutions;
std::optional<STArray> mHookEmissions;
std::optional<STObject> mExportResult;
STArray mNodes;
};

View File

@@ -31,7 +31,6 @@
// If you add an amendment here, then do not forget to increment `numFeatures`
// in include/xrpl/protocol/Feature.h.
XRPL_FEATURE(JsonTx, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(HookAPISerializedType240, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionedDomains, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(DynamicNFT, Supported::no, VoteBehavior::DefaultNo)
@@ -57,6 +56,8 @@ XRPL_FEATURE(AMM, Supported::yes, VoteBehavior::DefaultNo
XRPL_FIX (ReducedOffersV1, Supported::yes, VoteBehavior::DefaultYes)
XRPL_FEATURE(HooksUpdate2, Supported::yes, VoteBehavior::DefaultNo);
XRPL_FEATURE(HookOnV2, Supported::yes, VoteBehavior::DefaultNo);
XRPL_FEATURE(Export, Supported::yes, VoteBehavior::DefaultNo);
XRPL_FEATURE(ConsensusEntropy, Supported::yes, VoteBehavior::DefaultNo);
XRPL_FIX (HookAPI20251128, Supported::yes, VoteBehavior::DefaultYes);
XRPL_FIX (CronStacking, Supported::yes, VoteBehavior::DefaultYes);
XRPL_FEATURE(ExtendedHookState, Supported::yes, VoteBehavior::DefaultNo);

View File

@@ -223,6 +223,20 @@ LEDGER_ENTRY(ltURI_TOKEN, 0x0055, URIToken, uri_token, ({
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** The ledger object which stores consensus-derived entropy.
\note This is a singleton: only one such object exists in the ledger.
\sa keylet::consensusEntropy
*/
LEDGER_ENTRY_DUPLICATE(ltCONSENSUS_ENTROPY, 0x0058, ConsensusEntropy, consensus_entropy, ({
{sfDigest, soeREQUIRED},
{sfEntropyCount, soeREQUIRED},
{sfLedgerSequence, soeREQUIRED},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** A ledger object which describes an account.
\sa keylet::account
@@ -590,6 +604,22 @@ LEDGER_ENTRY(ltDID, 0x008D, DID, did, ({
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
//@@start shadow-ticket-ledger-entry
/** A shadow ticket for export replay protection.
Created when a transaction is exported. Consumed when
proof-of-execution is imported back. Account-owned (pays reserve).
\sa keylet::shadowTicket
*/
LEDGER_ENTRY(ltSHADOW_TICKET, 0x5374, ShadowTicket, shadow_ticket, ({
{sfAccount, soeREQUIRED},
{sfTicketSequence, soeREQUIRED},
{sfTransactionHash, soeREQUIRED},
{sfLedgerSequence, soeREQUIRED},
{sfOwnerNode, soeREQUIRED},
}))
//@@end shadow-ticket-ledger-entry
#undef EXPAND
#undef LEDGER_ENTRY_DUPLICATE

View File

@@ -42,6 +42,7 @@ TYPED_SFIELD(sfTickSize, UINT8, 16)
TYPED_SFIELD(sfUNLModifyDisabling, UINT8, 17)
TYPED_SFIELD(sfHookResult, UINT8, 18)
TYPED_SFIELD(sfWasLockingChainSend, UINT8, 19)
TYPED_SFIELD(sfSidecarType, UINT8, 20)
// 16-bit integers (common)
TYPED_SFIELD(sfLedgerEntryType, UINT16, 1, SField::sMD_Never)
@@ -59,6 +60,8 @@ TYPED_SFIELD(sfHookExecutionIndex, UINT16, 19)
TYPED_SFIELD(sfHookApiVersion, UINT16, 20)
TYPED_SFIELD(sfHookStateScale, UINT16, 21)
TYPED_SFIELD(sfLedgerFixType, UINT16, 22)
TYPED_SFIELD(sfHookExportCount, UINT16, 98)
TYPED_SFIELD(sfEntropyCount, UINT16, 99)
// 32-bit integers (common)
TYPED_SFIELD(sfNetworkID, UINT32, 1)
@@ -123,6 +126,7 @@ TYPED_SFIELD(sfImportSequence, UINT32, 97)
TYPED_SFIELD(sfRewardTime, UINT32, 98)
TYPED_SFIELD(sfRewardLgrFirst, UINT32, 99)
TYPED_SFIELD(sfRewardLgrLast, UINT32, 100)
TYPED_SFIELD(sfCancelTicketSequence, UINT32, 101)
// 64-bit integers (common)
TYPED_SFIELD(sfIndexNext, UINT64, 1)
@@ -217,6 +221,7 @@ TYPED_SFIELD(sfHookCanEmit, UINT256, 96)
TYPED_SFIELD(sfEmittedTxnID, UINT256, 97)
TYPED_SFIELD(sfGovernanceMarks, UINT256, 98)
TYPED_SFIELD(sfGovernanceFlags, UINT256, 99)
TYPED_SFIELD(sfEntropyDigest, UINT256, 100)
// number (common)
TYPED_SFIELD(sfNumber, NUMBER, 1)
@@ -292,10 +297,6 @@ TYPED_SFIELD(sfAssetClass, VL, 29)
TYPED_SFIELD(sfProvider, VL, 30)
TYPED_SFIELD(sfMPTokenMetadata, VL, 31)
TYPED_SFIELD(sfCredentialType, VL, 32)
// json-tx: the exact ASCII bytes the client signed; authoritative over
// the classical signing payload when present. Not part of the classical
// signing-payload computation (the bytes ARE the signing payload).
TYPED_SFIELD(sfJsonTxBody, VL, 33, SField::sMD_Default, SField::notSigning)
TYPED_SFIELD(sfRemarkValue, VL, 98)
TYPED_SFIELD(sfRemarkName, VL, 99)
@@ -383,6 +384,7 @@ UNTYPED_SFIELD(sfXChainClaimAttestationCollectionElement, OBJECT, 30)
UNTYPED_SFIELD(sfXChainCreateAccountAttestationCollectionElement, OBJECT, 31)
UNTYPED_SFIELD(sfPriceData, OBJECT, 32)
UNTYPED_SFIELD(sfCredential, OBJECT, 33)
UNTYPED_SFIELD(sfExportedTxn, OBJECT, 90)
UNTYPED_SFIELD(sfAmountEntry, OBJECT, 91)
UNTYPED_SFIELD(sfMintURIToken, OBJECT, 92)
UNTYPED_SFIELD(sfHookEmission, OBJECT, 93)
@@ -390,6 +392,7 @@ UNTYPED_SFIELD(sfImportVLKey, OBJECT, 94)
UNTYPED_SFIELD(sfActiveValidator, OBJECT, 95)
UNTYPED_SFIELD(sfGenesisMint, OBJECT, 96)
UNTYPED_SFIELD(sfRemark, OBJECT, 97)
UNTYPED_SFIELD(sfExportResult, OBJECT, 98)
// array of objects (common)
// ARRAY/1 is reserved for end of array

View File

@@ -500,6 +500,17 @@ TRANSACTION(ttPERMISSIONED_DOMAIN_DELETE, 72, PermissionedDomainDelete, ({
{sfDomainID, soeREQUIRED},
}))
//@@start export-transaction-types
/* User-submittable export: creates a cross-chain transaction for
validator signing. Retries via terRETRY_EXPORT until quorum.
Also supports shadow ticket cancellation via sfCancelTicketSequence.
At least one of sfExportedTxn or sfCancelTicketSequence must be present. */
TRANSACTION(ttEXPORT, 91, Export, ({
{sfExportedTxn, soeOPTIONAL},
{sfCancelTicketSequence, soeOPTIONAL},
}))
//@@end export-transaction-types
/* A pseudo-txn alarm signal for invoking a hook, emitted by validators after alarm set conditions are met */
TRANSACTION(ttCRON, 92, Cron, ({
{sfOwner, soeREQUIRED},
@@ -605,3 +616,10 @@ TRANSACTION(ttUNL_REPORT, 104, UNLReport, ({
{sfActiveValidator, soeOPTIONAL},
{sfImportVLKey, soeOPTIONAL},
}))
TRANSACTION(ttCONSENSUS_ENTROPY, 105, ConsensusEntropy, ({
{sfLedgerSequence, soeREQUIRED},
{sfDigest, soeREQUIRED},
{sfEntropyCount, soeREQUIRED},
{sfBlob, soeOPTIONAL},
}))
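The ttEXPORT template above allows two submission shapes: an ExportedTxn to be signed, or a shadow-ticket cancellation via sfCancelTicketSequence (the test scenarios earlier in this diff only exercise the first). A hypothetical cancel-only submission is sketched below; the JSON field name CancelTicketSequence is assumed from the sfield name, and all values are placeholders rather than anything taken from this diff.

# Hypothetical cancel-only Export submission. Field name assumed from
# sfCancelTicketSequence; account and values are placeholders.
cancel_export_tx = {
    "TransactionType": "Export",
    "Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
    "CancelTicketSequence": 1,  # shadow ticket to cancel
    "Fee": "1000000",
}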

View File

@@ -109,14 +109,22 @@ public:
Consumer
newInboundEndpoint(beast::IP::Endpoint const& address)
{
//@@start rng-local-testnet-resource-bucket
// Inbound connections from the same IP normally share one
// resource bucket (port stripped) for DoS protection. For
// loopback addresses, preserve the port so local testnet nodes
// each get their own bucket instead of all sharing one.
auto const key = is_loopback(address) ? address : address.at_port(0);
//@@end rng-local-testnet-resource-bucket
Entry* entry(nullptr);
{
std::lock_guard _(lock_);
auto [resultIt, resultInserted] = table_.emplace(
std::piecewise_construct,
std::make_tuple(kindInbound, address.at_port(0)), // Key
std::make_tuple(m_clock.now())); // Entry
std::make_tuple(kindInbound, key),
std::make_tuple(m_clock.now()));
entry = &resultIt->second;
entry->key = &resultIt->first;

View File

@@ -1,20 +0,0 @@
[project]
name = "json-tx"
version = "0.0.1"
description = "Prototype: canonical packing of (tx_json_str, signature) using ripple-binary-codec output as an LZ dictionary"
requires-python = ">=3.10"
dependencies = [
"xrpl-py>=4.0.0",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["src/json_tx"]
[tool.uv]
dev-dependencies = [
"pytest>=8.0",
]

View File

@@ -1,24 +0,0 @@
from json_tx import patch # noqa: F401 -- side-effect: register JsonTxCompressed
from json_tx.codec import (
JSON_TX_FIELD,
TAGS,
canonical_json,
compress_stream,
decompress_stream,
pack,
pack_wire,
unpack,
unpack_wire,
)
__all__ = [
"JSON_TX_FIELD",
"TAGS",
"canonical_json",
"compress_stream",
"decompress_stream",
"pack",
"pack_wire",
"unpack",
"unpack_wire",
]

View File

@@ -1,79 +0,0 @@
"""Demo: compare classical binary, raw JSON+sig, and JsonTxCompressed wire."""
from __future__ import annotations
import json
from xrpl.core.binarycodec import encode, encode_for_signing
from xrpl.core.keypairs import (
derive_classic_address,
derive_keypair,
generate_seed,
sign,
)
from json_tx import canonical_json, compress_stream, pack_wire, unpack_wire
SAMPLE_TX = {
"TransactionType": "Payment",
"Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"Destination": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
"Amount": "1000000",
"Fee": "12",
"Sequence": 1,
"Flags": 2147483648,
"SigningPubKey": "",
}
def _report(label: str, tx: dict, tx_json_str: str, priv: str) -> None:
signature = bytes.fromhex(sign(tx_json_str.encode().hex(), priv))
# 1. Classical signed wire: full binary with TxnSignature (what xrpl does today).
classical_signing = bytes.fromhex(encode_for_signing(tx))
classical_signed_dict = dict(tx)
classical_signed_dict["TxnSignature"] = signature.hex().upper()
classical_wire = bytes.fromhex(encode(classical_signed_dict))
# 2. Naive JSON submission: tx_json_str + signature (what json-tx wants to replace).
raw_json_plus_sig = len(tx_json_str) + len(signature)
# 3. json-tx wire: classical binary (ex TxnSignature) + JsonTxCompressed + TxnSignature.
stream = compress_stream(tx_json_str, tx_json=tx)
jsontx_wire = pack_wire(tx_json_str, signature)
print(f"\n== {label} ==")
print(f" tx_json_str : {len(tx_json_str):5d} bytes")
print(f" signature : {len(signature):5d} bytes")
print(f" classical binary (signing payload): {len(classical_signing):5d} bytes")
print(f" classical wire (binary + sig) : {len(classical_wire):5d} bytes <- today")
print(f" raw JSON + sig (bytes sent) : {raw_json_plus_sig:5d} bytes <- naive json submit")
print(f" JsonTxCompressed stream alone : {len(stream):5d} bytes [mode=0x{stream[0]:02x}]")
print(f" json-tx wire (classical + stream) : {len(jsontx_wire):5d} bytes <- proposed")
delta_vs_classical = len(jsontx_wire) - len(classical_wire)
print(f" overhead vs classical wire : {delta_vs_classical:+5d} bytes")
delta_vs_raw = len(jsontx_wire) - raw_json_plus_sig
print(f" overhead vs raw JSON+sig : {delta_vs_raw:+5d} bytes")
recovered_tx, recovered_str, recovered_sig = unpack_wire(jsontx_wire)
assert recovered_str == tx_json_str
assert recovered_sig == signature
assert recovered_tx == tx
def main() -> None:
seed = generate_seed()
pub, priv = derive_keypair(seed)
tx = dict(SAMPLE_TX)
tx["Account"] = derive_classic_address(pub)
tx["SigningPubKey"] = pub
_report("canonical tx_json_str (ordinal order, no whitespace)",
tx, canonical_json(tx), priv)
_report("non-canonical tx_json_str (insertion order + spaces)",
tx, json.dumps(tx, separators=(", ", ": ")), priv)
if __name__ == "__main__":
main()

View File

@@ -1,384 +0,0 @@
"""
json-tx: field-aware packer for (tx_json_str, signature).
The ripple binary codec already decomposes a transaction into ordered
(field_name, canonical_bytes) pairs. We reuse that as the dictionary.
Opcode stream:
OP_FIELD i -> render field i exactly as it appears in tx_json_str
OP_TAG t -> emit a glue snippet from TAGS (',', ':', '"', ...)
OP_RAW n <bytes> -> n bytes of literal passthrough
OP_END -> terminator
The stream is what gets stored in the `JsonTxCompressed` Blob field on
the wire. The TxnSignature field still carries the ed25519/secp256k1
signature, but that signature is now over the ASCII `tx_json_str`, not
the classical signing payload.
"""
from __future__ import annotations
import json
from dataclasses import dataclass
from typing import Any
from xrpl.core.binarycodec.definitions.field_instance import FieldInstance
from xrpl.core.binarycodec.types.st_object import STObject
# `patch` registers the JsonTxCompressed field. Imported for its side effect.
from json_tx import patch as _patch # noqa: F401
OP_FIELD = 0x01 # emit `"<name>":<canonical-value>` (tight pair)
OP_TAG = 0x02 # emit a structural glue byte
OP_RAW = 0x03 # length-prefixed literal bytes
OP_NAME = 0x04 # emit `"<name>"` for field i
OP_VALUE = 0x05 # emit canonical rendering of field i's value
OP_END = 0x00
# Mode byte at the head of every stream.
MODE_CANONICAL = 0x00 # body is an OP_* stream; tx_json_str reconstructs via dictionary
MODE_VERBATIM = 0x01 # body is raw UTF-8 tx_json_str, length-prefixed
JSON_TX_FIELD = _patch.FIELD_NAME
# Structural glue. INVARIANT: no tag may end in `"` -- otherwise it would
# eat the leading `"` of a field NAME/FIELD rendering and prevent re-alignment.
# Order: longest first so the greedy matcher picks the most specific glue.
TAGS: list[bytes] = [
# comma-based field separators
b",\n ",
b",\n\t",
b",\n ",
b",\n ",
b",\n",
b", ",
# colon-based name:value separators
b": ",
b": ",
# leading indent after `{`
b"\n ",
b"\n\t",
b"\n ",
b"\n ",
# single chars
b"{",
b"}",
b"[",
b"]",
b",",
b":",
b'"',
b" ",
b"\n",
b"\t",
]
# ---------- varint (unsigned LEB128) ----------
def _vw(n: int) -> bytes:
if n < 0:
raise ValueError("varint must be non-negative")
out = bytearray()
while True:
b = n & 0x7F
n >>= 7
if n:
out.append(b | 0x80)
else:
out.append(b)
return bytes(out)
def _vr(buf: bytes, i: int) -> tuple[int, int]:
n = 0
shift = 0
while True:
b = buf[i]
i += 1
n |= (b & 0x7F) << shift
if not (b & 0x80):
return n, i
shift += 7
# ---------- canonical field extraction ----------
@dataclass
class CanonField:
name: str
instance: FieldInstance
canonical_bytes: bytes
value: Any
def _ordered_fields_from_dict(tx_json: dict, *, skip: set[str]) -> list[CanonField]:
"""Serialize tx_json once through STObject, then re-parse to slice each field."""
from xrpl.core.binarycodec.binary_wrappers.binary_parser import BinaryParser
from xrpl.core.binarycodec.definitions import definitions
tx_for_enc = {k: v for k, v in tx_json.items() if k not in skip}
st = STObject.from_value(tx_for_enc)
blob = bytes(st)
parser = BinaryParser(blob.hex())
total = len(parser)
fields: list[CanonField] = []
while not parser.is_end():
start = total - len(parser)
fi = parser.read_field()
parser.read_field_value(fi)
end = total - len(parser)
fields.append(
CanonField(
name=fi.name,
instance=definitions.get_field_instance(fi.name),
canonical_bytes=blob[start:end],
value=tx_json[fi.name],
)
)
return fields
def _render_name(name: str) -> bytes:
"""Render `"Name"` including the enclosing double-quotes."""
return json.dumps(name, separators=(",", ":")).encode()
def _render_value(value: Any) -> bytes:
"""Render the canonical JSON form of a value."""
return json.dumps(value, separators=(",", ":")).encode()
def _render_field_json(name: str, value: Any) -> bytes:
"""Render one tight `"Name":<value>` pair -- no whitespace."""
return _render_name(name) + b":" + _render_value(value)
def canonical_json(tx_json: dict) -> str:
"""Serialize tx_json with fields in canonical (ordinal) order.
The signed ASCII JSON must match this ordering so the opcode stream
can walk fields in lock-step with the binary dictionary. Any field the
codec does not recognize falls to the tail in insertion order.
"""
from xrpl.core.binarycodec.definitions import definitions
known, unknown = [], []
for k, v in tx_json.items():
try:
fi = definitions.get_field_instance(k)
known.append((fi.ordinal, k, v))
except Exception:
unknown.append((k, v))
known.sort(key=lambda x: x[0])
ordered = [(k, v) for _o, k, v in known] + unknown
body = ",".join(
f"{json.dumps(k, separators=(',', ':'))}:"
f"{json.dumps(v, separators=(',', ':'))}"
for k, v in ordered
)
return "{" + body + "}"
# ---------- stream codec (opcode layer only) ----------
def compress_stream(tx_json_str: str, *, tx_json: dict | None = None) -> bytes:
"""Encode tx_json_str using tx_json's fields as dictionary.
Opcodes (after the mode byte):
OP_FIELD i -- `"Name":<canonical-value>` tight pair (no whitespace)
OP_NAME i -- `"Name"` alone (enclosing quotes included)
OP_VALUE i -- canonical rendering of field i's value
OP_TAG t -- structural glue from TAGS
OP_RAW n.. -- literal passthrough
The matcher at each cursor position tries, longest-match first:
1. OP_FIELD against any unused field pair
2. OP_NAME against any unused field name
3. OP_VALUE against any unused field value
4. OP_TAG
5. OP_RAW (one byte, coalesced)
If the resulting OP stream is not shorter than a verbatim copy, we
emit MODE_VERBATIM instead.
"""
if tx_json is None:
tx_json = json.loads(tx_json_str)
src = tx_json_str.encode()
fields = _ordered_fields_from_dict(
tx_json, skip={"TxnSignature", JSON_TX_FIELD}
)
name_render = [_render_name(f.name) for f in fields]
value_render = [_render_value(f.value) for f in fields]
pair_render = [name_render[i] + b":" + value_render[i] for i in range(len(fields))]
unused_pair = set(range(len(fields)))
unused_name = set(range(len(fields)))
unused_value = set(range(len(fields)))
body = bytearray()
raw_buf = bytearray()
def flush_raw() -> None:
if raw_buf:
body.append(OP_RAW)
body.extend(_vw(len(raw_buf)))
body.extend(raw_buf)
raw_buf.clear()
def best_match(candidates: set[int], renders: list[bytes], at: int) -> tuple[int, int]:
best_idx, best_len = -1, 0
for idx in candidates:
r = renders[idx]
if len(r) > best_len and src.startswith(r, at):
best_idx, best_len = idx, len(r)
return best_idx, best_len
i = 0
while i < len(src):
# Try the tight FIELD match first -- cheapest per byte of output.
idx, hit = best_match(unused_pair, pair_render, i)
if idx >= 0:
flush_raw()
body.append(OP_FIELD)
body.extend(_vw(idx))
i += hit
unused_pair.discard(idx)
unused_name.discard(idx)
unused_value.discard(idx)
continue
# Then NAME (standalone), preferring longer names over shorter ones.
idx, hit = best_match(unused_name, name_render, i)
if idx >= 0:
flush_raw()
body.append(OP_NAME)
body.extend(_vw(idx))
i += hit
unused_name.discard(idx)
unused_pair.discard(idx) # no longer a "pair" candidate
continue
# Then VALUE (standalone).
idx, hit = best_match(unused_value, value_render, i)
if idx >= 0:
flush_raw()
body.append(OP_VALUE)
body.extend(_vw(idx))
i += hit
unused_value.discard(idx)
unused_pair.discard(idx)
continue
# Structural glue.
tag_hit = -1
for t_idx, tag in enumerate(TAGS):
if src.startswith(tag, i):
tag_hit = t_idx
break
if tag_hit >= 0:
flush_raw()
body.append(OP_TAG)
body.extend(_vw(tag_hit))
i += len(TAGS[tag_hit])
continue
raw_buf.append(src[i])
i += 1
flush_raw()
body.append(OP_END)
op_form = bytes([MODE_CANONICAL]) + bytes(body)
verbatim = bytes([MODE_VERBATIM]) + _vw(len(src)) + src
return op_form if len(op_form) <= len(verbatim) else verbatim
def decompress_stream(stream: bytes, tx_json_for_dict: dict) -> str:
"""Rebuild tx_json_str from a json-tx stream + a parsed tx dict (dictionary source)."""
mode = stream[0]
i = 1
if mode == MODE_VERBATIM:
ln, i = _vr(stream, i)
return stream[i : i + ln].decode()
if mode != MODE_CANONICAL:
raise ValueError(f"unknown json-tx stream mode 0x{mode:02x}")
fields = _ordered_fields_from_dict(
tx_json_for_dict,
skip={"TxnSignature", JSON_TX_FIELD},
)
name_render = [_render_name(f.name) for f in fields]
value_render = [_render_value(f.value) for f in fields]
pair_render = [name_render[i] + b":" + value_render[i] for i in range(len(fields))]
out = bytearray()
while i < len(stream):
op = stream[i]
i += 1
if op == OP_END:
break
if op == OP_FIELD:
idx, i = _vr(stream, i)
out += pair_render[idx]
elif op == OP_NAME:
idx, i = _vr(stream, i)
out += name_render[idx]
elif op == OP_VALUE:
idx, i = _vr(stream, i)
out += value_render[idx]
elif op == OP_TAG:
idx, i = _vr(stream, i)
out += TAGS[idx]
elif op == OP_RAW:
ln, i = _vr(stream, i)
out += stream[i : i + ln]
i += ln
else:
raise ValueError(f"unknown opcode 0x{op:02x} at offset {i - 1}")
return out.decode()
# ---------- wire-tx pack/unpack (full binary with JsonTxCompressed) ----------
def pack_wire(tx_json_str: str, signature: bytes) -> bytes:
"""Build the on-wire binary tx: canonical binary + JsonTxCompressed + TxnSignature.
`tx_json_str` is the exact ASCII bytes the client signed -- any field
order / whitespace. `signature` is the raw signature over those bytes.
"""
from xrpl.core.binarycodec.main import encode
tx_json = json.loads(tx_json_str)
stream = compress_stream(tx_json_str, tx_json=tx_json)
wire_dict = dict(tx_json)
wire_dict[JSON_TX_FIELD] = stream.hex().upper()
wire_dict["TxnSignature"] = signature.hex().upper()
return bytes.fromhex(encode(wire_dict))
def unpack_wire(wire: bytes) -> tuple[dict, str, bytes]:
"""Decode the wire tx back into (tx_json_dict, tx_json_str, signature)."""
from xrpl.core.binarycodec.main import decode
decoded = decode(wire.hex().upper())
stream_hex = decoded.pop(JSON_TX_FIELD)
sig_hex = decoded.pop("TxnSignature")
# The dictionary is the other fields of the tx, i.e. the decoded dict
# minus the scaffolding keys (already removed above).
tx_json_str = decompress_stream(bytes.fromhex(stream_hex), decoded)
return json.loads(tx_json_str), tx_json_str, bytes.fromhex(sig_hex)
# ---------- convenience ----------
def pack(tx_json: dict, signature: bytes) -> bytes:
    """Alias for pack_wire for the common case: render tx_json canonically first."""
    return pack_wire(canonical_json(tx_json), signature)
def unpack(wire: bytes) -> tuple[dict, bytes]:
tx_json, _, sig = unpack_wire(wire)
return tx_json, sig

View File

@@ -1,50 +0,0 @@
"""Runtime monkey-patch: register a `JsonTxCompressed` Blob field.
Import this module (or call `register_json_tx_field()`) before using the
binary codec so that a transaction dict containing `JsonTxCompressed`
will serialize it as a Blob and parse it back out.
The field is intentionally `isSigningField=False` — the ASCII JSON is
what TxnSignature signs, not the binary form, so this field must not
participate in any classical signing payload.
"""
from __future__ import annotations
from xrpl.core.binarycodec.definitions import definitions as _d
from xrpl.core.binarycodec.definitions.field_header import FieldHeader
from xrpl.core.binarycodec.definitions.field_info import FieldInfo
FIELD_NAME = "JsonTxCompressed"
_TYPE_NAME = "Blob"
def _pick_free_nth_for_type(type_name: str) -> int:
"""Find an unused `nth` code within the given type so there's no header clash."""
type_code = _d._TYPE_ORDINAL_MAP[type_name]
taken = {
h.field_code for h in _d._FIELD_HEADER_NAME_MAP if h.type_code == type_code
}
for n in range(1, 255):
if n not in taken:
return n
raise RuntimeError(f"no free nth code for type {type_name}")
def register_json_tx_field() -> None:
if FIELD_NAME in _d._FIELD_INFO_MAP:
return
nth = _pick_free_nth_for_type(_TYPE_NAME)
info = FieldInfo(
nth=nth,
is_variable_length_encoded=True,
is_serialized=True,
is_signing_field=False,
type_name=_TYPE_NAME,
)
header = FieldHeader(_d._TYPE_ORDINAL_MAP[_TYPE_NAME], nth)
_d._FIELD_INFO_MAP[FIELD_NAME] = info
_d._FIELD_HEADER_NAME_MAP[header] = FIELD_NAME
register_json_tx_field()

View File

@@ -1,63 +0,0 @@
import json
from xrpl.core.keypairs import derive_classic_address, derive_keypair, generate_seed, sign
from json_tx import compress_stream, decompress_stream, pack_wire, unpack_wire
def _signed(tx: dict) -> tuple[dict, str, bytes]:
seed = generate_seed()
pub, priv = derive_keypair(seed)
tx = dict(tx)
tx["Account"] = derive_classic_address(pub)
tx["SigningPubKey"] = pub
from json_tx import canonical_json
tx_json_str = canonical_json(tx)
sig = bytes.fromhex(sign(tx_json_str.encode().hex(), priv))
return tx, tx_json_str, sig
def test_wire_roundtrip():
tx, tx_json_str, sig = _signed({
"TransactionType": "Payment",
"Destination": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
"Amount": "1000000",
"Fee": "12",
"Sequence": 1,
"Flags": 2147483648,
})
wire = pack_wire(tx_json_str, sig)
recovered_tx, recovered_str, recovered_sig = unpack_wire(wire)
assert recovered_str == tx_json_str
assert recovered_sig == sig
assert recovered_tx == tx
def test_stream_roundtrip_direct():
tx, tx_json_str, _ = _signed({
"TransactionType": "Payment",
"Destination": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
"Amount": "2500000",
"Fee": "15",
"Sequence": 42,
"Flags": 0,
})
stream = compress_stream(tx_json_str, tx_json=tx)
rebuilt = decompress_stream(stream, tx)
assert rebuilt == tx_json_str
def test_raw_fallback_preserved():
# Unusual whitespace -> RAW opcodes. Dictionary still reconstructs losslessly.
tx = {
"TransactionType": "Payment",
"Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
"Destination": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
"Amount": "1",
"Fee": "12",
"Sequence": 1,
"SigningPubKey": "",
}
odd = '{ "TransactionType" : "Payment" }'
stream = compress_stream(odd, tx_json=tx)
assert decompress_stream(stream, tx) == odd

json-tx-py/uv.lock generated
View File

@@ -1,476 +0,0 @@
version = 1
revision = 3
requires-python = ">=3.10"
[[package]]
name = "anyio"
version = "4.13.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "idna" },
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/19/14/2c5dd9f512b66549ae92767a9c7b330ae88e1932ca57876909410251fe13/anyio-4.13.0.tar.gz", hash = "sha256:334b70e641fd2221c1505b3890c69882fe4a2df910cba14d97019b90b24439dc", size = 231622, upload-time = "2026-03-24T12:59:09.671Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/da/42/e921fccf5015463e32a3cf6ee7f980a6ed0f395ceeaa45060b61d86486c2/anyio-4.13.0-py3-none-any.whl", hash = "sha256:08b310f9e24a9594186fd75b4f73f4a4152069e3853f1ed8bfbf58369f4ad708", size = 114353, upload-time = "2026-03-24T12:59:08.246Z" },
]
[[package]]
name = "base58"
version = "2.1.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/7f/45/8ae61209bb9015f516102fa559a2914178da1d5868428bd86a1b4421141d/base58-2.1.1.tar.gz", hash = "sha256:c5d0cb3f5b6e81e8e35da5754388ddcc6d0d14b6c6a132cb93d69ed580a7278c", size = 6528, upload-time = "2021-10-30T22:12:17.858Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4a/45/ec96b29162a402fc4c1c5512d114d7b3787b9d1c2ec241d9568b4816ee23/base58-2.1.1-py3-none-any.whl", hash = "sha256:11a36f4d3ce51dfc1043f3218591ac4eb1ceb172919cebe05b52a5bcc8d245c2", size = 5621, upload-time = "2021-10-30T22:12:16.658Z" },
]
[[package]]
name = "certifi"
version = "2026.4.22"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/25/ee/6caf7a40c36a1220410afe15a1cc64993a1f864871f698c0f93acb72842a/certifi-2026.4.22.tar.gz", hash = "sha256:8d455352a37b71bf76a79caa83a3d6c25afee4a385d632127b6afb3963f1c580", size = 137077, upload-time = "2026-04-22T11:26:11.191Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/22/30/7cd8fdcdfbc5b869528b079bfb76dcdf6056b1a2097a662e5e8c04f42965/certifi-2026.4.22-py3-none-any.whl", hash = "sha256:3cb2210c8f88ba2318d29b0388d1023c8492ff72ecdde4ebdaddbb13a31b1c4a", size = 135707, upload-time = "2026-04-22T11:26:09.372Z" },
]
[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]
[[package]]
name = "deprecated"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "wrapt" },
]
sdist = { url = "https://files.pythonhosted.org/packages/49/85/12f0a49a7c4ffb70572b6c2ef13c90c88fd190debda93b23f026b25f9634/deprecated-1.3.1.tar.gz", hash = "sha256:b1b50e0ff0c1fddaa5708a2c6b0a6588bb09b892825ab2b214ac9ea9d92a5223", size = 2932523, upload-time = "2025-10-30T08:19:02.757Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/84/d0/205d54408c08b13550c733c4b85429e7ead111c7f0014309637425520a9a/deprecated-1.3.1-py2.py3-none-any.whl", hash = "sha256:597bfef186b6f60181535a29fbe44865ce137a5079f295b479886c82729d5f3f", size = 11298, upload-time = "2025-10-30T08:19:00.758Z" },
]
[[package]]
name = "ecpy"
version = "1.2.5"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e0/48/3f8c1a252e3a46fd04e6fabc5e11c933b9c39cf84edd4e7c906e29c23750/ECPy-1.2.5.tar.gz", hash = "sha256:9635cffb9b6ecf7fd7f72aea1665829ac74a1d272006d0057d45a621aae20228", size = 38458, upload-time = "2020-10-26T11:56:16.313Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e8/35/4a113189f7138035a21bd255d30dc7bffc77c942c93b7948d2eac2e22429/ECPy-1.2.5-py3-none-any.whl", hash = "sha256:559c92e42406d9d1a6b2b8fc26e6ad7bc985f33903b72f426a56cb1073a25ce3", size = 43075, upload-time = "2020-10-26T11:56:13.613Z" },
]
[[package]]
name = "exceptiongroup"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/50/79/66800aadf48771f6b62f7eb014e352e5d06856655206165d775e675a02c9/exceptiongroup-1.3.1.tar.gz", hash = "sha256:8b412432c6055b0b7d14c310000ae93352ed6754f70fa8f7c34141f91c4e3219", size = 30371, upload-time = "2025-11-21T23:01:54.787Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8a/0e/97c33bf5009bdbac74fd2beace167cab3f978feb69cc36f1ef79360d6c4e/exceptiongroup-1.3.1-py3-none-any.whl", hash = "sha256:a7a39a3bd276781e98394987d3a5701d0c4edffb633bb7a5144577f82c773598", size = 16740, upload-time = "2025-11-21T23:01:53.443Z" },
]
[[package]]
name = "h11"
version = "0.16.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250, upload-time = "2025-04-24T03:35:25.427Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" },
]
[[package]]
name = "httpcore"
version = "1.0.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "h11" },
]
sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484, upload-time = "2025-04-24T22:06:22.219Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784, upload-time = "2025-04-24T22:06:20.566Z" },
]
[[package]]
name = "httpx"
version = "0.28.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "certifi" },
{ name = "httpcore" },
{ name = "idna" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload-time = "2024-12-06T15:37:23.222Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" },
]
[[package]]
name = "idna"
version = "3.13"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ce/cc/762dfb036166873f0059f3b7de4565e1b5bc3d6f28a414c13da27e442f99/idna-3.13.tar.gz", hash = "sha256:585ea8fe5d69b9181ec1afba340451fba6ba764af97026f92a91d4eef164a242", size = 194210, upload-time = "2026-04-22T16:42:42.314Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/5d/13/ad7d7ca3808a898b4612b6fe93cde56b53f3034dcde235acb1f0e1df24c6/idna-3.13-py3-none-any.whl", hash = "sha256:892ea0cde124a99ce773decba204c5552b69c3c67ffd5f232eb7696135bc8bb3", size = 68629, upload-time = "2026-04-22T16:42:40.909Z" },
]
[[package]]
name = "iniconfig"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
]
[[package]]
name = "json-tx"
version = "0.0.1"
source = { editable = "." }
dependencies = [
{ name = "xrpl-py" },
]
[package.dev-dependencies]
dev = [
{ name = "pytest" },
]
[package.metadata]
requires-dist = [{ name = "xrpl-py", specifier = ">=4.0.0" }]
[package.metadata.requires-dev]
dev = [{ name = "pytest", specifier = ">=8.0" }]
[[package]]
name = "packaging"
version = "26.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/df/de/0d2b39fb4af88a0258f3bac87dfcbb48e73fbdea4a2ed0e2213f9a4c2f9a/packaging-26.1.tar.gz", hash = "sha256:f042152b681c4bfac5cae2742a55e103d27ab2ec0f3d88037136b6bfe7c9c5de", size = 215519, upload-time = "2026-04-14T21:12:49.362Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7a/c2/920ef838e2f0028c8262f16101ec09ebd5969864e5a64c4c05fad0617c56/packaging-26.1-py3-none-any.whl", hash = "sha256:5d9c0669c6285e491e0ced2eee587eaf67b670d94a19e94e3984a481aba6802f", size = 95831, upload-time = "2026-04-14T21:12:47.56Z" },
]
[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]
[[package]]
name = "pycryptodome"
version = "3.23.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/8e/a6/8452177684d5e906854776276ddd34eca30d1b1e15aa1ee9cefc289a33f5/pycryptodome-3.23.0.tar.gz", hash = "sha256:447700a657182d60338bab09fdb27518f8856aecd80ae4c6bdddb67ff5da44ef", size = 4921276, upload-time = "2025-05-17T17:21:45.242Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/04/5d/bdb09489b63cd34a976cc9e2a8d938114f7a53a74d3dd4f125ffa49dce82/pycryptodome-3.23.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:0011f7f00cdb74879142011f95133274741778abba114ceca229adbf8e62c3e4", size = 2495152, upload-time = "2025-05-17T17:20:20.833Z" },
{ url = "https://files.pythonhosted.org/packages/a7/ce/7840250ed4cc0039c433cd41715536f926d6e86ce84e904068eb3244b6a6/pycryptodome-3.23.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:90460fc9e088ce095f9ee8356722d4f10f86e5be06e2354230a9880b9c549aae", size = 1639348, upload-time = "2025-05-17T17:20:23.171Z" },
{ url = "https://files.pythonhosted.org/packages/ee/f0/991da24c55c1f688d6a3b5a11940567353f74590734ee4a64294834ae472/pycryptodome-3.23.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4764e64b269fc83b00f682c47443c2e6e85b18273712b98aa43bcb77f8570477", size = 2184033, upload-time = "2025-05-17T17:20:25.424Z" },
{ url = "https://files.pythonhosted.org/packages/54/16/0e11882deddf00f68b68dd4e8e442ddc30641f31afeb2bc25588124ac8de/pycryptodome-3.23.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb8f24adb74984aa0e5d07a2368ad95276cf38051fe2dc6605cbcf482e04f2a7", size = 2270142, upload-time = "2025-05-17T17:20:27.808Z" },
{ url = "https://files.pythonhosted.org/packages/d5/fc/4347fea23a3f95ffb931f383ff28b3f7b1fe868739182cb76718c0da86a1/pycryptodome-3.23.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d97618c9c6684a97ef7637ba43bdf6663a2e2e77efe0f863cce97a76af396446", size = 2309384, upload-time = "2025-05-17T17:20:30.765Z" },
{ url = "https://files.pythonhosted.org/packages/6e/d9/c5261780b69ce66d8cfab25d2797bd6e82ba0241804694cd48be41add5eb/pycryptodome-3.23.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:9a53a4fe5cb075075d515797d6ce2f56772ea7e6a1e5e4b96cf78a14bac3d265", size = 2183237, upload-time = "2025-05-17T17:20:33.736Z" },
{ url = "https://files.pythonhosted.org/packages/5a/6f/3af2ffedd5cfa08c631f89452c6648c4d779e7772dfc388c77c920ca6bbf/pycryptodome-3.23.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:763d1d74f56f031788e5d307029caef067febf890cd1f8bf61183ae142f1a77b", size = 2343898, upload-time = "2025-05-17T17:20:36.086Z" },
{ url = "https://files.pythonhosted.org/packages/9a/dc/9060d807039ee5de6e2f260f72f3d70ac213993a804f5e67e0a73a56dd2f/pycryptodome-3.23.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:954af0e2bd7cea83ce72243b14e4fb518b18f0c1649b576d114973e2073b273d", size = 2269197, upload-time = "2025-05-17T17:20:38.414Z" },
{ url = "https://files.pythonhosted.org/packages/f9/34/e6c8ca177cb29dcc4967fef73f5de445912f93bd0343c9c33c8e5bf8cde8/pycryptodome-3.23.0-cp313-cp313t-win32.whl", hash = "sha256:257bb3572c63ad8ba40b89f6fc9d63a2a628e9f9708d31ee26560925ebe0210a", size = 1768600, upload-time = "2025-05-17T17:20:40.688Z" },
{ url = "https://files.pythonhosted.org/packages/e4/1d/89756b8d7ff623ad0160f4539da571d1f594d21ee6d68be130a6eccb39a4/pycryptodome-3.23.0-cp313-cp313t-win_amd64.whl", hash = "sha256:6501790c5b62a29fcb227bd6b62012181d886a767ce9ed03b303d1f22eb5c625", size = 1799740, upload-time = "2025-05-17T17:20:42.413Z" },
{ url = "https://files.pythonhosted.org/packages/5d/61/35a64f0feaea9fd07f0d91209e7be91726eb48c0f1bfc6720647194071e4/pycryptodome-3.23.0-cp313-cp313t-win_arm64.whl", hash = "sha256:9a77627a330ab23ca43b48b130e202582e91cc69619947840ea4d2d1be21eb39", size = 1703685, upload-time = "2025-05-17T17:20:44.388Z" },
{ url = "https://files.pythonhosted.org/packages/db/6c/a1f71542c969912bb0e106f64f60a56cc1f0fabecf9396f45accbe63fa68/pycryptodome-3.23.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:187058ab80b3281b1de11c2e6842a357a1f71b42cb1e15bce373f3d238135c27", size = 2495627, upload-time = "2025-05-17T17:20:47.139Z" },
{ url = "https://files.pythonhosted.org/packages/6e/4e/a066527e079fc5002390c8acdd3aca431e6ea0a50ffd7201551175b47323/pycryptodome-3.23.0-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:cfb5cd445280c5b0a4e6187a7ce8de5a07b5f3f897f235caa11f1f435f182843", size = 1640362, upload-time = "2025-05-17T17:20:50.392Z" },
{ url = "https://files.pythonhosted.org/packages/50/52/adaf4c8c100a8c49d2bd058e5b551f73dfd8cb89eb4911e25a0c469b6b4e/pycryptodome-3.23.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67bd81fcbe34f43ad9422ee8fd4843c8e7198dd88dd3d40e6de42ee65fbe1490", size = 2182625, upload-time = "2025-05-17T17:20:52.866Z" },
{ url = "https://files.pythonhosted.org/packages/5f/e9/a09476d436d0ff1402ac3867d933c61805ec2326c6ea557aeeac3825604e/pycryptodome-3.23.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c8987bd3307a39bc03df5c8e0e3d8be0c4c3518b7f044b0f4c15d1aa78f52575", size = 2268954, upload-time = "2025-05-17T17:20:55.027Z" },
{ url = "https://files.pythonhosted.org/packages/f9/c5/ffe6474e0c551d54cab931918127c46d70cab8f114e0c2b5a3c071c2f484/pycryptodome-3.23.0-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aa0698f65e5b570426fc31b8162ed4603b0c2841cbb9088e2b01641e3065915b", size = 2308534, upload-time = "2025-05-17T17:20:57.279Z" },
{ url = "https://files.pythonhosted.org/packages/18/28/e199677fc15ecf43010f2463fde4c1a53015d1fe95fb03bca2890836603a/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:53ecbafc2b55353edcebd64bf5da94a2a2cdf5090a6915bcca6eca6cc452585a", size = 2181853, upload-time = "2025-05-17T17:20:59.322Z" },
{ url = "https://files.pythonhosted.org/packages/ce/ea/4fdb09f2165ce1365c9eaefef36625583371ee514db58dc9b65d3a255c4c/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:156df9667ad9f2ad26255926524e1c136d6664b741547deb0a86a9acf5ea631f", size = 2342465, upload-time = "2025-05-17T17:21:03.83Z" },
{ url = "https://files.pythonhosted.org/packages/22/82/6edc3fc42fe9284aead511394bac167693fb2b0e0395b28b8bedaa07ef04/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:dea827b4d55ee390dc89b2afe5927d4308a8b538ae91d9c6f7a5090f397af1aa", size = 2267414, upload-time = "2025-05-17T17:21:06.72Z" },
{ url = "https://files.pythonhosted.org/packages/59/fe/aae679b64363eb78326c7fdc9d06ec3de18bac68be4b612fc1fe8902693c/pycryptodome-3.23.0-cp37-abi3-win32.whl", hash = "sha256:507dbead45474b62b2bbe318eb1c4c8ee641077532067fec9c1aa82c31f84886", size = 1768484, upload-time = "2025-05-17T17:21:08.535Z" },
{ url = "https://files.pythonhosted.org/packages/54/2f/e97a1b8294db0daaa87012c24a7bb714147c7ade7656973fd6c736b484ff/pycryptodome-3.23.0-cp37-abi3-win_amd64.whl", hash = "sha256:c75b52aacc6c0c260f204cbdd834f76edc9fb0d8e0da9fbf8352ef58202564e2", size = 1799636, upload-time = "2025-05-17T17:21:10.393Z" },
{ url = "https://files.pythonhosted.org/packages/18/3d/f9441a0d798bf2b1e645adc3265e55706aead1255ccdad3856dbdcffec14/pycryptodome-3.23.0-cp37-abi3-win_arm64.whl", hash = "sha256:11eeeb6917903876f134b56ba11abe95c0b0fd5e3330def218083c7d98bbcb3c", size = 1703675, upload-time = "2025-05-17T17:21:13.146Z" },
{ url = "https://files.pythonhosted.org/packages/d9/12/e33935a0709c07de084d7d58d330ec3f4daf7910a18e77937affdb728452/pycryptodome-3.23.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:ddb95b49df036ddd264a0ad246d1be5b672000f12d6961ea2c267083a5e19379", size = 1623886, upload-time = "2025-05-17T17:21:20.614Z" },
{ url = "https://files.pythonhosted.org/packages/22/0b/aa8f9419f25870889bebf0b26b223c6986652bdf071f000623df11212c90/pycryptodome-3.23.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8e95564beb8782abfd9e431c974e14563a794a4944c29d6d3b7b5ea042110b4", size = 1672151, upload-time = "2025-05-17T17:21:22.666Z" },
{ url = "https://files.pythonhosted.org/packages/d4/5e/63f5cbde2342b7f70a39e591dbe75d9809d6338ce0b07c10406f1a140cdc/pycryptodome-3.23.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14e15c081e912c4b0d75632acd8382dfce45b258667aa3c67caf7a4d4c13f630", size = 1664461, upload-time = "2025-05-17T17:21:25.225Z" },
{ url = "https://files.pythonhosted.org/packages/d6/92/608fbdad566ebe499297a86aae5f2a5263818ceeecd16733006f1600403c/pycryptodome-3.23.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a7fc76bf273353dc7e5207d172b83f569540fc9a28d63171061c42e361d22353", size = 1702440, upload-time = "2025-05-17T17:21:27.991Z" },
{ url = "https://files.pythonhosted.org/packages/d1/92/2eadd1341abd2989cce2e2740b4423608ee2014acb8110438244ee97d7ff/pycryptodome-3.23.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:45c69ad715ca1a94f778215a11e66b7ff989d792a4d63b68dc586a1da1392ff5", size = 1803005, upload-time = "2025-05-17T17:21:31.37Z" },
]
[[package]]
name = "pygments"
version = "2.20.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c3/b2/bc9c9196916376152d655522fdcebac55e66de6603a76a02bca1b6414f6c/pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f", size = 4955991, upload-time = "2026-03-29T13:29:33.898Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
]
[[package]]
name = "pytest"
version = "9.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
{ name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
]
[[package]]
name = "tomli"
version = "2.4.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/22/de/48c59722572767841493b26183a0d1cc411d54fd759c5607c4590b6563a6/tomli-2.4.1.tar.gz", hash = "sha256:7c7e1a961a0b2f2472c1ac5b69affa0ae1132c39adcb67aba98568702b9cc23f", size = 17543, upload-time = "2026-03-25T20:22:03.828Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/11/db3d5885d8528263d8adc260bb2d28ebf1270b96e98f0e0268d32b8d9900/tomli-2.4.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f8f0fc26ec2cc2b965b7a3b87cd19c5c6b8c5e5f436b984e85f486d652285c30", size = 154704, upload-time = "2026-03-25T20:21:10.473Z" },
{ url = "https://files.pythonhosted.org/packages/6d/f7/675db52c7e46064a9aa928885a9b20f4124ecb9bc2e1ce74c9106648d202/tomli-2.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4ab97e64ccda8756376892c53a72bd1f964e519c77236368527f758fbc36a53a", size = 149454, upload-time = "2026-03-25T20:21:12.036Z" },
{ url = "https://files.pythonhosted.org/packages/61/71/81c50943cf953efa35bce7646caab3cf457a7d8c030b27cfb40d7235f9ee/tomli-2.4.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:96481a5786729fd470164b47cdb3e0e58062a496f455ee41b4403be77cb5a076", size = 237561, upload-time = "2026-03-25T20:21:13.098Z" },
{ url = "https://files.pythonhosted.org/packages/48/c1/f41d9cb618acccca7df82aaf682f9b49013c9397212cb9f53219e3abac37/tomli-2.4.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5a881ab208c0baf688221f8cecc5401bd291d67e38a1ac884d6736cbcd8247e9", size = 243824, upload-time = "2026-03-25T20:21:14.569Z" },
{ url = "https://files.pythonhosted.org/packages/22/e4/5a816ecdd1f8ca51fb756ef684b90f2780afc52fc67f987e3c61d800a46d/tomli-2.4.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:47149d5bd38761ac8be13a84864bf0b7b70bc051806bc3669ab1cbc56216b23c", size = 242227, upload-time = "2026-03-25T20:21:15.712Z" },
{ url = "https://files.pythonhosted.org/packages/6b/49/2b2a0ef529aa6eec245d25f0c703e020a73955ad7edf73e7f54ddc608aa5/tomli-2.4.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ec9bfaf3ad2df51ace80688143a6a4ebc09a248f6ff781a9945e51937008fcbc", size = 247859, upload-time = "2026-03-25T20:21:17.001Z" },
{ url = "https://files.pythonhosted.org/packages/83/bd/6c1a630eaca337e1e78c5903104f831bda934c426f9231429396ce3c3467/tomli-2.4.1-cp311-cp311-win32.whl", hash = "sha256:ff2983983d34813c1aeb0fa89091e76c3a22889ee83ab27c5eeb45100560c049", size = 97204, upload-time = "2026-03-25T20:21:18.079Z" },
{ url = "https://files.pythonhosted.org/packages/42/59/71461df1a885647e10b6bb7802d0b8e66480c61f3f43079e0dcd315b3954/tomli-2.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:5ee18d9ebdb417e384b58fe414e8d6af9f4e7a0ae761519fb50f721de398dd4e", size = 108084, upload-time = "2026-03-25T20:21:18.978Z" },
{ url = "https://files.pythonhosted.org/packages/b8/83/dceca96142499c069475b790e7913b1044c1a4337e700751f48ed723f883/tomli-2.4.1-cp311-cp311-win_arm64.whl", hash = "sha256:c2541745709bad0264b7d4705ad453b76ccd191e64aa6f0fc66b69a293a45ece", size = 95285, upload-time = "2026-03-25T20:21:20.309Z" },
{ url = "https://files.pythonhosted.org/packages/c1/ba/42f134a3fe2b370f555f44b1d72feebb94debcab01676bf918d0cb70e9aa/tomli-2.4.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c742f741d58a28940ce01d58f0ab2ea3ced8b12402f162f4d534dfe18ba1cd6a", size = 155924, upload-time = "2026-03-25T20:21:21.626Z" },
{ url = "https://files.pythonhosted.org/packages/dc/c7/62d7a17c26487ade21c5422b646110f2162f1fcc95980ef7f63e73c68f14/tomli-2.4.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7f86fd587c4ed9dd76f318225e7d9b29cfc5a9d43de44e5754db8d1128487085", size = 150018, upload-time = "2026-03-25T20:21:23.002Z" },
{ url = "https://files.pythonhosted.org/packages/5c/05/79d13d7c15f13bdef410bdd49a6485b1c37d28968314eabee452c22a7fda/tomli-2.4.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ff18e6a727ee0ab0388507b89d1bc6a22b138d1e2fa56d1ad494586d61d2eae9", size = 244948, upload-time = "2026-03-25T20:21:24.04Z" },
{ url = "https://files.pythonhosted.org/packages/10/90/d62ce007a1c80d0b2c93e02cab211224756240884751b94ca72df8a875ca/tomli-2.4.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:136443dbd7e1dee43c68ac2694fde36b2849865fa258d39bf822c10e8068eac5", size = 253341, upload-time = "2026-03-25T20:21:25.177Z" },
{ url = "https://files.pythonhosted.org/packages/1a/7e/caf6496d60152ad4ed09282c1885cca4eea150bfd007da84aea07bcc0a3e/tomli-2.4.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5e262d41726bc187e69af7825504c933b6794dc3fbd5945e41a79bb14c31f585", size = 248159, upload-time = "2026-03-25T20:21:26.364Z" },
{ url = "https://files.pythonhosted.org/packages/99/e7/c6f69c3120de34bbd882c6fba7975f3d7a746e9218e56ab46a1bc4b42552/tomli-2.4.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:5cb41aa38891e073ee49d55fbc7839cfdb2bc0e600add13874d048c94aadddd1", size = 253290, upload-time = "2026-03-25T20:21:27.46Z" },
{ url = "https://files.pythonhosted.org/packages/d6/2f/4a3c322f22c5c66c4b836ec58211641a4067364f5dcdd7b974b4c5da300c/tomli-2.4.1-cp312-cp312-win32.whl", hash = "sha256:da25dc3563bff5965356133435b757a795a17b17d01dbc0f42fb32447ddfd917", size = 98141, upload-time = "2026-03-25T20:21:28.492Z" },
{ url = "https://files.pythonhosted.org/packages/24/22/4daacd05391b92c55759d55eaee21e1dfaea86ce5c571f10083360adf534/tomli-2.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:52c8ef851d9a240f11a88c003eacb03c31fc1c9c4ec64a99a0f922b93874fda9", size = 108847, upload-time = "2026-03-25T20:21:29.386Z" },
{ url = "https://files.pythonhosted.org/packages/68/fd/70e768887666ddd9e9f5d85129e84910f2db2796f9096aa02b721a53098d/tomli-2.4.1-cp312-cp312-win_arm64.whl", hash = "sha256:f758f1b9299d059cc3f6546ae2af89670cb1c4d48ea29c3cacc4fe7de3058257", size = 95088, upload-time = "2026-03-25T20:21:30.677Z" },
{ url = "https://files.pythonhosted.org/packages/07/06/b823a7e818c756d9a7123ba2cda7d07bc2dd32835648d1a7b7b7a05d848d/tomli-2.4.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:36d2bd2ad5fb9eaddba5226aa02c8ec3fa4f192631e347b3ed28186d43be6b54", size = 155866, upload-time = "2026-03-25T20:21:31.65Z" },
{ url = "https://files.pythonhosted.org/packages/14/6f/12645cf7f08e1a20c7eb8c297c6f11d31c1b50f316a7e7e1e1de6e2e7b7e/tomli-2.4.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:eb0dc4e38e6a1fd579e5d50369aa2e10acfc9cace504579b2faabb478e76941a", size = 149887, upload-time = "2026-03-25T20:21:33.028Z" },
{ url = "https://files.pythonhosted.org/packages/5c/e0/90637574e5e7212c09099c67ad349b04ec4d6020324539297b634a0192b0/tomli-2.4.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c7f2c7f2b9ca6bdeef8f0fa897f8e05085923eb091721675170254cbc5b02897", size = 243704, upload-time = "2026-03-25T20:21:34.51Z" },
{ url = "https://files.pythonhosted.org/packages/10/8f/d3ddb16c5a4befdf31a23307f72828686ab2096f068eaf56631e136c1fdd/tomli-2.4.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f3c6818a1a86dd6dca7ddcaaf76947d5ba31aecc28cb1b67009a5877c9a64f3f", size = 251628, upload-time = "2026-03-25T20:21:36.012Z" },
{ url = "https://files.pythonhosted.org/packages/e3/f1/dbeeb9116715abee2485bf0a12d07a8f31af94d71608c171c45f64c0469d/tomli-2.4.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d312ef37c91508b0ab2cee7da26ec0b3ed2f03ce12bd87a588d771ae15dcf82d", size = 247180, upload-time = "2026-03-25T20:21:37.136Z" },
{ url = "https://files.pythonhosted.org/packages/d3/74/16336ffd19ed4da28a70959f92f506233bd7cfc2332b20bdb01591e8b1d1/tomli-2.4.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:51529d40e3ca50046d7606fa99ce3956a617f9b36380da3b7f0dd3dd28e68cb5", size = 251674, upload-time = "2026-03-25T20:21:38.298Z" },
{ url = "https://files.pythonhosted.org/packages/16/f9/229fa3434c590ddf6c0aa9af64d3af4b752540686cace29e6281e3458469/tomli-2.4.1-cp313-cp313-win32.whl", hash = "sha256:2190f2e9dd7508d2a90ded5ed369255980a1bcdd58e52f7fe24b8162bf9fedbd", size = 97976, upload-time = "2026-03-25T20:21:39.316Z" },
{ url = "https://files.pythonhosted.org/packages/6a/1e/71dfd96bcc1c775420cb8befe7a9d35f2e5b1309798f009dca17b7708c1e/tomli-2.4.1-cp313-cp313-win_amd64.whl", hash = "sha256:8d65a2fbf9d2f8352685bc1364177ee3923d6baf5e7f43ea4959d7d8bc326a36", size = 108755, upload-time = "2026-03-25T20:21:40.248Z" },
{ url = "https://files.pythonhosted.org/packages/83/7a/d34f422a021d62420b78f5c538e5b102f62bea616d1d75a13f0a88acb04a/tomli-2.4.1-cp313-cp313-win_arm64.whl", hash = "sha256:4b605484e43cdc43f0954ddae319fb75f04cc10dd80d830540060ee7cd0243cd", size = 95265, upload-time = "2026-03-25T20:21:41.219Z" },
{ url = "https://files.pythonhosted.org/packages/3c/fb/9a5c8d27dbab540869f7c1f8eb0abb3244189ce780ba9cd73f3770662072/tomli-2.4.1-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:fd0409a3653af6c147209d267a0e4243f0ae46b011aa978b1080359fddc9b6cf", size = 155726, upload-time = "2026-03-25T20:21:42.23Z" },
{ url = "https://files.pythonhosted.org/packages/62/05/d2f816630cc771ad836af54f5001f47a6f611d2d39535364f148b6a92d6b/tomli-2.4.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:a120733b01c45e9a0c34aeef92bf0cf1d56cfe81ed9d47d562f9ed591a9828ac", size = 149859, upload-time = "2026-03-25T20:21:43.386Z" },
{ url = "https://files.pythonhosted.org/packages/ce/48/66341bdb858ad9bd0ceab5a86f90eddab127cf8b046418009f2125630ecb/tomli-2.4.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:559db847dc486944896521f68d8190be1c9e719fced785720d2216fe7022b662", size = 244713, upload-time = "2026-03-25T20:21:44.474Z" },
{ url = "https://files.pythonhosted.org/packages/df/6d/c5fad00d82b3c7a3ab6189bd4b10e60466f22cfe8a08a9394185c8a8111c/tomli-2.4.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:01f520d4f53ef97964a240a035ec2a869fe1a37dde002b57ebc4417a27ccd853", size = 252084, upload-time = "2026-03-25T20:21:45.62Z" },
{ url = "https://files.pythonhosted.org/packages/00/71/3a69e86f3eafe8c7a59d008d245888051005bd657760e96d5fbfb0b740c2/tomli-2.4.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7f94b27a62cfad8496c8d2513e1a222dd446f095fca8987fceef261225538a15", size = 247973, upload-time = "2026-03-25T20:21:46.937Z" },
{ url = "https://files.pythonhosted.org/packages/67/50/361e986652847fec4bd5e4a0208752fbe64689c603c7ae5ea7cb16b1c0ca/tomli-2.4.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:ede3e6487c5ef5d28634ba3f31f989030ad6af71edfb0055cbbd14189ff240ba", size = 256223, upload-time = "2026-03-25T20:21:48.467Z" },
{ url = "https://files.pythonhosted.org/packages/8c/9a/b4173689a9203472e5467217e0154b00e260621caa227b6fa01feab16998/tomli-2.4.1-cp314-cp314-win32.whl", hash = "sha256:3d48a93ee1c9b79c04bb38772ee1b64dcf18ff43085896ea460ca8dec96f35f6", size = 98973, upload-time = "2026-03-25T20:21:49.526Z" },
{ url = "https://files.pythonhosted.org/packages/14/58/640ac93bf230cd27d002462c9af0d837779f8773bc03dee06b5835208214/tomli-2.4.1-cp314-cp314-win_amd64.whl", hash = "sha256:88dceee75c2c63af144e456745e10101eb67361050196b0b6af5d717254dddf7", size = 109082, upload-time = "2026-03-25T20:21:50.506Z" },
{ url = "https://files.pythonhosted.org/packages/d5/2f/702d5e05b227401c1068f0d386d79a589bb12bf64c3d2c72ce0631e3bc49/tomli-2.4.1-cp314-cp314-win_arm64.whl", hash = "sha256:b8c198f8c1805dc42708689ed6864951fd2494f924149d3e4bce7710f8eb5232", size = 96490, upload-time = "2026-03-25T20:21:51.474Z" },
{ url = "https://files.pythonhosted.org/packages/45/4b/b877b05c8ba62927d9865dd980e34a755de541eb65fffba52b4cc495d4d2/tomli-2.4.1-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:d4d8fe59808a54658fcc0160ecfb1b30f9089906c50b23bcb4c69eddc19ec2b4", size = 164263, upload-time = "2026-03-25T20:21:52.543Z" },
{ url = "https://files.pythonhosted.org/packages/24/79/6ab420d37a270b89f7195dec5448f79400d9e9c1826df982f3f8e97b24fd/tomli-2.4.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7008df2e7655c495dd12d2a4ad038ff878d4ca4b81fccaf82b714e07eae4402c", size = 160736, upload-time = "2026-03-25T20:21:53.674Z" },
{ url = "https://files.pythonhosted.org/packages/02/e0/3630057d8eb170310785723ed5adcdfb7d50cb7e6455f85ba8a3deed642b/tomli-2.4.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1d8591993e228b0c930c4bb0db464bdad97b3289fb981255d6c9a41aedc84b2d", size = 270717, upload-time = "2026-03-25T20:21:55.129Z" },
{ url = "https://files.pythonhosted.org/packages/7a/b4/1613716072e544d1a7891f548d8f9ec6ce2faf42ca65acae01d76ea06bb0/tomli-2.4.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:734e20b57ba95624ecf1841e72b53f6e186355e216e5412de414e3c51e5e3c41", size = 278461, upload-time = "2026-03-25T20:21:56.228Z" },
{ url = "https://files.pythonhosted.org/packages/05/38/30f541baf6a3f6df77b3df16b01ba319221389e2da59427e221ef417ac0c/tomli-2.4.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8a650c2dbafa08d42e51ba0b62740dae4ecb9338eefa093aa5c78ceb546fcd5c", size = 274855, upload-time = "2026-03-25T20:21:57.653Z" },
{ url = "https://files.pythonhosted.org/packages/77/a3/ec9dd4fd2c38e98de34223b995a3b34813e6bdadf86c75314c928350ed14/tomli-2.4.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:504aa796fe0569bb43171066009ead363de03675276d2d121ac1a4572397870f", size = 283144, upload-time = "2026-03-25T20:21:59.089Z" },
{ url = "https://files.pythonhosted.org/packages/ef/be/605a6261cac79fba2ec0c9827e986e00323a1945700969b8ee0b30d85453/tomli-2.4.1-cp314-cp314t-win32.whl", hash = "sha256:b1d22e6e9387bf4739fbe23bfa80e93f6b0373a7f1b96c6227c32bef95a4d7a8", size = 108683, upload-time = "2026-03-25T20:22:00.214Z" },
{ url = "https://files.pythonhosted.org/packages/12/64/da524626d3b9cc40c168a13da8335fe1c51be12c0a63685cc6db7308daae/tomli-2.4.1-cp314-cp314t-win_amd64.whl", hash = "sha256:2c1c351919aca02858f740c6d33adea0c5deea37f9ecca1cc1ef9e884a619d26", size = 121196, upload-time = "2026-03-25T20:22:01.169Z" },
{ url = "https://files.pythonhosted.org/packages/5a/cd/e80b62269fc78fc36c9af5a6b89c835baa8af28ff5ad28c7028d60860320/tomli-2.4.1-cp314-cp314t-win_arm64.whl", hash = "sha256:eab21f45c7f66c13f2a9e0e1535309cee140182a9cdae1e041d02e47291e8396", size = 100393, upload-time = "2026-03-25T20:22:02.137Z" },
{ url = "https://files.pythonhosted.org/packages/7b/61/cceae43728b7de99d9b847560c262873a1f6c98202171fd5ed62640b494b/tomli-2.4.1-py3-none-any.whl", hash = "sha256:0d85819802132122da43cb86656f8d1f8c6587d54ae7dcaf30e90533028b49fe", size = 14583, upload-time = "2026-03-25T20:22:03.012Z" },
]
[[package]]
name = "types-deprecated"
version = "1.3.1.20260408"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/1a/db/076de3e81b106d3cec17aec9640ab1b2d02f29bad441de280459c161ce65/types_deprecated-1.3.1.20260408.tar.gz", hash = "sha256:62d6a86d0cc754c14bb2de31162d069b1c6a07ce11ee65e5258f8f75308eb3a3", size = 8524, upload-time = "2026-04-08T04:26:39.894Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/53/d0/d3258379deb749d949c3c72313981c9d2cceec518b87dcf506f022f5d49f/types_deprecated-1.3.1.20260408-py3-none-any.whl", hash = "sha256:b64e1eab560d4fa9394a27a3099211344b0e0f2f3ac8026d825c86e70d65cdd5", size = 9079, upload-time = "2026-04-08T04:26:38.752Z" },
]
[[package]]
name = "typing-extensions"
version = "4.15.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]
[[package]]
name = "websockets"
version = "16.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/04/24/4b2031d72e840ce4c1ccb255f693b15c334757fc50023e4db9537080b8c4/websockets-16.0.tar.gz", hash = "sha256:5f6261a5e56e8d5c42a4497b364ea24d94d9563e8fbd44e78ac40879c60179b5", size = 179346, upload-time = "2026-01-10T09:23:47.181Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/20/74/221f58decd852f4b59cc3354cccaf87e8ef695fede361d03dc9a7396573b/websockets-16.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:04cdd5d2d1dacbad0a7bf36ccbcd3ccd5a30ee188f2560b7a62a30d14107b31a", size = 177343, upload-time = "2026-01-10T09:22:21.28Z" },
{ url = "https://files.pythonhosted.org/packages/19/0f/22ef6107ee52ab7f0b710d55d36f5a5d3ef19e8a205541a6d7ffa7994e5a/websockets-16.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8ff32bb86522a9e5e31439a58addbb0166f0204d64066fb955265c4e214160f0", size = 175021, upload-time = "2026-01-10T09:22:22.696Z" },
{ url = "https://files.pythonhosted.org/packages/10/40/904a4cb30d9b61c0e278899bf36342e9b0208eb3c470324a9ecbaac2a30f/websockets-16.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:583b7c42688636f930688d712885cf1531326ee05effd982028212ccc13e5957", size = 175320, upload-time = "2026-01-10T09:22:23.94Z" },
{ url = "https://files.pythonhosted.org/packages/9d/2f/4b3ca7e106bc608744b1cdae041e005e446124bebb037b18799c2d356864/websockets-16.0-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7d837379b647c0c4c2355c2499723f82f1635fd2c26510e1f587d89bc2199e72", size = 183815, upload-time = "2026-01-10T09:22:25.469Z" },
{ url = "https://files.pythonhosted.org/packages/86/26/d40eaa2a46d4302becec8d15b0fc5e45bdde05191e7628405a19cf491ccd/websockets-16.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:df57afc692e517a85e65b72e165356ed1df12386ecb879ad5693be08fac65dde", size = 185054, upload-time = "2026-01-10T09:22:27.101Z" },
{ url = "https://files.pythonhosted.org/packages/b0/ba/6500a0efc94f7373ee8fefa8c271acdfd4dca8bd49a90d4be7ccabfc397e/websockets-16.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:2b9f1e0d69bc60a4a87349d50c09a037a2607918746f07de04df9e43252c77a3", size = 184565, upload-time = "2026-01-10T09:22:28.293Z" },
{ url = "https://files.pythonhosted.org/packages/04/b4/96bf2cee7c8d8102389374a2616200574f5f01128d1082f44102140344cc/websockets-16.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:335c23addf3d5e6a8633f9f8eda77efad001671e80b95c491dd0924587ece0b3", size = 183848, upload-time = "2026-01-10T09:22:30.394Z" },
{ url = "https://files.pythonhosted.org/packages/02/8e/81f40fb00fd125357814e8c3025738fc4ffc3da4b6b4a4472a82ba304b41/websockets-16.0-cp310-cp310-win32.whl", hash = "sha256:37b31c1623c6605e4c00d466c9d633f9b812ea430c11c8a278774a1fde1acfa9", size = 178249, upload-time = "2026-01-10T09:22:32.083Z" },
{ url = "https://files.pythonhosted.org/packages/b4/5f/7e40efe8df57db9b91c88a43690ac66f7b7aa73a11aa6a66b927e44f26fa/websockets-16.0-cp310-cp310-win_amd64.whl", hash = "sha256:8e1dab317b6e77424356e11e99a432b7cb2f3ec8c5ab4dabbcee6add48f72b35", size = 178685, upload-time = "2026-01-10T09:22:33.345Z" },
{ url = "https://files.pythonhosted.org/packages/f2/db/de907251b4ff46ae804ad0409809504153b3f30984daf82a1d84a9875830/websockets-16.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:31a52addea25187bde0797a97d6fc3d2f92b6f72a9370792d65a6e84615ac8a8", size = 177340, upload-time = "2026-01-10T09:22:34.539Z" },
{ url = "https://files.pythonhosted.org/packages/f3/fa/abe89019d8d8815c8781e90d697dec52523fb8ebe308bf11664e8de1877e/websockets-16.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:417b28978cdccab24f46400586d128366313e8a96312e4b9362a4af504f3bbad", size = 175022, upload-time = "2026-01-10T09:22:36.332Z" },
{ url = "https://files.pythonhosted.org/packages/58/5d/88ea17ed1ded2079358b40d31d48abe90a73c9e5819dbcde1606e991e2ad/websockets-16.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:af80d74d4edfa3cb9ed973a0a5ba2b2a549371f8a741e0800cb07becdd20f23d", size = 175319, upload-time = "2026-01-10T09:22:37.602Z" },
{ url = "https://files.pythonhosted.org/packages/d2/ae/0ee92b33087a33632f37a635e11e1d99d429d3d323329675a6022312aac2/websockets-16.0-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:08d7af67b64d29823fed316505a89b86705f2b7981c07848fb5e3ea3020c1abe", size = 184631, upload-time = "2026-01-10T09:22:38.789Z" },
{ url = "https://files.pythonhosted.org/packages/c8/c5/27178df583b6c5b31b29f526ba2da5e2f864ecc79c99dae630a85d68c304/websockets-16.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7be95cfb0a4dae143eaed2bcba8ac23f4892d8971311f1b06f3c6b78952ee70b", size = 185870, upload-time = "2026-01-10T09:22:39.893Z" },
{ url = "https://files.pythonhosted.org/packages/87/05/536652aa84ddc1c018dbb7e2c4cbcd0db884580bf8e95aece7593fde526f/websockets-16.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d6297ce39ce5c2e6feb13c1a996a2ded3b6832155fcfc920265c76f24c7cceb5", size = 185361, upload-time = "2026-01-10T09:22:41.016Z" },
{ url = "https://files.pythonhosted.org/packages/6d/e2/d5332c90da12b1e01f06fb1b85c50cfc489783076547415bf9f0a659ec19/websockets-16.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1c1b30e4f497b0b354057f3467f56244c603a79c0d1dafce1d16c283c25f6e64", size = 184615, upload-time = "2026-01-10T09:22:42.442Z" },
{ url = "https://files.pythonhosted.org/packages/77/fb/d3f9576691cae9253b51555f841bc6600bf0a983a461c79500ace5a5b364/websockets-16.0-cp311-cp311-win32.whl", hash = "sha256:5f451484aeb5cafee1ccf789b1b66f535409d038c56966d6101740c1614b86c6", size = 178246, upload-time = "2026-01-10T09:22:43.654Z" },
{ url = "https://files.pythonhosted.org/packages/54/67/eaff76b3dbaf18dcddabc3b8c1dba50b483761cccff67793897945b37408/websockets-16.0-cp311-cp311-win_amd64.whl", hash = "sha256:8d7f0659570eefb578dacde98e24fb60af35350193e4f56e11190787bee77dac", size = 178684, upload-time = "2026-01-10T09:22:44.941Z" },
{ url = "https://files.pythonhosted.org/packages/84/7b/bac442e6b96c9d25092695578dda82403c77936104b5682307bd4deb1ad4/websockets-16.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:71c989cbf3254fbd5e84d3bff31e4da39c43f884e64f2551d14bb3c186230f00", size = 177365, upload-time = "2026-01-10T09:22:46.787Z" },
{ url = "https://files.pythonhosted.org/packages/b0/fe/136ccece61bd690d9c1f715baaeefd953bb2360134de73519d5df19d29ca/websockets-16.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:8b6e209ffee39ff1b6d0fa7bfef6de950c60dfb91b8fcead17da4ee539121a79", size = 175038, upload-time = "2026-01-10T09:22:47.999Z" },
{ url = "https://files.pythonhosted.org/packages/40/1e/9771421ac2286eaab95b8575b0cb701ae3663abf8b5e1f64f1fd90d0a673/websockets-16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:86890e837d61574c92a97496d590968b23c2ef0aeb8a9bc9421d174cd378ae39", size = 175328, upload-time = "2026-01-10T09:22:49.809Z" },
{ url = "https://files.pythonhosted.org/packages/18/29/71729b4671f21e1eaa5d6573031ab810ad2936c8175f03f97f3ff164c802/websockets-16.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:9b5aca38b67492ef518a8ab76851862488a478602229112c4b0d58d63a7a4d5c", size = 184915, upload-time = "2026-01-10T09:22:51.071Z" },
{ url = "https://files.pythonhosted.org/packages/97/bb/21c36b7dbbafc85d2d480cd65df02a1dc93bf76d97147605a8e27ff9409d/websockets-16.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e0334872c0a37b606418ac52f6ab9cfd17317ac26365f7f65e203e2d0d0d359f", size = 186152, upload-time = "2026-01-10T09:22:52.224Z" },
{ url = "https://files.pythonhosted.org/packages/4a/34/9bf8df0c0cf88fa7bfe36678dc7b02970c9a7d5e065a3099292db87b1be2/websockets-16.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a0b31e0b424cc6b5a04b8838bbaec1688834b2383256688cf47eb97412531da1", size = 185583, upload-time = "2026-01-10T09:22:53.443Z" },
{ url = "https://files.pythonhosted.org/packages/47/88/4dd516068e1a3d6ab3c7c183288404cd424a9a02d585efbac226cb61ff2d/websockets-16.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:485c49116d0af10ac698623c513c1cc01c9446c058a4e61e3bf6c19dff7335a2", size = 184880, upload-time = "2026-01-10T09:22:55.033Z" },
{ url = "https://files.pythonhosted.org/packages/91/d6/7d4553ad4bf1c0421e1ebd4b18de5d9098383b5caa1d937b63df8d04b565/websockets-16.0-cp312-cp312-win32.whl", hash = "sha256:eaded469f5e5b7294e2bdca0ab06becb6756ea86894a47806456089298813c89", size = 178261, upload-time = "2026-01-10T09:22:56.251Z" },
{ url = "https://files.pythonhosted.org/packages/c3/f0/f3a17365441ed1c27f850a80b2bc680a0fa9505d733fe152fdf5e98c1c0b/websockets-16.0-cp312-cp312-win_amd64.whl", hash = "sha256:5569417dc80977fc8c2d43a86f78e0a5a22fee17565d78621b6bb264a115d4ea", size = 178693, upload-time = "2026-01-10T09:22:57.478Z" },
{ url = "https://files.pythonhosted.org/packages/cc/9c/baa8456050d1c1b08dd0ec7346026668cbc6f145ab4e314d707bb845bf0d/websockets-16.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:878b336ac47938b474c8f982ac2f7266a540adc3fa4ad74ae96fea9823a02cc9", size = 177364, upload-time = "2026-01-10T09:22:59.333Z" },
{ url = "https://files.pythonhosted.org/packages/7e/0c/8811fc53e9bcff68fe7de2bcbe75116a8d959ac699a3200f4847a8925210/websockets-16.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:52a0fec0e6c8d9a784c2c78276a48a2bdf099e4ccc2a4cad53b27718dbfd0230", size = 175039, upload-time = "2026-01-10T09:23:01.171Z" },
{ url = "https://files.pythonhosted.org/packages/aa/82/39a5f910cb99ec0b59e482971238c845af9220d3ab9fa76dd9162cda9d62/websockets-16.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e6578ed5b6981005df1860a56e3617f14a6c307e6a71b4fff8c48fdc50f3ed2c", size = 175323, upload-time = "2026-01-10T09:23:02.341Z" },
{ url = "https://files.pythonhosted.org/packages/bd/28/0a25ee5342eb5d5f297d992a77e56892ecb65e7854c7898fb7d35e9b33bd/websockets-16.0-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:95724e638f0f9c350bb1c2b0a7ad0e83d9cc0c9259f3ea94e40d7b02a2179ae5", size = 184975, upload-time = "2026-01-10T09:23:03.756Z" },
{ url = "https://files.pythonhosted.org/packages/f9/66/27ea52741752f5107c2e41fda05e8395a682a1e11c4e592a809a90c6a506/websockets-16.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c0204dc62a89dc9d50d682412c10b3542d748260d743500a85c13cd1ee4bde82", size = 186203, upload-time = "2026-01-10T09:23:05.01Z" },
{ url = "https://files.pythonhosted.org/packages/37/e5/8e32857371406a757816a2b471939d51c463509be73fa538216ea52b792a/websockets-16.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:52ac480f44d32970d66763115edea932f1c5b1312de36df06d6b219f6741eed8", size = 185653, upload-time = "2026-01-10T09:23:06.301Z" },
{ url = "https://files.pythonhosted.org/packages/9b/67/f926bac29882894669368dc73f4da900fcdf47955d0a0185d60103df5737/websockets-16.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6e5a82b677f8f6f59e8dfc34ec06ca6b5b48bc4fcda346acd093694cc2c24d8f", size = 184920, upload-time = "2026-01-10T09:23:07.492Z" },
{ url = "https://files.pythonhosted.org/packages/3c/a1/3d6ccdcd125b0a42a311bcd15a7f705d688f73b2a22d8cf1c0875d35d34a/websockets-16.0-cp313-cp313-win32.whl", hash = "sha256:abf050a199613f64c886ea10f38b47770a65154dc37181bfaff70c160f45315a", size = 178255, upload-time = "2026-01-10T09:23:09.245Z" },
{ url = "https://files.pythonhosted.org/packages/6b/ae/90366304d7c2ce80f9b826096a9e9048b4bb760e44d3b873bb272cba696b/websockets-16.0-cp313-cp313-win_amd64.whl", hash = "sha256:3425ac5cf448801335d6fdc7ae1eb22072055417a96cc6b31b3861f455fbc156", size = 178689, upload-time = "2026-01-10T09:23:10.483Z" },
{ url = "https://files.pythonhosted.org/packages/f3/1d/e88022630271f5bd349ed82417136281931e558d628dd52c4d8621b4a0b2/websockets-16.0-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:8cc451a50f2aee53042ac52d2d053d08bf89bcb31ae799cb4487587661c038a0", size = 177406, upload-time = "2026-01-10T09:23:12.178Z" },
{ url = "https://files.pythonhosted.org/packages/f2/78/e63be1bf0724eeb4616efb1ae1c9044f7c3953b7957799abb5915bffd38e/websockets-16.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:daa3b6ff70a9241cf6c7fc9e949d41232d9d7d26fd3522b1ad2b4d62487e9904", size = 175085, upload-time = "2026-01-10T09:23:13.511Z" },
{ url = "https://files.pythonhosted.org/packages/bb/f4/d3c9220d818ee955ae390cf319a7c7a467beceb24f05ee7aaaa2414345ba/websockets-16.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:fd3cb4adb94a2a6e2b7c0d8d05cb94e6f1c81a0cf9dc2694fb65c7e8d94c42e4", size = 175328, upload-time = "2026-01-10T09:23:14.727Z" },
{ url = "https://files.pythonhosted.org/packages/63/bc/d3e208028de777087e6fb2b122051a6ff7bbcca0d6df9d9c2bf1dd869ae9/websockets-16.0-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:781caf5e8eee67f663126490c2f96f40906594cb86b408a703630f95550a8c3e", size = 185044, upload-time = "2026-01-10T09:23:15.939Z" },
{ url = "https://files.pythonhosted.org/packages/ad/6e/9a0927ac24bd33a0a9af834d89e0abc7cfd8e13bed17a86407a66773cc0e/websockets-16.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:caab51a72c51973ca21fa8a18bd8165e1a0183f1ac7066a182ff27107b71e1a4", size = 186279, upload-time = "2026-01-10T09:23:17.148Z" },
{ url = "https://files.pythonhosted.org/packages/b9/ca/bf1c68440d7a868180e11be653c85959502efd3a709323230314fda6e0b3/websockets-16.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:19c4dc84098e523fd63711e563077d39e90ec6702aff4b5d9e344a60cb3c0cb1", size = 185711, upload-time = "2026-01-10T09:23:18.372Z" },
{ url = "https://files.pythonhosted.org/packages/c4/f8/fdc34643a989561f217bb477cbc47a3a07212cbda91c0e4389c43c296ebf/websockets-16.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:a5e18a238a2b2249c9a9235466b90e96ae4795672598a58772dd806edc7ac6d3", size = 184982, upload-time = "2026-01-10T09:23:19.652Z" },
{ url = "https://files.pythonhosted.org/packages/dd/d1/574fa27e233764dbac9c52730d63fcf2823b16f0856b3329fc6268d6ae4f/websockets-16.0-cp314-cp314-win32.whl", hash = "sha256:a069d734c4a043182729edd3e9f247c3b2a4035415a9172fd0f1b71658a320a8", size = 177915, upload-time = "2026-01-10T09:23:21.458Z" },
{ url = "https://files.pythonhosted.org/packages/8a/f1/ae6b937bf3126b5134ce1f482365fde31a357c784ac51852978768b5eff4/websockets-16.0-cp314-cp314-win_amd64.whl", hash = "sha256:c0ee0e63f23914732c6d7e0cce24915c48f3f1512ec1d079ed01fc629dab269d", size = 178381, upload-time = "2026-01-10T09:23:22.715Z" },
{ url = "https://files.pythonhosted.org/packages/06/9b/f791d1db48403e1f0a27577a6beb37afae94254a8c6f08be4a23e4930bc0/websockets-16.0-cp314-cp314t-macosx_10_15_universal2.whl", hash = "sha256:a35539cacc3febb22b8f4d4a99cc79b104226a756aa7400adc722e83b0d03244", size = 177737, upload-time = "2026-01-10T09:23:24.523Z" },
{ url = "https://files.pythonhosted.org/packages/bd/40/53ad02341fa33b3ce489023f635367a4ac98b73570102ad2cdd770dacc9a/websockets-16.0-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:b784ca5de850f4ce93ec85d3269d24d4c82f22b7212023c974c401d4980ebc5e", size = 175268, upload-time = "2026-01-10T09:23:25.781Z" },
{ url = "https://files.pythonhosted.org/packages/74/9b/6158d4e459b984f949dcbbb0c5d270154c7618e11c01029b9bbd1bb4c4f9/websockets-16.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:569d01a4e7fba956c5ae4fc988f0d4e187900f5497ce46339c996dbf24f17641", size = 175486, upload-time = "2026-01-10T09:23:27.033Z" },
{ url = "https://files.pythonhosted.org/packages/e5/2d/7583b30208b639c8090206f95073646c2c9ffd66f44df967981a64f849ad/websockets-16.0-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:50f23cdd8343b984957e4077839841146f67a3d31ab0d00e6b824e74c5b2f6e8", size = 185331, upload-time = "2026-01-10T09:23:28.259Z" },
{ url = "https://files.pythonhosted.org/packages/45/b0/cce3784eb519b7b5ad680d14b9673a31ab8dcb7aad8b64d81709d2430aa8/websockets-16.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:152284a83a00c59b759697b7f9e9cddf4e3c7861dd0d964b472b70f78f89e80e", size = 186501, upload-time = "2026-01-10T09:23:29.449Z" },
{ url = "https://files.pythonhosted.org/packages/19/60/b8ebe4c7e89fb5f6cdf080623c9d92789a53636950f7abacfc33fe2b3135/websockets-16.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:bc59589ab64b0022385f429b94697348a6a234e8ce22544e3681b2e9331b5944", size = 186062, upload-time = "2026-01-10T09:23:31.368Z" },
{ url = "https://files.pythonhosted.org/packages/88/a8/a080593f89b0138b6cba1b28f8df5673b5506f72879322288b031337c0b8/websockets-16.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:32da954ffa2814258030e5a57bc73a3635463238e797c7375dc8091327434206", size = 185356, upload-time = "2026-01-10T09:23:32.627Z" },
{ url = "https://files.pythonhosted.org/packages/c2/b6/b9afed2afadddaf5ebb2afa801abf4b0868f42f8539bfe4b071b5266c9fe/websockets-16.0-cp314-cp314t-win32.whl", hash = "sha256:5a4b4cc550cb665dd8a47f868c8d04c8230f857363ad3c9caf7a0c3bf8c61ca6", size = 178085, upload-time = "2026-01-10T09:23:33.816Z" },
{ url = "https://files.pythonhosted.org/packages/9f/3e/28135a24e384493fa804216b79a6a6759a38cc4ff59118787b9fb693df93/websockets-16.0-cp314-cp314t-win_amd64.whl", hash = "sha256:b14dc141ed6d2dde437cddb216004bcac6a1df0935d79656387bd41632ba0bbd", size = 178531, upload-time = "2026-01-10T09:23:35.016Z" },
{ url = "https://files.pythonhosted.org/packages/72/07/c98a68571dcf256e74f1f816b8cc5eae6eb2d3d5cfa44d37f801619d9166/websockets-16.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:349f83cd6c9a415428ee1005cadb5c2c56f4389bc06a9af16103c3bc3dcc8b7d", size = 174947, upload-time = "2026-01-10T09:23:36.166Z" },
{ url = "https://files.pythonhosted.org/packages/7e/52/93e166a81e0305b33fe416338be92ae863563fe7bce446b0f687b9df5aea/websockets-16.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:4a1aba3340a8dca8db6eb5a7986157f52eb9e436b74813764241981ca4888f03", size = 175260, upload-time = "2026-01-10T09:23:37.409Z" },
{ url = "https://files.pythonhosted.org/packages/56/0c/2dbf513bafd24889d33de2ff0368190a0e69f37bcfa19009ef819fe4d507/websockets-16.0-pp311-pypy311_pp73-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f4a32d1bd841d4bcbffdcb3d2ce50c09c3909fbead375ab28d0181af89fd04da", size = 176071, upload-time = "2026-01-10T09:23:39.158Z" },
{ url = "https://files.pythonhosted.org/packages/a5/8f/aea9c71cc92bf9b6cc0f7f70df8f0b420636b6c96ef4feee1e16f80f75dd/websockets-16.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0298d07ee155e2e9fda5be8a9042200dd2e3bb0b8a38482156576f863a9d457c", size = 176968, upload-time = "2026-01-10T09:23:41.031Z" },
{ url = "https://files.pythonhosted.org/packages/9a/3f/f70e03f40ffc9a30d817eef7da1be72ee4956ba8d7255c399a01b135902a/websockets-16.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:a653aea902e0324b52f1613332ddf50b00c06fdaf7e92624fbf8c77c78fa5767", size = 178735, upload-time = "2026-01-10T09:23:42.259Z" },
{ url = "https://files.pythonhosted.org/packages/6f/28/258ebab549c2bf3e64d2b0217b973467394a9cea8c42f70418ca2c5d0d2e/websockets-16.0-py3-none-any.whl", hash = "sha256:1637db62fad1dc833276dded54215f2c7fa46912301a24bd94d45d46a011ceec", size = 171598, upload-time = "2026-01-10T09:23:45.395Z" },
]
[[package]]
name = "wrapt"
version = "2.1.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/2e/64/925f213fdcbb9baeb1530449ac71a4d57fc361c053d06bf78d0c5c7cd80c/wrapt-2.1.2.tar.gz", hash = "sha256:3996a67eecc2c68fd47b4e3c564405a5777367adfd9b8abb58387b63ee83b21e", size = 81678, upload-time = "2026-03-06T02:53:25.134Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/da/d2/387594fb592d027366645f3d7cc9b4d7ca7be93845fbaba6d835a912ef3c/wrapt-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4b7a86d99a14f76facb269dc148590c01aaf47584071809a70da30555228158c", size = 60669, upload-time = "2026-03-06T02:52:40.671Z" },
{ url = "https://files.pythonhosted.org/packages/c9/18/3f373935bc5509e7ac444c8026a56762e50c1183e7061797437ca96c12ce/wrapt-2.1.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a819e39017f95bf7aede768f75915635aa8f671f2993c036991b8d3bfe8dbb6f", size = 61603, upload-time = "2026-03-06T02:54:21.032Z" },
{ url = "https://files.pythonhosted.org/packages/c2/7a/32758ca2853b07a887a4574b74e28843919103194bb47001a304e24af62f/wrapt-2.1.2-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:5681123e60aed0e64c7d44f72bbf8b4ce45f79d81467e2c4c728629f5baf06eb", size = 113632, upload-time = "2026-03-06T02:53:54.121Z" },
{ url = "https://files.pythonhosted.org/packages/1d/d5/eeaa38f670d462e97d978b3b0d9ce06d5b91e54bebac6fbed867809216e7/wrapt-2.1.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2b8b28e97a44d21836259739ae76284e180b18abbb4dcfdff07a415cf1016c3e", size = 115644, upload-time = "2026-03-06T02:54:53.33Z" },
{ url = "https://files.pythonhosted.org/packages/e3/09/2a41506cb17affb0bdf9d5e2129c8c19e192b388c4c01d05e1b14db23c00/wrapt-2.1.2-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:cef91c95a50596fcdc31397eb6955476f82ae8a3f5a8eabdc13611b60ee380ba", size = 112016, upload-time = "2026-03-06T02:54:43.274Z" },
{ url = "https://files.pythonhosted.org/packages/64/15/0e6c3f5e87caadc43db279724ee36979246d5194fa32fed489c73643ba59/wrapt-2.1.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:dad63212b168de8569b1c512f4eac4b57f2c6934b30df32d6ee9534a79f1493f", size = 114823, upload-time = "2026-03-06T02:54:29.392Z" },
{ url = "https://files.pythonhosted.org/packages/56/b2/0ad17c8248f4e57bedf44938c26ec3ee194715f812d2dbbd9d7ff4be6c06/wrapt-2.1.2-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:d307aa6888d5efab2c1cde09843d48c843990be13069003184b67d426d145394", size = 111244, upload-time = "2026-03-06T02:54:02.149Z" },
{ url = "https://files.pythonhosted.org/packages/ff/04/bcdba98c26f2c6522c7c09a726d5d9229120163493620205b2f76bd13c01/wrapt-2.1.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:c87cf3f0c85e27b3ac7d9ad95da166bf8739ca215a8b171e8404a2d739897a45", size = 113307, upload-time = "2026-03-06T02:54:12.428Z" },
{ url = "https://files.pythonhosted.org/packages/0e/1b/5e2883c6bc14143924e465a6fc5a92d09eeabe35310842a481fb0581f832/wrapt-2.1.2-cp310-cp310-win32.whl", hash = "sha256:d1c5fea4f9fe3762e2b905fdd67df51e4be7a73b7674957af2d2ade71a5c075d", size = 57986, upload-time = "2026-03-06T02:54:26.823Z" },
{ url = "https://files.pythonhosted.org/packages/42/5a/4efc997bccadd3af5749c250b49412793bc41e13a83a486b2b54a33e240c/wrapt-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:d8f7740e1af13dff2684e4d56fe604a7e04d6c94e737a60568d8d4238b9a0c71", size = 60336, upload-time = "2026-03-06T02:54:18Z" },
{ url = "https://files.pythonhosted.org/packages/c1/f5/a2bb833e20181b937e87c242645ed5d5aa9c373006b0467bfe1a35c727d0/wrapt-2.1.2-cp310-cp310-win_arm64.whl", hash = "sha256:1c6cc827c00dc839350155f316f1f8b4b0c370f52b6a19e782e2bda89600c7dc", size = 58757, upload-time = "2026-03-06T02:53:51.545Z" },
{ url = "https://files.pythonhosted.org/packages/c7/81/60c4471fce95afa5922ca09b88a25f03c93343f759aae0f31fb4412a85c7/wrapt-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:96159a0ee2b0277d44201c3b5be479a9979cf154e8c82fa5df49586a8e7679bb", size = 60666, upload-time = "2026-03-06T02:52:58.934Z" },
{ url = "https://files.pythonhosted.org/packages/6b/be/80e80e39e7cb90b006a0eaf11c73ac3a62bbfb3068469aec15cc0bc795de/wrapt-2.1.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:98ba61833a77b747901e9012072f038795de7fc77849f1faa965464f3f87ff2d", size = 61601, upload-time = "2026-03-06T02:53:00.487Z" },
{ url = "https://files.pythonhosted.org/packages/b0/be/d7c88cd9293c859fc74b232abdc65a229bb953997995d6912fc85af18323/wrapt-2.1.2-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:767c0dbbe76cae2a60dd2b235ac0c87c9cccf4898aef8062e57bead46b5f6894", size = 114057, upload-time = "2026-03-06T02:52:44.08Z" },
{ url = "https://files.pythonhosted.org/packages/ea/25/36c04602831a4d685d45a93b3abea61eca7fe35dab6c842d6f5d570ef94a/wrapt-2.1.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9c691a6bc752c0cc4711cc0c00896fcd0f116abc253609ef64ef930032821842", size = 116099, upload-time = "2026-03-06T02:54:56.74Z" },
{ url = "https://files.pythonhosted.org/packages/5c/4e/98a6eb417ef551dc277bec1253d5246b25003cf36fdf3913b65cb7657a56/wrapt-2.1.2-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f3b7d73012ea75aee5844de58c88f44cf62d0d62711e39da5a82824a7c4626a8", size = 112457, upload-time = "2026-03-06T02:53:52.842Z" },
{ url = "https://files.pythonhosted.org/packages/cb/a6/a6f7186a5297cad8ec53fd7578533b28f795fdf5372368c74bd7e6e9841c/wrapt-2.1.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:577dff354e7acd9d411eaf4bfe76b724c89c89c8fc9b7e127ee28c5f7bcb25b6", size = 115351, upload-time = "2026-03-06T02:53:32.684Z" },
{ url = "https://files.pythonhosted.org/packages/97/6f/06e66189e721dbebd5cf20e138acc4d1150288ce118462f2fcbff92d38db/wrapt-2.1.2-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:3d7b6fd105f8b24e5bd23ccf41cb1d1099796524bcc6f7fbb8fe576c44befbc9", size = 111748, upload-time = "2026-03-06T02:53:08.455Z" },
{ url = "https://files.pythonhosted.org/packages/ef/43/4808b86f499a51370fbdbdfa6cb91e9b9169e762716456471b619fca7a70/wrapt-2.1.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:866abdbf4612e0b34764922ef8b1c5668867610a718d3053d59e24a5e5fcfc15", size = 113783, upload-time = "2026-03-06T02:53:02.02Z" },
{ url = "https://files.pythonhosted.org/packages/91/2c/a3f28b8fa7ac2cefa01cfcaca3471f9b0460608d012b693998cd61ef43df/wrapt-2.1.2-cp311-cp311-win32.whl", hash = "sha256:5a0a0a3a882393095573344075189eb2d566e0fd205a2b6414e9997b1b800a8b", size = 57977, upload-time = "2026-03-06T02:53:27.844Z" },
{ url = "https://files.pythonhosted.org/packages/3f/c3/2b1c7bd07a27b1db885a2fab469b707bdd35bddf30a113b4917a7e2139d2/wrapt-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:64a07a71d2730ba56f11d1a4b91f7817dc79bc134c11516b75d1921a7c6fcda1", size = 60336, upload-time = "2026-03-06T02:54:28.104Z" },
{ url = "https://files.pythonhosted.org/packages/ec/5c/76ece7b401b088daa6503d6264dd80f9a727df3e6042802de9a223084ea2/wrapt-2.1.2-cp311-cp311-win_arm64.whl", hash = "sha256:b89f095fe98bc12107f82a9f7d570dc83a0870291aeb6b1d7a7d35575f55d98a", size = 58756, upload-time = "2026-03-06T02:53:16.319Z" },
{ url = "https://files.pythonhosted.org/packages/4c/b6/1db817582c49c7fcbb7df6809d0f515af29d7c2fbf57eb44c36e98fb1492/wrapt-2.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ff2aad9c4cda28a8f0653fc2d487596458c2a3f475e56ba02909e950a9efa6a9", size = 61255, upload-time = "2026-03-06T02:52:45.663Z" },
{ url = "https://files.pythonhosted.org/packages/a2/16/9b02a6b99c09227c93cd4b73acc3678114154ec38da53043c0ddc1fba0dc/wrapt-2.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6433ea84e1cfacf32021d2a4ee909554ade7fd392caa6f7c13f1f4bf7b8e8748", size = 61848, upload-time = "2026-03-06T02:53:48.728Z" },
{ url = "https://files.pythonhosted.org/packages/af/aa/ead46a88f9ec3a432a4832dfedb84092fc35af2d0ba40cd04aea3889f247/wrapt-2.1.2-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:c20b757c268d30d6215916a5fa8461048d023865d888e437fab451139cad6c8e", size = 121433, upload-time = "2026-03-06T02:54:40.328Z" },
{ url = "https://files.pythonhosted.org/packages/3a/9f/742c7c7cdf58b59085a1ee4b6c37b013f66ac33673a7ef4aaed5e992bc33/wrapt-2.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:79847b83eb38e70d93dc392c7c5b587efe65b3e7afcc167aa8abd5d60e8761c8", size = 123013, upload-time = "2026-03-06T02:53:26.58Z" },
{ url = "https://files.pythonhosted.org/packages/e8/44/2c3dd45d53236b7ed7c646fcf212251dc19e48e599debd3926b52310fafb/wrapt-2.1.2-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f8fba1bae256186a83d1875b2b1f4e2d1242e8fac0f58ec0d7e41b26967b965c", size = 117326, upload-time = "2026-03-06T02:53:11.547Z" },
{ url = "https://files.pythonhosted.org/packages/74/e2/b17d66abc26bd96f89dec0ecd0ef03da4a1286e6ff793839ec431b9fae57/wrapt-2.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e3d3b35eedcf5f7d022291ecd7533321c4775f7b9cd0050a31a68499ba45757c", size = 121444, upload-time = "2026-03-06T02:54:09.5Z" },
{ url = "https://files.pythonhosted.org/packages/3c/62/e2977843fdf9f03daf1586a0ff49060b1b2fc7ff85a7ea82b6217c1ae36e/wrapt-2.1.2-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:6f2c5390460de57fa9582bc8a1b7a6c86e1a41dfad74c5225fc07044c15cc8d1", size = 116237, upload-time = "2026-03-06T02:54:03.884Z" },
{ url = "https://files.pythonhosted.org/packages/88/dd/27fc67914e68d740bce512f11734aec08696e6b17641fef8867c00c949fc/wrapt-2.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7dfa9f2cf65d027b951d05c662cc99ee3bd01f6e4691ed39848a7a5fffc902b2", size = 120563, upload-time = "2026-03-06T02:53:20.412Z" },
{ url = "https://files.pythonhosted.org/packages/ec/9f/b750b3692ed2ef4705cb305bd68858e73010492b80e43d2a4faa5573cbe7/wrapt-2.1.2-cp312-cp312-win32.whl", hash = "sha256:eba8155747eb2cae4a0b913d9ebd12a1db4d860fc4c829d7578c7b989bd3f2f0", size = 58198, upload-time = "2026-03-06T02:53:37.732Z" },
{ url = "https://files.pythonhosted.org/packages/8e/b2/feecfe29f28483d888d76a48f03c4c4d8afea944dbee2b0cd3380f9df032/wrapt-2.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:1c51c738d7d9faa0b3601708e7e2eda9bf779e1b601dce6c77411f2a1b324a63", size = 60441, upload-time = "2026-03-06T02:52:47.138Z" },
{ url = "https://files.pythonhosted.org/packages/44/e1/e328f605d6e208547ea9fd120804fcdec68536ac748987a68c47c606eea8/wrapt-2.1.2-cp312-cp312-win_arm64.whl", hash = "sha256:c8e46ae8e4032792eb2f677dbd0d557170a8e5524d22acc55199f43efedd39bf", size = 58836, upload-time = "2026-03-06T02:53:22.053Z" },
{ url = "https://files.pythonhosted.org/packages/4c/7a/d936840735c828b38d26a854e85d5338894cda544cb7a85a9d5b8b9c4df7/wrapt-2.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:787fd6f4d67befa6fe2abdffcbd3de2d82dfc6fb8a6d850407c53332709d030b", size = 61259, upload-time = "2026-03-06T02:53:41.922Z" },
{ url = "https://files.pythonhosted.org/packages/5e/88/9a9b9a90ac8ca11c2fdb6a286cb3a1fc7dd774c00ed70929a6434f6bc634/wrapt-2.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4bdf26e03e6d0da3f0e9422fd36bcebf7bc0eeb55fdf9c727a09abc6b9fe472e", size = 61851, upload-time = "2026-03-06T02:52:48.672Z" },
{ url = "https://files.pythonhosted.org/packages/03/a9/5b7d6a16fd6533fed2756900fc8fc923f678179aea62ada6d65c92718c00/wrapt-2.1.2-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:bbac24d879aa22998e87f6b3f481a5216311e7d53c7db87f189a7a0266dafffb", size = 121446, upload-time = "2026-03-06T02:54:14.013Z" },
{ url = "https://files.pythonhosted.org/packages/45/bb/34c443690c847835cfe9f892be78c533d4f32366ad2888972c094a897e39/wrapt-2.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:16997dfb9d67addc2e3f41b62a104341e80cac52f91110dece393923c0ebd5ca", size = 123056, upload-time = "2026-03-06T02:54:10.829Z" },
{ url = "https://files.pythonhosted.org/packages/93/b9/ff205f391cb708f67f41ea148545f2b53ff543a7ac293b30d178af4d2271/wrapt-2.1.2-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:162e4e2ba7542da9027821cb6e7c5e068d64f9a10b5f15512ea28e954893a267", size = 117359, upload-time = "2026-03-06T02:53:03.623Z" },
{ url = "https://files.pythonhosted.org/packages/1f/3d/1ea04d7747825119c3c9a5e0874a40b33594ada92e5649347c457d982805/wrapt-2.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f29c827a8d9936ac320746747a016c4bc66ef639f5cd0d32df24f5eacbf9c69f", size = 121479, upload-time = "2026-03-06T02:53:45.844Z" },
{ url = "https://files.pythonhosted.org/packages/78/cc/ee3a011920c7a023b25e8df26f306b2484a531ab84ca5c96260a73de76c0/wrapt-2.1.2-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:a9dd9813825f7ecb018c17fd147a01845eb330254dff86d3b5816f20f4d6aaf8", size = 116271, upload-time = "2026-03-06T02:54:46.356Z" },
{ url = "https://files.pythonhosted.org/packages/98/fd/e5ff7ded41b76d802cf1191288473e850d24ba2e39a6ec540f21ae3b57cb/wrapt-2.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6f8dbdd3719e534860d6a78526aafc220e0241f981367018c2875178cf83a413", size = 120573, upload-time = "2026-03-06T02:52:50.163Z" },
{ url = "https://files.pythonhosted.org/packages/47/c5/242cae3b5b080cd09bacef0591691ba1879739050cc7c801ff35c8886b66/wrapt-2.1.2-cp313-cp313-win32.whl", hash = "sha256:5c35b5d82b16a3bc6e0a04349b606a0582bc29f573786aebe98e0c159bc48db6", size = 58205, upload-time = "2026-03-06T02:53:47.494Z" },
{ url = "https://files.pythonhosted.org/packages/12/69/c358c61e7a50f290958809b3c61ebe8b3838ea3e070d7aac9814f95a0528/wrapt-2.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:f8bc1c264d8d1cf5b3560a87bbdd31131573eb25f9f9447bb6252b8d4c44a3a1", size = 60452, upload-time = "2026-03-06T02:53:30.038Z" },
{ url = "https://files.pythonhosted.org/packages/8e/66/c8a6fcfe321295fd8c0ab1bd685b5a01462a9b3aa2f597254462fc2bc975/wrapt-2.1.2-cp313-cp313-win_arm64.whl", hash = "sha256:3beb22f674550d5634642c645aba4c72a2c66fb185ae1aebe1e955fae5a13baf", size = 58842, upload-time = "2026-03-06T02:52:52.114Z" },
{ url = "https://files.pythonhosted.org/packages/da/55/9c7052c349106e0b3f17ae8db4b23a691a963c334de7f9dbd60f8f74a831/wrapt-2.1.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0fc04bc8664a8bc4c8e00b37b5355cffca2535209fba1abb09ae2b7c76ddf82b", size = 63075, upload-time = "2026-03-06T02:53:19.108Z" },
{ url = "https://files.pythonhosted.org/packages/09/a8/ce7b4006f7218248dd71b7b2b732d0710845a0e49213b18faef64811ffef/wrapt-2.1.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a9b9d50c9af998875a1482a038eb05755dfd6fe303a313f6a940bb53a83c3f18", size = 63719, upload-time = "2026-03-06T02:54:33.452Z" },
{ url = "https://files.pythonhosted.org/packages/e4/e5/2ca472e80b9e2b7a17f106bb8f9df1db11e62101652ce210f66935c6af67/wrapt-2.1.2-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2d3ff4f0024dd224290c0eabf0240f1bfc1f26363431505fb1b0283d3b08f11d", size = 152643, upload-time = "2026-03-06T02:52:42.721Z" },
{ url = "https://files.pythonhosted.org/packages/36/42/30f0f2cefca9d9cbf6835f544d825064570203c3e70aa873d8ae12e23791/wrapt-2.1.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3278c471f4468ad544a691b31bb856374fbdefb7fee1a152153e64019379f015", size = 158805, upload-time = "2026-03-06T02:54:25.441Z" },
{ url = "https://files.pythonhosted.org/packages/bb/67/d08672f801f604889dcf58f1a0b424fe3808860ede9e03affc1876b295af/wrapt-2.1.2-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:a8914c754d3134a3032601c6984db1c576e6abaf3fc68094bb8ab1379d75ff92", size = 145990, upload-time = "2026-03-06T02:53:57.456Z" },
{ url = "https://files.pythonhosted.org/packages/68/a7/fd371b02e73babec1de6ade596e8cd9691051058cfdadbfd62a5898f3295/wrapt-2.1.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:ff95d4264e55839be37bafe1536db2ab2de19da6b65f9244f01f332b5286cfbf", size = 155670, upload-time = "2026-03-06T02:54:55.309Z" },
{ url = "https://files.pythonhosted.org/packages/86/2d/9fe0095dfdb621009f40117dcebf41d7396c2c22dca6eac779f4c007b86c/wrapt-2.1.2-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:76405518ca4e1b76fbb1b9f686cff93aebae03920cc55ceeec48ff9f719c5f67", size = 144357, upload-time = "2026-03-06T02:54:24.092Z" },
{ url = "https://files.pythonhosted.org/packages/0e/b6/ec7b4a254abbe4cde9fa15c5d2cca4518f6b07d0f1b77d4ee9655e30280e/wrapt-2.1.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c0be8b5a74c5824e9359b53e7e58bef71a729bacc82e16587db1c4ebc91f7c5a", size = 150269, upload-time = "2026-03-06T02:53:31.268Z" },
{ url = "https://files.pythonhosted.org/packages/6e/6b/2fabe8ebf148f4ee3c782aae86a795cc68ffe7d432ef550f234025ce0cfa/wrapt-2.1.2-cp313-cp313t-win32.whl", hash = "sha256:f01277d9a5fc1862f26f7626da9cf443bebc0abd2f303f41c5e995b15887dabd", size = 59894, upload-time = "2026-03-06T02:54:15.391Z" },
{ url = "https://files.pythonhosted.org/packages/ca/fb/9ba66fc2dedc936de5f8073c0217b5d4484e966d87723415cc8262c5d9c2/wrapt-2.1.2-cp313-cp313t-win_amd64.whl", hash = "sha256:84ce8f1c2104d2f6daa912b1b5b039f331febfeee74f8042ad4e04992bd95c8f", size = 63197, upload-time = "2026-03-06T02:54:41.943Z" },
{ url = "https://files.pythonhosted.org/packages/c0/1c/012d7423c95d0e337117723eb8ecf73c622ce15a97847e84cf3f8f26cd7e/wrapt-2.1.2-cp313-cp313t-win_arm64.whl", hash = "sha256:a93cd767e37faeddbe07d8fc4212d5cba660af59bdb0f6372c93faaa13e6e679", size = 60363, upload-time = "2026-03-06T02:54:48.093Z" },
{ url = "https://files.pythonhosted.org/packages/39/25/e7ea0b417db02bb796182a5316398a75792cd9a22528783d868755e1f669/wrapt-2.1.2-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:1370e516598854e5b4366e09ce81e08bfe94d42b0fd569b88ec46cc56d9164a9", size = 61418, upload-time = "2026-03-06T02:53:55.706Z" },
{ url = "https://files.pythonhosted.org/packages/ec/0f/fa539e2f6a770249907757eaeb9a5ff4deb41c026f8466c1c6d799088a9b/wrapt-2.1.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:6de1a3851c27e0bd6a04ca993ea6f80fc53e6c742ee1601f486c08e9f9b900a9", size = 61914, upload-time = "2026-03-06T02:52:53.37Z" },
{ url = "https://files.pythonhosted.org/packages/53/37/02af1867f5b1441aaeda9c82deed061b7cd1372572ddcd717f6df90b5e93/wrapt-2.1.2-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:de9f1a2bbc5ac7f6012ec24525bdd444765a2ff64b5985ac6e0692144838542e", size = 120417, upload-time = "2026-03-06T02:54:30.74Z" },
{ url = "https://files.pythonhosted.org/packages/c3/b7/0138a6238c8ba7476c77cf786a807f871672b37f37a422970342308276e7/wrapt-2.1.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:970d57ed83fa040d8b20c52fe74a6ae7e3775ae8cff5efd6a81e06b19078484c", size = 122797, upload-time = "2026-03-06T02:54:51.539Z" },
{ url = "https://files.pythonhosted.org/packages/e1/ad/819ae558036d6a15b7ed290d5b14e209ca795dd4da9c58e50c067d5927b0/wrapt-2.1.2-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:3969c56e4563c375861c8df14fa55146e81ac11c8db49ea6fb7f2ba58bc1ff9a", size = 117350, upload-time = "2026-03-06T02:54:37.651Z" },
{ url = "https://files.pythonhosted.org/packages/8b/2d/afc18dc57a4600a6e594f77a9ae09db54f55ba455440a54886694a84c71b/wrapt-2.1.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:57d7c0c980abdc5f1d98b11a2aa3bb159790add80258c717fa49a99921456d90", size = 121223, upload-time = "2026-03-06T02:54:35.221Z" },
{ url = "https://files.pythonhosted.org/packages/b9/5b/5ec189b22205697bc56eb3b62aed87a1e0423e9c8285d0781c7a83170d15/wrapt-2.1.2-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:776867878e83130c7a04237010463372e877c1c994d449ca6aaafeab6aab2586", size = 116287, upload-time = "2026-03-06T02:54:19.654Z" },
{ url = "https://files.pythonhosted.org/packages/f7/2d/f84939a7c9b5e6cdd8a8d0f6a26cabf36a0f7e468b967720e8b0cd2bdf69/wrapt-2.1.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:fab036efe5464ec3291411fabb80a7a39e2dd80bae9bcbeeca5087fdfa891e19", size = 119593, upload-time = "2026-03-06T02:54:16.697Z" },
{ url = "https://files.pythonhosted.org/packages/0b/fe/ccd22a1263159c4ac811ab9374c061bcb4a702773f6e06e38de5f81a1bdc/wrapt-2.1.2-cp314-cp314-win32.whl", hash = "sha256:e6ed62c82ddf58d001096ae84ce7f833db97ae2263bff31c9b336ba8cfe3f508", size = 58631, upload-time = "2026-03-06T02:53:06.498Z" },
{ url = "https://files.pythonhosted.org/packages/65/0a/6bd83be7bff2e7efaac7b4ac9748da9d75a34634bbbbc8ad077d527146df/wrapt-2.1.2-cp314-cp314-win_amd64.whl", hash = "sha256:467e7c76315390331c67073073d00662015bb730c566820c9ca9b54e4d67fd04", size = 60875, upload-time = "2026-03-06T02:53:50.252Z" },
{ url = "https://files.pythonhosted.org/packages/6c/c0/0b3056397fe02ff80e5a5d72d627c11eb885d1ca78e71b1a5c1e8c7d45de/wrapt-2.1.2-cp314-cp314-win_arm64.whl", hash = "sha256:da1f00a557c66225d53b095a97eace0fc5349e3bfda28fa34ffae238978ee575", size = 59164, upload-time = "2026-03-06T02:53:59.128Z" },
{ url = "https://files.pythonhosted.org/packages/71/ed/5d89c798741993b2371396eb9d4634f009ff1ad8a6c78d366fe2883ea7a6/wrapt-2.1.2-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:62503ffbc2d3a69891cf29beeaccdb4d5e0a126e2b6a851688d4777e01428dbb", size = 63163, upload-time = "2026-03-06T02:52:54.873Z" },
{ url = "https://files.pythonhosted.org/packages/c6/8c/05d277d182bf36b0a13d6bd393ed1dec3468a25b59d01fba2dd70fe4d6ae/wrapt-2.1.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c7e6cd120ef837d5b6f860a6ea3745f8763805c418bb2f12eeb1fa6e25f22d22", size = 63723, upload-time = "2026-03-06T02:52:56.374Z" },
{ url = "https://files.pythonhosted.org/packages/f4/27/6c51ec1eff4413c57e72d6106bb8dec6f0c7cdba6503d78f0fa98767bcc9/wrapt-2.1.2-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:3769a77df8e756d65fbc050333f423c01ae012b4f6731aaf70cf2bef61b34596", size = 152652, upload-time = "2026-03-06T02:53:23.79Z" },
{ url = "https://files.pythonhosted.org/packages/db/4c/d7dd662d6963fc7335bfe29d512b02b71cdfa23eeca7ab3ac74a67505deb/wrapt-2.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a76d61a2e851996150ba0f80582dd92a870643fa481f3b3846f229de88caf044", size = 158807, upload-time = "2026-03-06T02:53:35.742Z" },
{ url = "https://files.pythonhosted.org/packages/b4/4d/1e5eea1a78d539d346765727422976676615814029522c76b87a95f6bcdd/wrapt-2.1.2-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6f97edc9842cf215312b75fe737ee7c8adda75a89979f8e11558dfff6343cc4b", size = 146061, upload-time = "2026-03-06T02:52:57.574Z" },
{ url = "https://files.pythonhosted.org/packages/89/bc/62cabea7695cd12a288023251eeefdcb8465056ddaab6227cb78a2de005b/wrapt-2.1.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:4006c351de6d5007aa33a551f600404ba44228a89e833d2fadc5caa5de8edfbf", size = 155667, upload-time = "2026-03-06T02:53:39.422Z" },
{ url = "https://files.pythonhosted.org/packages/e9/99/6f2888cd68588f24df3a76572c69c2de28287acb9e1972bf0c83ce97dbc1/wrapt-2.1.2-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:a9372fc3639a878c8e7d87e1556fa209091b0a66e912c611e3f833e2c4202be2", size = 144392, upload-time = "2026-03-06T02:54:22.41Z" },
{ url = "https://files.pythonhosted.org/packages/40/51/1dfc783a6c57971614c48e361a82ca3b6da9055879952587bc99fe1a7171/wrapt-2.1.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:3144b027ff30cbd2fca07c0a87e67011adb717eb5f5bd8496325c17e454257a3", size = 150296, upload-time = "2026-03-06T02:54:07.848Z" },
{ url = "https://files.pythonhosted.org/packages/6c/38/cbb8b933a0201076c1f64fc42883b0023002bdc14a4964219154e6ff3350/wrapt-2.1.2-cp314-cp314t-win32.whl", hash = "sha256:3b8d15e52e195813efe5db8cec156eebe339aaf84222f4f4f051a6c01f237ed7", size = 60539, upload-time = "2026-03-06T02:54:00.594Z" },
{ url = "https://files.pythonhosted.org/packages/82/dd/e5176e4b241c9f528402cebb238a36785a628179d7d8b71091154b3e4c9e/wrapt-2.1.2-cp314-cp314t-win_amd64.whl", hash = "sha256:08ffa54146a7559f5b8df4b289b46d963a8e74ed16ba3687f99896101a3990c5", size = 63969, upload-time = "2026-03-06T02:54:39Z" },
{ url = "https://files.pythonhosted.org/packages/5c/99/79f17046cf67e4a95b9987ea129632ba8bcec0bc81f3fb3d19bdb0bd60cd/wrapt-2.1.2-cp314-cp314t-win_arm64.whl", hash = "sha256:72aaa9d0d8e4ed0e2e98019cea47a21f823c9dd4b43c7b77bba6679ffcca6a00", size = 60554, upload-time = "2026-03-06T02:53:14.132Z" },
{ url = "https://files.pythonhosted.org/packages/1a/c7/8528ac2dfa2c1e6708f647df7ae144ead13f0a31146f43c7264b4942bf12/wrapt-2.1.2-py3-none-any.whl", hash = "sha256:b8fd6fa2b2c4e7621808f8c62e8317f4aae56e59721ad933bac5239d913cf0e8", size = 43993, upload-time = "2026-03-06T02:53:12.905Z" },
]
[[package]]
name = "xrpl-py"
version = "4.5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "base58" },
{ name = "deprecated" },
{ name = "ecpy" },
{ name = "httpx" },
{ name = "pycryptodome" },
{ name = "types-deprecated" },
{ name = "typing-extensions" },
{ name = "websockets" },
]
sdist = { url = "https://files.pythonhosted.org/packages/51/e7/8faf71e5b9b314d329a13cdc7966ad565a6e52a875adeaee0f778b1b8ef1/xrpl_py-4.5.0.tar.gz", hash = "sha256:3ee25fcb748bdf6afe18aad8f74ba71ffa23bf681409fda3a9eb029e4381fc74", size = 175681, upload-time = "2026-02-12T23:41:52.176Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1d/3c/65c853f906f5003c06c5e0cbc50e47bac2ecca17957a9d3a3d823efd8c38/xrpl_py-4.5.0-py3-none-any.whl", hash = "sha256:aa4720b5bf8070d8303346111f1095ec9afe13abdf49b2dc4b988d28ebc227ca", size = 314897, upload-time = "2026-02-12T23:41:50.609Z" },
]

View File

@@ -31,6 +31,7 @@
#include <cassert>
#include <cstring>
#include <ctime>
#include <exception>
#include <fstream>
#include <functional>
#include <iostream>
@@ -351,9 +352,18 @@ Logs::format(
if (useLocalTime)
{
auto now = std::chrono::system_clock::now();
auto local = date::make_zoned(date::current_zone(), now);
output = date::format(fmt, local);
try
{
auto now = std::chrono::system_clock::now();
auto local = date::make_zoned(date::current_zone(), now);
output = date::format(fmt, local);
}
catch (std::exception const&)
{
// Enhanced logging should not make startup fatal if tzdb lookup is
// unavailable or misconfigured. Fall back to UTC formatting.
output = date::format(fmt, std::chrono::system_clock::now());
}
}
else
{
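The intent of the hunk above, as a rough standalone sketch (assuming only the Howard Hinnant date library that provides current_zone/make_zoned; names here are illustrative, not the rippled code):

#include <date/tz.h>
#include <chrono>
#include <exception>
#include <string>

// Prefer local-zone formatting, but never let a missing or misconfigured
// tz database turn log timestamp formatting into a fatal error.
std::string
formatTimestamp(std::string const& fmt, bool useLocalTime)
{
    auto const now = std::chrono::system_clock::now();
    if (useLocalTime)
    {
        try
        {
            auto const local = date::make_zoned(date::current_zone(), now);
            return date::format(fmt, local);
        }
        catch (std::exception const&)
        {
            // current_zone()/make_zoned throw when tzdb lookup fails;
            // fall through to UTC below instead of aborting.
        }
    }
    // A plain sys_time formats as UTC.
    return date::format(fmt, now);
}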

View File

@@ -72,6 +72,7 @@ enum class LedgerNameSpace : std::uint16_t {
HOOK_DEFINITION = 'D',
EMITTED_TXN = 'E',
EMITTED_DIR = 'F',
SHADOW_TICKET = 0x5374, // St
NFTOKEN_OFFER = 'q',
NFTOKEN_BUY_OFFERS = 'h',
NFTOKEN_SELL_OFFERS = 'i',
@@ -79,6 +80,7 @@ enum class LedgerNameSpace : std::uint16_t {
IMPORT_VLSEQ = 'I',
UNL_REPORT = 'R',
CRON = 'L',
CONSENSUS_ENTROPY = 'X',
AMM = 'A',
BRIDGE = 'H',
XCHAIN_CLAIM_ID = 'Q',
@@ -186,6 +188,15 @@ emittedTxn(uint256 const& id) noexcept
return {ltEMITTED_TXN, indexHash(LedgerNameSpace::EMITTED_TXN, id)};
}
Keylet
shadowTicket(AccountID const& account, std::uint32_t ticketSeq) noexcept
{
return {
ltSHADOW_TICKET,
indexHash(
LedgerNameSpace::SHADOW_TICKET, account, std::uint32_t(ticketSeq))};
}
Keylet
hook(AccountID const& id) noexcept
{
@@ -544,6 +555,14 @@ cron(uint32_t timestamp, std::optional<AccountID> const& id)
return {ltCRON, uint256::fromVoid(h)};
}
Keylet const&
consensusEntropy() noexcept
{
static Keylet const ret{
ltCONSENSUS_ENTROPY, indexHash(LedgerNameSpace::CONSENSUS_ENTROPY)};
return ret;
}
Keylet
amm(Asset const& issue1, Asset const& issue2) noexcept
{
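For orientation, a minimal sketch (omitting includes) of how the new singleton can be read: ReadView::read with a keylet is the standard rippled lookup, and the sfDigest field choice mirrors the ConsensusEntropy tests further down in this diff, so treat the specifics as illustrative rather than prescribed.

// Hedged sketch, not part of the diff: read the consensus-entropy
// singleton from any ReadView and return its per-close digest.
std::optional<uint256>
currentEntropyDigest(ReadView const& view)
{
    if (auto const sle = view.read(keylet::consensusEntropy()))
        return sle->getFieldH256(sfDigest);
    return std::nullopt;  // amendment off, or no ledger closed yet
}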

View File

@@ -78,6 +78,7 @@ InnerObjectFormats::InnerObjectFormats()
{sfHookExecutionIndex, soeREQUIRED},
{sfHookStateChangeCount, soeREQUIRED},
{sfHookEmitCount, soeREQUIRED},
{sfHookExportCount, soeOPTIONAL},
{sfFlags, soeOPTIONAL}});
add(sfHookEmission.jsonName,

View File

@@ -1,187 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
*/
//==============================================================================
#include <xrpl/protocol/JsonTx.h>
#include <xrpl/basics/StringUtilities.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/json/json_reader.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/STBase.h>
#include <xrpl/protocol/STBlob.h>
#include <xrpl/protocol/STObject.h>
#include <xrpl/protocol/STParsedJSON.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/digest.h>
#include <xrpl/protocol/jss.h>
#include <algorithm>
#include <initializer_list>
#include <string>
#include <vector>
namespace ripple {
namespace jsonTx {
namespace {
/** Canonical serialization of `obj` with the given fields removed.
STObject's own serialization already sorts by field code, but we
have to walk the fields ourselves to skip the json-tx wrapper
entries rather than mutate the object. */
Blob
canonicalSerialization(
STObject const& obj,
std::initializer_list<SField const*> skip)
{
std::vector<STBase const*> fields;
for (auto const& entry : obj)
{
if (entry.getSType() == STI_NOTPRESENT)
continue;
bool skipped = false;
for (SField const* s : skip)
if (entry.getFName() == *s)
{
skipped = true;
break;
}
if (!skipped)
fields.push_back(&entry);
}
std::sort(
fields.begin(), fields.end(), [](STBase const* a, STBase const* b) {
return a->getFName().fieldCode < b->getFName().fieldCode;
});
Serializer s;
for (STBase const* f : fields)
{
f->addFieldID(s);
f->add(s);
auto const sType = f->getSType();
if (sType == STI_ARRAY || sType == STI_OBJECT)
s.addFieldID(sType, 1);
}
return s.getData();
}
} // namespace
bool
hasBody(STObject const& obj) noexcept
{
try
{
return obj.isFieldPresent(sfJsonTxBody);
}
catch (...)
{
return false;
}
}
Slice
body(STObject const& obj)
{
if (!obj.isFieldPresent(sfJsonTxBody))
return Slice{};
// peekAtField gives us a view into the STObject's owned storage;
// STBlob::value() returns a Slice over that storage directly.
auto const& field = obj.peekAtField(sfJsonTxBody);
return static_cast<STBlob const&>(field).value();
}
uint256
bodyHash(STObject const& obj)
{
auto const s = body(obj);
if (s.empty())
return uint256{};
return sha512Half(s);
}
Expected<void, std::string>
checkSignature(STTx const& stx)
{
if (!hasBody(stx))
return Unexpected<std::string>("JsonTxBody field is missing.");
auto const bodySlice = body(stx);
if (bodySlice.empty())
return Unexpected<std::string>("JsonTxBody is empty.");
if (!stx.isFieldPresent(sfSigningPubKey))
return Unexpected<std::string>("SigningPubKey is missing.");
Blob const spk = stx.getFieldVL(sfSigningPubKey);
if (!publicKeyType(makeSlice(spk)))
return Unexpected<std::string>("SigningPubKey is not a valid key.");
if (!stx.isFieldPresent(sfTxnSignature))
return Unexpected<std::string>("TxnSignature is missing.");
Blob const sig = stx.getFieldVL(sfTxnSignature);
if (sig.empty())
return Unexpected<std::string>("TxnSignature is empty.");
if (!verify(PublicKey(makeSlice(spk)), bodySlice, makeSlice(sig)))
return Unexpected<std::string>(
"Signature over JsonTxBody failed verification.");
return {};
}
Expected<void, std::string>
checkStructuralEquivalence(STTx const& stx)
{
if (!hasBody(stx))
return Unexpected<std::string>("JsonTxBody field is missing.");
auto const bodySlice = body(stx);
if (bodySlice.empty())
return Unexpected<std::string>("JsonTxBody is empty.");
std::string const bodyStr(
reinterpret_cast<char const*>(bodySlice.data()), bodySlice.size());
Json::Value parsed;
Json::Reader reader;
if (!reader.parse(bodyStr, parsed) || !parsed.isObject())
return Unexpected<std::string>(
"JsonTxBody is not a valid JSON object.");
STParsedJSONObject parsedObj("JsonTxBody", parsed);
if (!parsedObj.object)
return Unexpected<std::string>(
"JsonTxBody does not parse into a valid STObject: " +
(parsedObj.error.isMember(jss::error_message)
? parsedObj.error[jss::error_message].asString()
: std::string("unknown parse error")));
// The json-tx wrapper fields (TxnSignature, JsonTxBody) are excluded
// from both sides: TxnSignature covers the body bytes (not the
// binary), and JsonTxBody is the body itself.
std::initializer_list<SField const*> const skip{
&sfTxnSignature, &sfJsonTxBody};
if (canonicalSerialization(stx, skip) !=
canonicalSerialization(*parsedObj.object, skip))
return Unexpected<std::string>(
"JsonTxBody content does not match the structural fields "
"of the transaction.");
return {};
}
} // namespace jsonTx
} // namespace ripple

View File

@@ -25,7 +25,6 @@
#include <xrpl/json/to_string.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/HashPrefix.h>
#include <xrpl/protocol/JsonTx.h>
#include <xrpl/protocol/Protocol.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/protocol/STAccount.h>
@@ -217,14 +216,6 @@ STTx::checkSign(
{
try
{
// json-tx: when sfJsonTxBody is present and the amendment is
// active, the signature covers the raw ASCII bytes of the body
// instead of the classical signing payload. Structural
// equivalence between the body and the other STTx fields is
// enforced separately in passesLocalChecks.
if (rules.enabled(featureJsonTx) && jsonTx::hasBody(*this))
return jsonTx::checkSignature(*this);
// Determine whether we're single- or multi-signing by looking
// at the SigningPubKey. If it's empty we must be
// multi-signing. Otherwise we're single-signing.
@@ -672,21 +663,6 @@ passesLocalChecks(STObject const& st, std::string& reason)
return false;
}
// json-tx: if the tx carries sfJsonTxBody, its parsed content must
// match the other structural fields. We can only run this when the
// object is actually an STTx -- passesLocalChecks is also called on
// nested STObjects that don't participate in the json-tx scheme.
if (auto const* stx = dynamic_cast<STTx const*>(&st);
stx && jsonTx::hasBody(*stx))
{
if (auto const result = jsonTx::checkStructuralEquivalence(*stx);
!result)
{
reason = result.error();
return false;
}
}
return true;
}
@@ -708,7 +684,8 @@ isPseudoTx(STObject const& tx)
auto tt = safe_cast<TxType>(*t);
return tt == ttAMENDMENT || tt == ttFEE || tt == ttUNL_MODIFY ||
tt == ttEMIT_FAILURE || tt == ttUNL_REPORT || tt == ttCRON;
tt == ttEMIT_FAILURE || tt == ttUNL_REPORT || tt == ttCRON ||
tt == ttCONSENSUS_ENTROPY;
}
} // namespace ripple

View File

@@ -124,6 +124,7 @@ transResults()
MAKE_ERROR(tecARRAY_TOO_LARGE, "Array is too large."),
MAKE_ERROR(tecLOCKED, "Fund is locked."),
MAKE_ERROR(tecBAD_CREDENTIALS, "Bad credentials."),
MAKE_ERROR(tecEXPORT_EXPIRED, "Export expired without reaching signature quorum."),
MAKE_ERROR(tefALREADY, "The exact transaction was already in this ledger."),
MAKE_ERROR(tefBAD_ADD_AUTH, "Not authorized to add account."),
@@ -171,6 +172,7 @@ transResults()
MAKE_ERROR(telNON_LOCAL_EMITTED_TXN, "Emitted transaction cannot be applied because it was not generated locally."),
MAKE_ERROR(telIMPORT_VL_KEY_NOT_RECOGNISED, "Import vl key was not recognized."),
MAKE_ERROR(telCAN_NOT_QUEUE_IMPORT, "Import transaction was not able to be directly applied and cannot be queued."),
MAKE_ERROR(telSHADOW_TICKET_REQUIRED, "The imported transaction uses a TicketSequence but no shadow ticket exists."),
MAKE_ERROR(telENV_RPC_FAILED, "Unit test RPC failure."),
MAKE_ERROR(temMALFORMED, "Malformed transaction."),
@@ -238,6 +240,7 @@ transResults()
MAKE_ERROR(terPRE_TICKET, "Ticket is not yet in ledger."),
MAKE_ERROR(terNO_HOOK, "No hook with that hash exists on the ledger."),
MAKE_ERROR(terNO_AMM, "AMM doesn't exist for the asset pair."),
MAKE_ERROR(terRETRY_EXPORT, "Export awaiting validator signatures."),
MAKE_ERROR(tesSUCCESS, "The transaction was applied. Only final in a validated ledger."),
MAKE_ERROR(tesPARTIAL, "The transaction was applied but should be submitted again until returning tesSUCCESS."),

View File

@@ -43,8 +43,7 @@ TxFormats::TxFormats()
{sfSigningPubKey, soeREQUIRED},
{sfTicketSequence, soeOPTIONAL},
{sfTxnSignature, soeOPTIONAL},
{sfJsonTxBody, soeOPTIONAL}, // json-tx: ASCII bytes that were signed
{sfSigners, soeOPTIONAL}, // submit_multisigned
{sfSigners, soeOPTIONAL}, // submit_multisigned
{sfEmitDetails, soeOPTIONAL},
{sfFirstLedgerSequence, soeOPTIONAL},
{sfNetworkID, soeOPTIONAL},

View File

@@ -49,6 +49,11 @@ TxMeta::TxMeta(
if (obj.isFieldPresent(sfHookEmissions))
setHookEmissions(obj.getFieldArray(sfHookEmissions));
if (obj.isFieldPresent(sfExportResult))
setExportResult(const_cast<STObject&>(obj)
.getField(sfExportResult)
.downcast<STObject>());
}
TxMeta::TxMeta(uint256 const& txid, std::uint32_t ledger, STObject const& obj)
@@ -75,6 +80,11 @@ TxMeta::TxMeta(uint256 const& txid, std::uint32_t ledger, STObject const& obj)
if (obj.isFieldPresent(sfHookEmissions))
setHookEmissions(obj.getFieldArray(sfHookEmissions));
if (obj.isFieldPresent(sfExportResult))
setExportResult(const_cast<STObject&>(obj)
.getField(sfExportResult)
.downcast<STObject>());
}
TxMeta::TxMeta(uint256 const& txid, std::uint32_t ledger, Blob const& vec)
@@ -245,6 +255,14 @@ TxMeta::getAsObject() const
if (hasHookEmissions())
metaData.setFieldArray(sfHookEmissions, getHookEmissions());
if (hasExportResult())
{
Serializer s;
mExportResult->add(s);
SerialIter sit(s.slice());
metaData.emplace_back(STObject(sit, sfExportResult));
}
return metaData;
}
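A note on the hunk above: serializing mExportResult and re-reading it as STObject(sit, sfExportResult) effectively deep-copies the stored result and tags the copy with the sfExportResult field name before it is emplaced into the metadata object, which appears to be why the round trip exists at all.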

View File

@@ -0,0 +1,437 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2026 XRPL Labs
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <test/app/ConsensusEntropy_test_hooks.h>
#include <test/jtx.h>
#include <test/jtx/hook.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/hook/Enum.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/TxFlags.h>
#include <xrpl/protocol/jss.h>
namespace ripple {
namespace test {
using TestHook = std::vector<uint8_t> const&;
#define BEAST_REQUIRE(x) \
{ \
BEAST_EXPECT(!!(x)); \
if (!(x)) \
return; \
}
#define HSFEE fee(100'000'000)
#define M(m) memo(m, "", "")
class ConsensusEntropy_test : public beast::unit_test::suite
{
static void
overrideFlag(Json::Value& jv)
{
jv[jss::Flags] = hsfOVERRIDE;
}
void
testSLECreated()
{
testcase("SLE created on ledger close");
using namespace jtx;
Env env{
*this,
envconfig(),
supported_amendments() | featureConsensusEntropy,
nullptr};
BEAST_EXPECT(!env.le(keylet::consensusEntropy()));
env.close();
auto const sle = env.le(keylet::consensusEntropy());
BEAST_REQUIRE(sle);
auto const digest = sle->getFieldH256(sfDigest);
BEAST_EXPECT(digest != uint256{});
auto const count = sle->getFieldU16(sfEntropyCount);
BEAST_EXPECT(count >= 5);
auto const sleSeq = sle->getFieldU32(sfLedgerSequence);
BEAST_EXPECT(sleSeq == env.closed()->seq());
}
void
testSLEUpdatedOnSubsequentClose()
{
testcase("SLE updated on subsequent ledger close");
using namespace jtx;
Env env{
*this,
envconfig(),
supported_amendments() | featureConsensusEntropy,
nullptr};
env.close();
auto const sle1 = env.le(keylet::consensusEntropy());
BEAST_REQUIRE(sle1);
auto const digest1 = sle1->getFieldH256(sfDigest);
auto const seq1 = sle1->getFieldU32(sfLedgerSequence);
env.close();
auto const sle2 = env.le(keylet::consensusEntropy());
BEAST_REQUIRE(sle2);
auto const digest2 = sle2->getFieldH256(sfDigest);
auto const seq2 = sle2->getFieldU32(sfLedgerSequence);
BEAST_EXPECT(digest2 != digest1);
BEAST_EXPECT(seq2 == seq1 + 1);
}
void
testNoSLEWithoutAmendment()
{
testcase("No SLE without amendment");
using namespace jtx;
Env env{*this};
env.close();
env.close();
BEAST_EXPECT(!env.le(keylet::consensusEntropy()));
}
void
testDice()
{
testcase("Hook dice() API");
using namespace jtx;
Env env{
*this,
envconfig(),
supported_amendments() | featureConsensusEntropy,
nullptr};
auto const alice = Account{"alice"};
env.fund(XRP(10000), alice);
env.close();
// Entropy SLE must exist before hook can use dice()
BEAST_REQUIRE(env.le(keylet::consensusEntropy()));
// Set the hook
TestHook hook = consensusentropy_test_wasm[R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t dice(uint32_t sides);
#define GUARD(maxiter) _g((1ULL << 31U) + __LINE__, (maxiter)+1)
int64_t hook(uint32_t r)
{
_g(1,1);
// dice(6) should return 0..5
int64_t result = dice(6);
// negative means error
if (result < 0)
rollback(0, 0, result);
if (result >= 6)
rollback(0, 0, -1);
// return the dice result as the accept code
return accept(0, 0, result);
}
)[test.hook]"];
env(ripple::test::jtx::hook(alice, {{hso(hook, overrideFlag)}}, 0),
M("set dice hook"),
HSFEE);
env.close();
// Invoke the hook
Json::Value invoke;
invoke[jss::TransactionType] = "Invoke";
invoke[jss::Account] = alice.human();
env(invoke, M("test dice"), fee(XRP(1)));
auto meta = env.meta();
BEAST_REQUIRE(meta);
BEAST_REQUIRE(meta->isFieldPresent(sfHookExecutions));
auto const hookExecutions = meta->getFieldArray(sfHookExecutions);
BEAST_REQUIRE(hookExecutions.size() == 1);
auto const returnCode = hookExecutions[0].getFieldU64(sfHookReturnCode);
std::cerr << " dice(6) returnCode = " << returnCode << " (hex 0x"
<< std::hex << returnCode << std::dec << ")\n";
// dice(6) returns 0..5
BEAST_EXPECT(returnCode <= 5);
// Result should be 3 (accept)
BEAST_EXPECT(hookExecutions[0].getFieldU8(sfHookResult) == 3);
}
void
testRandom()
{
testcase("Hook random() API");
using namespace jtx;
Env env{
*this,
envconfig(),
supported_amendments() | featureConsensusEntropy,
nullptr};
auto const alice = Account{"alice"};
env.fund(XRP(10000), alice);
env.close();
BEAST_REQUIRE(env.le(keylet::consensusEntropy()));
// Hook calls random() to fill a 32-byte buffer, then checks
// the buffer is not all zeroes.
TestHook hook = consensusentropy_test_wasm[R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t random(uint32_t write_ptr, uint32_t write_len);
#define GUARD(maxiter) _g((1ULL << 31U) + __LINE__, (maxiter)+1)
int64_t hook(uint32_t r)
{
_g(1,1);
uint8_t buf[32];
for (int i = 0; GUARD(32), i < 32; ++i)
buf[i] = 0;
int64_t result = random((uint32_t)buf, 32);
// Should return 32 (bytes written)
if (result != 32)
rollback(0, 0, result);
// Verify buffer is not all zeroes
int nonzero = 0;
for (int i = 0; GUARD(32), i < 32; ++i)
if (buf[i] != 0) nonzero = 1;
if (!nonzero)
rollback(0, 0, -2);
return accept(0, 0, 0);
}
)[test.hook]"];
env(ripple::test::jtx::hook(alice, {{hso(hook, overrideFlag)}}, 0),
M("set random hook"),
HSFEE);
env.close();
Json::Value invoke;
invoke[jss::TransactionType] = "Invoke";
invoke[jss::Account] = alice.human();
env(invoke, M("test random"), fee(XRP(1)));
auto meta = env.meta();
BEAST_REQUIRE(meta);
BEAST_REQUIRE(meta->isFieldPresent(sfHookExecutions));
auto const hookExecutions = meta->getFieldArray(sfHookExecutions);
BEAST_REQUIRE(hookExecutions.size() == 1);
// Return code 0 = all checks passed in the hook
BEAST_EXPECT(hookExecutions[0].getFieldU64(sfHookReturnCode) == 0);
BEAST_EXPECT(hookExecutions[0].getFieldU8(sfHookResult) == 3);
}
void
testDiceConsecutiveCallsDiffer()
{
testcase("Hook dice() consecutive calls return different values");
using namespace jtx;
Env env{
*this,
envconfig(),
supported_amendments() | featureConsensusEntropy,
nullptr};
auto const alice = Account{"alice"};
env.fund(XRP(10000), alice);
env.close();
BEAST_REQUIRE(env.le(keylet::consensusEntropy()));
// dice(1000000) twice — large range makes collision near-impossible
// encode r1 in low 20 bits, r2 in high bits
TestHook hook = consensusentropy_test_wasm[R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t dice(uint32_t sides);
int64_t hook(uint32_t r)
{
_g(1,1);
int64_t r1 = dice(1000000);
if (r1 < 0)
rollback(0, 0, r1);
int64_t r2 = dice(1000000);
if (r2 < 0)
rollback(0, 0, r2);
// consecutive calls should differ (rngCallCounter)
if (r1 == r2)
rollback(0, 0, -1);
return accept(0, 0, r1 | (r2 << 20));
}
)[test.hook]"];
env(ripple::test::jtx::hook(alice, {{hso(hook, overrideFlag)}}, 0),
M("set dice hook"),
HSFEE);
env.close();
Json::Value invoke;
invoke[jss::TransactionType] = "Invoke";
invoke[jss::Account] = alice.human();
env(invoke, M("test dice consecutive"), fee(XRP(1)));
auto meta = env.meta();
BEAST_REQUIRE(meta);
BEAST_REQUIRE(meta->isFieldPresent(sfHookExecutions));
auto const hookExecutions = meta->getFieldArray(sfHookExecutions);
BEAST_REQUIRE(hookExecutions.size() == 1);
auto const rc = hookExecutions[0].getFieldU64(sfHookReturnCode);
auto const r1 = rc & 0xFFFFF;
auto const r2 = (rc >> 20) & 0xFFFFF;
std::cerr << " two-call dice(1000000): returnCode=" << rc << " hex=0x"
<< std::hex << rc << std::dec << " r1=" << r1 << " r2=" << r2
<< "\n";
// hookResult 3 = accept (would be 1 if r1==r2 triggered rollback)
BEAST_EXPECT(hookExecutions[0].getFieldU8(sfHookResult) == 3);
BEAST_EXPECT(r1 < 1000000);
BEAST_EXPECT(r2 < 1000000);
BEAST_EXPECT(r1 != r2);
}
void
testDiceZeroSides()
{
testcase("Hook dice(0) returns INVALID_ARGUMENT");
using namespace jtx;
Env env{
*this,
envconfig(),
supported_amendments() | featureConsensusEntropy,
nullptr};
auto const alice = Account{"alice"};
env.fund(XRP(10000), alice);
env.close();
BEAST_REQUIRE(env.le(keylet::consensusEntropy()));
// Hook calls dice(0) and returns whatever dice returns.
// dice(0) should return INVALID_ARGUMENT (-7).
TestHook hook = consensusentropy_test_wasm[R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t dice(uint32_t sides);
int64_t hook(uint32_t r)
{
_g(1,1);
int64_t result = dice(0);
// dice(0) should return negative error code, pass it through
return accept(0, 0, result);
}
)[test.hook]"];
env(ripple::test::jtx::hook(alice, {{hso(hook, overrideFlag)}}, 0),
M("set dice0 hook"),
HSFEE);
env.close();
Json::Value invoke;
invoke[jss::TransactionType] = "Invoke";
invoke[jss::Account] = alice.human();
env(invoke, M("test dice(0)"), fee(XRP(1)));
auto meta = env.meta();
BEAST_REQUIRE(meta);
BEAST_REQUIRE(meta->isFieldPresent(sfHookExecutions));
auto const hookExecutions = meta->getFieldArray(sfHookExecutions);
BEAST_REQUIRE(hookExecutions.size() == 1);
// INVALID_ARGUMENT = -7, encoded as 0x8000000000000000 + abs(code)
// (see applyHook.cpp unsigned_exit_code encoding)
auto const rawCode = hookExecutions[0].getFieldU64(sfHookReturnCode);
int64_t returnCode = (rawCode & 0x8000000000000000ULL)
? -static_cast<int64_t>(rawCode & 0x7FFFFFFFFFFFFFFFULL)
: static_cast<int64_t>(rawCode);
std::cerr << " dice(0) returnCode = " << returnCode << " (raw 0x"
<< std::hex << rawCode << std::dec << ")\n";
BEAST_EXPECT(returnCode == -7);
BEAST_EXPECT(hookExecutions[0].getFieldU8(sfHookResult) == 3);
}
void
run() override
{
testSLECreated();
testSLEUpdatedOnSubsequentClose();
testNoSLEWithoutAmendment();
testDice();
testDiceZeroSides();
testRandom();
testDiceConsecutiveCallsDiffer();
}
};
BEAST_DEFINE_TESTSUITE(ConsensusEntropy, app, ripple);
} // namespace test
} // namespace ripple

View File

@@ -0,0 +1,235 @@
// This file is generated by build_test_hooks.py
#ifndef CONSENSUSENTROPY_TEST_WASM_INCLUDED
#define CONSENSUSENTROPY_TEST_WASM_INCLUDED
#include <map>
#include <stdint.h>
#include <string>
#include <vector>
namespace ripple {
namespace test {
std::map<std::string, std::vector<uint8_t>> consensusentropy_test_wasm = {
/* ==== WASM: 0 ==== */
{R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t dice(uint32_t sides);
#define GUARD(maxiter) _g((1ULL << 31U) + __LINE__, (maxiter)+1)
int64_t hook(uint32_t r)
{
_g(1,1);
// dice(6) should return 0..5
int64_t result = dice(6);
// negative means error
if (result < 0)
rollback(0, 0, result);
if (result >= 6)
rollback(0, 0, -1);
// return the dice result as the accept code
return accept(0, 0, result);
}
)[test.hook]",
{
0x00U, 0x61U, 0x73U, 0x6DU, 0x01U, 0x00U, 0x00U, 0x00U, 0x01U, 0x13U,
0x03U, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7FU, 0x60U, 0x01U, 0x7FU,
0x01U, 0x7EU, 0x60U, 0x03U, 0x7FU, 0x7FU, 0x7EU, 0x01U, 0x7EU, 0x02U,
0x31U, 0x04U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x02U, 0x5FU, 0x67U, 0x00U,
0x00U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x04U, 0x64U, 0x69U, 0x63U, 0x65U,
0x00U, 0x01U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x08U, 0x72U, 0x6FU, 0x6CU,
0x6CU, 0x62U, 0x61U, 0x63U, 0x6BU, 0x00U, 0x02U, 0x03U, 0x65U, 0x6EU,
0x76U, 0x06U, 0x61U, 0x63U, 0x63U, 0x65U, 0x70U, 0x74U, 0x00U, 0x02U,
0x03U, 0x02U, 0x01U, 0x01U, 0x05U, 0x03U, 0x01U, 0x00U, 0x02U, 0x06U,
0x21U, 0x05U, 0x7FU, 0x01U, 0x41U, 0x80U, 0x88U, 0x04U, 0x0BU, 0x7FU,
0x00U, 0x41U, 0x80U, 0x08U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x08U,
0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U,
0x41U, 0x80U, 0x08U, 0x0BU, 0x07U, 0x08U, 0x01U, 0x04U, 0x68U, 0x6FU,
0x6FU, 0x6BU, 0x00U, 0x04U, 0x0AU, 0xD0U, 0x80U, 0x00U, 0x01U, 0xCCU,
0x80U, 0x00U, 0x01U, 0x02U, 0x7EU, 0x41U, 0x01U, 0x41U, 0x01U, 0x10U,
0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x41U, 0x06U, 0x10U, 0x81U,
0x80U, 0x80U, 0x80U, 0x00U, 0x22U, 0x01U, 0x21U, 0x02U, 0x02U, 0x40U,
0x02U, 0x40U, 0x20U, 0x01U, 0x42U, 0x00U, 0x53U, 0x0DU, 0x00U, 0x42U,
0x7FU, 0x21U, 0x02U, 0x20U, 0x01U, 0x42U, 0x06U, 0x53U, 0x0DU, 0x01U,
0x0BU, 0x41U, 0x00U, 0x41U, 0x00U, 0x20U, 0x02U, 0x10U, 0x82U, 0x80U,
0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x41U, 0x00U, 0x41U, 0x00U, 0x20U,
0x01U, 0x10U, 0x83U, 0x80U, 0x80U, 0x80U, 0x00U, 0x0BU,
}},
/* ==== WASM: 1 ==== */
{R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t random(uint32_t write_ptr, uint32_t write_len);
#define GUARD(maxiter) _g((1ULL << 31U) + __LINE__, (maxiter)+1)
int64_t hook(uint32_t r)
{
_g(1,1);
uint8_t buf[32];
for (int i = 0; GUARD(32), i < 32; ++i)
buf[i] = 0;
int64_t result = random((uint32_t)buf, 32);
// Should return 32 (bytes written)
if (result != 32)
rollback(0, 0, result);
// Verify buffer is not all zeroes
int nonzero = 0;
for (int i = 0; GUARD(32), i < 32; ++i)
if (buf[i] != 0) nonzero = 1;
if (!nonzero)
rollback(0, 0, -2);
return accept(0, 0, 0);
}
)[test.hook]",
{
0x00U, 0x61U, 0x73U, 0x6DU, 0x01U, 0x00U, 0x00U, 0x00U, 0x01U, 0x19U,
0x04U, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7FU, 0x60U, 0x02U, 0x7FU,
0x7FU, 0x01U, 0x7EU, 0x60U, 0x03U, 0x7FU, 0x7FU, 0x7EU, 0x01U, 0x7EU,
0x60U, 0x01U, 0x7FU, 0x01U, 0x7EU, 0x02U, 0x33U, 0x04U, 0x03U, 0x65U,
0x6EU, 0x76U, 0x02U, 0x5FU, 0x67U, 0x00U, 0x00U, 0x03U, 0x65U, 0x6EU,
0x76U, 0x06U, 0x72U, 0x61U, 0x6EU, 0x64U, 0x6FU, 0x6DU, 0x00U, 0x01U,
0x03U, 0x65U, 0x6EU, 0x76U, 0x08U, 0x72U, 0x6FU, 0x6CU, 0x6CU, 0x62U,
0x61U, 0x63U, 0x6BU, 0x00U, 0x02U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x06U,
0x61U, 0x63U, 0x63U, 0x65U, 0x70U, 0x74U, 0x00U, 0x02U, 0x03U, 0x02U,
0x01U, 0x03U, 0x05U, 0x03U, 0x01U, 0x00U, 0x02U, 0x06U, 0x21U, 0x05U,
0x7FU, 0x01U, 0x41U, 0x80U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U, 0x41U,
0x80U, 0x08U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x08U, 0x0BU, 0x7FU,
0x00U, 0x41U, 0x80U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U,
0x08U, 0x0BU, 0x07U, 0x08U, 0x01U, 0x04U, 0x68U, 0x6FU, 0x6FU, 0x6BU,
0x00U, 0x04U, 0x0AU, 0x86U, 0x82U, 0x00U, 0x01U, 0x82U, 0x82U, 0x00U,
0x03U, 0x02U, 0x7FU, 0x01U, 0x7EU, 0x02U, 0x7FU, 0x23U, 0x80U, 0x80U,
0x80U, 0x80U, 0x00U, 0x41U, 0x20U, 0x6BU, 0x22U, 0x01U, 0x24U, 0x80U,
0x80U, 0x80U, 0x80U, 0x00U, 0x41U, 0x01U, 0x41U, 0x01U, 0x10U, 0x80U,
0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x41U, 0x8EU, 0x80U, 0x80U, 0x80U,
0x78U, 0x41U, 0x21U, 0x10U, 0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU,
0x41U, 0x00U, 0x21U, 0x02U, 0x03U, 0x40U, 0x41U, 0x8EU, 0x80U, 0x80U,
0x80U, 0x78U, 0x41U, 0x21U, 0x10U, 0x00U, 0x1AU, 0x20U, 0x01U, 0x20U,
0x02U, 0x6AU, 0x41U, 0x00U, 0x3AU, 0x00U, 0x00U, 0x41U, 0x8EU, 0x80U,
0x80U, 0x80U, 0x78U, 0x41U, 0x21U, 0x1AU, 0x01U, 0x01U, 0x01U, 0x01U,
0x01U, 0x1AU, 0x20U, 0x02U, 0x41U, 0x01U, 0x6AU, 0x22U, 0x02U, 0x41U,
0x20U, 0x47U, 0x0DU, 0x00U, 0x0BU, 0x02U, 0x40U, 0x20U, 0x01U, 0x41U,
0x20U, 0x10U, 0x81U, 0x80U, 0x80U, 0x80U, 0x00U, 0x22U, 0x03U, 0x42U,
0x20U, 0x51U, 0x0DU, 0x00U, 0x41U, 0x00U, 0x41U, 0x00U, 0x20U, 0x03U,
0x10U, 0x82U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x41U, 0x99U,
0x80U, 0x80U, 0x80U, 0x78U, 0x41U, 0x21U, 0x10U, 0x80U, 0x80U, 0x80U,
0x80U, 0x00U, 0x1AU, 0x41U, 0x00U, 0x21U, 0x02U, 0x41U, 0x00U, 0x21U,
0x04U, 0x03U, 0x40U, 0x41U, 0x99U, 0x80U, 0x80U, 0x80U, 0x78U, 0x41U,
0x21U, 0x10U, 0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x20U, 0x01U,
0x20U, 0x02U, 0x6AU, 0x2DU, 0x00U, 0x00U, 0x21U, 0x05U, 0x41U, 0x01U,
0x20U, 0x04U, 0x20U, 0x05U, 0x1BU, 0x21U, 0x04U, 0x20U, 0x02U, 0x41U,
0x01U, 0x6AU, 0x22U, 0x02U, 0x41U, 0x20U, 0x47U, 0x0DU, 0x00U, 0x0BU,
0x02U, 0x40U, 0x20U, 0x04U, 0x0DU, 0x00U, 0x41U, 0x00U, 0x41U, 0x00U,
0x42U, 0x7EU, 0x10U, 0x82U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU,
0x41U, 0x00U, 0x41U, 0x00U, 0x42U, 0x00U, 0x10U, 0x83U, 0x80U, 0x80U,
0x80U, 0x00U, 0x21U, 0x03U, 0x20U, 0x01U, 0x41U, 0x20U, 0x6AU, 0x24U,
0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x20U, 0x03U, 0x0BU,
}},
/* ==== WASM: 2 ==== */
{R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t dice(uint32_t sides);
int64_t hook(uint32_t r)
{
_g(1,1);
int64_t r1 = dice(1000000);
if (r1 < 0)
rollback(0, 0, r1);
int64_t r2 = dice(1000000);
if (r2 < 0)
rollback(0, 0, r2);
// consecutive calls should differ (rngCallCounter)
if (r1 == r2)
rollback(0, 0, -1);
return accept(0, 0, r1 | (r2 << 20));
}
)[test.hook]",
{
0x00U, 0x61U, 0x73U, 0x6DU, 0x01U, 0x00U, 0x00U, 0x00U, 0x01U, 0x13U,
0x03U, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7FU, 0x60U, 0x01U, 0x7FU,
0x01U, 0x7EU, 0x60U, 0x03U, 0x7FU, 0x7FU, 0x7EU, 0x01U, 0x7EU, 0x02U,
0x31U, 0x04U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x02U, 0x5FU, 0x67U, 0x00U,
0x00U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x04U, 0x64U, 0x69U, 0x63U, 0x65U,
0x00U, 0x01U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x08U, 0x72U, 0x6FU, 0x6CU,
0x6CU, 0x62U, 0x61U, 0x63U, 0x6BU, 0x00U, 0x02U, 0x03U, 0x65U, 0x6EU,
0x76U, 0x06U, 0x61U, 0x63U, 0x63U, 0x65U, 0x70U, 0x74U, 0x00U, 0x02U,
0x03U, 0x02U, 0x01U, 0x01U, 0x05U, 0x03U, 0x01U, 0x00U, 0x02U, 0x06U,
0x21U, 0x05U, 0x7FU, 0x01U, 0x41U, 0x80U, 0x88U, 0x04U, 0x0BU, 0x7FU,
0x00U, 0x41U, 0x80U, 0x08U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x08U,
0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U,
0x41U, 0x80U, 0x08U, 0x0BU, 0x07U, 0x08U, 0x01U, 0x04U, 0x68U, 0x6FU,
0x6FU, 0x6BU, 0x00U, 0x04U, 0x0AU, 0xFEU, 0x80U, 0x00U, 0x01U, 0xFAU,
0x80U, 0x00U, 0x01U, 0x02U, 0x7EU, 0x41U, 0x01U, 0x41U, 0x01U, 0x10U,
0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x02U, 0x40U, 0x41U, 0xC0U,
0x84U, 0x3DU, 0x10U, 0x81U, 0x80U, 0x80U, 0x80U, 0x00U, 0x22U, 0x01U,
0x42U, 0x7FU, 0x55U, 0x0DU, 0x00U, 0x41U, 0x00U, 0x41U, 0x00U, 0x20U,
0x01U, 0x10U, 0x82U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x02U,
0x40U, 0x41U, 0xC0U, 0x84U, 0x3DU, 0x10U, 0x81U, 0x80U, 0x80U, 0x80U,
0x00U, 0x22U, 0x02U, 0x42U, 0x7FU, 0x55U, 0x0DU, 0x00U, 0x41U, 0x00U,
0x41U, 0x00U, 0x20U, 0x02U, 0x10U, 0x82U, 0x80U, 0x80U, 0x80U, 0x00U,
0x1AU, 0x0BU, 0x02U, 0x40U, 0x20U, 0x01U, 0x20U, 0x02U, 0x52U, 0x0DU,
0x00U, 0x41U, 0x00U, 0x41U, 0x00U, 0x42U, 0x7FU, 0x10U, 0x82U, 0x80U,
0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x41U, 0x00U, 0x41U, 0x00U, 0x20U,
0x02U, 0x42U, 0x14U, 0x86U, 0x20U, 0x01U, 0x84U, 0x10U, 0x83U, 0x80U,
0x80U, 0x80U, 0x00U, 0x0BU,
}},
/* ==== WASM: 3 ==== */
{R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t, uint32_t);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t dice(uint32_t sides);
int64_t hook(uint32_t r)
{
_g(1,1);
int64_t result = dice(0);
// dice(0) should return negative error code, pass it through
return accept(0, 0, result);
}
)[test.hook]",
{
0x00U, 0x61U, 0x73U, 0x6DU, 0x01U, 0x00U, 0x00U, 0x00U, 0x01U, 0x13U,
0x03U, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7FU, 0x60U, 0x01U, 0x7FU,
0x01U, 0x7EU, 0x60U, 0x03U, 0x7FU, 0x7FU, 0x7EU, 0x01U, 0x7EU, 0x02U,
0x22U, 0x03U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x02U, 0x5FU, 0x67U, 0x00U,
0x00U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x04U, 0x64U, 0x69U, 0x63U, 0x65U,
0x00U, 0x01U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x06U, 0x61U, 0x63U, 0x63U,
0x65U, 0x70U, 0x74U, 0x00U, 0x02U, 0x03U, 0x02U, 0x01U, 0x01U, 0x05U,
0x03U, 0x01U, 0x00U, 0x02U, 0x06U, 0x21U, 0x05U, 0x7FU, 0x01U, 0x41U,
0x80U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x08U, 0x0BU,
0x7FU, 0x00U, 0x41U, 0x80U, 0x08U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U,
0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x08U, 0x0BU, 0x07U,
0x08U, 0x01U, 0x04U, 0x68U, 0x6FU, 0x6FU, 0x6BU, 0x00U, 0x03U, 0x0AU,
0xA3U, 0x80U, 0x00U, 0x01U, 0x9FU, 0x80U, 0x00U, 0x00U, 0x41U, 0x01U,
0x41U, 0x01U, 0x10U, 0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x41U,
0x00U, 0x41U, 0x00U, 0x41U, 0x00U, 0x10U, 0x81U, 0x80U, 0x80U, 0x80U,
0x00U, 0x10U, 0x82U, 0x80U, 0x80U, 0x80U, 0x00U, 0x0BU,
}},
};
}
} // namespace ripple
#endif

View File

@@ -0,0 +1,162 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpld/app/misc/ExportSigCollector.h>
#include <xrpl/basics/StringUtilities.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/protocol/digest.h>
#include <cstring>
namespace ripple {
namespace test {
namespace {
uint256
makeHash(char const* label)
{
return sha512Half(Slice(label, std::strlen(label)));
}
PublicKey
makePublicKey(char const* hex)
{
auto const raw = strUnHex(hex);
return PublicKey{makeSlice(*raw)};
}
Buffer
makeSignature(std::uint8_t seed)
{
std::uint8_t bytes[] = {
seed,
static_cast<std::uint8_t>(seed + 1),
static_cast<std::uint8_t>(seed + 2)};
return Buffer(bytes, sizeof(bytes));
}
} // namespace
class ExportSigCollector_test : public beast::unit_test::suite
{
PublicKey const validator_ = makePublicKey(
"0388935426E0D08083314842EDFBB2D517BD47699F9A4527318A8E10468C97C05"
"2");
public:
void
testCleanupUsesFirstSeenSeq()
{
testcase("cleanup uses first seen sequence");
ExportSigCollector collector;
auto const tx = makeHash("cleanup-verified");
auto const sig = makeSignature(1);
collector.addVerifiedSignature(tx, validator_, sig, 10);
BEAST_EXPECT(collector.signatureCount(tx) == 1);
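// The entry was first seen at seq 10; 266 appears to be the last
// sequence that keeps it, suggesting a 256-ledger retention window.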
collector.cleanupStale(266);
BEAST_EXPECT(collector.signatureCount(tx) == 1);
collector.cleanupStale(267);
BEAST_EXPECT(collector.signatureCount(tx) == 0);
}
void
testUpgradeSetsFirstSeenSeq()
{
testcase("upgrade sets first seen sequence");
ExportSigCollector collector;
auto const tx = makeHash("cleanup-upgraded");
auto const sig = makeSignature(5);
collector.addUnverifiedSignature(tx, validator_, sig);
BEAST_EXPECT(collector.hasUnverifiedSignatures());
collector.upgradeSignature(tx, validator_, sig, 10);
BEAST_EXPECT(!collector.hasUnverifiedSignatures());
BEAST_EXPECT(collector.signatureCount(tx) == 1);
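// The upgrade at seq 10 sets the first-seen sequence, so the same
// (apparent) 256-ledger window is measured from the upgrade point.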
collector.cleanupStale(266);
BEAST_EXPECT(collector.signatureCount(tx) == 1);
collector.cleanupStale(267);
BEAST_EXPECT(collector.signatureCount(tx) == 0);
}
void
testRemoveInvalidUnverifiedSignature()
{
testcase("remove invalid unverified signature");
ExportSigCollector collector;
auto const tx = makeHash("remove-invalid");
auto const sig = makeSignature(9);
auto const otherSig = makeSignature(10);
collector.addUnverifiedSignature(tx, validator_, sig, 10);
BEAST_EXPECT(collector.hasUnverifiedSignatures());
BEAST_EXPECT(!collector.removeSignature(tx, validator_, otherSig));
BEAST_EXPECT(collector.hasUnverifiedSignatures());
BEAST_EXPECT(collector.removeSignature(tx, validator_, sig));
BEAST_EXPECT(!collector.hasUnverifiedSignatures());
BEAST_EXPECT(collector.signatureCount(tx) == 0);
}
void
testClearAll()
{
testcase("clear all signatures and round state");
ExportSigCollector collector;
auto const verifiedTx = makeHash("clear-all-verified");
auto const unverifiedTx = makeHash("clear-all-unverified");
auto const sig = makeSignature(12);
collector.addVerifiedSignature(verifiedTx, validator_, sig, 10);
collector.addUnverifiedSignature(unverifiedTx, validator_, sig, 10);
BEAST_EXPECT(collector.signatureCount(verifiedTx) == 1);
BEAST_EXPECT(collector.hasUnverifiedSignatures());
BEAST_EXPECT(collector.markSent(verifiedTx));
BEAST_EXPECT(!collector.markSent(verifiedTx));
collector.clearAll();
BEAST_EXPECT(collector.signatureCount(verifiedTx) == 0);
BEAST_EXPECT(!collector.hasUnverifiedSignatures());
BEAST_EXPECT(collector.markSent(verifiedTx));
}
void
run() override
{
testCleanupUsesFirstSeenSeq();
testUpgradeSetsFirstSeenSeq();
testRemoveInvalidUnverifiedSignature();
testClearAll();
}
};
BEAST_DEFINE_TESTSUITE(ExportSigCollector, app, ripple);
} // namespace test
} // namespace ripple

src/test/app/Export_test.cpp Normal file (1078 lines)

File diff suppressed because it is too large.


@@ -0,0 +1,483 @@
// This file is generated by build_test_hooks.py
#ifndef EXPORT_TEST_WASM_INCLUDED
#define EXPORT_TEST_WASM_INCLUDED
#include <map>
#include <stdint.h>
#include <string>
#include <vector>
namespace ripple {
namespace test {
std::map<std::string, std::vector<uint8_t>> export_test_wasm = {
/* ==== WASM: 0 ==== */
{R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t id, uint32_t maxiter);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t xport(uint32_t write_ptr, uint32_t write_len, uint32_t read_ptr, uint32_t read_len);
extern int64_t xport_reserve(uint32_t count);
extern int64_t hook_account(uint32_t write_ptr, uint32_t write_len);
extern int64_t otxn_param(uint32_t write_ptr, uint32_t write_len, uint32_t name_ptr, uint32_t name_len);
extern int64_t otxn_type(void);
extern int64_t ledger_seq(void);
#define SBUF(x) (uint32_t)(x), sizeof(x)
#define ASSERT(x) if (!(x)) rollback((uint32_t)#x, sizeof(#x), __LINE__)
#define ttPAYMENT 0
#define tfCANONICAL 0x80000000UL
#define amAMOUNT 1
#define amFEE 8
#define atACCOUNT 1
#define atDESTINATION 3
#define ENCODE_TT(buf_out, tt) \
buf_out[0] = 0x12U; \
buf_out[1] = (tt >> 8) & 0xFFU; \
buf_out[2] = tt & 0xFFU; \
buf_out += 3;
#define ENCODE_FLAGS(buf_out, flags) \
buf_out[0] = 0x22U; \
buf_out[1] = (flags >> 24) & 0xFFU; \
buf_out[2] = (flags >> 16) & 0xFFU; \
buf_out[3] = (flags >> 8) & 0xFFU; \
buf_out[4] = flags & 0xFFU; \
buf_out += 5;
#define ENCODE_SEQUENCE(buf_out, seq) \
buf_out[0] = 0x24U; \
buf_out[1] = (seq >> 24) & 0xFFU; \
buf_out[2] = (seq >> 16) & 0xFFU; \
buf_out[3] = (seq >> 8) & 0xFFU; \
buf_out[4] = seq & 0xFFU; \
buf_out += 5;
#define ENCODE_FLS(buf_out, fls) \
buf_out[0] = 0x20U; \
buf_out[1] = 0x1AU; \
buf_out[2] = (fls >> 24) & 0xFFU; \
buf_out[3] = (fls >> 16) & 0xFFU; \
buf_out[4] = (fls >> 8) & 0xFFU; \
buf_out[5] = fls & 0xFFU; \
buf_out += 6;
#define ENCODE_LLS(buf_out, lls) \
buf_out[0] = 0x20U; \
buf_out[1] = 0x1BU; \
buf_out[2] = (lls >> 24) & 0xFFU; \
buf_out[3] = (lls >> 16) & 0xFFU; \
buf_out[4] = (lls >> 8) & 0xFFU; \
buf_out[5] = lls & 0xFFU; \
buf_out += 6;
#define ENCODE_DROPS(buf_out, drops, amt_type) \
buf_out[0] = 0x60U + amt_type; \
buf_out[1] = 0x40U + ((drops >> 56) & 0x3FU); \
buf_out[2] = (drops >> 48) & 0xFFU; \
buf_out[3] = (drops >> 40) & 0xFFU; \
buf_out[4] = (drops >> 32) & 0xFFU; \
buf_out[5] = (drops >> 24) & 0xFFU; \
buf_out[6] = (drops >> 16) & 0xFFU; \
buf_out[7] = (drops >> 8) & 0xFFU; \
buf_out[8] = drops & 0xFFU; \
buf_out += 9;
#define ENCODE_SIGNING_PUBKEY_EMPTY(buf_out) \
buf_out[0] = 0x73U; \
buf_out[1] = 0x00U; \
buf_out += 2;
#define ENCODE_ACCOUNT(buf_out, acc, acc_type) \
buf_out[0] = 0x80U + acc_type; \
buf_out[1] = 0x14U; \
for (int i = 0; i < 20; ++i) buf_out[2+i] = acc[i]; \
buf_out += 22;
#define PREPARE_PAYMENT_SIMPLE_SIZE 270U
int64_t hook(uint32_t reserved) {
_g(1, 1);
if (otxn_type() != ttPAYMENT)
return accept(0, 0, 0);
ASSERT(xport_reserve(1) == 1);
uint8_t dst[20];
int64_t dst_len = otxn_param(SBUF(dst), "DST", 3);
ASSERT(dst_len == 20);
uint8_t acc[20];
ASSERT(hook_account(SBUF(acc)) == 20);
uint32_t cls = (uint32_t)ledger_seq();
uint8_t tx[PREPARE_PAYMENT_SIMPLE_SIZE];
uint8_t* buf = tx;
ENCODE_TT(buf, ttPAYMENT);
ENCODE_FLAGS(buf, tfCANONICAL);
ENCODE_SEQUENCE(buf, 0);
ENCODE_FLS(buf, cls + 1);
ENCODE_LLS(buf, cls + 5);
// sfTicketSequence = UINT32 field 41 = 0x20 0x29
buf[0] = 0x20U; buf[1] = 0x29U;
buf[2] = 0; buf[3] = 0; buf[4] = 0; buf[5] = 1;
buf += 6;
uint64_t drops = 1000000;
ENCODE_DROPS(buf, drops, amAMOUNT);
ENCODE_DROPS(buf, 10, amFEE);
ENCODE_SIGNING_PUBKEY_EMPTY(buf);
ENCODE_ACCOUNT(buf, acc, atACCOUNT);
ENCODE_ACCOUNT(buf, dst, atDESTINATION);
uint8_t hash[32];
int64_t xport_result = xport(SBUF(hash), (uint32_t)tx, buf - tx);
ASSERT(xport_result == 32);
return accept(0, 0, 0);
}
)[test.hook]",
{
0x00U, 0x61U, 0x73U, 0x6DU, 0x01U, 0x00U, 0x00U, 0x00U, 0x01U, 0x25U,
0x06U, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7FU, 0x60U, 0x00U, 0x01U,
0x7EU, 0x60U, 0x03U, 0x7FU, 0x7FU, 0x7EU, 0x01U, 0x7EU, 0x60U, 0x01U,
0x7FU, 0x01U, 0x7EU, 0x60U, 0x04U, 0x7FU, 0x7FU, 0x7FU, 0x7FU, 0x01U,
0x7EU, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7EU, 0x02U, 0x8BU, 0x01U,
0x09U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x02U, 0x5FU, 0x67U, 0x00U, 0x00U,
0x03U, 0x65U, 0x6EU, 0x76U, 0x09U, 0x6FU, 0x74U, 0x78U, 0x6EU, 0x5FU,
0x74U, 0x79U, 0x70U, 0x65U, 0x00U, 0x01U, 0x03U, 0x65U, 0x6EU, 0x76U,
0x06U, 0x61U, 0x63U, 0x63U, 0x65U, 0x70U, 0x74U, 0x00U, 0x02U, 0x03U,
0x65U, 0x6EU, 0x76U, 0x0DU, 0x78U, 0x70U, 0x6FU, 0x72U, 0x74U, 0x5FU,
0x72U, 0x65U, 0x73U, 0x65U, 0x72U, 0x76U, 0x65U, 0x00U, 0x03U, 0x03U,
0x65U, 0x6EU, 0x76U, 0x08U, 0x72U, 0x6FU, 0x6CU, 0x6CU, 0x62U, 0x61U,
0x63U, 0x6BU, 0x00U, 0x02U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x0AU, 0x6FU,
0x74U, 0x78U, 0x6EU, 0x5FU, 0x70U, 0x61U, 0x72U, 0x61U, 0x6DU, 0x00U,
0x04U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x0CU, 0x68U, 0x6FU, 0x6FU, 0x6BU,
0x5FU, 0x61U, 0x63U, 0x63U, 0x6FU, 0x75U, 0x6EU, 0x74U, 0x00U, 0x05U,
0x03U, 0x65U, 0x6EU, 0x76U, 0x0AU, 0x6CU, 0x65U, 0x64U, 0x67U, 0x65U,
0x72U, 0x5FU, 0x73U, 0x65U, 0x71U, 0x00U, 0x01U, 0x03U, 0x65U, 0x6EU,
0x76U, 0x05U, 0x78U, 0x70U, 0x6FU, 0x72U, 0x74U, 0x00U, 0x04U, 0x03U,
0x02U, 0x01U, 0x03U, 0x05U, 0x03U, 0x01U, 0x00U, 0x02U, 0x06U, 0x21U,
0x05U, 0x7FU, 0x01U, 0x41U, 0xE0U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U,
0x41U, 0xD9U, 0x08U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x08U, 0x0BU,
0x7FU, 0x00U, 0x41U, 0xE0U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U, 0x41U,
0x80U, 0x08U, 0x0BU, 0x07U, 0x08U, 0x01U, 0x04U, 0x68U, 0x6FU, 0x6FU,
0x6BU, 0x00U, 0x09U, 0x0AU, 0xC5U, 0x84U, 0x00U, 0x01U, 0xC1U, 0x84U,
0x00U, 0x03U, 0x01U, 0x7FU, 0x01U, 0x7EU, 0x02U, 0x7FU, 0x23U, 0x80U,
0x80U, 0x80U, 0x80U, 0x00U, 0x41U, 0xF0U, 0x02U, 0x6BU, 0x22U, 0x01U,
0x24U, 0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x41U, 0x01U, 0x41U, 0x01U,
0x10U, 0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x02U, 0x40U, 0x02U,
0x40U, 0x10U, 0x81U, 0x80U, 0x80U, 0x80U, 0x00U, 0x50U, 0x0DU, 0x00U,
0x41U, 0x00U, 0x41U, 0x00U, 0x42U, 0x00U, 0x10U, 0x82U, 0x80U, 0x80U,
0x80U, 0x00U, 0x21U, 0x02U, 0x0CU, 0x01U, 0x0BU, 0x02U, 0x40U, 0x41U,
0x01U, 0x10U, 0x83U, 0x80U, 0x80U, 0x80U, 0x00U, 0x42U, 0x01U, 0x51U,
0x0DU, 0x00U, 0x41U, 0x80U, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x16U,
0x42U, 0xDFU, 0x00U, 0x10U, 0x84U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU,
0x0BU, 0x02U, 0x40U, 0x20U, 0x01U, 0x41U, 0xD0U, 0x02U, 0x6AU, 0x41U,
0x14U, 0x41U, 0x96U, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x03U, 0x10U,
0x85U, 0x80U, 0x80U, 0x80U, 0x00U, 0x42U, 0x14U, 0x51U, 0x0DU, 0x00U,
0x41U, 0x9AU, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x0EU, 0x42U, 0xE3U,
0x00U, 0x10U, 0x84U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x02U,
0x40U, 0x20U, 0x01U, 0x41U, 0xB0U, 0x02U, 0x6AU, 0x41U, 0x14U, 0x10U,
0x86U, 0x80U, 0x80U, 0x80U, 0x00U, 0x42U, 0x14U, 0x51U, 0x0DU, 0x00U,
0x41U, 0xA8U, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x1EU, 0x42U, 0xE6U,
0x00U, 0x10U, 0x84U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x10U,
0x87U, 0x80U, 0x80U, 0x80U, 0x00U, 0x21U, 0x02U, 0x20U, 0x01U, 0x41U,
0xCEU, 0x00U, 0x6AU, 0x41U, 0x00U, 0x3BU, 0x01U, 0x00U, 0x20U, 0x01U,
0x41U, 0xC0U, 0x00U, 0x3AU, 0x00U, 0x49U, 0x20U, 0x01U, 0x42U, 0x80U,
0x80U, 0x80U, 0x80U, 0xF0U, 0xC1U, 0x90U, 0xA0U, 0xE8U, 0x00U, 0x37U,
0x00U, 0x41U, 0x20U, 0x01U, 0x42U, 0xA0U, 0xD2U, 0x80U, 0x80U, 0x80U,
0xA0U, 0xC0U, 0xB0U, 0xC0U, 0x00U, 0x37U, 0x00U, 0x39U, 0x20U, 0x01U,
0x41U, 0xA0U, 0x36U, 0x3BU, 0x00U, 0x33U, 0x20U, 0x01U, 0x41U, 0xA0U,
0x34U, 0x3BU, 0x00U, 0x2DU, 0x20U, 0x01U, 0x41U, 0x00U, 0x36U, 0x00U,
0x29U, 0x20U, 0x01U, 0x41U, 0x24U, 0x3AU, 0x00U, 0x28U, 0x20U, 0x01U,
0x42U, 0x92U, 0x80U, 0x80U, 0x90U, 0x82U, 0x10U, 0x37U, 0x03U, 0x20U,
0x20U, 0x01U, 0x41U, 0x00U, 0x36U, 0x01U, 0x4AU, 0x20U, 0x01U, 0x20U,
0x02U, 0xA7U, 0x22U, 0x03U, 0x41U, 0x05U, 0x6AU, 0x22U, 0x04U, 0x3AU,
0x00U, 0x38U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x08U, 0x76U, 0x3AU,
0x00U, 0x37U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x10U, 0x76U, 0x3AU,
0x00U, 0x36U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x18U, 0x76U, 0x3AU,
0x00U, 0x35U, 0x20U, 0x01U, 0x20U, 0x03U, 0x41U, 0x01U, 0x6AU, 0x22U,
0x04U, 0x3AU, 0x00U, 0x32U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x08U,
0x76U, 0x3AU, 0x00U, 0x31U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x10U,
0x76U, 0x3AU, 0x00U, 0x30U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x18U,
0x76U, 0x3AU, 0x00U, 0x2FU, 0x20U, 0x01U, 0x41U, 0xDDU, 0x00U, 0x6AU,
0x20U, 0x01U, 0x29U, 0x03U, 0xB8U, 0x02U, 0x37U, 0x00U, 0x00U, 0x20U,
0x01U, 0x41U, 0xE5U, 0x00U, 0x6AU, 0x20U, 0x01U, 0x41U, 0xB0U, 0x02U,
0x6AU, 0x41U, 0x10U, 0x6AU, 0x28U, 0x02U, 0x00U, 0x36U, 0x00U, 0x00U,
0x20U, 0x01U, 0x41U, 0xF3U, 0x00U, 0x6AU, 0x20U, 0x01U, 0x29U, 0x03U,
0xD8U, 0x02U, 0x37U, 0x00U, 0x00U, 0x20U, 0x01U, 0x41U, 0xFBU, 0x00U,
0x6AU, 0x20U, 0x01U, 0x41U, 0xD0U, 0x02U, 0x6AU, 0x41U, 0x10U, 0x6AU,
0x28U, 0x02U, 0x00U, 0x36U, 0x00U, 0x00U, 0x20U, 0x01U, 0x41U, 0x14U,
0x3AU, 0x00U, 0x54U, 0x20U, 0x01U, 0x41U, 0x8AU, 0xE6U, 0x81U, 0x88U,
0x78U, 0x36U, 0x02U, 0x50U, 0x20U, 0x01U, 0x41U, 0x83U, 0x29U, 0x3BU,
0x00U, 0x69U, 0x20U, 0x01U, 0x20U, 0x01U, 0x29U, 0x03U, 0xB0U, 0x02U,
0x37U, 0x00U, 0x55U, 0x20U, 0x01U, 0x20U, 0x01U, 0x29U, 0x03U, 0xD0U,
0x02U, 0x37U, 0x00U, 0x6BU, 0x02U, 0x40U, 0x20U, 0x01U, 0x41U, 0x20U,
0x20U, 0x01U, 0x41U, 0x20U, 0x6AU, 0x41U, 0xDFU, 0x00U, 0x10U, 0x88U,
0x80U, 0x80U, 0x80U, 0x00U, 0x42U, 0x20U, 0x51U, 0x0DU, 0x00U, 0x41U,
0xC6U, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x13U, 0x42U, 0x81U, 0x01U,
0x10U, 0x84U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x41U, 0x00U,
0x41U, 0x00U, 0x42U, 0x00U, 0x10U, 0x82U, 0x80U, 0x80U, 0x80U, 0x00U,
0x21U, 0x02U, 0x0BU, 0x20U, 0x01U, 0x41U, 0xF0U, 0x02U, 0x6AU, 0x24U,
0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x20U, 0x02U, 0x0BU, 0x0BU, 0x60U,
0x01U, 0x00U, 0x41U, 0x80U, 0x08U, 0x0BU, 0x59U, 0x78U, 0x70U, 0x6FU,
0x72U, 0x74U, 0x5FU, 0x72U, 0x65U, 0x73U, 0x65U, 0x72U, 0x76U, 0x65U,
0x28U, 0x31U, 0x29U, 0x20U, 0x3DU, 0x3DU, 0x20U, 0x31U, 0x00U, 0x44U,
0x53U, 0x54U, 0x00U, 0x64U, 0x73U, 0x74U, 0x5FU, 0x6CU, 0x65U, 0x6EU,
0x20U, 0x3DU, 0x3DU, 0x20U, 0x32U, 0x30U, 0x00U, 0x68U, 0x6FU, 0x6FU,
0x6BU, 0x5FU, 0x61U, 0x63U, 0x63U, 0x6FU, 0x75U, 0x6EU, 0x74U, 0x28U,
0x53U, 0x42U, 0x55U, 0x46U, 0x28U, 0x61U, 0x63U, 0x63U, 0x29U, 0x29U,
0x20U, 0x3DU, 0x3DU, 0x20U, 0x32U, 0x30U, 0x00U, 0x78U, 0x70U, 0x6FU,
0x72U, 0x74U, 0x5FU, 0x72U, 0x65U, 0x73U, 0x75U, 0x6CU, 0x74U, 0x20U,
0x3DU, 0x3DU, 0x20U, 0x33U, 0x32U, 0x00U,
}},
/* ==== WASM: 1 ==== */
{R"[test.hook](
#include <stdint.h>
extern int32_t _g(uint32_t id, uint32_t maxiter);
extern int64_t accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t xport(uint32_t write_ptr, uint32_t write_len, uint32_t read_ptr, uint32_t read_len);
extern int64_t xport_reserve(uint32_t count);
extern int64_t hook_account(uint32_t write_ptr, uint32_t write_len);
extern int64_t otxn_param(uint32_t write_ptr, uint32_t write_len, uint32_t name_ptr, uint32_t name_len);
extern int64_t otxn_type(void);
extern int64_t ledger_seq(void);
#define SBUF(x) (uint32_t)(x), sizeof(x)
#define ASSERT(x) if (!(x)) rollback((uint32_t)#x, sizeof(#x), __LINE__)
#define ttPAYMENT 0
#define tfCANONICAL 0x80000000UL
#define amAMOUNT 1
#define amFEE 8
#define atACCOUNT 1
#define atDESTINATION 3
#define ENCODE_TT(buf_out, tt) \
buf_out[0] = 0x12U; \
buf_out[1] = (tt >> 8) & 0xFFU; \
buf_out[2] = tt & 0xFFU; \
buf_out += 3;
#define ENCODE_FLAGS(buf_out, flags) \
buf_out[0] = 0x22U; \
buf_out[1] = (flags >> 24) & 0xFFU; \
buf_out[2] = (flags >> 16) & 0xFFU; \
buf_out[3] = (flags >> 8) & 0xFFU; \
buf_out[4] = flags & 0xFFU; \
buf_out += 5;
#define ENCODE_SEQUENCE(buf_out, seq) \
buf_out[0] = 0x24U; \
buf_out[1] = (seq >> 24) & 0xFFU; \
buf_out[2] = (seq >> 16) & 0xFFU; \
buf_out[3] = (seq >> 8) & 0xFFU; \
buf_out[4] = seq & 0xFFU; \
buf_out += 5;
// sfNetworkID = UINT32 field 1 = 0x21
#define ENCODE_NETWORK_ID(buf_out, id) \
buf_out[0] = 0x21U; \
buf_out[1] = (id >> 24) & 0xFFU; \
buf_out[2] = (id >> 16) & 0xFFU; \
buf_out[3] = (id >> 8) & 0xFFU; \
buf_out[4] = id & 0xFFU; \
buf_out += 5;
#define ENCODE_FLS(buf_out, fls) \
buf_out[0] = 0x20U; \
buf_out[1] = 0x1AU; \
buf_out[2] = (fls >> 24) & 0xFFU; \
buf_out[3] = (fls >> 16) & 0xFFU; \
buf_out[4] = (fls >> 8) & 0xFFU; \
buf_out[5] = fls & 0xFFU; \
buf_out += 6;
#define ENCODE_LLS(buf_out, lls) \
buf_out[0] = 0x20U; \
buf_out[1] = 0x1BU; \
buf_out[2] = (lls >> 24) & 0xFFU; \
buf_out[3] = (lls >> 16) & 0xFFU; \
buf_out[4] = (lls >> 8) & 0xFFU; \
buf_out[5] = lls & 0xFFU; \
buf_out += 6;
#define ENCODE_DROPS(buf_out, drops, amt_type) \
buf_out[0] = 0x60U + amt_type; \
buf_out[1] = 0x40U + ((drops >> 56) & 0x3FU); \
buf_out[2] = (drops >> 48) & 0xFFU; \
buf_out[3] = (drops >> 40) & 0xFFU; \
buf_out[4] = (drops >> 32) & 0xFFU; \
buf_out[5] = (drops >> 24) & 0xFFU; \
buf_out[6] = (drops >> 16) & 0xFFU; \
buf_out[7] = (drops >> 8) & 0xFFU; \
buf_out[8] = drops & 0xFFU; \
buf_out += 9;
#define ENCODE_SIGNING_PUBKEY_EMPTY(buf_out) \
buf_out[0] = 0x73U; \
buf_out[1] = 0x00U; \
buf_out += 2;
#define ENCODE_ACCOUNT(buf_out, acc, acc_type) \
buf_out[0] = 0x80U + acc_type; \
buf_out[1] = 0x14U; \
for (int i = 0; i < 20; ++i) buf_out[2+i] = acc[i]; \
buf_out += 22;
#define PREPARE_PAYMENT_SIMPLE_SIZE 270U
int64_t hook(uint32_t reserved) {
_g(1, 1);
if (otxn_type() != ttPAYMENT)
return accept(0, 0, 0);
ASSERT(xport_reserve(1) == 1);
uint8_t dst[20];
int64_t dst_len = otxn_param(SBUF(dst), "DST", 3);
ASSERT(dst_len == 20);
uint8_t acc[20];
ASSERT(hook_account(SBUF(acc)) == 20);
uint32_t cls = (uint32_t)ledger_seq();
uint8_t tx[PREPARE_PAYMENT_SIMPLE_SIZE];
uint8_t* buf = tx;
ENCODE_TT(buf, ttPAYMENT);
ENCODE_NETWORK_ID(buf, 21337); // must precede Sequence (canonical order)
ENCODE_FLAGS(buf, tfCANONICAL);
ENCODE_SEQUENCE(buf, 0);
ENCODE_FLS(buf, cls + 1);
ENCODE_LLS(buf, cls + 5);
uint64_t drops = 1000000;
ENCODE_DROPS(buf, drops, amAMOUNT);
ENCODE_DROPS(buf, 10, amFEE);
ENCODE_SIGNING_PUBKEY_EMPTY(buf);
ENCODE_ACCOUNT(buf, acc, atACCOUNT);
ENCODE_ACCOUNT(buf, dst, atDESTINATION);
uint8_t hash[32];
int64_t xport_result = xport(SBUF(hash), (uint32_t)tx, buf - tx);
// xport should return EXPORT_FAILURE (-46), so the ASSERT will rollback
ASSERT(xport_result == 32);
return accept(0, 0, 0);
}
)[test.hook]",
{
0x00U, 0x61U, 0x73U, 0x6DU, 0x01U, 0x00U, 0x00U, 0x00U, 0x01U, 0x25U,
0x06U, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7FU, 0x60U, 0x00U, 0x01U,
0x7EU, 0x60U, 0x03U, 0x7FU, 0x7FU, 0x7EU, 0x01U, 0x7EU, 0x60U, 0x01U,
0x7FU, 0x01U, 0x7EU, 0x60U, 0x04U, 0x7FU, 0x7FU, 0x7FU, 0x7FU, 0x01U,
0x7EU, 0x60U, 0x02U, 0x7FU, 0x7FU, 0x01U, 0x7EU, 0x02U, 0x8BU, 0x01U,
0x09U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x02U, 0x5FU, 0x67U, 0x00U, 0x00U,
0x03U, 0x65U, 0x6EU, 0x76U, 0x09U, 0x6FU, 0x74U, 0x78U, 0x6EU, 0x5FU,
0x74U, 0x79U, 0x70U, 0x65U, 0x00U, 0x01U, 0x03U, 0x65U, 0x6EU, 0x76U,
0x06U, 0x61U, 0x63U, 0x63U, 0x65U, 0x70U, 0x74U, 0x00U, 0x02U, 0x03U,
0x65U, 0x6EU, 0x76U, 0x0DU, 0x78U, 0x70U, 0x6FU, 0x72U, 0x74U, 0x5FU,
0x72U, 0x65U, 0x73U, 0x65U, 0x72U, 0x76U, 0x65U, 0x00U, 0x03U, 0x03U,
0x65U, 0x6EU, 0x76U, 0x08U, 0x72U, 0x6FU, 0x6CU, 0x6CU, 0x62U, 0x61U,
0x63U, 0x6BU, 0x00U, 0x02U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x0AU, 0x6FU,
0x74U, 0x78U, 0x6EU, 0x5FU, 0x70U, 0x61U, 0x72U, 0x61U, 0x6DU, 0x00U,
0x04U, 0x03U, 0x65U, 0x6EU, 0x76U, 0x0CU, 0x68U, 0x6FU, 0x6FU, 0x6BU,
0x5FU, 0x61U, 0x63U, 0x63U, 0x6FU, 0x75U, 0x6EU, 0x74U, 0x00U, 0x05U,
0x03U, 0x65U, 0x6EU, 0x76U, 0x0AU, 0x6CU, 0x65U, 0x64U, 0x67U, 0x65U,
0x72U, 0x5FU, 0x73U, 0x65U, 0x71U, 0x00U, 0x01U, 0x03U, 0x65U, 0x6EU,
0x76U, 0x05U, 0x78U, 0x70U, 0x6FU, 0x72U, 0x74U, 0x00U, 0x04U, 0x03U,
0x02U, 0x01U, 0x03U, 0x05U, 0x03U, 0x01U, 0x00U, 0x02U, 0x06U, 0x21U,
0x05U, 0x7FU, 0x01U, 0x41U, 0xE0U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U,
0x41U, 0xD9U, 0x08U, 0x0BU, 0x7FU, 0x00U, 0x41U, 0x80U, 0x08U, 0x0BU,
0x7FU, 0x00U, 0x41U, 0xE0U, 0x88U, 0x04U, 0x0BU, 0x7FU, 0x00U, 0x41U,
0x80U, 0x08U, 0x0BU, 0x07U, 0x08U, 0x01U, 0x04U, 0x68U, 0x6FU, 0x6FU,
0x6BU, 0x00U, 0x09U, 0x0AU, 0xCDU, 0x84U, 0x00U, 0x01U, 0xC9U, 0x84U,
0x00U, 0x03U, 0x01U, 0x7FU, 0x01U, 0x7EU, 0x02U, 0x7FU, 0x23U, 0x80U,
0x80U, 0x80U, 0x80U, 0x00U, 0x41U, 0xF0U, 0x02U, 0x6BU, 0x22U, 0x01U,
0x24U, 0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x41U, 0x01U, 0x41U, 0x01U,
0x10U, 0x80U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x02U, 0x40U, 0x02U,
0x40U, 0x10U, 0x81U, 0x80U, 0x80U, 0x80U, 0x00U, 0x50U, 0x0DU, 0x00U,
0x41U, 0x00U, 0x41U, 0x00U, 0x42U, 0x00U, 0x10U, 0x82U, 0x80U, 0x80U,
0x80U, 0x00U, 0x21U, 0x02U, 0x0CU, 0x01U, 0x0BU, 0x02U, 0x40U, 0x41U,
0x01U, 0x10U, 0x83U, 0x80U, 0x80U, 0x80U, 0x00U, 0x42U, 0x01U, 0x51U,
0x0DU, 0x00U, 0x41U, 0x80U, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x16U,
0x42U, 0xE8U, 0x00U, 0x10U, 0x84U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU,
0x0BU, 0x02U, 0x40U, 0x20U, 0x01U, 0x41U, 0xD0U, 0x02U, 0x6AU, 0x41U,
0x14U, 0x41U, 0x96U, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x03U, 0x10U,
0x85U, 0x80U, 0x80U, 0x80U, 0x00U, 0x42U, 0x14U, 0x51U, 0x0DU, 0x00U,
0x41U, 0x9AU, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x0EU, 0x42U, 0xECU,
0x00U, 0x10U, 0x84U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x02U,
0x40U, 0x20U, 0x01U, 0x41U, 0xB0U, 0x02U, 0x6AU, 0x41U, 0x14U, 0x10U,
0x86U, 0x80U, 0x80U, 0x80U, 0x00U, 0x42U, 0x14U, 0x51U, 0x0DU, 0x00U,
0x41U, 0xA8U, 0x88U, 0x80U, 0x80U, 0x00U, 0x41U, 0x1EU, 0x42U, 0xEFU,
0x00U, 0x10U, 0x84U, 0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x10U,
0x87U, 0x80U, 0x80U, 0x80U, 0x00U, 0x21U, 0x02U, 0x20U, 0x01U, 0x41U,
0xC0U, 0x00U, 0x3AU, 0x00U, 0x48U, 0x20U, 0x01U, 0x42U, 0x80U, 0x80U,
0x80U, 0x80U, 0xF0U, 0xC1U, 0x90U, 0xA0U, 0xE8U, 0x00U, 0x37U, 0x03U,
0x40U, 0x20U, 0x01U, 0x41U, 0xE1U, 0x80U, 0x01U, 0x3BU, 0x01U, 0x3EU,
0x20U, 0x01U, 0x41U, 0xA0U, 0x36U, 0x3BU, 0x01U, 0x38U, 0x20U, 0x01U,
0x41U, 0xA0U, 0x34U, 0x3BU, 0x01U, 0x32U, 0x20U, 0x01U, 0x41U, 0x00U,
0x36U, 0x01U, 0x2EU, 0x20U, 0x01U, 0x41U, 0x80U, 0xC8U, 0x00U, 0x3BU,
0x01U, 0x2CU, 0x20U, 0x01U, 0x41U, 0xA2U, 0x80U, 0x02U, 0x36U, 0x02U,
0x28U, 0x20U, 0x01U, 0x42U, 0x92U, 0x80U, 0x80U, 0x88U, 0x82U, 0x80U,
0xC0U, 0xA9U, 0xD9U, 0x00U, 0x37U, 0x03U, 0x20U, 0x20U, 0x01U, 0x20U,
0x02U, 0xA7U, 0x22U, 0x03U, 0x41U, 0x05U, 0x6AU, 0x22U, 0x04U, 0x3AU,
0x00U, 0x3DU, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x08U, 0x76U, 0x3AU,
0x00U, 0x3CU, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x10U, 0x76U, 0x3AU,
0x00U, 0x3BU, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x18U, 0x76U, 0x3AU,
0x00U, 0x3AU, 0x20U, 0x01U, 0x20U, 0x03U, 0x41U, 0x01U, 0x6AU, 0x22U,
0x04U, 0x3AU, 0x00U, 0x37U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x08U,
0x76U, 0x3AU, 0x00U, 0x36U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x10U,
0x76U, 0x3AU, 0x00U, 0x35U, 0x20U, 0x01U, 0x20U, 0x04U, 0x41U, 0x18U,
0x76U, 0x3AU, 0x00U, 0x34U, 0x20U, 0x01U, 0x41U, 0xCDU, 0x00U, 0x6AU,
0x41U, 0x00U, 0x3BU, 0x00U, 0x00U, 0x20U, 0x01U, 0x41U, 0xDCU, 0x00U,
0x6AU, 0x20U, 0x01U, 0x29U, 0x03U, 0xB8U, 0x02U, 0x37U, 0x02U, 0x00U,
0x20U, 0x01U, 0x41U, 0xE4U, 0x00U, 0x6AU, 0x20U, 0x01U, 0x41U, 0xB0U,
0x02U, 0x6AU, 0x41U, 0x10U, 0x6AU, 0x28U, 0x02U, 0x00U, 0x36U, 0x02U,
0x00U, 0x20U, 0x01U, 0x41U, 0xF2U, 0x00U, 0x6AU, 0x20U, 0x01U, 0x29U,
0x03U, 0xD8U, 0x02U, 0x37U, 0x01U, 0x00U, 0x20U, 0x01U, 0x41U, 0xFAU,
0x00U, 0x6AU, 0x20U, 0x01U, 0x41U, 0xD0U, 0x02U, 0x6AU, 0x41U, 0x10U,
0x6AU, 0x28U, 0x02U, 0x00U, 0x36U, 0x01U, 0x00U, 0x20U, 0x01U, 0x41U,
0x00U, 0x36U, 0x00U, 0x49U, 0x20U, 0x01U, 0x41U, 0x8AU, 0xE6U, 0x81U,
0x88U, 0x78U, 0x36U, 0x00U, 0x4FU, 0x20U, 0x01U, 0x41U, 0x14U, 0x3AU,
0x00U, 0x53U, 0x20U, 0x01U, 0x41U, 0x83U, 0x29U, 0x3BU, 0x01U, 0x68U,
0x20U, 0x01U, 0x20U, 0x01U, 0x29U, 0x03U, 0xB0U, 0x02U, 0x37U, 0x02U,
0x54U, 0x20U, 0x01U, 0x20U, 0x01U, 0x29U, 0x03U, 0xD0U, 0x02U, 0x37U,
0x01U, 0x6AU, 0x02U, 0x40U, 0x20U, 0x01U, 0x41U, 0x20U, 0x20U, 0x01U,
0x41U, 0x20U, 0x6AU, 0x41U, 0xDEU, 0x00U, 0x10U, 0x88U, 0x80U, 0x80U,
0x80U, 0x00U, 0x42U, 0x20U, 0x51U, 0x0DU, 0x00U, 0x41U, 0xC6U, 0x88U,
0x80U, 0x80U, 0x00U, 0x41U, 0x13U, 0x42U, 0x88U, 0x01U, 0x10U, 0x84U,
0x80U, 0x80U, 0x80U, 0x00U, 0x1AU, 0x0BU, 0x41U, 0x00U, 0x41U, 0x00U,
0x42U, 0x00U, 0x10U, 0x82U, 0x80U, 0x80U, 0x80U, 0x00U, 0x21U, 0x02U,
0x0BU, 0x20U, 0x01U, 0x41U, 0xF0U, 0x02U, 0x6AU, 0x24U, 0x80U, 0x80U,
0x80U, 0x80U, 0x00U, 0x20U, 0x02U, 0x0BU, 0x0BU, 0x60U, 0x01U, 0x00U,
0x41U, 0x80U, 0x08U, 0x0BU, 0x59U, 0x78U, 0x70U, 0x6FU, 0x72U, 0x74U,
0x5FU, 0x72U, 0x65U, 0x73U, 0x65U, 0x72U, 0x76U, 0x65U, 0x28U, 0x31U,
0x29U, 0x20U, 0x3DU, 0x3DU, 0x20U, 0x31U, 0x00U, 0x44U, 0x53U, 0x54U,
0x00U, 0x64U, 0x73U, 0x74U, 0x5FU, 0x6CU, 0x65U, 0x6EU, 0x20U, 0x3DU,
0x3DU, 0x20U, 0x32U, 0x30U, 0x00U, 0x68U, 0x6FU, 0x6FU, 0x6BU, 0x5FU,
0x61U, 0x63U, 0x63U, 0x6FU, 0x75U, 0x6EU, 0x74U, 0x28U, 0x53U, 0x42U,
0x55U, 0x46U, 0x28U, 0x61U, 0x63U, 0x63U, 0x29U, 0x29U, 0x20U, 0x3DU,
0x3DU, 0x20U, 0x32U, 0x30U, 0x00U, 0x78U, 0x70U, 0x6FU, 0x72U, 0x74U,
0x5FU, 0x72U, 0x65U, 0x73U, 0x75U, 0x6CU, 0x74U, 0x20U, 0x3DU, 0x3DU,
0x20U, 0x33U, 0x32U, 0x00U,
}},
};
}
} // namespace ripple
#endif


@@ -1,436 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
*/
//==============================================================================
#include <test/jtx.h>
#include <xrpl/basics/Slice.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/json/json_reader.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/JsonTx.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/protocol/STParsedJSON.h>
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/Sign.h>
#include <xrpl/protocol/digest.h>
#include <xrpl/protocol/jss.h>
namespace ripple {
namespace test {
struct JsonTx_test : public beast::unit_test::suite
{
// Build a canonical Payment tx_json_str for `from` -> `to` with the given
// drops amount, sequence, and fee. Field order follows XRPL ordinal order
// so the json-tx codec can compress it maximally; the tests here don't
// care about compression, and the node accepts any valid JSON shape.
static std::string
buildPaymentJson(
jtx::Account const& from,
jtx::Account const& to,
std::uint64_t amountDrops,
std::uint32_t sequence,
std::uint64_t fee)
{
std::string const pkHex = strHex(from.pk().slice());
std::ostringstream os;
os << "{"
<< R"("TransactionType":"Payment",)"
<< R"("Flags":2147483648,)"
<< R"("Sequence":)" << sequence << ","
<< R"("Amount":")" << amountDrops << R"(",)"
<< R"("Fee":")" << fee << R"(",)"
<< R"("SigningPubKey":")" << pkHex << R"(",)"
<< R"("Account":")" << from.human() << R"(",)"
<< R"("Destination":")" << to.human() << R"(")"
<< "}";
return os.str();
}
// Sign the UTF-8 bytes of tx_json_str with `from`'s secret key.
static Buffer
signBody(jtx::Account const& from, std::string const& tx_json_str)
{
return sign(
from.pk(),
from.sk(),
Slice{tx_json_str.data(), tx_json_str.size()});
}
static Json::Value
rpcSubmit(
jtx::Env& env,
std::string const& tx_json_str,
Buffer const& signature)
{
Json::Value params(Json::objectValue);
params["tx_json_str"] = tx_json_str;
params[jss::signature] = strHex(signature);
return env.rpc("json", "submit_json_tx", to_string(params));
}
// ---------- RPC-level tests ----------
void
testEnabledGate(FeatureBitset features)
{
testcase("enabled (feature gate)");
using namespace jtx;
for (bool const withFeature : {true, false})
{
auto const amend =
withFeature ? features : features - featureJsonTx;
Env env{*this, amend};
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(1000), alice, bob);
env.close();
auto const fee = env.current()->fees().base.drops();
auto const txJson =
buildPaymentJson(alice, bob, 1'000'000, env.seq(alice), fee);
auto const sig = signBody(alice, txJson);
auto const result = rpcSubmit(env, txJson, sig);
env.close();
auto const& inner = result[jss::result];
if (withFeature)
{
BEAST_EXPECT(inner[jss::engine_result] == "tesSUCCESS");
BEAST_EXPECT(inner[jss::applied].asBool());
}
else
{
// Amendment is off -> the RPC itself rejects the
// submission before any classical verification.
BEAST_EXPECT(inner[jss::error].asString() == "notEnabled");
}
}
}
void
testBasicRoundtrip(FeatureBitset features)
{
testcase("basic payment roundtrip");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(1000), alice, bob);
env.close();
auto const preAlice = env.balance(alice);
auto const preBob = env.balance(bob);
auto const feeDrops = env.current()->fees().base;
auto const txJson = buildPaymentJson(
alice, bob, 1'000'000, env.seq(alice), feeDrops.drops());
auto const sig = signBody(alice, txJson);
auto const result = rpcSubmit(env, txJson, sig);
env.close();
auto const& inner = result[jss::result];
BEAST_EXPECT(inner[jss::engine_result] == "tesSUCCESS");
BEAST_EXPECT(inner[jss::applied].asBool());
BEAST_EXPECT(inner["tx_json_str"].asString() == txJson);
BEAST_EXPECT(env.balance(alice) == preAlice - XRP(1) - feeDrops);
BEAST_EXPECT(env.balance(bob) == preBob + XRP(1));
}
void
testMissingParams(FeatureBitset features)
{
testcase("missing params");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
env.fund(XRP(1000), alice);
env.close();
auto const txJson = buildPaymentJson(
alice,
Account{"bob"},
1'000'000,
env.seq(alice),
env.current()->fees().base.drops());
auto const sig = signBody(alice, txJson);
// No tx_json_str.
{
Json::Value p(Json::objectValue);
p[jss::signature] = strHex(sig);
auto const r = env.rpc("json", "submit_json_tx", to_string(p));
BEAST_EXPECT(r[jss::result][jss::error] == "invalidParams");
}
// No signature.
{
Json::Value p(Json::objectValue);
p["tx_json_str"] = txJson;
auto const r = env.rpc("json", "submit_json_tx", to_string(p));
BEAST_EXPECT(r[jss::result][jss::error] == "invalidParams");
}
// Signature is not valid hex.
{
Json::Value p(Json::objectValue);
p["tx_json_str"] = txJson;
p[jss::signature] = "notahex";
auto const r = env.rpc("json", "submit_json_tx", to_string(p));
BEAST_EXPECT(r[jss::result][jss::error] == "invalidParams");
}
}
void
testInvalidJson(FeatureBitset features)
{
testcase("invalid tx_json_str");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
env.fund(XRP(1000), alice);
env.close();
// Syntactically invalid JSON.
{
std::string const bad = "{not valid json";
auto const sig = signBody(alice, bad);
auto const r = rpcSubmit(env, bad, sig);
BEAST_EXPECT(
r[jss::result][jss::error].asString() == "invalidTransaction");
}
// Valid JSON but not an object.
{
std::string const arr = R"([1,2,3])";
auto const sig = signBody(alice, arr);
auto const r = rpcSubmit(env, arr, sig);
BEAST_EXPECT(
r[jss::result][jss::error].asString() == "invalidTransaction");
}
}
void
testBadSignature(FeatureBitset features)
{
testcase("bad signature");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(1000), alice, bob);
env.close();
auto const txJson = buildPaymentJson(
alice,
bob,
1'000'000,
env.seq(alice),
env.current()->fees().base.drops());
auto sig = signBody(alice, txJson);
// Flip a bit in the signature.
std::vector<std::uint8_t> corrupted(
sig.data(), sig.data() + sig.size());
corrupted.at(0) ^= 0x01;
Buffer bad(corrupted.data(), corrupted.size());
auto const result = rpcSubmit(env, txJson, bad);
BEAST_EXPECT(
result[jss::result][jss::error].asString() == "invalidTransaction");
}
void
testSignatureOverDifferentBytesFails(FeatureBitset features)
{
testcase("signature over different bytes");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(1000), alice, bob);
env.close();
auto const txJson = buildPaymentJson(
alice,
bob,
1'000'000,
env.seq(alice),
env.current()->fees().base.drops());
// Sign a different string -- same tx, different bytes.
auto const otherJson = txJson + " ";
auto const sig = signBody(alice, otherJson);
auto const result = rpcSubmit(env, txJson, sig);
BEAST_EXPECT(
result[jss::result][jss::error].asString() == "invalidTransaction");
}
void
testWrongSigningPubKey(FeatureBitset features)
{
testcase("wrong SigningPubKey");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
Account const bob{"bob"};
Account const mallory{"mallory"};
env.fund(XRP(1000), alice, bob, mallory);
env.close();
// Alice's account, but signed with mallory's key while claiming
// alice's SigningPubKey -> the signature fails to verify. (With
// mallory's SigningPubKey the signature would verify, but the
// Account mismatch would fail downstream, e.g. tecNO_AUTH.)
auto txJson = buildPaymentJson(
alice,
bob,
1'000'000,
env.seq(alice),
env.current()->fees().base.drops());
auto const badSig = signBody(mallory, txJson);
auto const r = rpcSubmit(env, txJson, badSig);
BEAST_EXPECT(
r[jss::result][jss::error].asString() == "invalidTransaction");
}
// ---------- helper-level unit tests (no RPC) ----------
void
testHelperDirectly(FeatureBitset features)
{
testcase("helper functions direct");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(1000), alice, bob);
env.close();
std::string const txJson = buildPaymentJson(
alice,
bob,
1'000'000,
env.seq(alice),
env.current()->fees().base.drops());
auto const sig = signBody(alice, txJson);
// Assemble an STTx mirroring what the node would produce from
// submit_json_tx.
Json::Value parsed;
Json::Reader reader;
BEAST_EXPECT(reader.parse(txJson, parsed) && parsed.isObject());
parsed[jss::TxnSignature] = strHex(sig);
parsed[sfJsonTxBody.jsonName] = strHex(txJson);
STParsedJSONObject parsedObj("tx", parsed);
BEAST_EXPECT(static_cast<bool>(parsedObj.object));
STTx const stx(std::move(*parsedObj.object));
// hasBody / body / bodyHash
BEAST_EXPECT(jsonTx::hasBody(stx));
auto const slice = jsonTx::body(stx);
BEAST_EXPECT(slice.size() == txJson.size());
BEAST_EXPECT(
std::memcmp(slice.data(), txJson.data(), txJson.size()) == 0);
auto const h = jsonTx::bodyHash(stx);
BEAST_EXPECT(h == sha512Half(Slice{txJson.data(), txJson.size()}));
// Signature check passes on a well-formed tx.
BEAST_EXPECT(static_cast<bool>(jsonTx::checkSignature(stx)));
// Structural equivalence passes.
BEAST_EXPECT(
static_cast<bool>(jsonTx::checkStructuralEquivalence(stx)));
// Tamper with the body -- change Amount in the ASCII without
// touching the structural fields. The signature would otherwise
// still be over the original bytes, so re-sign over the new body
// to isolate the structural-equivalence check.
std::string tampered = txJson;
auto const needle = std::string(R"("Amount":"1000000")");
auto const pos = tampered.find(needle);
BEAST_EXPECT(pos != std::string::npos);
tampered.replace(pos, needle.size(), R"("Amount":"9000000")");
auto const tamperedSig = signBody(alice, tampered);
Json::Value mismatched = parsed;
mismatched[jss::TxnSignature] = strHex(tamperedSig);
mismatched[sfJsonTxBody.jsonName] = strHex(tampered);
STParsedJSONObject mismatchedObj("tx", mismatched);
BEAST_EXPECT(static_cast<bool>(mismatchedObj.object));
STTx const mismatchedTx(std::move(*mismatchedObj.object));
// Signature now verifies over the tampered body...
BEAST_EXPECT(static_cast<bool>(jsonTx::checkSignature(mismatchedTx)));
// ...but structural equivalence fails.
BEAST_EXPECT(!jsonTx::checkStructuralEquivalence(mismatchedTx));
}
void
testHelperEmptyAndMissing(FeatureBitset features)
{
testcase("helpers on non-jsontx STTx");
using namespace jtx;
Env env{*this, features};
Account const alice{"alice"};
env.fund(XRP(1000), alice);
env.close();
// A noop signed classically carries no sfJsonTxBody.
auto const jt = env.jt(noop(alice));
BEAST_EXPECT(!jsonTx::hasBody(*jt.stx));
BEAST_EXPECT(jsonTx::body(*jt.stx).empty());
BEAST_EXPECT(jsonTx::bodyHash(*jt.stx) == uint256{});
}
void
testWithFeats(FeatureBitset features)
{
testEnabledGate(features);
testBasicRoundtrip(features);
testMissingParams(features);
testInvalidJson(features);
testBadSignature(features);
testSignatureOverDifferentBytesFails(features);
testWrongSigningPubKey(features);
testHelperDirectly(features);
testHelperEmptyAndMissing(features);
}
public:
void
run() override
{
using namespace jtx;
auto const sa = supported_amendments();
testWithFeats(sa);
}
};
BEAST_DEFINE_TESTSUITE(JsonTx, app, ripple);
} // namespace test
} // namespace ripple

src/test/app/XPOP_test.cpp Normal file (301 lines)

@@ -0,0 +1,301 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2026 XRPL Labs
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <test/jtx.h>
#include <test/jtx/import.h>
#include <test/jtx/xpop.h>
#include <xrpld/app/ledger/LedgerMaster.h>
#include <xrpld/app/proof/LedgerProof.h>
#include <xrpld/app/proof/ProofBuilder.h>
#include <xrpld/app/proof/XPOPv1.h>
#include <xrpl/protocol/Import.h>
#include <xrpl/protocol/jss.h>
namespace ripple {
namespace test {
struct XPOP_test : public beast::unit_test::suite
{
void
testBuildLedgerProof()
{
testcase("Build LedgerProof from a payment");
using namespace jtx;
Env env{*this};
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(10000), alice, bob);
env.close();
// Submit a payment and close the ledger.
env(pay(alice, bob, XRP(100)));
env.close();
// Get the tx hash from the last closed ledger.
auto const lcl = env.app().getLedgerMaster().getClosedLedger();
BEAST_EXPECT(lcl);
// Find a payment tx in the ledger.
uint256 paymentHash;
bool found = false;
lcl->txMap().visitLeaves(
[&](boost::intrusive_ptr<SHAMapItem const> const& item) {
if (!found)
{
paymentHash = item->key();
found = true;
}
});
BEAST_EXPECT(found);
// Build the proof.
auto const lp = proof::buildLedgerProof(*lcl, paymentHash);
BEAST_EXPECT(lp.has_value());
if (lp)
{
// Verify header fields are populated.
BEAST_EXPECT(lp->ledgerIndex > 0);
BEAST_EXPECT(lp->totalCoins > 0);
BEAST_EXPECT(lp->parentHash != uint256{});
BEAST_EXPECT(lp->txRoot != uint256{});
BEAST_EXPECT(lp->accountRoot != uint256{});
// Verify tx blob is non-empty.
BEAST_EXPECT(!lp->txBlob.empty());
BEAST_EXPECT(!lp->metaBlob.empty());
// Verify merkle proof exists and is valid.
BEAST_EXPECT(lp->txProof.has_value());
if (lp->txProof)
{
auto const computedRoot = lp->txProof->computeRoot();
BEAST_EXPECT(computedRoot.has_value());
if (computedRoot)
BEAST_EXPECT(*computedRoot == lp->txRoot);
}
// Verify ledger hash reconstruction.
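// (Assumption: computeLedgerHash re-hashes the proof's header fields,
// so it must reproduce the closed ledger's hash exactly.)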
auto const computedHash = lp->computeLedgerHash();
BEAST_EXPECT(computedHash == lcl->info().hash);
}
}
void
testBuildXPOPv1()
{
testcase("Build XPOP v1 JSON from a payment");
using namespace jtx;
Env env{*this};
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(10000), alice, bob);
env.close();
env(pay(alice, bob, XRP(100)));
env.close();
auto const lcl = env.app().getLedgerMaster().getClosedLedger();
BEAST_EXPECT(lcl);
// Find a tx.
uint256 txHash;
lcl->txMap().visitLeaves(
[&](boost::intrusive_ptr<SHAMapItem const> const& item) {
txHash = item->key();
});
// Build XPOP using the test helper.
auto const xpop = xpop::buildTestXPOP(env, txHash, 3);
BEAST_EXPECT(!xpop.isNull());
// Verify structure.
BEAST_EXPECT(xpop.isMember(jss::ledger));
BEAST_EXPECT(xpop.isMember(jss::transaction));
BEAST_EXPECT(xpop.isMember(jss::validation));
// Ledger section.
auto const& lgr = xpop[jss::ledger];
BEAST_EXPECT(lgr.isMember(jss::index));
BEAST_EXPECT(lgr.isMember(jss::coins));
BEAST_EXPECT(lgr.isMember(jss::phash));
BEAST_EXPECT(lgr.isMember(jss::txroot));
BEAST_EXPECT(lgr.isMember(jss::acroot));
BEAST_EXPECT(lgr.isMember(jss::close));
BEAST_EXPECT(lgr.isMember(jss::pclose));
BEAST_EXPECT(lgr.isMember(jss::cres));
BEAST_EXPECT(lgr.isMember(jss::flags));
// Transaction section.
auto const& txn = xpop[jss::transaction];
BEAST_EXPECT(txn.isMember(jss::blob));
BEAST_EXPECT(txn.isMember(jss::meta));
BEAST_EXPECT(txn.isMember(jss::proof));
BEAST_EXPECT(txn[jss::blob].asString().size() > 0);
BEAST_EXPECT(txn[jss::meta].asString().size() > 0);
// Validation section.
auto const& val = xpop[jss::validation];
BEAST_EXPECT(val.isMember(jss::data));
BEAST_EXPECT(val.isMember(jss::unl));
BEAST_EXPECT(val[jss::data].size() == 3); // 3 validators
auto const& unl = val[jss::unl];
BEAST_EXPECT(unl.isMember(jss::public_key));
BEAST_EXPECT(unl.isMember(jss::manifest));
BEAST_EXPECT(unl.isMember(jss::blob));
BEAST_EXPECT(unl.isMember(jss::signature));
BEAST_EXPECT(unl.isMember(jss::version));
}
void
testMerkleProofVerification()
{
testcase("Merkle proof verifies against tx root");
using namespace jtx;
Env env{*this};
Account const alice{"alice"};
Account const bob{"bob"};
Account const carol{"carol"};
env.fund(XRP(10000), alice, bob, carol);
env.close();
// Multiple transactions to create a deeper trie.
env(pay(alice, bob, XRP(10)));
env(pay(bob, carol, XRP(5)));
env(pay(carol, alice, XRP(1)));
env.close();
auto const lcl = env.app().getLedgerMaster().getClosedLedger();
BEAST_EXPECT(lcl);
// Verify proof for each transaction in the ledger.
int proofCount = 0;
lcl->txMap().visitLeaves(
[&](boost::intrusive_ptr<SHAMapItem const> const& item) {
auto const lp = proof::buildLedgerProof(*lcl, item->key());
BEAST_EXPECT(lp.has_value());
if (lp && lp->txProof)
{
// Proof must verify against the ledger's tx root.
BEAST_EXPECT(lp->txProof->verify(lp->txRoot));
// JSON v1 serialization must round-trip.
auto const json = lp->txProof->toJsonV1();
BEAST_EXPECT(!json.isNull());
BEAST_EXPECT(json.isArray());
++proofCount;
}
});
// We should have proven at least 3 transactions.
BEAST_EXPECT(proofCount >= 3);
}
void
testImportWithGeneratedXPOP()
{
testcase("Import accepts dynamically generated XPOP");
using namespace jtx;
// Create XPOP context (VL publisher + validators).
auto const xpopCtx = xpop::TestXPOPContext::create(3);
// --- Source "network": generate a payment and build XPOP ---
Env srcEnv{*this};
Account const alice{"alice"};
Account const bob{"bob"};
srcEnv.fund(XRP(10000), alice, bob);
srcEnv.close();
// Import requires: no sfNetworkID + sfOperationLimit = dest NETWORK_ID.
Json::Value payTx;
payTx[jss::TransactionType] = jss::Payment;
payTx[jss::Account] = alice.human();
payTx[jss::Destination] = bob.human();
payTx[jss::Amount] = "100000000";
payTx[sfOperationLimit.jsonName] = 21337;
srcEnv(payTx, fee(XRP(1)));
srcEnv.close();
// Find the tx hash and build the XPOP.
auto const srcLcl = srcEnv.app().getLedgerMaster().getClosedLedger();
BEAST_EXPECT(srcLcl);
uint256 paymentHash;
srcLcl->txMap().visitLeaves(
[&](boost::intrusive_ptr<SHAMapItem const> const& item) {
paymentHash = item->key();
});
auto const xpopJson = xpopCtx.buildXPOP(*srcLcl, paymentHash);
BEAST_EXPECT(!xpopJson.isNull());
// --- Destination "network": import the XPOP ---
Env dstEnv{*this, xpopCtx.makeEnvConfig(21337)};
// Burn some XRP so B2M can credit.
auto const master = Account("masterpassphrase");
dstEnv(noop(master), fee(10'000'000'000), ter(tesSUCCESS));
dstEnv.close();
Account const importAlice{"alice"};
dstEnv.fund(XRP(1000), importAlice);
dstEnv.close();
auto const feeDrops = dstEnv.current()->fees().base;
// Submit the import — should succeed (B2M path).
dstEnv(
import::import(importAlice, xpopJson),
fee(feeDrops * 10),
ter(tesSUCCESS));
dstEnv.close();
}
void
run() override
{
testBuildLedgerProof();
testBuildXPOPv1();
testMerkleProofVerification();
testImportWithGeneratedXPOP();
}
};
BEAST_DEFINE_TESTSUITE(XPOP, app, ripple);
} // namespace test
} // namespace ripple


@@ -0,0 +1,732 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <test/jtx.h>
#include <xrpld/app/consensus/ConsensusExtensions.h>
#include <xrpld/app/ledger/Ledger.h>
#include <xrpld/app/misc/ValidatorKeys.h>
#include <xrpld/consensus/ConsensusExtensionsTick.h>
#include <xrpld/consensus/ConsensusProposal.h>
#include <xrpl/basics/StringUtilities.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/digest.h>
#include <cstring>
namespace ripple {
namespace test {
namespace {
uint256
makeHash(char const* label)
{
return sha512Half(Slice(label, std::strlen(label)));
}
NodeID
makeNode(std::uint8_t id)
{
NodeID node;
node.zero();
node.data()[NodeID::size() - 1] = id;
return node;
}
std::string
makeExportSigBlob(uint256 const& txHash, PublicKey const& publicKey)
{
std::string blob;
blob.append(reinterpret_cast<char const*>(txHash.data()), uint256::size());
blob.append(
reinterpret_cast<char const*>(publicKey.data()), publicKey.size());
blob.push_back('\x30');
return blob;
}
struct FakeTxSet
{
using ID = uint256;
uint256 hash;
uint256
id() const
{
return hash;
}
};
class FakePeerPosition
{
public:
using Proposal = ConsensusProposal<NodeID, uint256, ExtendedPosition>;
FakePeerPosition(NodeID const& nodeId, ExtendedPosition const& position)
: proposal_(
uint256{},
Proposal::seqJoin,
position,
NetClock::time_point{},
NetClock::time_point{},
nodeId)
{
}
Proposal const&
proposal() const
{
return proposal_;
}
private:
Proposal proposal_;
};
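// Minimal stand-in for ConsensusExtensions: it exposes only the surface
// that extensionsTick touches (RNG/Export toggles, quorum sizes, sidecar
// set builders, fetch and failure hooks) and records builds and fetches
// so the tests below can assert on them.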
struct FakeExtensions
{
enum class SidecarKind : uint8_t { commit, reveal, exportSig };
beast::Journal j_{beast::Journal::getNullSink()};
EstablishState estState_{EstablishState::ConvergingTx};
std::chrono::steady_clock::time_point revealPhaseStart_{};
std::chrono::steady_clock::time_point commitHashConflictStart_{};
bool explicitFinalProposalSent_{false};
bool entropySetPublished_{false};
std::chrono::steady_clock::time_point entropyPublishStart_{};
bool exportSigGateStarted_{false};
std::chrono::steady_clock::time_point exportSigGateStart_{};
bool exportSigConvergenceFailed_{false};
bool rngOn{false};
bool localExportSigs{true};
bool consensusExportTxns{false};
bool exportOn{true};
bool entropyFailed{false};
std::size_t exportQuorum{4};
uint256 exportHash{makeHash("local-export-sig-set")};
uint256 entropyHash{makeHash("local-entropy-set")};
std::vector<uint256> fetchedExportSets;
std::vector<uint256> fetchedEntropySets;
int exportBuilds = 0;
int entropyBuilds = 0;
bool
rngEnabled() const
{
return rngOn;
}
bool
exportEnabled() const
{
return exportOn;
}
std::size_t
quorumThreshold() const
{
return exportQuorum;
}
std::size_t
exportSigQuorumThreshold() const
{
return exportQuorum;
}
std::size_t
pendingCommitCount() const
{
return rngOn ? exportQuorum : 0;
}
std::size_t
pendingRevealCount() const
{
return rngOn ? exportQuorum : 0;
}
std::size_t
expectedProposerCount() const
{
return 0;
}
bool
hasQuorumOfCommits() const
{
return rngOn;
}
bool
hasMinimumReveals() const
{
return rngOn;
}
bool
hasAnyReveals() const
{
return rngOn;
}
uint256
buildCommitSet(LedgerIndex)
{
return makeHash("commit-set");
}
uint256
buildEntropySet(LedgerIndex)
{
++entropyBuilds;
return entropyHash;
}
uint256
getEntropySecret() const
{
return makeHash("entropy-secret");
}
void
selfSeedReveal()
{
}
void
setEntropyFailed()
{
entropyFailed = true;
}
void
fetchRngSetIfNeeded(std::optional<uint256> const& hash, SidecarKind kind)
{
if (kind == SidecarKind::reveal && hash)
fetchedEntropySets.push_back(*hash);
else if (kind == SidecarKind::exportSig && hash)
fetchedExportSets.push_back(*hash);
}
bool
shouldSendExplicitFinalProposal() const
{
return false;
}
std::optional<FakeTxSet>
buildExplicitFinalProposalTxSet(FakeTxSet const&, LedgerIndex)
{
return std::nullopt;
}
bool
hasPendingExportSigs() const
{
return localExportSigs;
}
bool
hasConsensusExportTxns() const
{
return consensusExportTxns;
}
uint256
buildExportSigSet(LedgerIndex)
{
++exportBuilds;
return exportHash;
}
void
setExportSigConvergenceFailed()
{
exportSigConvergenceFailed_ = true;
}
};
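// Drives extensionsTick with a fixed local tx-set position, synthetic
// peer proposals, and a caller-controlled elapsed round time; it counts
// position updates and proposals so tests can observe gate behaviour.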
struct ExportTickHarness
{
ExtendedPosition position{makeHash("tx-set")};
FakeTxSet txns{position.txSetHash};
hash_map<NodeID, FakePeerPosition> peers;
ConsensusParms parms;
NetClock::time_point netNow{NetClock::duration{123}};
std::chrono::steady_clock::time_point start{};
std::size_t prevProposers = 4;
int updates = 0;
int proposes = 0;
void
addPeer(
std::uint8_t id,
std::optional<uint256> exportSigSetHash,
uint256 txSetHash = makeHash("tx-set"))
{
ExtendedPosition peerPosition{txSetHash};
peerPosition.exportSigSetHash = exportSigSetHash;
peers.emplace(
makeNode(id), FakePeerPosition{makeNode(id), peerPosition});
}
void
addEntropyPeer(
std::uint8_t id,
std::optional<uint256> entropySetHash,
uint256 txSetHash = makeHash("tx-set"))
{
ExtendedPosition peerPosition{txSetHash};
peerPosition.entropySetHash = entropySetHash;
peers.emplace(
makeNode(id), FakePeerPosition{makeNode(id), peerPosition});
}
ExtensionTickResult
tick(FakeExtensions& ext, std::chrono::milliseconds elapsed = {})
{
ConsensusTick<ExtendedPosition, FakePeerPosition, FakeTxSet> ctx{
.buildSeq = 2,
.now = netNow,
.nowSteady = start + elapsed,
.roundTime = elapsed,
.mode = ConsensusMode::proposing,
.prevProposers = prevProposers,
.peerPositions = peers,
.parms = parms,
.haveCloseTimeConsensus = true,
.convergePercent = 100,
.j = beast::Journal{beast::Journal::getNullSink()},
.getPosition = [&]() -> ExtendedPosition const& {
return position;
},
.updatePosition =
[&](ExtendedPosition const& newPosition) {
position = newPosition;
++updates;
},
.propose = [&]() { ++proposes; },
.haveConsensus = []() { return true; },
.cacheAndShareTxSet = [](FakeTxSet const&) {},
.getTxns = [&]() -> FakeTxSet const& { return txns; }};
return extensionsTick(ext, ctx);
}
};
} // namespace
class ConsensusExtensions_test : public beast::unit_test::suite
{
std::vector<PublicKey>
makeValidatorKeys() const
{
std::vector<std::string> const rawKeys = {
"0388935426E0D08083314842EDFBB2D517BD47699F9A4527318A8E10468C97C05"
"2",
"02691AC5AE1C4C333AE5DF8A93BDC495F0EEBFC6DB0DA7EB6EF808F3AFC006E3F"
"E"};
std::vector<PublicKey> keys;
keys.reserve(rawKeys.size());
for (auto const& rawKey : rawKeys)
{
auto const pkHex = strUnHex(rawKey);
keys.emplace_back(makeSlice(*pkHex));
}
return keys;
}
void
testActiveValidatorViewAppliesNegativeUNL()
{
testcase("Active validator view applies NegativeUNL");
using namespace jtx;
Env env{
*this,
envconfig(),
supported_amendments() | featureNegativeUNL,
nullptr};
auto const vlKeys = makeValidatorKeys();
auto const genesis = std::make_shared<Ledger>(
create_genesis,
env.app().config(),
std::vector<uint256>{},
env.app().getNodeFamily());
auto l = std::make_shared<Ledger>(
*genesis, env.app().timeKeeper().closeTime());
BEAST_EXPECT(l->rules().enabled(featureNegativeUNL));
auto report = std::make_shared<SLE>(keylet::UNLReport());
std::vector<STObject> activeValidators;
for (auto const& pk : vlKeys)
{
activeValidators.push_back(
STObject::makeInnerObject(sfActiveValidator));
activeValidators.back().setFieldVL(sfPublicKey, pk);
}
report->setFieldArray(
sfActiveValidators, STArray(activeValidators, sfActiveValidators));
auto negUnl = std::make_shared<SLE>(keylet::negativeUNL());
std::vector<STObject> disabledValidators;
disabledValidators.push_back(
STObject::makeInnerObject(sfDisabledValidator));
disabledValidators.back().setFieldVL(sfPublicKey, vlKeys[0]);
disabledValidators.back().setFieldU32(sfFirstLedgerSequence, l->seq());
negUnl->setFieldArray(
sfDisabledValidators,
STArray(disabledValidators, sfDisabledValidators));
OpenView accum(&*l);
accum.rawInsert(report);
accum.rawInsert(negUnl);
accum.apply(*l);
ConsensusExtensions ce{env.app(), env.journal};
auto const view = ce.makeActiveValidatorView(l);
BEAST_EXPECT(view->fromUNLReport);
BEAST_EXPECT(view->size() == 1);
BEAST_EXPECT(!view->containsMaster(vlKeys[0]));
BEAST_EXPECT(!view->containsNode(calcNodeID(vlKeys[0])));
BEAST_EXPECT(view->containsMaster(vlKeys[1]));
BEAST_EXPECT(view->containsNode(calcNodeID(vlKeys[1])));
}
void
testExportSigGateRequiresQuorumAlignment()
{
testcase("Export sig gate requires quorum alignment");
FakeExtensions ext;
ExportTickHarness harness;
auto const localHash = ext.exportHash;
harness.addPeer(1, localHash);
harness.addPeer(2, localHash);
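// Only two peers align with the local hash, so a quorum of four can
// never be met; the gate should fail once the bounded observation
// window elapses.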
auto result = harness.tick(ext);
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(harness.position.exportSigSetHash == localHash);
BEAST_EXPECT(ext.exportSigGateStarted_);
result = harness.tick(ext, std::chrono::milliseconds{100});
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(!ext.exportSigConvergenceFailed_);
result = harness.tick(
ext,
harness.parms.rngREVEAL_TIMEOUT * 2 + std::chrono::milliseconds{1});
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(ext.exportSigConvergenceFailed_);
}
void
testRngEntropyGateRequiresFullObservation()
{
testcase("RNG entropy gate requires full sidecar observation");
FakeExtensions ext;
ext.rngOn = true;
ext.exportOn = false;
ext.estState_ = EstablishState::ConvergingReveal;
ExportTickHarness harness;
auto const localHash = ext.entropyHash;
harness.addEntropyPeer(1, localHash);
harness.addEntropyPeer(2, localHash);
harness.addEntropyPeer(3, localHash);
harness.addEntropyPeer(4, std::nullopt);
auto result = harness.tick(ext);
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(harness.position.entropySetHash == localHash);
BEAST_EXPECT(ext.entropySetPublished_);
// Quorum alignment is not safe while a tx-converged peer has not
// advertised any entropySetHash: otherwise local observation order
// could split the network between non-zero entropy and the
// deterministic zero fallback.
result = harness.tick(ext, std::chrono::milliseconds{100});
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(!ext.entropyFailed);
BEAST_EXPECT(harness.position.entropySetHash == localHash);
result = harness.tick(
ext,
harness.parms.rngREVEAL_TIMEOUT * 2 + std::chrono::milliseconds{1});
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(ext.entropyFailed);
BEAST_EXPECT(!harness.position.entropySetHash);
}
void
testRngFastPathWaitsAfterEntropyPublish()
{
testcase("RNG fast path waits after entropy publish");
FakeExtensions ext;
ext.rngOn = true;
ext.exportOn = false;
ext.estState_ = EstablishState::ConvergingCommit;
ExportTickHarness harness;
auto const localHash = ext.entropyHash;
harness.addEntropyPeer(1, localHash);
harness.addEntropyPeer(2, localHash);
harness.addEntropyPeer(3, localHash);
harness.addEntropyPeer(4, localHash);
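// Every peer already advertises the matching entropy hash, yet the
// first tick only publishes the set and moves to ConvergingReveal;
// acceptance happens on a later tick.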
auto result = harness.tick(ext);
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(ext.estState_ == EstablishState::ConvergingReveal);
BEAST_EXPECT(ext.entropySetPublished_);
BEAST_EXPECT(harness.position.entropySetHash == localHash);
result = harness.tick(ext, std::chrono::milliseconds{100});
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(!ext.entropyFailed);
BEAST_EXPECT(harness.position.entropySetHash == localHash);
}
void
testExportSigGateAllowsAlignedQuorumDespiteMinorityConflict()
{
testcase("Export sig gate ignores minority conflict after quorum");
FakeExtensions ext;
ExportTickHarness harness;
auto const localHash = ext.exportHash;
auto const conflictHash = makeHash("conflicting-export-sig-set");
harness.addPeer(1, localHash);
harness.addPeer(2, localHash);
harness.addPeer(3, localHash);
harness.addPeer(4, conflictHash);
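// Local node plus three aligned peers meet the quorum of four, so the
// single conflicting peer cannot veto; its advertised set is still
// fetched.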
auto result = harness.tick(ext);
BEAST_EXPECT(!result.readyForAccept);
result = harness.tick(ext, std::chrono::milliseconds{100});
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(!ext.exportSigConvergenceFailed_);
BEAST_EXPECT(ext.fetchedExportSets.size() == 1);
BEAST_EXPECT(ext.fetchedExportSets.front() == conflictHash);
}
void
testExportSigGateRequiresFullObservation()
{
testcase("Export sig gate requires full sidecar observation");
FakeExtensions ext;
ExportTickHarness harness;
auto const localHash = ext.exportHash;
harness.addPeer(1, localHash);
harness.addPeer(2, localHash);
harness.addPeer(3, localHash);
harness.addPeer(4, std::nullopt);
auto result = harness.tick(ext);
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(harness.position.exportSigSetHash == localHash);
BEAST_EXPECT(ext.exportSigGateStarted_);
// Local quorum alignment is not enough if a tx-converged peer has
// not advertised any exportSigSetHash yet.
result = harness.tick(ext, std::chrono::milliseconds{100});
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(!ext.exportSigConvergenceFailed_);
result = harness.tick(
ext,
harness.parms.rngREVEAL_TIMEOUT * 2 + std::chrono::milliseconds{1});
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(ext.exportSigConvergenceFailed_);
}
void
testExportSigGateFetchesAdvertisedPeerSets()
{
testcase("Export sig gate fetches advertised peer sets");
FakeExtensions ext;
ext.localExportSigs = false;
ExportTickHarness harness;
auto const peerHash = makeHash("peer-export-sig-set");
harness.addPeer(1, peerHash);
auto result = harness.tick(ext);
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(ext.exportSigGateStarted_);
BEAST_EXPECT(!harness.position.exportSigSetHash);
BEAST_EXPECT(ext.fetchedExportSets.size() == 1);
BEAST_EXPECT(ext.fetchedExportSets.front() == peerHash);
result = harness.tick(
ext,
harness.parms.rngREVEAL_TIMEOUT * 2 + std::chrono::milliseconds{1});
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(ext.exportSigConvergenceFailed_);
}
void
testExportSigGateBoundsCandidateObservationWindow()
{
testcase("Export sig gate bounds candidate observation window");
FakeExtensions ext;
ext.localExportSigs = false;
ext.consensusExportTxns = true;
ExportTickHarness harness;
auto result = harness.tick(ext);
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(ext.exportSigGateStarted_);
BEAST_EXPECT(!harness.position.exportSigSetHash);
BEAST_EXPECT(ext.fetchedExportSets.empty());
BEAST_EXPECT(!ext.exportSigConvergenceFailed_);
result = harness.tick(ext, std::chrono::milliseconds{100});
BEAST_EXPECT(!result.readyForAccept);
BEAST_EXPECT(!ext.exportSigConvergenceFailed_);
result = harness.tick(
ext,
harness.parms.rngREVEAL_TIMEOUT * 2 + std::chrono::milliseconds{1});
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(ext.exportSigConvergenceFailed_);
}
void
testExportSigGateSkipsWhenExportDisabled()
{
testcase("Export sig gate skips when Export disabled");
FakeExtensions ext;
ext.exportOn = false;
ExportTickHarness harness;
harness.addPeer(1, ext.exportHash);
auto result = harness.tick(ext);
BEAST_EXPECT(result.readyForAccept);
BEAST_EXPECT(!ext.exportSigGateStarted_);
BEAST_EXPECT(!harness.position.exportSigSetHash);
BEAST_EXPECT(ext.exportBuilds == 0);
BEAST_EXPECT(ext.fetchedExportSets.empty());
}
void
testExportDisabledRoundClearsCollector()
{
testcase("Export disabled round clears collector");
using namespace jtx;
Env env{*this, envconfig(), supported_amendments(), nullptr};
ConsensusExtensions ce{env.app(), env.journal};
auto const tx = makeHash("export-disabled-clears-collector");
auto const pk = makeValidatorKeys().front();
std::uint8_t const sigBytes[] = {1, 2, 3};
Buffer const sig{sigBytes, sizeof(sigBytes)};
ce.setExportEnabledThisRound(true);
ce.exportSigCollector().addVerifiedSignature(tx, pk, sig, 10);
ce.clearRngState();
BEAST_EXPECT(ce.exportSigCollector().signatureCount(tx) == 1);
ce.setExportEnabledThisRound(false);
ce.clearRngState();
BEAST_EXPECT(ce.exportSigCollector().signatureCount(tx) == 0);
}
void
testReplayedProposalHarvestsExportSigs()
{
testcase("Replayed proposal harvests export signatures");
using namespace jtx;
Env env{
*this, envconfig(validator, ""), supported_amendments(), nullptr};
auto const& valKeys = env.app().getValidatorKeys();
BEAST_EXPECT(valKeys.keys);
if (!valKeys.keys)
return;
ConsensusExtensions ce{env.app(), env.journal};
ce.setExportEnabledThisRound(true);
ce.cacheUNLReport();
auto const activeView = ce.activeValidatorView();
BEAST_EXPECT(activeView->sourceLedgerHash);
if (!activeView->sourceLedgerHash)
return;
auto const senderPK = valKeys.keys->publicKey;
BEAST_EXPECT(ce.isActiveValidator(senderPK, *activeView));
if (!ce.isActiveValidator(senderPK, *activeView))
return;
auto const tx = makeHash("replayed-export-sig-tx");
auto const blob = makeExportSigBlob(tx, senderPK);
ExtendedPosition position{makeHash("replayed-position")};
position.exportSignaturesHash =
proposalExportSignaturesHash(std::vector<std::string>{blob});
ce.onTrustedPeerProposal(
calcNodeID(senderPK),
senderPK,
position,
0,
NetClock::time_point{},
*activeView->sourceLedgerHash,
Slice{},
std::vector<std::string>{blob});
BEAST_EXPECT(ce.exportSigCollector().hasUnverifiedSignatures());
}
public:
void
run() override
{
testActiveValidatorViewAppliesNegativeUNL();
testExportSigGateRequiresQuorumAlignment();
testRngEntropyGateRequiresFullObservation();
testRngFastPathWaitsAfterEntropyPublish();
testExportSigGateAllowsAlignedQuorumDespiteMinorityConflict();
testExportSigGateRequiresFullObservation();
testExportSigGateFetchesAdvertisedPeerSets();
testExportSigGateBoundsCandidateObservationWindow();
testExportSigGateSkipsWhenExportDisabled();
testExportDisabledRoundClearsCollector();
testReplayedProposalHarvestsExportSigs();
}
};
BEAST_DEFINE_TESTSUITE(ConsensusExtensions, consensus, ripple);
} // namespace test
} // namespace ripple

File diff suppressed because it is too large

View File

@@ -22,6 +22,8 @@
#include <xrpld/consensus/ConsensusProposal.h>
#include <xrpl/beast/clock/manual_clock.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/json/to_string.h>
#include <optional>
#include <utility>
namespace ripple {
@@ -40,6 +42,7 @@ public:
testShouldCloseLedger()
{
using namespace std::chrono_literals;
testcase("should close ledger");
// Use default parameters
ConsensusParms const p{};
@@ -78,46 +81,102 @@ public:
testCheckConsensus()
{
using namespace std::chrono_literals;
testcase("check consensus");
// Use default parameters
ConsensusParms const p{};
///////////////
// Disputes still in doubt
//
// Not enough time has elapsed
BEAST_EXPECT(
ConsensusState::No ==
checkConsensus(10, 2, 2, 0, 3s, 2s, p, true, journal_));
checkConsensus(10, 2, 2, 0, 3s, 2s, false, p, true, journal_));
// If not enough peers have proposed, ensure
// more time for proposals
BEAST_EXPECT(
ConsensusState::No ==
checkConsensus(10, 2, 2, 0, 3s, 4s, p, true, journal_));
checkConsensus(10, 2, 2, 0, 3s, 4s, false, p, true, journal_));
// Enough time has elapsed and we all agree
BEAST_EXPECT(
ConsensusState::Yes ==
checkConsensus(10, 2, 2, 0, 3s, 10s, p, true, journal_));
checkConsensus(10, 2, 2, 0, 3s, 10s, false, p, true, journal_));
// Enough time has elapsed and we don't yet agree
BEAST_EXPECT(
ConsensusState::No ==
checkConsensus(10, 2, 1, 0, 3s, 10s, p, true, journal_));
checkConsensus(10, 2, 1, 0, 3s, 10s, false, p, true, journal_));
// Our peers have moved on
// Enough time has elapsed and we all agree
BEAST_EXPECT(
ConsensusState::MovedOn ==
checkConsensus(10, 2, 1, 8, 3s, 10s, p, true, journal_));
checkConsensus(10, 2, 1, 8, 3s, 10s, false, p, true, journal_));
// If no peers, don't agree until time has passed.
BEAST_EXPECT(
ConsensusState::No ==
checkConsensus(0, 0, 0, 0, 3s, 10s, p, true, journal_));
checkConsensus(0, 0, 0, 0, 3s, 10s, false, p, true, journal_));
// Agree if no peers and enough time has passed.
BEAST_EXPECT(
ConsensusState::Yes ==
checkConsensus(0, 0, 0, 0, 3s, 16s, p, true, journal_));
checkConsensus(0, 0, 0, 0, 3s, 16s, false, p, true, journal_));
// Expire if too much time has passed without agreement
BEAST_EXPECT(
ConsensusState::Expired ==
checkConsensus(10, 8, 1, 0, 1s, 19s, false, p, true, journal_));
///////////////
// Stalled
//
// Not enough time has elapsed
BEAST_EXPECT(
ConsensusState::No ==
checkConsensus(10, 2, 2, 0, 3s, 2s, true, p, true, journal_));
// If not enough peers have proposed, ensure
// more time for proposals
BEAST_EXPECT(
ConsensusState::No ==
checkConsensus(10, 2, 2, 0, 3s, 4s, true, p, true, journal_));
// Enough time has elapsed and we all agree
BEAST_EXPECT(
ConsensusState::Yes ==
checkConsensus(10, 2, 2, 0, 3s, 10s, true, p, true, journal_));
// Enough time has elapsed and we don't yet agree, but there's nothing
// left to dispute
BEAST_EXPECT(
ConsensusState::Yes ==
checkConsensus(10, 2, 1, 0, 3s, 10s, true, p, true, journal_));
// Our peers have moved on
// Enough time has elapsed and we all agree, nothing left to dispute
BEAST_EXPECT(
ConsensusState::Yes ==
checkConsensus(10, 2, 1, 8, 3s, 10s, true, p, true, journal_));
// If no peers, don't agree until time has passed.
BEAST_EXPECT(
ConsensusState::No ==
checkConsensus(0, 0, 0, 0, 3s, 10s, true, p, true, journal_));
// Agree if no peers and enough time has passed.
BEAST_EXPECT(
ConsensusState::Yes ==
checkConsensus(0, 0, 0, 0, 3s, 16s, true, p, true, journal_));
// We are done if there's nothing left to dispute, no matter how much
// time has passed
BEAST_EXPECT(
ConsensusState::Yes ==
checkConsensus(10, 8, 1, 0, 1s, 19s, true, p, true, journal_));
}
void
@@ -125,6 +184,7 @@ public:
{
using namespace std::chrono_literals;
using namespace csf;
testcase("standalone");
Sim s;
PeerGroup peers = s.createGroup(1);
@@ -149,7 +209,9 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("peers agree");
//@@start peers-agree
ConsensusParms const parms{};
Sim sim;
PeerGroup peers = sim.createGroup(5);
@@ -179,6 +241,7 @@ public:
BEAST_EXPECT(lcl.txs().find(Tx{i}) != lcl.txs().end());
}
}
//@@end peers-agree
}
void
@@ -186,11 +249,13 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("slow peers");
// Several tests of a complete trust graph with a subset of peers
// that have significantly longer network delays to the rest of the
// network
//@@start slow-peer-scenario
// Test when a slow peer doesn't delay a consensus quorum (4/5 agree)
{
ConsensusParms const parms{};
@@ -229,16 +294,18 @@ public:
BEAST_EXPECT(
peer->prevRoundTime == network[0]->prevRoundTime);
// Slow peer's transaction (Tx{0}) didn't make it in time
BEAST_EXPECT(lcl.txs().find(Tx{0}) == lcl.txs().end());
for (std::uint32_t i = 2; i < network.size(); ++i)
BEAST_EXPECT(lcl.txs().find(Tx{i}) != lcl.txs().end());
// Tx 0 didn't make it
// Tx 0 is still in the open transaction set for next round
BEAST_EXPECT(
peer->openTxs.find(Tx{0}) != peer->openTxs.end());
}
}
}
//@@end slow-peer-scenario
// Test when the slow peers delay a consensus quorum (4/6 agree)
{
@@ -351,6 +418,7 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("close time disagree");
// This is a very specialized test to get ledgers to disagree on
// the close time. It unfortunately assumes knowledge about current
@@ -417,6 +485,8 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("wrong LCL");
// Specialized test to exercise a temporary fork in which some peers
// are working on an incorrect prior ledger.
@@ -426,6 +496,7 @@ public:
// the wrong LCL at different phases of consensus
for (auto validationDelay : {0ms, parms.ledgerMIN_CLOSE})
{
//@@start wrong-lcl-scenario
// Consider 10 peers:
// 0 1 2 3 4 5 6 7 8 9
// minority majorityA majorityB
@@ -446,6 +517,7 @@ public:
// This topology can potentially fork with the above trust relations
// but that is intended for this test.
//@@end wrong-lcl-scenario
Sim sim;
@@ -589,6 +661,7 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("consensus close time rounding");
// This is a specialized test engineered to yield ledgers with different
// close times even though the peers believe they had close time
@@ -604,9 +677,6 @@ public:
PeerGroup fast = sim.createGroup(4);
PeerGroup network = fast + slow;
for (Peer* peer : network)
peer->consensusParms = parms;
// Connected trust graph
network.trust(network);
@@ -692,6 +762,7 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("fork");
std::uint32_t numPeers = 10;
// Vary overlap between two UNLs
@@ -729,6 +800,7 @@ public:
}
sim.run(1);
//@@start fork-threshold
// Fork should not happen for 40% or greater overlap
// Since the overlapped nodes have a UNL that is the union of the
// two cliques, the maximum sized UNL list is the number of peers
@@ -740,6 +812,7 @@ public:
// One for cliqueA, one for cliqueB and one for nodes in both
BEAST_EXPECT(sim.branches() <= 3);
}
//@@end fork-threshold
}
}
@@ -748,6 +821,7 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("hub network");
// Simulate a set of 5 validators that aren't directly connected but
// rely on a single hub node for communication
@@ -835,6 +909,7 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("preferred by branch");
// Simulate network splits that are prevented from forking when using
// preferred ledger by trie. This is a contrived example that involves
@@ -967,6 +1042,7 @@ public:
{
using namespace csf;
using namespace std::chrono;
testcase("pause for laggards");
// Test that validators that jump ahead of the network slow
// down.
@@ -1052,6 +1128,410 @@ public:
BEAST_EXPECT(sim.synchronized());
}
// RNG consensus tests in ConsensusRng_test.cpp
// MERGE NOTE (sync-2.5.0): upstream testDisputes() is already present
// below with j/clog stalled() params from 86ef16dbeb. If upstream
// auto-merges a duplicate, delete it — keep only this version.
void
testDisputes()
{
testcase("disputes");
using namespace csf;
// Test dispute objects directly
using Dispute = DisputedTx<Tx, PeerID>;
Tx const txTrue{99};
Tx const txFalse{98};
Tx const txFollowingTrue{97};
Tx const txFollowingFalse{96};
int const numPeers = 100;
ConsensusParms p;
std::size_t peersUnchanged = 0;
auto logs = std::make_unique<Logs>(beast::severities::kError);
auto j = logs->journal("Test");
auto clog = std::make_unique<std::stringstream>();
// Three cases:
// 1 proposing, initial vote yes
// 2 proposing, initial vote no
// 3 not proposing, initial vote doesn't matter after the first update,
// use yes
{
Dispute proposingTrue{txTrue.id(), true, numPeers, journal_};
Dispute proposingFalse{txFalse.id(), false, numPeers, journal_};
Dispute followingTrue{
txFollowingTrue.id(), true, numPeers, journal_};
Dispute followingFalse{
txFollowingFalse.id(), false, numPeers, journal_};
BEAST_EXPECT(proposingTrue.ID() == 99);
BEAST_EXPECT(proposingFalse.ID() == 98);
BEAST_EXPECT(followingTrue.ID() == 97);
BEAST_EXPECT(followingFalse.ID() == 96);
// Create an even split in the peer votes
for (int i = 0; i < numPeers; ++i)
{
BEAST_EXPECT(proposingTrue.setVote(PeerID(i), i < 50));
BEAST_EXPECT(proposingFalse.setVote(PeerID(i), i < 50));
BEAST_EXPECT(followingTrue.setVote(PeerID(i), i < 50));
BEAST_EXPECT(followingFalse.setVote(PeerID(i), i < 50));
}
// Switch the middle vote to match mine
BEAST_EXPECT(proposingTrue.setVote(PeerID(50), true));
BEAST_EXPECT(proposingFalse.setVote(PeerID(49), false));
BEAST_EXPECT(followingTrue.setVote(PeerID(50), true));
BEAST_EXPECT(followingFalse.setVote(PeerID(49), false));
// no changes yet
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
BEAST_EXPECT(
!proposingTrue.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!proposingFalse.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingTrue.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingFalse.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(clog->str() == "");
// I'm in the majority, my vote should not change
BEAST_EXPECT(!proposingTrue.updateVote(5, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(5, true, p));
BEAST_EXPECT(!followingTrue.updateVote(5, false, p));
BEAST_EXPECT(!followingFalse.updateVote(5, false, p));
BEAST_EXPECT(!proposingTrue.updateVote(10, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(10, true, p));
BEAST_EXPECT(!followingTrue.updateVote(10, false, p));
BEAST_EXPECT(!followingFalse.updateVote(10, false, p));
peersUnchanged = 2;
BEAST_EXPECT(
!proposingTrue.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!proposingFalse.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingTrue.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingFalse.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(clog->str() == "");
// Right now, the vote is 51%. The requirement is about to jump to
// 65%
BEAST_EXPECT(proposingTrue.updateVote(55, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(55, true, p));
BEAST_EXPECT(!followingTrue.updateVote(55, false, p));
BEAST_EXPECT(!followingFalse.updateVote(55, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == false);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// 16 validators change their vote to match my original vote
for (int i = 0; i < 16; ++i)
{
auto pTrue = PeerID(numPeers - i - 1);
auto pFalse = PeerID(i);
BEAST_EXPECT(proposingTrue.setVote(pTrue, true));
BEAST_EXPECT(proposingFalse.setVote(pFalse, false));
BEAST_EXPECT(followingTrue.setVote(pTrue, true));
BEAST_EXPECT(followingFalse.setVote(pFalse, false));
}
// The vote should now be 66%, threshold is 65%
BEAST_EXPECT(proposingTrue.updateVote(60, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(60, true, p));
BEAST_EXPECT(!followingTrue.updateVote(60, false, p));
BEAST_EXPECT(!followingFalse.updateVote(60, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// Threshold jumps to 70%
BEAST_EXPECT(proposingTrue.updateVote(86, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(86, true, p));
BEAST_EXPECT(!followingTrue.updateVote(86, false, p));
BEAST_EXPECT(!followingFalse.updateVote(86, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == false);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// 5 more validators change their vote to match my original vote
for (int i = 16; i < 21; ++i)
{
auto pTrue = PeerID(numPeers - i - 1);
auto pFalse = PeerID(i);
BEAST_EXPECT(proposingTrue.setVote(pTrue, true));
BEAST_EXPECT(proposingFalse.setVote(pFalse, false));
BEAST_EXPECT(followingTrue.setVote(pTrue, true));
BEAST_EXPECT(followingFalse.setVote(pFalse, false));
}
// The vote should now be 71%, threshold is 70%
BEAST_EXPECT(proposingTrue.updateVote(90, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(90, true, p));
BEAST_EXPECT(!followingTrue.updateVote(90, false, p));
BEAST_EXPECT(!followingFalse.updateVote(90, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// The vote should now be 71%, threshold is 70%
BEAST_EXPECT(!proposingTrue.updateVote(150, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(150, true, p));
BEAST_EXPECT(!followingTrue.updateVote(150, false, p));
BEAST_EXPECT(!followingFalse.updateVote(150, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// The vote should now be 71%, threshold is 70%
BEAST_EXPECT(!proposingTrue.updateVote(190, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(190, true, p));
BEAST_EXPECT(!followingTrue.updateVote(190, false, p));
BEAST_EXPECT(!followingFalse.updateVote(190, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
peersUnchanged = 3;
BEAST_EXPECT(
!proposingTrue.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!proposingFalse.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingTrue.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingFalse.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(clog->str() == "");
// Threshold jumps to 95%
BEAST_EXPECT(proposingTrue.updateVote(220, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(220, true, p));
BEAST_EXPECT(!followingTrue.updateVote(220, false, p));
BEAST_EXPECT(!followingFalse.updateVote(220, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == false);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// 25 more validators change their vote to match my original vote
for (int i = 21; i < 46; ++i)
{
auto pTrue = PeerID(numPeers - i - 1);
auto pFalse = PeerID(i);
BEAST_EXPECT(proposingTrue.setVote(pTrue, true));
BEAST_EXPECT(proposingFalse.setVote(pFalse, false));
BEAST_EXPECT(followingTrue.setVote(pTrue, true));
BEAST_EXPECT(followingFalse.setVote(pFalse, false));
}
// The vote should now be 96%, threshold is 95%
BEAST_EXPECT(proposingTrue.updateVote(250, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(250, true, p));
BEAST_EXPECT(!followingTrue.updateVote(250, false, p));
BEAST_EXPECT(!followingFalse.updateVote(250, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
for (peersUnchanged = 0; peersUnchanged < 6; ++peersUnchanged)
{
BEAST_EXPECT(
!proposingTrue.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!proposingFalse.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingTrue.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(
!followingFalse.stalled(p, false, peersUnchanged, j, clog));
BEAST_EXPECT(clog->str() == "");
}
auto expectStalled = [this, &clog](
int txid,
bool ourVote,
int ourTime,
int peerTime,
int support,
std::uint32_t line) {
using namespace std::string_literals;
auto const s = clog->str();
expect(s.find("stalled"), s, __FILE__, line);
expect(
s.starts_with("Transaction "s + std::to_string(txid)),
s,
__FILE__,
line);
expect(
s.find("voting "s + (ourVote ? "YES" : "NO")) != s.npos,
s,
__FILE__,
line);
expect(
s.find("for "s + std::to_string(ourTime) + " rounds."s) !=
s.npos,
s,
__FILE__,
line);
expect(
s.find(
"votes in "s + std::to_string(peerTime) + " rounds.") !=
s.npos,
s,
__FILE__,
line);
expect(
s.ends_with(
"has "s + std::to_string(support) + "% support. "s),
s,
__FILE__,
line);
clog = std::make_unique<std::stringstream>();
};
for (int i = 0; i < 1; ++i)
{
BEAST_EXPECT(!proposingTrue.updateVote(250 + 10 * i, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(250 + 10 * i, true, p));
BEAST_EXPECT(!followingTrue.updateVote(250 + 10 * i, false, p));
BEAST_EXPECT(
!followingFalse.updateVote(250 + 10 * i, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// true vote has changed recently, so not stalled
BEAST_EXPECT(!proposingTrue.stalled(p, true, 0, j, clog));
BEAST_EXPECT(clog->str() == "");
// remaining votes have been unchanged in so long that we only
// need to hit the second round at 95% to be stalled, regardless
// of peers
BEAST_EXPECT(proposingFalse.stalled(p, true, 0, j, clog));
expectStalled(98, false, 11, 0, 2, __LINE__);
BEAST_EXPECT(followingTrue.stalled(p, false, 0, j, clog));
expectStalled(97, true, 11, 0, 97, __LINE__);
BEAST_EXPECT(followingFalse.stalled(p, false, 0, j, clog));
expectStalled(96, false, 11, 0, 3, __LINE__);
// true vote has changed recently, so not stalled
BEAST_EXPECT(
!proposingTrue.stalled(p, true, peersUnchanged, j, clog));
BEAST_EXPECTS(clog->str() == "", clog->str());
// remaining votes have been unchanged in so long that we only
// need to hit the second round at 95% to be stalled, regardless
// of peers
BEAST_EXPECT(
proposingFalse.stalled(p, true, peersUnchanged, j, clog));
expectStalled(98, false, 11, 6, 2, __LINE__);
BEAST_EXPECT(
followingTrue.stalled(p, false, peersUnchanged, j, clog));
expectStalled(97, true, 11, 6, 97, __LINE__);
BEAST_EXPECT(
followingFalse.stalled(p, false, peersUnchanged, j, clog));
expectStalled(96, false, 11, 6, 3, __LINE__);
}
for (int i = 1; i < 3; ++i)
{
BEAST_EXPECT(!proposingTrue.updateVote(250 + 10 * i, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(250 + 10 * i, true, p));
BEAST_EXPECT(!followingTrue.updateVote(250 + 10 * i, false, p));
BEAST_EXPECT(
!followingFalse.updateVote(250 + 10 * i, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
// true vote changed 2 rounds ago, and peers are changing, so
// not stalled
BEAST_EXPECT(!proposingTrue.stalled(p, true, 0, j, clog));
BEAST_EXPECTS(clog->str() == "", clog->str());
// still stalled
BEAST_EXPECT(proposingFalse.stalled(p, true, 0, j, clog));
expectStalled(98, false, 11 + i, 0, 2, __LINE__);
BEAST_EXPECT(followingTrue.stalled(p, false, 0, j, clog));
expectStalled(97, true, 11 + i, 0, 97, __LINE__);
BEAST_EXPECT(followingFalse.stalled(p, false, 0, j, clog));
expectStalled(96, false, 11 + i, 0, 3, __LINE__);
// true vote changed 2 rounds ago, and peers are NOT changing,
// so stalled
BEAST_EXPECT(
proposingTrue.stalled(p, true, peersUnchanged, j, clog));
expectStalled(99, true, 1 + i, 6, 97, __LINE__);
// still stalled
BEAST_EXPECT(
proposingFalse.stalled(p, true, peersUnchanged, j, clog));
expectStalled(98, false, 11 + i, 6, 2, __LINE__);
BEAST_EXPECT(
followingTrue.stalled(p, false, peersUnchanged, j, clog));
expectStalled(97, true, 11 + i, 6, 97, __LINE__);
BEAST_EXPECT(
followingFalse.stalled(p, false, peersUnchanged, j, clog));
expectStalled(96, false, 11 + i, 6, 3, __LINE__);
}
for (int i = 3; i < 5; ++i)
{
BEAST_EXPECT(!proposingTrue.updateVote(250 + 10 * i, true, p));
BEAST_EXPECT(!proposingFalse.updateVote(250 + 10 * i, true, p));
BEAST_EXPECT(!followingTrue.updateVote(250 + 10 * i, false, p));
BEAST_EXPECT(
!followingFalse.updateVote(250 + 10 * i, false, p));
BEAST_EXPECT(proposingTrue.getOurVote() == true);
BEAST_EXPECT(proposingFalse.getOurVote() == false);
BEAST_EXPECT(followingTrue.getOurVote() == true);
BEAST_EXPECT(followingFalse.getOurVote() == false);
BEAST_EXPECT(proposingTrue.stalled(p, true, 0, j, clog));
expectStalled(99, true, 1 + i, 0, 97, __LINE__);
BEAST_EXPECT(proposingFalse.stalled(p, true, 0, j, clog));
expectStalled(98, false, 11 + i, 0, 2, __LINE__);
BEAST_EXPECT(followingTrue.stalled(p, false, 0, j, clog));
expectStalled(97, true, 11 + i, 0, 97, __LINE__);
BEAST_EXPECT(followingFalse.stalled(p, false, 0, j, clog));
expectStalled(96, false, 11 + i, 0, 3, __LINE__);
BEAST_EXPECT(
proposingTrue.stalled(p, true, peersUnchanged, j, clog));
expectStalled(99, true, 1 + i, 6, 97, __LINE__);
BEAST_EXPECT(
proposingFalse.stalled(p, true, peersUnchanged, j, clog));
expectStalled(98, false, 11 + i, 6, 2, __LINE__);
BEAST_EXPECT(
followingTrue.stalled(p, false, peersUnchanged, j, clog));
expectStalled(97, true, 11 + i, 6, 97, __LINE__);
BEAST_EXPECT(
followingFalse.stalled(p, false, peersUnchanged, j, clog));
expectStalled(96, false, 11 + i, 6, 3, __LINE__);
}
}
}
void
run() override
{
@@ -1068,6 +1548,8 @@ public:
testHubNetwork();
testPreferredByBranch();
testPauseForLaggards();
// RNG consensus tests moved to ConsensusRng_test.cpp
testDisputes();
}
};

View File

@@ -0,0 +1,478 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2026 XRPL Labs
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpld/app/consensus/RCLCxPeerPos.h>
#include <xrpld/consensus/ConsensusProposal.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/digest.h>
#include <cstring>
namespace ripple {
namespace test {
class ExtendedPosition_test : public beast::unit_test::suite
{
// Generate deterministic test hashes
static uint256
makeHash(char const* label)
{
return sha512Half(Slice(label, std::strlen(label)));
}
void
testSerializationRoundTrip()
{
testcase("Serialization round-trip");
// Empty position (legacy compat)
{
auto const txSet = makeHash("txset-a");
ExtendedPosition pos{txSet};
Serializer s;
pos.add(s);
// Should be exactly 32 bytes (no flags byte)
BEAST_EXPECT(s.getDataLength() == 32);
SerialIter sit(s.slice());
auto deserialized =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(deserialized.has_value());
if (!deserialized)
return;
BEAST_EXPECT(deserialized->txSetHash == txSet);
BEAST_EXPECT(!deserialized->myCommitment);
BEAST_EXPECT(!deserialized->myReveal);
BEAST_EXPECT(!deserialized->commitSetHash);
BEAST_EXPECT(!deserialized->entropySetHash);
BEAST_EXPECT(!deserialized->exportSigSetHash);
BEAST_EXPECT(!deserialized->exportSignaturesHash);
}
// Position with commitment
{
auto const txSet = makeHash("txset-b");
auto const commit = makeHash("commit-b");
ExtendedPosition pos{txSet};
pos.myCommitment = commit;
Serializer s;
pos.add(s);
// 32 (txSet) + 1 (flags) + 32 (commitment) = 65
BEAST_EXPECT(s.getDataLength() == 65);
SerialIter sit(s.slice());
auto deserialized =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(deserialized.has_value());
if (!deserialized)
return;
BEAST_EXPECT(deserialized->txSetHash == txSet);
BEAST_EXPECT(deserialized->myCommitment == commit);
BEAST_EXPECT(!deserialized->myReveal);
}
// Position with all fields
{
auto const txSet = makeHash("txset-c");
auto const commitSet = makeHash("commitset-c");
auto const entropySet = makeHash("entropyset-c");
auto const exportSigSet = makeHash("exportsigset-c");
auto const exportSigs = makeHash("exportsigs-c");
auto const commit = makeHash("commit-c");
auto const reveal = makeHash("reveal-c");
ExtendedPosition pos{txSet};
pos.commitSetHash = commitSet;
pos.entropySetHash = entropySet;
pos.exportSigSetHash = exportSigSet;
pos.exportSignaturesHash = exportSigs;
pos.myCommitment = commit;
pos.myReveal = reveal;
Serializer s;
pos.add(s);
// 32 + 1 + 6*32 = 225
BEAST_EXPECT(s.getDataLength() == 225);
SerialIter sit(s.slice());
auto deserialized =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(deserialized.has_value());
if (!deserialized)
return;
BEAST_EXPECT(deserialized->txSetHash == txSet);
BEAST_EXPECT(deserialized->commitSetHash == commitSet);
BEAST_EXPECT(deserialized->entropySetHash == entropySet);
BEAST_EXPECT(deserialized->exportSigSetHash == exportSigSet);
BEAST_EXPECT(deserialized->exportSignaturesHash == exportSigs);
BEAST_EXPECT(deserialized->myCommitment == commit);
BEAST_EXPECT(deserialized->myReveal == reveal);
}
}
void
testSigningConsistency()
{
testcase("Signing hash consistency");
// The signing hash from ConsensusProposal::signingHash() must match
// what a receiver would compute via the same function after
// deserializing the ExtendedPosition from the wire.
auto const [pk, sk] = randomKeyPair(KeyType::secp256k1);
auto const nodeId = calcNodeID(pk);
auto const prevLedger = makeHash("prevledger");
auto const closeTime =
NetClock::time_point{NetClock::duration{1234567}};
// Test with commitment (the case that was failing)
{
auto const txSet = makeHash("txset-sign");
auto const commit = makeHash("commitment-sign");
ExtendedPosition pos{txSet};
pos.myCommitment = commit;
using Proposal =
ConsensusProposal<NodeID, uint256, ExtendedPosition>;
Proposal prop{
prevLedger,
Proposal::seqJoin,
pos,
closeTime,
NetClock::time_point{},
nodeId};
// Sign it (same as propose() does)
auto const signingHash = prop.signingHash();
auto sig = signDigest(pk, sk, signingHash);
// Serialize position to wire format
Serializer positionData;
pos.add(positionData);
auto const posSlice = positionData.slice();
// Deserialize (same as PeerImp::onMessage does)
SerialIter sit(posSlice);
auto const maybeReceivedPos =
ExtendedPosition::fromSerialIter(sit, posSlice.size());
BEAST_EXPECT(maybeReceivedPos.has_value());
if (!maybeReceivedPos)
return;
// Reconstruct proposal on receiver side
Proposal receivedProp{
prevLedger,
Proposal::seqJoin,
*maybeReceivedPos,
closeTime,
NetClock::time_point{},
nodeId};
// The signing hash must match
BEAST_EXPECT(receivedProp.signingHash() == signingHash);
// Verify signature (same as checkSign does)
BEAST_EXPECT(
verifyDigest(pk, receivedProp.signingHash(), sig, false));
}
// Test without commitment (legacy case)
{
auto const txSet = makeHash("txset-legacy");
ExtendedPosition pos{txSet};
using Proposal =
ConsensusProposal<NodeID, uint256, ExtendedPosition>;
Proposal prop{
prevLedger,
Proposal::seqJoin,
pos,
closeTime,
NetClock::time_point{},
nodeId};
auto const signingHash = prop.signingHash();
auto sig = signDigest(pk, sk, signingHash);
Serializer positionData;
pos.add(positionData);
SerialIter sit(positionData.slice());
auto const maybeReceivedPos = ExtendedPosition::fromSerialIter(
sit, positionData.getDataLength());
BEAST_EXPECT(maybeReceivedPos.has_value());
if (!maybeReceivedPos)
return;
Proposal receivedProp{
prevLedger,
Proposal::seqJoin,
*maybeReceivedPos,
closeTime,
NetClock::time_point{},
nodeId};
BEAST_EXPECT(receivedProp.signingHash() == signingHash);
BEAST_EXPECT(
verifyDigest(pk, receivedProp.signingHash(), sig, false));
}
}
void
testSuppressionConsistency()
{
testcase("Suppression hash consistency");
// proposalUniqueId must produce the same result on sender and
// receiver when given the same ExtendedPosition data.
auto const [pk, sk] = randomKeyPair(KeyType::secp256k1);
auto const prevLedger = makeHash("prevledger-supp");
auto const closeTime =
NetClock::time_point{NetClock::duration{1234567}};
std::uint32_t const proposeSeq = 0;
auto const txSet = makeHash("txset-supp");
auto const commit = makeHash("commitment-supp");
ExtendedPosition pos{txSet};
pos.myCommitment = commit;
// Sign (to get a real signature for suppression)
using Proposal = ConsensusProposal<NodeID, uint256, ExtendedPosition>;
Proposal prop{
prevLedger,
proposeSeq,
pos,
closeTime,
NetClock::time_point{},
calcNodeID(pk)};
auto sig = signDigest(pk, sk, prop.signingHash());
// Sender computes suppression
auto const senderSuppression =
proposalUniqueId(pos, prevLedger, proposeSeq, closeTime, pk, sig);
// Simulate wire: serialize and deserialize
Serializer positionData;
pos.add(positionData);
SerialIter sit(positionData.slice());
auto const maybeReceivedPos =
ExtendedPosition::fromSerialIter(sit, positionData.getDataLength());
BEAST_EXPECT(maybeReceivedPos.has_value());
if (!maybeReceivedPos)
return;
// Receiver computes suppression
auto const receiverSuppression = proposalUniqueId(
*maybeReceivedPos, prevLedger, proposeSeq, closeTime, pk, sig);
BEAST_EXPECT(senderSuppression == receiverSuppression);
}
void
testMalformedPayload()
{
testcase("Malformed payload rejected");
// Too short (< 32 bytes)
{
Serializer s;
s.add32(0xDEADBEEF); // only 4 bytes
SerialIter sit(s.slice());
auto result =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(!result.has_value());
}
// Empty payload
{
Serializer s;
SerialIter sit(s.slice());
auto result = ExtendedPosition::fromSerialIter(sit, 0);
BEAST_EXPECT(!result.has_value());
}
// Flags claim fields that aren't present (truncated)
{
auto const txSet = makeHash("txset-malformed");
Serializer s;
s.addBitString(txSet);
// flags = 0x0F claims four optional fields, but no field data follows
s.add8(0x0F);
SerialIter sit(s.slice());
auto result =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(!result.has_value());
}
// Flags claim 2 fields but only 1 field's worth of data
{
auto const txSet = makeHash("txset-malformed2");
auto const commit = makeHash("commit-malformed2");
Serializer s;
s.addBitString(txSet);
// flags = 0x03 (commitSetHash + entropySetHash), but only
// provide commitSetHash data
s.add8(0x03);
s.addBitString(commit);
SerialIter sit(s.slice());
auto result =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(!result.has_value());
}
// Unknown flag bits above known extension fields (wire malleability)
{
auto const txSet = makeHash("txset-unkflags");
Serializer s;
s.addBitString(txSet);
s.add8(0x41); // bit 6 is unknown, bit 0 = commitSetHash
s.addBitString(makeHash("commitset-unkflags"));
SerialIter sit(s.slice());
auto result =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(!result.has_value());
}
// Trailing extra bytes after valid fields
{
auto const txSet = makeHash("txset-trailing");
auto const commitSet = makeHash("commitset-trailing");
Serializer s;
s.addBitString(txSet);
s.add8(0x01); // commitSetHash only
s.addBitString(commitSet);
s.add32(0xDEADBEEF); // 4 extra trailing bytes
SerialIter sit(s.slice());
auto result =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(!result.has_value());
}
// Valid flags with exactly the right amount of data (should succeed)
{
auto const txSet = makeHash("txset-ok");
auto const commitSet = makeHash("commitset-ok");
Serializer s;
s.addBitString(txSet);
s.add8(0x01); // commitSetHash only
s.addBitString(commitSet);
SerialIter sit(s.slice());
auto result =
ExtendedPosition::fromSerialIter(sit, s.getDataLength());
BEAST_EXPECT(result.has_value());
if (result)
{
BEAST_EXPECT(result->txSetHash == txSet);
BEAST_EXPECT(result->commitSetHash == commitSet);
BEAST_EXPECT(!result->entropySetHash);
}
}
}
void
testEquality()
{
testcase("Equality is txSetHash only");
auto const txSet = makeHash("txset-eq");
auto const txSet2 = makeHash("txset-eq-2");
ExtendedPosition a{txSet};
a.myCommitment = makeHash("commit1-eq");
ExtendedPosition b{txSet};
b.myCommitment = makeHash("commit2-eq");
// Same txSetHash, different leaves -> equal
BEAST_EXPECT(a == b);
// Same txSetHash, different commitSetHash -> still equal
// (sub-state quorum handles commitSetHash agreement)
b.commitSetHash = makeHash("cs-eq");
BEAST_EXPECT(a == b);
// Same txSetHash, different entropySetHash -> still equal
b.entropySetHash = makeHash("es-eq");
BEAST_EXPECT(a == b);
// Same txSetHash, different export signature digest -> still equal
b.exportSignaturesHash = makeHash("export-sigs-eq");
BEAST_EXPECT(a == b);
// Different txSetHash -> not equal
ExtendedPosition c{txSet2};
BEAST_EXPECT(a != c);
}
void
testExportSignatureDigest()
{
testcase("Export signature digest");
std::vector<std::string> blobs;
blobs.emplace_back("txhash-pubkey-sig-a");
blobs.emplace_back("txhash-pubkey-sig-b");
auto const digest = proposalExportSignaturesHash(blobs);
BEAST_EXPECT(digest == proposalExportSignaturesHash(blobs));
auto reordered = blobs;
std::swap(reordered[0], reordered[1]);
BEAST_EXPECT(digest != proposalExportSignaturesHash(reordered));
auto mutated = blobs;
mutated[1].push_back('x');
BEAST_EXPECT(digest != proposalExportSignaturesHash(mutated));
}
public:
void
run() override
{
testSerializationRoundTrip();
testSigningConsistency();
testSuppressionConsistency();
testMalformedPayload();
testEquality();
testExportSignatureDigest();
}
};
BEAST_DEFINE_TESTSUITE(ExtendedPosition, consensus, ripple);
} // namespace test
} // namespace ripple

View File

@@ -22,6 +22,7 @@
#include <test/csf/Histogram.h>
#include <test/csf/Peer.h>
#include <test/csf/PeerGroup.h>
#include <test/csf/PeerTick.h>
#include <test/csf/Proposal.h>
#include <test/csf/Scheduler.h>
#include <test/csf/Sim.h>

View File

@@ -20,6 +20,7 @@
#define RIPPLE_TEST_CSF_PEER_H_INCLUDED
#include <test/csf/CollectorRef.h>
#include <test/csf/Proposal.h>
#include <test/csf/Scheduler.h>
#include <test/csf/TrustGraph.h>
#include <test/csf/Tx.h>
@@ -28,11 +29,14 @@
#include <test/csf/ledgers.h>
#include <xrpld/consensus/Consensus.h>
#include <xrpld/consensus/Validations.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/beast/utility/WrappedSink.h>
#include <xrpl/protocol/PublicKey.h>
#include <boost/container/flat_map.hpp>
#include <boost/container/flat_set.hpp>
#include <algorithm>
#include <string>
#include <vector>
namespace ripple {
namespace test {
@@ -51,6 +55,41 @@ namespace bc = boost::container;
by Collectors
- Exposes most internal state for forcibly simulating arbitrary scenarios
*/
/// Content-addressed sidecar set store, simulating InboundTransactions.
/// Shared across all peers in a simulation — peers publish sets by hash
/// and fetch them by hash, just like the real SHAMap fetch pipeline.
///
/// Each entry is tagged with its type so fetchRngSetIfNeeded can merge
/// into the correct local set without content-sniffing heuristics.
struct SidecarStore
{
enum class Type { commit, reveal, exportSig };
using EntrySet = hash_map<PeerID, uint256>;
struct TaggedSet
{
Type type;
EntrySet entries;
};
void
publish(uint256 const& hash, Type type, EntrySet const& entries)
{
sets_[hash] = {type, entries};
}
TaggedSet const*
fetch(uint256 const& hash) const
{
auto it = sets_.find(hash);
return it != sets_.end() ? &it->second : nullptr;
}
private:
std::map<uint256, TaggedSet> sets_;
};
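// Usage sketch (illustration only; `setHash` and `revealDigest` are
// placeholder values, not names from this codebase): a proposer
// publishes its reveal entries under the hash it advertises, and any
// peer that later observes that hash can fetch the tagged entries and
// merge them into its own pending set.
//
//   SidecarStore store;
//   SidecarStore::EntrySet reveals;
//   reveals[PeerID(1)] = revealDigest;
//   store.publish(setHash, SidecarStore::Type::reveal, reveals);
//   if (auto const* tagged = store.fetch(setHash))
//       assert(tagged->type == SidecarStore::Type::reveal);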
struct Peer
{
/** Basic wrapper of a proposed position taken by a peer.
@@ -61,6 +100,8 @@ struct Peer
class Position
{
public:
using Proposal = csf::Proposal;
Position(Proposal const& p) : proposal_(p)
{
}
@@ -77,6 +118,18 @@ struct Peer
return proposal_.getJson();
}
PeerKey
publicKey() const
{
return {proposal_.nodeID(), 0};
}
std::uint64_t
signature() const
{
return 0;
}
std::string
render() const
{
@@ -169,6 +222,7 @@ struct Peer
using NodeKey_t = PeerKey;
using TxSet_t = TxSet;
using PeerPosition_t = Position;
using Position_t = ProposalPosition;
using Result = ConsensusResult<Peer>;
using NodeKey = Validation::NodeKey;
@@ -188,6 +242,9 @@ struct Peer
//! The oracle that manages unique ledgers
LedgerOracle& oracle;
//! Shared sidecar store (simulates InboundTransactions)
SidecarStore& sidecarStore;
//! Scheduler of events
Scheduler& scheduler;
@@ -257,6 +314,686 @@ struct Peer
// Simulation parameters
ConsensusParms consensusParms;
/// RNG consensus extensions for CSF. Owns all RNG state and methods,
/// same pattern as ConsensusExtensions for production.
struct Extensions
{
Peer& peer;
beast::Journal j_;
// Sub-state machine
EstablishState estState_{EstablishState::ConvergingTx};
std::chrono::steady_clock::time_point revealPhaseStart_{};
std::chrono::steady_clock::time_point commitHashConflictStart_{};
bool explicitFinalProposalSent_{false};
bool entropySetPublished_{false};
std::chrono::steady_clock::time_point entropyPublishStart_{};
bool exportSigGateStarted_{false};
std::chrono::steady_clock::time_point exportSigGateStart_{};
bool exportSigConvergenceFailed_{false};
// RNG state
bool enableRngConsensus_ = false;
bool enableExportConsensus_ = false;
hash_set<PeerID> unlNodes_;
hash_set<PeerID> likelyParticipants_;
hash_map<PeerID, uint256> pendingCommits_;
hash_map<PeerID, uint256> pendingReveals_;
hash_map<PeerID, uint256> pendingExportSigs_;
hash_map<PeerID, PeerKey> nodeKeys_;
uint256 myEntropySecret_;
bool entropyFailed_ = false;
// Last round summary (for test assertions)
uint256 lastEntropyDigest_;
std::uint16_t lastEntropyCount_ = 0;
bool lastEntropyWasFallback_ = true;
bool lastExportSucceeded_ = false;
bool lastExportRetried_ = false;
std::size_t exportSigFetchMerges_ = 0;
// Optional test hook: force a specific commit-set hash
std::optional<uint256> forcedCommitSetHash_;
// Optional test hook: force a specific entropy-set hash
std::optional<uint256> forcedEntropySetHash_;
// Optional test hook: force a specific export sig-set hash
std::optional<uint256> forcedExportSigSetHash_;
// Optional test hook: drop reveals from specific peers
// (simulates asymmetric reveal delivery / packet loss)
hash_set<PeerID> dropRevealFrom_;
// Optional test hook: drop proposal-carried export signatures.
hash_set<PeerID> dropExportSigFrom_;
// Optional test hook: stay an active proposer but do not originate an
// export signature, so tests can force sidecar-fetch-only convergence.
bool suppressOwnExportSig_ = false;
explicit Extensions(Peer& p) : peer(p), j_(p.j)
{
}
// --- RNG methods ---
bool
rngEnabled() const
{
return enableRngConsensus_;
}
bool
exportEnabled() const
{
return enableExportConsensus_;
}
std::size_t
quorumThreshold() const
{
if (!enableRngConsensus_)
return (std::numeric_limits<std::size_t>::max)() / 4;
auto const base = unlNodes_.size();
return calculateQuorumThreshold(base == 0 ? 1 : base);
}
std::size_t
exportSigQuorumThreshold() const
{
if (!enableExportConsensus_)
return (std::numeric_limits<std::size_t>::max)() / 4;
auto const base =
unlNodes_.empty() ? std::size_t{1} : unlNodes_.size();
return calculateQuorumThreshold(base);
}
std::size_t
pendingCommitCount() const
{
return pendingCommits_.size();
}
std::size_t
pendingRevealCount() const
{
return pendingReveals_.size();
}
std::size_t
expectedProposerCount() const
{
return likelyParticipants_.size();
}
bool
hasQuorumOfCommits() const
{
if (!enableRngConsensus_)
return false;
return pendingCommits_.size() >= quorumThreshold();
}
bool
hasMinimumReveals() const
{
if (!enableRngConsensus_)
return false;
return pendingReveals_.size() >= pendingCommits_.size();
}
bool
hasAnyReveals() const
{
if (!enableRngConsensus_)
return false;
return !pendingReveals_.empty();
}
bool
shouldZeroEntropy() const
{
if (entropyFailed_ || pendingReveals_.empty())
return true;
// Match production: zero when reveals < quorum threshold.
auto const threshold = unlNodes_.empty()
? std::size_t{1}
: calculateQuorumThreshold(unlNodes_.size());
return pendingReveals_.size() < threshold;
}
uint256
buildCommitSet(Ledger::Seq seq)
{
if (forcedCommitSetHash_)
return *forcedCommitSetHash_;
auto const hash = hashRngSet(pendingCommits_, seq, "commit");
peer.sidecarStore.publish(
hash, SidecarStore::Type::commit, pendingCommits_);
return hash;
}
uint256
buildEntropySet(Ledger::Seq seq)
{
if (forcedEntropySetHash_)
return *forcedEntropySetHash_;
auto const hash = hashRngSet(pendingReveals_, seq, "reveal");
peer.sidecarStore.publish(
hash, SidecarStore::Type::reveal, pendingReveals_);
return hash;
}
uint256
buildExportSigSet(Ledger::Seq seq)
{
if (forcedExportSigSetHash_)
return *forcedExportSigSetHash_;
auto const hash = hashRngSet(pendingExportSigs_, seq, "export-sig");
peer.sidecarStore.publish(
hash, SidecarStore::Type::exportSig, pendingExportSigs_);
return hash;
}
void
generateEntropySecret()
{
if (!enableRngConsensus_)
return;
auto const seq =
static_cast<std::uint32_t>(peer.lastClosedLedger.seq()) + 1;
myEntropySecret_ = sha512Half(
std::string("csf-rng-secret"),
static_cast<std::uint32_t>(peer.id),
peer.key.second,
seq,
peer.completedLedgers);
}
uint256
getEntropySecret() const
{
return myEntropySecret_;
}
void
selfSeedReveal()
{
if (!enableRngConsensus_)
return;
// Self-seed our own reveal into pendingReveals_ so it
// counts toward reveal quorum. The real code does this
// in decorateMessage; the CSF does it here since it has
// no equivalent serialization hook.
if (myEntropySecret_ != uint256{})
pendingReveals_[peer.id] = myEntropySecret_;
}
void
setEntropyFailed()
{
if (!enableRngConsensus_)
return;
entropyFailed_ = true;
}
enum class SidecarKind : uint8_t { commit, reveal, exportSig };
void
fetchRngSetIfNeeded(
std::optional<uint256> const& hash,
SidecarKind kind = SidecarKind::commit)
{
if (!hash)
return;
auto const* fetched = peer.sidecarStore.fetch(*hash);
if (!fetched)
return;
// Union merge into the correct local set based on type.
auto& target = [&]() -> hash_map<PeerID, uint256>& {
switch (fetched->type)
{
case SidecarStore::Type::commit:
return pendingCommits_;
case SidecarStore::Type::reveal:
return pendingReveals_;
case SidecarStore::Type::exportSig:
return pendingExportSigs_;
}
return pendingCommits_;
}();
for (auto const& [nodeId, digest] : fetched->entries)
{
auto const [_, inserted] = target.emplace(nodeId, digest);
if (fetched->type == SidecarStore::Type::exportSig && inserted)
++exportSigFetchMerges_;
}
}
void
fetchSidecarsIfNeeded(ProposalPosition const& pos)
{
fetchRngSetIfNeeded(pos.commitSetHash, SidecarKind::commit);
fetchRngSetIfNeeded(pos.entropySetHash, SidecarKind::reveal);
fetchRngSetIfNeeded(pos.exportSigSetHash, SidecarKind::exportSig);
}
void
clearRngState()
{
pendingCommits_.clear();
pendingReveals_.clear();
pendingExportSigs_.clear();
nodeKeys_.clear();
likelyParticipants_.clear();
myEntropySecret_.zero();
entropyFailed_ = false;
exportSigGateStarted_ = false;
exportSigGateStart_ = {};
exportSigConvergenceFailed_ = false;
}
void
cacheUNLReport()
{
unlNodes_.clear();
for (auto const* p : peer.trustGraph.trustedPeers(&peer))
{
if (!peer.runAsValidator && p->id == peer.id)
continue;
unlNodes_.insert(p->id);
}
if (peer.runAsValidator)
unlNodes_.insert(peer.id);
}
void
setExpectedProposers(hash_set<PeerID> proposers)
{
bool const includeSelf = peer.runAsValidator;
if (!proposers.empty())
{
hash_set<PeerID> filtered;
for (auto const& nid : proposers)
{
if (!includeSelf && nid == peer.id)
continue;
if (isUNLReportMember(nid))
filtered.insert(nid);
}
if (includeSelf)
filtered.insert(peer.id);
likelyParticipants_ = std::move(filtered);
return;
}
likelyParticipants_.clear();
if (!unlNodes_.empty())
likelyParticipants_ = unlNodes_;
}
void
harvestRngData(
PeerID const& nodeId,
PeerKey const& publicKey,
ProposalPosition const& position,
std::uint32_t,
NetClock::time_point,
Ledger::ID const& prevLedger,
std::uint64_t)
{
if (!enableRngConsensus_ && !enableExportConsensus_)
return;
if (!isUNLReportMember(nodeId))
return;
nodeKeys_.insert_or_assign(nodeId, publicKey);
if (enableRngConsensus_ && position.myCommitment)
{
auto [it, inserted] =
pendingCommits_.emplace(nodeId, *position.myCommitment);
if (!inserted && it->second != *position.myCommitment)
{
it->second = *position.myCommitment;
pendingReveals_.erase(nodeId);
}
}
if (!enableRngConsensus_ || !position.myReveal)
{
if (enableExportConsensus_ && position.myExportSignature &&
dropExportSigFrom_.count(nodeId) == 0)
pendingExportSigs_[nodeId] = *position.myExportSignature;
return;
}
// Test hook: drop reveals from specific peers
if (dropRevealFrom_.count(nodeId) == 0)
{
auto const commitIt = pendingCommits_.find(nodeId);
if (commitIt != pendingCommits_.end())
{
auto const prevIt = peer.ledgers.find(prevLedger);
if (prevIt != peer.ledgers.end())
{
auto const seq =
static_cast<std::uint32_t>(prevIt->second.seq()) +
1;
auto const expected = sha512Half(
*position.myReveal,
static_cast<std::uint32_t>(publicKey.first),
publicKey.second,
seq);
if (expected == commitIt->second)
pendingReveals_[nodeId] = *position.myReveal;
}
}
}
if (enableExportConsensus_ && position.myExportSignature &&
dropExportSigFrom_.count(nodeId) == 0)
pendingExportSigs_[nodeId] = *position.myExportSignature;
}
bool
isUNLReportMember(PeerID const& nodeId) const
{
return unlNodes_.count(nodeId) > 0;
}
void
finalizeRoundEntropy(std::uint32_t seq)
{
if (!enableRngConsensus_)
{
lastEntropyDigest_.zero();
lastEntropyCount_ = 0;
lastEntropyWasFallback_ = true;
return;
}
if (shouldZeroEntropy())
{
lastEntropyDigest_.zero();
lastEntropyCount_ = 0;
lastEntropyWasFallback_ = true;
return;
}
std::vector<std::pair<PeerKey, uint256>> ordered;
ordered.reserve(pendingReveals_.size());
for (auto const& [nodeId, reveal] : pendingReveals_)
{
auto const it = nodeKeys_.find(nodeId);
if (it == nodeKeys_.end())
continue;
ordered.emplace_back(it->second, reveal);
}
if (ordered.empty())
{
lastEntropyDigest_.zero();
lastEntropyCount_ = 0;
lastEntropyWasFallback_ = true;
return;
}
std::sort(
ordered.begin(),
ordered.end(),
[](auto const& a, auto const& b) {
if (a.first.first != b.first.first)
return a.first.first < b.first.first;
return a.first.second < b.first.second;
});
uint256 digest = sha512Half(
std::string("csf-rng-entropy"),
static_cast<std::uint32_t>(seq));
for (auto const& [keyId, reveal] : ordered)
{
digest = sha512Half(
digest,
static_cast<std::uint32_t>(keyId.first),
keyId.second,
reveal);
}
lastEntropyDigest_ = digest;
lastEntropyCount_ = static_cast<std::uint16_t>(ordered.size());
lastEntropyWasFallback_ = false;
}
void
finalizeRoundExport()
{
if (!enableExportConsensus_)
{
lastExportSucceeded_ = false;
lastExportRetried_ = false;
return;
}
auto const activeSigCount = std::count_if(
pendingExportSigs_.begin(),
pendingExportSigs_.end(),
[&](auto const& entry) {
return isUNLReportMember(entry.first);
});
lastExportSucceeded_ = !exportSigConvergenceFailed_ &&
static_cast<std::size_t>(activeSigCount) >=
exportSigQuorumThreshold();
lastExportRetried_ = !lastExportSucceeded_;
}
// --- Lifecycle hooks (matching design doc) ---
template <class Ledger_t>
void
onRoundStart(
Ledger_t const& /* prevLedger */,
hash_set<PeerID> lastProposers)
{
clearRngState();
cacheUNLReport();
setExpectedProposers(std::move(lastProposers));
resetSubState();
}
void
onTrustedPeerProposal(
PeerID const& nodeId,
PeerKey const& publicKey,
ProposalPosition const& position,
std::uint32_t proposeSeq,
NetClock::time_point closeTime,
Ledger::ID const& prevLedger,
std::uint64_t signature)
{
harvestRngData(
nodeId,
publicKey,
position,
proposeSeq,
closeTime,
prevLedger,
signature);
}
void
onAcceptComplete()
{
}
template <class Ledger_t>
void
decoratePosition(
ProposalPosition& pos,
Ledger_t const& prevLedger,
bool proposing)
{
decorateExportPosition(pos, prevLedger, proposing);
if (!enableRngConsensus_ || !proposing || !peer.runAsValidator)
return;
generateEntropySecret();
auto const seq = static_cast<std::uint32_t>(prevLedger.seq()) + 1;
auto const commitment = sha512Half(
myEntropySecret_,
static_cast<std::uint32_t>(peer.id),
peer.key.second,
seq);
pos.myCommitment = commitment;
pendingCommits_[peer.id] = commitment;
nodeKeys_.insert_or_assign(peer.id, peer.key);
}
template <class Ledger_t>
void
decorateExportPosition(
ProposalPosition& pos,
Ledger_t const& prevLedger,
bool proposing)
{
if (!enableExportConsensus_ || !proposing || !peer.runAsValidator)
return;
auto const seq = static_cast<std::uint32_t>(prevLedger.seq()) + 1;
auto const sig = sha512Half(
std::string("csf-export-sig"),
static_cast<std::uint32_t>(peer.id),
peer.key.second,
seq);
if (!suppressOwnExportSig_)
{
pos.myExportSignature = sig;
pendingExportSigs_[peer.id] = sig;
}
nodeKeys_.insert_or_assign(peer.id, peer.key);
}
void
appendJson(Json::Value&) const
{
}
template <class Pos>
void
logPosition(
Pos const&,
beast::Journal,
beast::severities::Severity = beast::severities::kTrace) const
{
}
// --- Stubs for features CSF doesn't model ---
bool
bootstrapFastStartEnabled() const
{
return false;
}
bool
shouldSendExplicitFinalProposal() const
{
return false;
}
std::optional<TxSet>
buildExplicitFinalProposalTxSet(TxSet const&, Ledger::Seq)
{
return std::nullopt;
}
bool
hasPendingExportSigs() const
{
return enableExportConsensus_ && !pendingExportSigs_.empty();
}
bool
hasConsensusExportTxns() const
{
return enableExportConsensus_;
}
void
setExportSigConvergenceFailed()
{
if (enableExportConsensus_)
exportSigConvergenceFailed_ = true;
}
// --- Sub-state accessors ---
bool
extensionsBusy() const
{
return estState_ != EstablishState::ConvergingTx ||
(exportEnabled() &&
(exportSigGateStarted_ || hasPendingExportSigs()));
}
EstablishState
estState() const
{
return estState_;
}
void
resetSubState()
{
estState_ = EstablishState::ConvergingTx;
revealPhaseStart_ = {};
commitHashConflictStart_ = {};
explicitFinalProposalSent_ = false;
entropySetPublished_ = false;
entropyPublishStart_ = {};
exportSigGateStarted_ = false;
exportSigGateStart_ = {};
exportSigConvergenceFailed_ = false;
}
/// Defined in test/csf/PeerTick.h (keeps xrpld/app dependency
/// out of this header).
template <class Ctx>
ExtensionTickResult
onTick(Ctx const& ctx);
private:
uint256
hashRngSet(
hash_map<PeerID, uint256> const& entries,
Ledger::Seq seq,
std::string const& domain) const
{
std::vector<std::pair<std::uint32_t, uint256>> ordered;
ordered.reserve(entries.size());
for (auto const& [nodeId, digest] : entries)
{
if (!isUNLReportMember(nodeId))
continue;
ordered.emplace_back(
static_cast<std::uint32_t>(nodeId), digest);
}
if (ordered.empty())
return uint256{};
std::sort(
ordered.begin(),
ordered.end(),
[](auto const& a, auto const& b) { return a.first < b.first; });
uint256 out = sha512Half(
std::string("csf-rng-set"),
domain,
static_cast<std::uint32_t>(seq));
for (auto const& [nodeId, digest] : ordered)
out = sha512Half(out, nodeId, digest);
return out;
}
};
Extensions extensions_{*this};
Extensions&
ce()
{
return extensions_;
}
Extensions const&
ce() const
{
return extensions_;
}
//! The collectors to report events to
CollectorRefs& collectors;
@@ -278,13 +1015,15 @@ struct Peer
BasicNetwork<Peer*>& n,
TrustGraph<Peer*>& tg,
CollectorRefs& c,
beast::Journal jIn)
beast::Journal jIn,
SidecarStore& sc)
: sink(jIn, "Peer " + to_string(i) + ": ")
, j(sink)
, consensus(s.clock(), *this, j)
, id{i}
, key{id, 0}
, oracle{o}
, sidecarStore{sc}
, scheduler{s}
, net{n}
, trustGraph(tg)
@@ -510,15 +1249,15 @@ struct Peer
{
issue(CloseLedger{prevLedger, openTxs});
Position_t pos{TxSet::calcID(openTxs)};
ce().decoratePosition(
pos, prevLedger, mode == ConsensusMode::proposing);
return Result(
TxSet{openTxs},
Proposal(
prevLedger.id(),
Proposal::seqJoin,
TxSet::calcID(openTxs),
closeTime,
now(),
id));
prevLedger.id(), Proposal::seqJoin, pos, closeTime, now(), id));
}
void
@@ -553,6 +1292,10 @@ struct Peer
schedule(delays.ledgerAccept, [=, this]() {
const bool proposing = mode == ConsensusMode::proposing;
const bool consensusFail = result.state == ConsensusState::MovedOn;
auto const seq = static_cast<std::uint32_t>(prevLedger.seq()) + 1;
ce().finalizeRoundEntropy(seq);
ce().finalizeRoundExport();
TxSet const acceptedTxs = injectTxs(prevLedger, result.txns);
Ledger const newLedger = oracle.accept(

src/test/csf/PeerTick.h

@@ -0,0 +1,14 @@
#ifndef RIPPLE_TEST_CSF_PEERTICK_H_INCLUDED
#define RIPPLE_TEST_CSF_PEERTICK_H_INCLUDED
#include <test/csf/Peer.h>
#include <xrpld/consensus/ConsensusExtensionsTick.h>
template <class Ctx>
ripple::ExtensionTickResult
ripple::test::csf::Peer::Extensions::onTick(Ctx const& ctx)
{
return ripple::extensionsTick(*this, ctx);
}
#endif


@@ -23,17 +23,123 @@
#include <test/csf/Validation.h>
#include <test/csf/ledgers.h>
#include <xrpld/consensus/ConsensusProposal.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/beast/hash/hash_append.h>
#include <cstdint>
#include <optional>
#include <ostream>
#include <string>
namespace ripple {
namespace test {
namespace csf {
/** Proposal is a position taken in the consensus process and is represented
directly from the generic types.
/** Position sidecar for CSF that can model RNG commit/reveal fields.
Core tx-set convergence remains keyed on txSetHash only, matching
production's ExtendedPosition behavior.
*/
using Proposal = ConsensusProposal<PeerID, Ledger::ID, TxSet::ID>;
struct RngPosition
{
TxSet::ID txSetHash{};
std::optional<uint256> commitSetHash;
std::optional<uint256> entropySetHash;
std::optional<uint256> exportSigSetHash;
std::optional<uint256> myCommitment;
std::optional<uint256> myReveal;
std::optional<uint256> myExportSignature;
RngPosition() = default;
explicit RngPosition(TxSet::ID txSet) : txSetHash(txSet)
{
}
operator TxSet::ID() const
{
return txSetHash;
}
void
updateTxSet(TxSet::ID txSet)
{
txSetHash = txSet;
}
bool
operator==(RngPosition const& other) const
{
return txSetHash == other.txSetHash;
}
bool
operator!=(RngPosition const& other) const
{
return !(*this == other);
}
bool
operator==(TxSet::ID txSet) const
{
return txSetHash == txSet;
}
bool
operator!=(TxSet::ID txSet) const
{
return txSetHash != txSet;
}
};
inline bool
operator==(TxSet::ID txSet, RngPosition const& pos)
{
return pos == txSet;
}
inline bool
operator!=(TxSet::ID txSet, RngPosition const& pos)
{
return pos != txSet;
}
inline std::string
to_string(RngPosition const& pos)
{
return std::to_string(pos.txSetHash);
}
inline std::ostream&
operator<<(std::ostream& os, RngPosition const& pos)
{
return os << pos.txSetHash;
}
template <class Hasher>
void
hash_append(Hasher& h, RngPosition const& pos)
{
using beast::hash_append;
auto appendOpt = [&](std::optional<uint256> const& o) {
hash_append(h, static_cast<std::uint8_t>(o.has_value() ? 1 : 0));
if (o)
hash_append(h, *o);
};
hash_append(h, pos.txSetHash);
appendOpt(pos.commitSetHash);
appendOpt(pos.entropySetHash);
appendOpt(pos.exportSigSetHash);
appendOpt(pos.myCommitment);
appendOpt(pos.myReveal);
appendOpt(pos.myExportSignature);
}
/** Proposal is a position taken in the consensus process.
*/
using Proposal = ConsensusProposal<PeerID, Ledger::ID, RngPosition>;
using ProposalPosition = RngPosition;
} // namespace csf
} // namespace test
} // namespace ripple
#endif
#endif


@@ -25,6 +25,7 @@
#include <test/csf/Digraph.h>
#include <test/csf/Peer.h>
#include <test/csf/PeerGroup.h>
#include <test/csf/PeerTick.h>
#include <test/csf/Scheduler.h>
#include <test/csf/SimTime.h>
#include <test/csf/TrustGraph.h>
@@ -83,6 +84,7 @@ public:
BasicNetwork<Peer*> net;
TrustGraph<Peer*> trustGraph;
CollectorRefs collectors;
SidecarStore sidecarStore;
/** Create a simulation
@@ -119,7 +121,8 @@ public:
net,
trustGraph,
collectors,
j);
j,
sidecarStore);
newPeers.emplace_back(&peers.back());
}
PeerGroup res{newPeers};


@@ -48,7 +48,7 @@ public:
{
}
ID
ID const&
id() const
{
return id_;


@@ -82,7 +82,12 @@ supported_amendments()
Throw<std::runtime_error>(
"Unknown feature: " + s + " in supportedAmendments.");
}
return FeatureBitset(feats);
//@@start rng-test-environment-gating
// TODO: ConsensusEntropy injects a pseudo-tx every ledger which
// breaks existing test transaction count assumptions. Exclude from
// default test set until dedicated tests are written.
return FeatureBitset(feats) - featureConsensusEntropy;
//@@end rng-test-environment-gating
}();
return ids;
}


@@ -130,7 +130,8 @@ Env::close(
// Go through the rpc interface unless we need to simulate
// a specific consensus delay.
if (consensusDelay)
app().getOPs().acceptLedger(consensusDelay);
app().getOPs().acceptLedger(
consensusDelay, "Env::close(consensusDelay)");
else
{
auto resp = rpc("ledger_accept");

src/test/jtx/xpop.h

@@ -0,0 +1,212 @@
#ifndef RIPPLE_TEST_JTX_XPOP_H_INCLUDED
#define RIPPLE_TEST_JTX_XPOP_H_INCLUDED
#include <test/jtx/Env.h>
#include <xrpld/app/ledger/LedgerMaster.h>
#include <xrpld/app/proof/LedgerProof.h>
#include <xrpld/app/proof/XPOPv1.h>
#include <xrpl/basics/StringUtilities.h>
#include <xrpl/basics/base64.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/Sign.h>
#include <xrpl/protocol/digest.h>
namespace ripple {
namespace test {
namespace jtx {
namespace xpop {
/// Build a manifest string (binary, not base64).
inline std::string
makeManifestRaw(
PublicKey const& masterPub,
SecretKey const& masterSec,
PublicKey const& signingPub,
SecretKey const& signingSec,
int seq = 1)
{
STObject st(sfGeneric);
st[sfSequence] = seq;
st[sfPublicKey] = masterPub;
st[sfSigningPubKey] = signingPub;
sign(st, HashPrefix::manifest, *publicKeyType(signingPub), signingSec);
sign(
st,
HashPrefix::manifest,
*publicKeyType(masterPub),
masterSec,
sfMasterSignature);
Serializer s;
st.add(s);
return std::string(static_cast<char const*>(s.data()), s.size());
}
/// A complete test validator with all keys and manifest.
struct TestValidator
{
PublicKey masterPublic;
SecretKey masterSecret;
PublicKey signingPublic;
SecretKey signingSecret;
std::string manifestRaw;
std::string manifestBase64;
static TestValidator
create()
{
auto const ms = randomSecretKey();
auto const mp = derivePublicKey(KeyType::ed25519, ms);
auto const [sp, ss] = randomKeyPair(KeyType::secp256k1);
auto raw = makeManifestRaw(mp, ms, sp, ss, 1);
return {mp, ms, sp, ss, raw, base64_encode(raw)};
}
proof::ValidatorKeys
toValidatorKeys() const
{
return {
masterPublic,
masterSecret,
signingPublic,
signingSecret,
manifestBase64};
}
};
/// A complete test VL publisher with keys and manifest.
struct TestVLPublisher
{
PublicKey masterPublic;
SecretKey masterSecret;
PublicKey signingPublic;
SecretKey signingSecret;
std::string manifestBase64;
static TestVLPublisher
create()
{
auto const ms = randomSecretKey();
auto const mp = derivePublicKey(KeyType::ed25519, ms);
auto const [sp, ss] = randomKeyPair(KeyType::secp256k1);
return {
mp, ms, sp, ss, base64_encode(makeManifestRaw(mp, ms, sp, ss, 1))};
}
/// Build VL data for these validators.
proof::VLData
buildVLData(
std::vector<TestValidator> const& validators,
std::uint32_t sequence = 1,
std::uint32_t expiration = 767784645) const
{
// Build the JSON blob
std::string data = "{\"sequence\":" + std::to_string(sequence) +
",\"expiration\":" + std::to_string(expiration) +
",\"validators\":[";
for (std::size_t i = 0; i < validators.size(); ++i)
{
if (i > 0)
data += ",";
data += "{\"validation_public_key\":\"" +
strHex(validators[i].masterPublic) + "\",\"manifest\":\"" +
validators[i].manifestBase64 + "\"}";
}
data += "]}";
auto const blob = base64_encode(data);
auto const sig =
strHex(sign(signingPublic, signingSecret, makeSlice(data)));
return proof::VLData{
masterPublic, masterSecret, manifestBase64, blob, sig, 1};
}
};
/// Everything needed to build and import XPOPs in tests.
struct TestXPOPContext
{
std::vector<TestValidator> validators;
TestVLPublisher publisher;
proof::VLData vlData;
static TestXPOPContext
create(int validatorCount = 5)
{
auto pub = TestVLPublisher::create();
std::vector<TestValidator> vals;
for (int i = 0; i < validatorCount; ++i)
vals.push_back(TestValidator::create());
auto vl = pub.buildVLData(vals);
return {std::move(vals), std::move(pub), std::move(vl)};
}
/// Get the VL master public key hex for IMPORT_VL_KEYS config.
std::string
vlKeyHex() const
{
return strHex(publisher.masterPublic);
}
/// Build an Env config with NETWORK_ID and IMPORT_VL_KEYS set.
std::unique_ptr<Config>
makeEnvConfig(std::uint32_t networkID = 21337) const
{
auto cfg = envconfig(jtx::validator, "");
cfg->NETWORK_ID = networkID;
auto const keyHex = vlKeyHex();
auto const pkHex = strUnHex(keyHex);
if (pkHex)
cfg->IMPORT_VL_KEYS.emplace(keyHex, makeSlice(*pkHex));
return cfg;
}
/// Build XPOP from a closed ledger for a specific tx.
Json::Value
buildXPOP(Ledger const& ledger, uint256 const& txHash) const
{
std::vector<proof::ValidatorKeys> valKeys;
for (auto const& v : validators)
valKeys.push_back(v.toValidatorKeys());
return proof::buildXPOPv1(ledger, txHash, valKeys, vlData);
}
/// Build XPOP from an Env's last closed ledger.
Json::Value
buildXPOP(Env& env, uint256 const& txHash) const
{
auto const lcl = env.app().getLedgerMaster().getClosedLedger();
if (!lcl)
return {};
return buildXPOP(*lcl, txHash);
}
};
/// Build a complete XPOP v1 JSON from an Env's last closed ledger.
/// Creates fresh validator keys and VL publisher for each call.
inline Json::Value
buildTestXPOP(Env& env, uint256 const& txHash, int validatorCount = 5)
{
auto ctx = TestXPOPContext::create(validatorCount);
return ctx.buildXPOP(env, txHash);
}
/// Get the hex-encoded XPOP blob suitable for sfBlob in ttIMPORT.
inline std::string
buildTestXPOPHex(Env& env, uint256 const& txHash, int validatorCount = 5)
{
auto const xpop = buildTestXPOP(env, txHash, validatorCount);
if (xpop.isNull())
return {};
return proof::xpopToHex(xpop);
}
} // namespace xpop
} // namespace jtx
} // namespace test
} // namespace ripple
#endif


@@ -0,0 +1,530 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2026 XRPL Labs
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <test/jtx.h>
#include <xrpld/app/misc/RuntimeConfig.h>
#include <xrpld/overlay/detail/TrafficCount.h>
#include <xrpl/protocol/jss.h>
namespace ripple {
class RuntimeConfig_test : public beast::unit_test::suite
{
// Helper to call runtime_config RPC with JSON params
Json::Value
runtimeConfig(test::jtx::Env& env, Json::Value const& params)
{
return env.rpc(
"json", "runtime_config", to_string(params))[jss::result];
}
// Helper to call runtime_config RPC with no params (GET)
Json::Value
runtimeConfig(test::jtx::Env& env)
{
return env.rpc("runtime_config")[jss::result];
}
void
testGetEmpty()
{
testcase("GET empty config");
using namespace test::jtx;
Env env{*this};
auto result = runtimeConfig(env);
BEAST_EXPECT(result.isMember("configs"));
BEAST_EXPECT(result["configs"].size() == 0);
BEAST_EXPECT(!env.app().getRuntimeConfig().active());
}
void
testSetGlobal()
{
testcase("SET global config");
using namespace test::jtx;
Env env{*this};
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 100;
params["set"]["*"]["send_delay_jitter_ms"] = 20;
params["set"]["*"]["send_drop_pct"] = 5.5;
auto result = runtimeConfig(env, params);
BEAST_EXPECT(result.isMember("configs"));
auto const& configs = result["configs"];
if (!BEAST_EXPECT(configs.isMember("*")))
return;
auto const& global = configs["*"];
BEAST_EXPECT(global["send_delay_ms"].asInt() == 100);
BEAST_EXPECT(global["send_delay_jitter_ms"].asInt() == 20);
BEAST_EXPECT(global["send_drop_pct"].asDouble() == 5.5);
// Verify active state via RuntimeConfig directly
BEAST_EXPECT(env.app().getRuntimeConfig().active());
// Verify getConfig returns the global for any peer
auto cfg = env.app().getRuntimeConfig().getConfig("10.0.0.1:51235");
BEAST_EXPECT(cfg.has_value());
BEAST_EXPECT(cfg->sendDelayMs == 100);
BEAST_EXPECT(cfg->sendDelayJitterMs == 20);
BEAST_EXPECT(cfg->sendDropPctX100 == 550);
}
void
testSetPerPeer()
{
testcase("SET per-peer config with merge");
using namespace test::jtx;
Env env{*this};
// Set global first
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 100;
params["set"]["*"]["send_drop_pct"] = 10.0;
runtimeConfig(env, params);
}
// Set per-peer override (only delay, no drop)
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["10.0.0.2:51235"] = Json::objectValue;
params["set"]["10.0.0.2:51235"]["send_delay_ms"] = 500;
runtimeConfig(env, params);
}
auto& rc = env.app().getRuntimeConfig();
// Per-peer should have merged values: delay from override, drop from *
auto peerCfg = rc.getConfig("10.0.0.2:51235");
if (!BEAST_EXPECT(peerCfg.has_value()))
return;
BEAST_EXPECT(peerCfg->sendDelayMs == 500); // overridden
BEAST_EXPECT(peerCfg->sendDropPctX100 == 1000); // inherited from *
// Other peers still get the global
auto otherCfg = rc.getConfig("10.0.0.3:51235");
if (!BEAST_EXPECT(otherCfg.has_value()))
return;
BEAST_EXPECT(otherCfg->sendDelayMs == 100);
BEAST_EXPECT(otherCfg->sendDropPctX100 == 1000);
}
void
testClear()
{
testcase("CLEAR specific target");
using namespace test::jtx;
Env env{*this};
// Set global + per-peer
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 50;
params["set"]["10.0.0.2:51235"] = Json::objectValue;
params["set"]["10.0.0.2:51235"]["send_delay_ms"] = 200;
runtimeConfig(env, params);
}
// Clear per-peer
{
Json::Value params;
params["clear"] = Json::arrayValue;
params["clear"].append("10.0.0.2:51235");
auto result = runtimeConfig(env, params);
// Should still have "*"
BEAST_EXPECT(result["configs"].isMember("*"));
BEAST_EXPECT(!result["configs"].isMember("10.0.0.2:51235"));
}
// Per-peer now falls back to global
auto cfg = env.app().getRuntimeConfig().getConfig("10.0.0.2:51235");
BEAST_EXPECT(cfg.has_value());
BEAST_EXPECT(cfg->sendDelayMs == 50);
}
void
testClearAll()
{
testcase("CLEAR_ALL");
using namespace test::jtx;
Env env{*this};
// Set some configs
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 100;
params["set"]["10.0.0.2:51235"] = Json::objectValue;
params["set"]["10.0.0.2:51235"]["send_drop_pct"] = 50.0;
runtimeConfig(env, params);
}
BEAST_EXPECT(env.app().getRuntimeConfig().active());
// Clear all
{
Json::Value params;
params["clear_all"] = true;
auto result = runtimeConfig(env, params);
BEAST_EXPECT(result["configs"].size() == 0);
}
BEAST_EXPECT(!env.app().getRuntimeConfig().active());
BEAST_EXPECT(!env.app().getRuntimeConfig().getConfig("*").has_value());
}
void
testPerPeerWithoutGlobal()
{
testcase("Per-peer config without global");
using namespace test::jtx;
Env env{*this};
// Set only per-peer, no global
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["10.0.0.2:51235"] = Json::objectValue;
params["set"]["10.0.0.2:51235"]["send_delay_ms"] = 300;
runtimeConfig(env, params);
}
auto& rc = env.app().getRuntimeConfig();
BEAST_EXPECT(rc.active());
// Targeted peer gets the config
auto peerCfg = rc.getConfig("10.0.0.2:51235");
BEAST_EXPECT(peerCfg.has_value());
BEAST_EXPECT(peerCfg->sendDelayMs == 300);
// Other peers get nothing
BEAST_EXPECT(!rc.getConfig("10.0.0.3:51235").has_value());
}
void
testMessageTypeFilter()
{
testcase("Message type filter");
using namespace test::jtx;
Env env{*this};
// Set with message_types filter
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 100;
params["set"]["*"]["message_types"] = Json::arrayValue;
params["set"]["*"]["message_types"].append("proposal");
params["set"]["*"]["message_types"].append("validation");
auto result = runtimeConfig(env, params);
// Verify response includes message_types
auto const& global = result["configs"]["*"];
BEAST_EXPECT(global.isMember("message_types"));
BEAST_EXPECT(global["message_types"].size() == 2);
}
auto& rc = env.app().getRuntimeConfig();
auto cfg = rc.getConfig("10.0.0.1:51235");
if (!BEAST_EXPECT(cfg.has_value()))
return;
// Applies to proposal and validation categories
BEAST_EXPECT(cfg->appliesTo(TrafficCount::category::proposal));
BEAST_EXPECT(cfg->appliesTo(TrafficCount::category::validation));
// Does NOT apply to other categories
BEAST_EXPECT(!cfg->appliesTo(TrafficCount::category::transaction));
BEAST_EXPECT(!cfg->appliesTo(TrafficCount::category::base));
}
void
testMessageTypeFilterEmpty()
{
testcase("No message type filter means all");
using namespace test::jtx;
Env env{*this};
// Set without message_types — applies to all
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 100;
runtimeConfig(env, params);
}
auto cfg = env.app().getRuntimeConfig().getConfig("*");
if (!BEAST_EXPECT(cfg.has_value()))
return;
BEAST_EXPECT(!cfg->messageCategories.has_value());
BEAST_EXPECT(cfg->appliesTo(TrafficCount::category::proposal));
BEAST_EXPECT(cfg->appliesTo(TrafficCount::category::validation));
BEAST_EXPECT(cfg->appliesTo(TrafficCount::category::transaction));
BEAST_EXPECT(cfg->appliesTo(TrafficCount::category::base));
}
void
testInvalidMessageType()
{
testcase("Invalid message type returns error");
using namespace test::jtx;
Env env{*this};
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 100;
params["set"]["*"]["message_types"] = Json::arrayValue;
params["set"]["*"]["message_types"].append("proposals"); // typo
auto result = runtimeConfig(env, params);
BEAST_EXPECT(result.isMember("error"));
BEAST_EXPECT(result["error"].asString() == "invalidParams");
// Config should NOT have been applied
BEAST_EXPECT(!env.app().getRuntimeConfig().active());
}
void
testDropPctClamping()
{
testcase("send_drop_pct clamped to 0-100");
using namespace test::jtx;
Env env{*this};
// Over 100
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_drop_pct"] = 200.0;
runtimeConfig(env, params);
}
auto cfg = env.app().getRuntimeConfig().getConfig("*");
BEAST_EXPECT(cfg.has_value());
BEAST_EXPECT(cfg->sendDropPctX100 == 10000); // clamped to 100%
// Negative
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_drop_pct"] = -50.0;
runtimeConfig(env, params);
}
cfg = env.app().getRuntimeConfig().getConfig("*");
BEAST_EXPECT(cfg.has_value());
BEAST_EXPECT(cfg->sendDropPctX100 == 0); // clamped to 0%
}
void
testRngClaimDropPct()
{
testcase("rng_claim_drop_pct round-trips");
using namespace test::jtx;
Env env{*this};
// Set rng_claim_drop_pct
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["rng_claim_drop_pct"] = 50.0;
auto result = runtimeConfig(env, params);
auto const& global = result["configs"]["*"];
BEAST_EXPECT(global["rng_claim_drop_pct"].asDouble() == 50.0);
}
BEAST_EXPECT(env.app().getRuntimeConfig().active());
// Verify via getConfig
auto cfg = env.app().getRuntimeConfig().getConfig("*");
BEAST_EXPECT(cfg.has_value());
BEAST_EXPECT(cfg->rngClaimDropPctX100 == 5000);
// Clear and verify removal
{
Json::Value params;
params["clear_all"] = true;
auto result = runtimeConfig(env, params);
BEAST_EXPECT(result["configs"].size() == 0);
}
BEAST_EXPECT(!env.app().getRuntimeConfig().active());
}
void
testRngClaimDropPctClamping()
{
testcase("rng_claim_drop_pct clamped to 0-100");
using namespace test::jtx;
Env env{*this};
// Over 100
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["rng_claim_drop_pct"] = 150.0;
runtimeConfig(env, params);
}
auto cfg = env.app().getRuntimeConfig().getConfig("*");
BEAST_EXPECT(cfg.has_value());
BEAST_EXPECT(cfg->rngClaimDropPctX100 == 10000); // clamped to 100%
// Negative
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["rng_claim_drop_pct"] = -10.0;
runtimeConfig(env, params);
}
cfg = env.app().getRuntimeConfig().getConfig("*");
BEAST_EXPECT(cfg.has_value());
BEAST_EXPECT(cfg->rngClaimDropPctX100 == 0); // clamped to 0%
}
void
testExplicitFinalProposalToggle()
{
testcase("explicit_final_proposal round-trips and merges");
using namespace test::jtx;
Env env{*this};
// Global default for this node: skip explicit final proposal.
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["explicit_final_proposal"] = false;
auto result = runtimeConfig(env, params);
auto const& global = result["configs"]["*"];
BEAST_EXPECT(global["explicit_final_proposal"].asBool() == false);
}
auto& rc = env.app().getRuntimeConfig();
BEAST_EXPECT(rc.active());
// Global view is false.
auto globalCfg = rc.getConfig("*");
BEAST_EXPECT(globalCfg.has_value());
BEAST_EXPECT(globalCfg->explicitFinalProposal.has_value());
BEAST_EXPECT(*globalCfg->explicitFinalProposal == false);
// Per-peer override can re-enable.
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["10.0.0.2:51235"] = Json::objectValue;
params["set"]["10.0.0.2:51235"]["explicit_final_proposal"] = true;
runtimeConfig(env, params);
}
auto peerCfg = rc.getConfig("10.0.0.2:51235");
BEAST_EXPECT(peerCfg.has_value());
BEAST_EXPECT(peerCfg->explicitFinalProposal.has_value());
BEAST_EXPECT(*peerCfg->explicitFinalProposal == true);
auto otherCfg = rc.getConfig("10.0.0.3:51235");
BEAST_EXPECT(otherCfg.has_value());
BEAST_EXPECT(otherCfg->explicitFinalProposal.has_value());
BEAST_EXPECT(*otherCfg->explicitFinalProposal == false);
}
void
testPerPeerClearInheritedFilter()
{
testcase("Per-peer can override global filter to all");
using namespace test::jtx;
Env env{*this};
// Global: only proposals
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["*"] = Json::objectValue;
params["set"]["*"]["send_delay_ms"] = 100;
params["set"]["*"]["message_types"] = Json::arrayValue;
params["set"]["*"]["message_types"].append("proposal");
runtimeConfig(env, params);
}
// Per-peer: message_types = [] (explicitly all)
{
Json::Value params;
params["set"] = Json::objectValue;
params["set"]["10.0.0.2:51235"] = Json::objectValue;
params["set"]["10.0.0.2:51235"]["message_types"] = Json::arrayValue;
runtimeConfig(env, params);
}
auto& rc = env.app().getRuntimeConfig();
// Per-peer should apply to all categories (empty set override)
auto peerCfg = rc.getConfig("10.0.0.2:51235");
BEAST_EXPECT(peerCfg.has_value());
BEAST_EXPECT(peerCfg->appliesTo(TrafficCount::category::proposal));
BEAST_EXPECT(peerCfg->appliesTo(TrafficCount::category::validation));
BEAST_EXPECT(peerCfg->appliesTo(TrafficCount::category::transaction));
// Other peers still only get proposal filter from global
auto otherCfg = rc.getConfig("10.0.0.3:51235");
BEAST_EXPECT(otherCfg.has_value());
BEAST_EXPECT(otherCfg->appliesTo(TrafficCount::category::proposal));
BEAST_EXPECT(!otherCfg->appliesTo(TrafficCount::category::validation));
}
public:
void
run() override
{
testGetEmpty();
testSetGlobal();
testSetPerPeer();
testClear();
testClearAll();
testPerPeerWithoutGlobal();
testMessageTypeFilter();
testMessageTypeFilterEmpty();
testInvalidMessageType();
testDropPctClamping();
testRngClaimDropPct();
testRngClaimDropPctClamping();
testExplicitFinalProposalToggle();
testPerPeerClearInheritedFilter();
}
};
BEAST_DEFINE_TESTSUITE(RuntimeConfig, rpc, ripple);
} // namespace ripple


@@ -0,0 +1,131 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2026 XRPL Labs
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef TEST_UNIT_TEST_SUITE_LOGS_WITH_OVERRIDES_H
#define TEST_UNIT_TEST_SUITE_LOGS_WITH_OVERRIDES_H
#include <test/unit_test/SuiteJournal.h>
#include <xrpl/basics/Log.h>
#include <xrpl/beast/unit_test.h>
#include <xrpl/beast/utility/Journal.h>
#include <iostream>
#include <mutex>
#include <set>
#include <string>
namespace ripple {
namespace test {
/** A Journal::Sink that writes directly to stderr.
*
* Unlike SuiteJournalSink (which writes to suite_.log and is only
* visible when tests fail), this always produces visible output.
*/
class StderrJournalSink : public beast::Journal::Sink
{
std::string partition_;
public:
StderrJournalSink(
std::string const& partition,
beast::severities::Severity threshold)
: Sink(threshold, false), partition_(partition)
{
}
bool
active(beast::severities::Severity level) const override
{
return level >= threshold();
}
void
write(beast::severities::Severity level, std::string const& text) override
{
if (level >= threshold())
writeAlways(level, text);
}
void
writeAlways(beast::severities::Severity level, std::string const& text)
override
{
static std::mutex mtx;
std::lock_guard lock(mtx);
std::cerr << partition_ << ":" << text << std::endl;
}
};
/** SuiteLogs with per-partition severity overrides written to stderr.
*
* Overridden partitions write to stderr (always visible).
* All other partitions use SuiteJournalSink (suite_.log, only on failure).
*
* Usage:
* #include <test/unit_test/SuiteLogsWithOverrides.h>
*
* using Sev = beast::severities::Severity;
* Env env{*this, cfg, features,
* std::make_unique<SuiteLogsWithOverrides>(
* *this,
* SuiteLogsWithOverrides::Overrides{
* {"Export", Sev::kTrace},
* {"TxQ", Sev::kInfo},
* {"View", Sev::kDebug},
* })};
*/
class SuiteLogsWithOverrides : public Logs
{
beast::unit_test::suite& suite_;
std::set<std::string> overridden_;
public:
using Overrides = std::initializer_list<
std::pair<std::string, beast::severities::Severity>>;
SuiteLogsWithOverrides(
beast::unit_test::suite& suite,
Overrides overrides,
beast::severities::Severity defaultThresh = beast::severities::kError)
: Logs(defaultThresh), suite_(suite)
{
for (auto const& [name, sev] : overrides)
{
overridden_.insert(name);
get(name).threshold(sev);
}
}
~SuiteLogsWithOverrides() override = default;
std::unique_ptr<beast::Journal::Sink>
makeSink(
std::string const& partition,
beast::severities::Severity threshold) override
{
if (overridden_.count(partition))
return std::make_unique<StderrJournalSink>(partition, threshold);
return std::make_unique<SuiteJournalSink>(partition, threshold, suite_);
}
};
} // namespace test
} // namespace ripple
#endif

File diff suppressed because it is too large

@@ -0,0 +1,458 @@
#ifndef RIPPLE_APP_CONSENSUS_CONSENSUSEXTENSIONS_H_INCLUDED
#define RIPPLE_APP_CONSENSUS_CONSENSUSEXTENSIONS_H_INCLUDED
#include <xrpld/app/consensus/RCLCxLedger.h>
#include <xrpld/app/consensus/RCLCxPeerPos.h>
#include <xrpld/app/consensus/RCLCxTx.h>
#include <xrpld/app/misc/ExportSigCollector.h>
#include <xrpld/consensus/ConsensusParms.h>
#include <xrpld/consensus/ConsensusTypes.h>
#include <xrpld/overlay/Message.h>
#include <xrpld/shamap/SHAMap.h>
#include <xrpl/basics/Log.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/protocol/PublicKey.h>
#include <chrono>
#include <memory>
#include <mutex>
#include <optional>
#include <string>
#include <vector>
namespace ripple {
class Application;
class CanonicalTXSet;
class Ledger;
/// Concrete alias for the consensus tick context.
using TickContext = ConsensusTick<ExtendedPosition, RCLCxPeerPos, RCLTxSet>;
/// Concrete Xahau-owned manager for consensus extensions (RNG + Export).
///
/// Owns all RNG/Export state that was previously scattered across
/// RCLCxAdaptor and Consensus.h. Lifecycle hooks are grouped by
/// caller/threading context.
class ConsensusExtensions
{
Application& app_;
ExportSigCollector exportSigCollector_;
public:
beast::Journal j_; // public: accessed by extensionsTick template
// Type of sidecar set, known at fetch time from proposal context.
enum class SidecarKind : uint8_t { commit, reveal, exportSig };
struct ActiveValidatorView
{
hash_set<PublicKey> masterKeys;
hash_set<NodeID> nodeIds;
std::optional<uint256> sourceLedgerHash;
bool fromUNLReport = false;
// Export paths receive validator keys; RNG sidecars identify
// validators by NodeID. Keep both indexes in lockstep.
void
insertMaster(PublicKey const& masterKey)
{
masterKeys.insert(masterKey);
nodeIds.insert(calcNodeID(masterKey));
}
void
eraseMaster(PublicKey const& masterKey)
{
masterKeys.erase(masterKey);
nodeIds.erase(calcNodeID(masterKey));
}
std::size_t
size() const
{
return masterKeys.size();
}
bool
containsMaster(PublicKey const& masterKey) const
{
return masterKeys.count(masterKey) > 0;
}
bool
containsNode(NodeID const& nodeId) const
{
return nodeIds.count(nodeId) > 0;
}
};
using ActiveValidatorViewPtr = std::shared_ptr<ActiveValidatorView const>;
private:
// --- RNG Pipelined Storage ---
hash_map<NodeID, uint256> pendingCommits_;
hash_map<NodeID, uint256> pendingReveals_;
hash_map<NodeID, PublicKey> nodeIdToKey_;
// Ephemeral entropy secret (in-memory only, crash = non-revealer)
uint256 myEntropySecret_;
bool entropyFailed_ = false;
bool rngEnabledThisRound_ = false;
bool exportEnabledThisRound_ = false;
// Real SHAMaps for the current round (unbacked, ephemeral)
std::shared_ptr<SHAMap> commitSetMap_;
std::shared_ptr<SHAMap> entropySetMap_;
std::shared_ptr<SHAMap> exportSigSetMap_;
std::optional<LedgerIndex> rngRoundSeq_;
std::shared_ptr<SHAMap const> consensusTxSetMap_;
hash_map<uint256, std::shared_ptr<STTx const>> consensusExportTxns_;
std::optional<uint256> consensusTxSetHash_;
// Track pending sidecar set fetches by hash → kind.
// Kind is known at fetch time (call site context), so
// onAcquiredSidecarSet can dispatch without content-sniffing.
hash_map<uint256, SidecarKind> pendingRngFetches_;
// Parent-ledger validator view used by RNG and Export quorum logic.
ActiveValidatorViewPtr activeValidatorView_ =
std::make_shared<ActiveValidatorView const>();
mutable std::mutex activeValidatorViewMutex_;
// Recent proposers intersected with the active UNL (liveness hint)
hash_set<NodeID> likelyParticipants_;
// Current consensus mode (set by adaptor at round start)
ConsensusMode mode_{ConsensusMode::observing};
public:
// --- RNG Sub-state Machine (accessed by extensionsTick template) ---
EstablishState estState_{EstablishState::ConvergingTx};
std::chrono::steady_clock::time_point revealPhaseStart_{};
std::chrono::steady_clock::time_point commitHashConflictStart_{};
bool explicitFinalProposalSent_{false};
bool entropySetPublished_{false};
std::chrono::steady_clock::time_point entropyPublishStart_{};
bool exportSigGateStarted_{false};
std::chrono::steady_clock::time_point exportSigGateStart_{};
bool exportSigConvergenceFailed_{false};
/** Proof data from a proposal signature, for embedding in SHAMap
entries. Contains everything needed to independently verify
that a validator committed/revealed a specific value. */
struct ProposalProof
{
std::uint32_t proposeSeq;
std::uint32_t closeTime;
uint256 prevLedger;
Serializer positionData; // serialized ExtendedPosition
Buffer signature;
};
private:
// Proposal proofs keyed by NodeID.
// commitProofs_: only seq=0 proofs (deterministic across all nodes).
// proposalProofs_: latest proof with reveal (for entropySet).
hash_map<NodeID, ProposalProof> commitProofs_;
hash_map<NodeID, ProposalProof> proposalProofs_;
public:
ConsensusExtensions(Application& app, beast::Journal j);
ExportSigCollector&
exportSigCollector()
{
return exportSigCollector_;
}
ExportSigCollector const&
exportSigCollector() const
{
return exportSigCollector_;
}
/// Set the current consensus mode (called by adaptor).
void
setMode(ConsensusMode m)
{
mode_ = m;
}
// --- RNG Helper Methods ---
std::size_t
quorumThreshold() const;
std::size_t
exportSigQuorumThreshold() const;
void
setExpectedProposers(hash_set<NodeID> proposers);
std::size_t
pendingCommitCount() const;
std::size_t
pendingRevealCount() const;
std::size_t
expectedProposerCount() const;
bool
hasQuorumOfCommits() const;
bool
hasMinimumReveals() const;
bool
hasAnyReveals() const;
bool
shouldZeroEntropy() const;
bool
rngEnabled() const;
bool
exportEnabled() const;
bool
bootstrapFastStartEnabled() const;
bool
shouldSendExplicitFinalProposal() const;
std::optional<RCLTxSet>
buildExplicitFinalProposalTxSet(RCLTxSet const& txns, LedgerIndex seq);
uint256
buildCommitSet(LedgerIndex seq);
uint256
buildEntropySet(LedgerIndex seq);
uint256
buildExportSigSet(LedgerIndex seq);
bool
hasPendingExportSigs() const;
bool
hasConsensusExportTxns() const;
void
setExportSigConvergenceFailed();
bool
exportSigConvergenceFailed() const;
bool
isSidecarSet(uint256 const& hash) const;
ActiveValidatorViewPtr
activeValidatorView() const;
ActiveValidatorViewPtr
makeActiveValidatorView(
std::shared_ptr<Ledger const> const& prevLedger) const;
bool
isActiveValidator(PublicKey const& validationKey) const;
bool
isActiveValidator(
PublicKey const& validationKey,
ActiveValidatorView const& view) const;
void
onAcquiredSidecarSet(std::shared_ptr<SHAMap> const& map);
void
fetchRngSetIfNeeded(
std::optional<uint256> const& hash,
SidecarKind kind = SidecarKind::commit);
/// Fetch any sidecar sets from a peer's position if needed.
void
fetchSidecarsIfNeeded(ExtendedPosition const& peerPos);
void
cacheConsensusTxSet(RCLTxSet const& txns);
std::size_t
verifyPendingExportSigs(RCLTxSet const& txns, LedgerIndex seq);
void
cacheUNLReport(std::shared_ptr<Ledger const> const& prevLedger = {});
bool
isUNLReportMember(NodeID const& nodeId) const;
void
generateEntropySecret();
uint256
getEntropySecret() const;
void
setEntropyFailed();
/// Self-seed our own reveal into pendingReveals_.
/// Called from extensionsTick at reveal transition.
/// In production, decorateMessage also self-seeds (belt + suspenders).
void
selfSeedReveal();
void
clearRngState();
void
onPreBuild(CanonicalTXSet& retriableTxs, LedgerIndex seq);
void
harvestRngData(
NodeID const& nodeId,
PublicKey const& publicKey,
ExtendedPosition const& position,
std::uint32_t proposeSeq,
NetClock::time_point closeTime,
uint256 const& prevLedger,
Slice const& signature);
static Blob
serializeProof(ProposalProof const& proof);
static std::optional<ProposalProof>
deserializeProof(Blob const& proofBlob);
static bool
verifyProof(
Blob const& proofBlob,
PublicKey const& publicKey,
uint256 const& expectedDigest,
bool isCommit);
/// Append extension diagnostics to consensus JSON.
void
appendJson(Json::Value& ret) const;
/// Log extension-specific position fields at trace level.
void
logPosition(
ExtendedPosition const& pos,
beast::Journal j,
beast::severities::Severity level = beast::severities::kTrace) const;
// --- Consensus/adaptor lifecycle hooks ---
/** Reset per-round extension state.
Called from startRoundInternal under RCLConsensus::mutex_. */
void
onRoundStart(RCLCxLedger const& prevLedger, hash_set<NodeID> lastProposers);
/** Extract extension data from the parsed proposal.
Called from peerProposalInternal under RCLConsensus::mutex_. */
void
onTrustedPeerProposal(
NodeID const& nodeId,
PublicKey const& publicKey,
ExtendedPosition const& position,
std::uint32_t proposeSeq,
NetClock::time_point closeTime,
uint256 const& prevLedger,
Slice const& signature,
std::vector<std::string> const& exportSignatures = {});
/** Harvest proposal-carried export signatures after the proposal payload is
known to be signed by `publicKey`. */
std::size_t
harvestExportSignatures(
PublicKey const& publicKey,
uint256 const& prevLedger,
std::vector<std::string> const& exportSignatures,
char const* source);
/** Signal that the accept/build path finished successfully.
Called from doAccept (frozen state, no consensus mutex). */
void
onAcceptComplete();
/** Extract export signatures from the raw protobuf wire message.
Called from PeerImp overlay ingress (outside consensus mutex).
Only touches the independently synchronized ExportSigCollector. */
void
onTrustedPeerMessage(::protocol::TMProposeSet const& wireMsg);
/** Attach RNG commitment to the initial proposal position.
Called from onClose BEFORE signing. Affects proposal identity.
Generates entropy secret, caches UNL, seeds own commitment. */
void
decoratePosition(
ExtendedPosition& pos,
std::shared_ptr<Ledger const> const& prevLedger,
bool proposing);
/** Attach export signatures before proposal signing.
The caller hashes the resulting blobs into ExtendedPosition so the
proposal signature authenticates the side-channel protobuf field. */
void
attachExportSignatures(
protocol::TMProposeSet& prop,
RCLCxPeerPos::Proposal const& proposal);
/** Record post-signature RNG state for the outgoing protobuf.
Self-seeds own reveal and stores proposal proofs. */
void
decorateMessage(
protocol::TMProposeSet& prop,
RCLCxPeerPos::Proposal const& proposal,
ExtendedPosition const& signedPosition,
Buffer const& proposalSig);
ExtensionTickResult
onTick(TickContext const& ctx);
// --- Accessors for adaptor forwarding ---
void
setRngEnabledThisRound(bool v)
{
rngEnabledThisRound_ = v;
}
void
setExportEnabledThisRound(bool v)
{
exportEnabledThisRound_ = v;
}
bool
extensionsBusy() const
{
return estState_ != EstablishState::ConvergingTx ||
(exportEnabled() &&
(exportSigGateStarted_ || hasPendingExportSigs()));
}
EstablishState
estState() const
{
return estState_;
}
void
resetSubState()
{
estState_ = EstablishState::ConvergingTx;
revealPhaseStart_ = {};
commitHashConflictStart_ = {};
explicitFinalProposalSent_ = false;
entropySetPublished_ = false;
entropyPublishStart_ = {};
exportSigGateStarted_ = false;
exportSigGateStart_ = {};
exportSigConvergenceFailed_ = false;
}
};
} // namespace ripple
#endif


@@ -0,0 +1,250 @@
# Consensus Extension Design Principles
This note captures the principles behind the Xahau consensus extensions:
ConsensusEntropy/RNG, proposal sidecars, and export signature convergence.
Read this before changing `ConsensusExtensions`, `ConsensusExtensionsTick`,
`ExtendedPosition`, sidecar SHAMap handling, or the related CSF tests.
The short version: extension data may coordinate extra same-ledger features,
but it must not redefine ordinary transaction-set consensus. When extension
state cannot be made safe in time, the extension degrades deterministically
and the ledger still closes.
The priority order for consensus extensions is: safe, fast, works. Safety means
extension timing must not create divergent closed-ledger effects when a bounded
coordination step can avoid it. Fast means those coordination steps stay short
and conditional, never becoming an open-ended wait for an extension feature to
succeed. Works means missed or late extension material follows that feature's
deterministic fallback, such as zero entropy for RNG or normal Export
retry/expiry, rather than blocking core consensus.
## Core Invariants
1. Core consensus remains keyed by the transaction set.
`ExtendedPosition::operator==` intentionally compares only `txSetHash`.
RNG, export sig, commit-set, and entropy-set hashes are proposal sidecars.
They are coordinated during establish, but they do not define whether peers
agree on the ordinary transaction set.
2. Extension waits are bounded.
RNG and export sidecar convergence may wait briefly inside establish, but
they must not block ledger close indefinitely. If RNG cannot establish a
safe non-zero entropy value, it falls back to the deterministic zero-entropy path.
If export signatures cannot converge, export retries or expires according
to transaction rules.
3. Safety is in validation; extension logic is deliberation.
ConsensusEntropy is materialized during accept/buildLCL as a deterministic
pseudo-transaction. Nodes agree on the base transaction set first, then
derive the entropy transaction from agreed sidecar inputs. Any local fault
still has to survive normal validation/LCL agreement.
4. Converge signed inputs, not just derived outputs.
RNG commits, RNG reveals, and export signatures are the verifiable inputs.
The design converges on those input sets using sidecar SHAMaps. The final
entropy digest and export quorum result are derived from the converged
inputs.
5. Sidecars are not transactions.
Commit, reveal, and export signature entries are `STObject(sfGeneric)`
leaves in ephemeral `SHAMapType::SIDECAR` maps. They use `sfSidecarType`
to distinguish payloads and `HashPrefix::sidecar` for item hashes. They
are fetched through sidecar sync, not parsed or submitted as transactions.
6. Proposal-visible or validation-visible extension data must be signed.
Do not attach behavior-changing sidecar payloads as unsigned out-of-band
proposal or validation wrapper data. If stripping or changing a field would
alter RNG or export behavior, that field must be covered by the relevant
signed payload and by the identity used for duplicate suppression/replay
checks on that path.
Today ConsensusExtensions uses signed proposal sidecars, not validation
sidecars. If a future design carries extension material through
validations, the same rule applies: the behavior-changing data, or a digest
of it, must be inside the signed validation payload and bound to the
validating key and ledger. A protobuf field outside the signed validation is
only transport metadata; it must not affect consensus-extension behavior.
## Validator Set And Quorum
The active validator view is the shared denominator for RNG and export:
- Prefer `UNLReport.sfActiveValidators` from the consensus parent ledger.
- If no report is available, fall back to configured trusted validators so
early ledgers and dev/test networks can make progress.
- If `featureNegativeUNL` is enabled, subtract the parent ledger's Negative
UNL from whichever source produced the view.
- Use the same snapshot throughout the round.
`quorumThreshold()` is 80% of that active validator view. Recent or expected
proposers are liveness hints only; they do not shrink the quorum denominator.
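As a minimal sketch, assuming ceiling rounding (the production
`quorumThreshold()` may round differently), the threshold over the active
validator view looks like:
```
#include <cstddef>

// Illustrative only: "80% of the active validator view" with ceiling
// rounding; five active validators give a threshold of four.
std::size_t
quorumThreshold(std::size_t activeValidators)
{
    return (activeValidators * 4 + 4) / 5;  // ceil(0.8 * n)
}
```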
Be careful with `prevProposers`: in the generic consensus code it is peer-only.
When checking whether the previous round had enough active participants, count
our own proposer slot if this node is proposing.
## RNG Commit/Reveal Principles
RNG proceeds through establish sub-states:
1. `ConvergingTx`: ordinary transaction-set convergence while harvesting
commitments.
2. `ConvergingCommit`: after proofed commit quorum, publish the commit sidecar
hash and reveal the same secret that produced the original commitment.
3. `ConvergingReveal`: collect reveals, publish the entropy sidecar hash, and
wait for sidecar agreement or deterministic fallback.
Commit quorum counts only proofed commits from active validators. A commit that
cannot be emitted as a verifiable sidecar leaf does not count.
Reveal collection targets all known committers, because the commit sidecar set
defines who is expected to reveal. The reveal wait is still bounded. A node
that crashes, withholds, or partitions after committing must not stop the
ledger forever.
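A reveal that does arrive is only usable if it actually opens the earlier
commitment. As a sketch, mirroring the CSF derivation in `decoratePosition`
above (parameter names are illustrative, not the production schema):
```
// Sketch only: accept a reveal when hashing the revealed secret with the
// same inputs used at commit time reproduces the committed digest.
bool
revealMatchesCommit(
    uint256 const& commitment,
    uint256 const& revealedSecret,
    std::uint32_t nodeId,
    std::uint32_t nodeKeyId,
    std::uint32_t seq)
{
    return sha512Half(revealedSecret, nodeId, nodeKeyId, seq) == commitment;
}
```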
Final entropy is computed from the agreed entropy sidecar SHAMap, not from a
node's opportunistic local `pendingReveals_` map. This prevents different
local reveal subsets at timeout boundaries from producing different entropy.
## Entropy Alignment Rules
Non-zero entropy requires quorum alignment on the entropy sidecar hash.
The alignment count is:
```
our published entropySetHash + tx-converged peers with the same entropySetHash
```
If that count reaches `quorumThreshold()`, the node may proceed with non-zero
entropy even if a below-quorum minority advertises a conflicting or
unacquirable entropy hash.
If no entropy hash reaches quorum alignment before the bounded deadline, the
round must fall back to zero entropy. This is the safe degradation path, not a
consensus failure.
Examples with five active validators and threshold four:
- Four honest validators align on one entropy hash and one validator advertises
a bogus hash: proceed with non-zero entropy for the honest quorum.
- Two validators advertise different bogus hashes and only three align on the
honest hash: fall back to zero entropy.
- No peer entropy hash is observed in time: fall back to zero entropy.
Zero entropy means unavailable entropy. The pseudo-transaction is still
deterministic, with zero digest and zero entropy count, so hooks can detect
the unavailable path.
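A minimal sketch of that alignment check, using an illustrative peer view
rather than the production types:
```
#include <cstddef>
#include <optional>
#include <vector>

// Illustrative stand-in for what the tick logic sees per tx-converged peer.
struct PeerEntropyView
{
    bool txConverged = false;
    std::optional<uint256> entropySetHash;  // uint256 as in xrpl base_uint
};

// Returns true when non-zero entropy may proceed; if this never becomes
// true before the bounded deadline, the round falls back to zero entropy.
bool
entropyAligned(
    std::optional<uint256> const& ourPublishedHash,
    std::vector<PeerEntropyView> const& peers,
    std::size_t quorumThreshold)
{
    if (!ourPublishedHash)
        return false;  // nothing published yet: cannot be aligned
    std::size_t aligned = 1;  // our own published entropySetHash
    for (auto const& p : peers)
        if (p.txConverged && p.entropySetHash == ourPublishedHash)
            ++aligned;
    // Below-quorum conflicting or unacquirable hashes are simply ignored.
    return aligned >= quorumThreshold;
}
```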
## Sidecar Convergence Rules
Sidecar SHAMaps use union convergence:
- Every valid active-validator contribution belongs in the set.
- Sets only grow during fetch/merge.
- Fetch/merge is a safety net for missed proposals, not the normal transport.
- Rebuild and republish the sidecar hash after merging missing leaves.
Do not use avalanche-style transaction inclusion logic for sidecar inputs.
For RNG and export sidecars, the disagreement to resolve is usually timing or
delivery, not whether a valid contribution should be included.
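For example, a grow-only merge over the CSF-style entry map (the helper name
is illustrative; the set hash is rebuilt afterwards, as `hashRngSet` does
above):
```
// Sketch only: union convergence for sidecar entries. Fetched leaves are
// added, never removed; the caller rebuilds and republishes the set hash.
std::size_t
mergeSidecarEntries(
    hash_map<PeerID, uint256>& local,
    hash_map<PeerID, uint256> const& fetched)
{
    std::size_t added = 0;
    for (auto const& [nodeId, digest] : fetched)
        if (local.emplace(nodeId, digest).second)
            ++added;
    return added;
}
```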
The entropy sidecar gate always gives peers at least one observation tick after
publishing `entropySetHash`. Publishing and accepting in the same tick can hide
conflicts and produce asymmetric zero/non-zero outcomes.
## Export Principles
`featureExport` and `featureConsensusEntropy` are independently
amendment-gated.
Export can run without ConsensusEntropy and still uses the active validator
view's 80% quorum threshold. Verified export signature sidecars converge
through `ExtendedPosition`, and the `exportSigSetHash` is signed by proposals
whether or not RNG is enabled. Do not make Export liveness depend on unanimity:
one active validator with a missing, delayed, or conflicting sidecar must not
veto an otherwise quorum-aligned export round.
The extended proposal machinery is enabled when either feature needs signed
sidecar fields. Do not make Export depend on RNG availability just because RNG
was the first consumer of `ExtendedPosition`.
When `featureExport` is disabled, the export sidecar gate is disabled too. Stale
collector entries must not keep a stopped amendment active.
Only verified export signatures count toward quorum or enter export sidecar
SHAMaps. Proposal-ingress signatures are sender-bound to the trusted proposal
validator and may be stored as unverified until the matching export transaction
is available for cryptographic verification.
The consensus candidate transaction set is the authority for export signature
verification. The open ledger may be used for early proposal ingestion, but
once a candidate tx set exists, only signatures verified against the `ttEXPORT`
in that candidate set may become quorum material or enter `exportSigSetHash`.
Export sidecar publication is local-material only. A node may publish only the
verified export signatures it actually has locally, and only for `ttEXPORT`
transactions in the consensus candidate set. A fetched export sidecar is not a
separate apply input: on merge, each leaf must be active-view checked, verified
against the candidate transaction, and promoted into `ExportSigCollector`.
Closed-ledger apply snapshots that collector, so the sidecar convergence state
and the signer set used by `ttEXPORT` stay on the same path.
If the consensus candidate contains a `ttEXPORT` but the node has no eligible
local export signatures yet, the export sidecar gate opens only a bounded
safety window for tx-converged peers to advertise `exportSigSetHash`. This is
not a wait-for-Export-success mechanism; it is a short opportunity to avoid
closing a minority ledger while sidecar convergence is already reachable. If no
advertised sidecar appears by the deadline, the gate stops waiting and the
export retries or expires through normal transaction rules.
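A sketch of the window's shape (names are illustrative, not the real tick
code):
```
#include <chrono>

// Sketch only: either an advertised exportSigSetHash from a tx-converged
// peer arrives, or the deadline passes, convergence is marked failed, and
// the export retries or expires through normal transaction rules.
bool
exportObservationWindowExpired(
    std::chrono::steady_clock::time_point gateStart,
    std::chrono::steady_clock::time_point now,
    std::chrono::milliseconds window,
    bool advertisedSidecarSeen)
{
    if (advertisedSidecarSeen)
        return false;
    return now - gateStart >= window;
}
```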
Export success requires quorum alignment on `exportSigSetHash`, not merely a
local collector quorum. If a quorum of tx-converged participants advertises the
same export signature sidecar hash, that hash is aligned and below-quorum
conflicts are ignored. If no export signature hash reaches quorum alignment by
the bounded deadline, do not choose the largest non-quorum set; the export
retries or expires according to normal transaction rules.
Closed-ledger apply must not promote unverified proposal-carried signatures into
current-round quorum material. It may verify and retain them for a future retry,
where they can be published in a sidecar set and converged before use.
Export sig convergence runs in parallel with RNG. An export-side convergence
failure must not change RNG semantics; an RNG fallback must not make export
unsafe. Each feature has its own gate and fallback.
Accept-time cleanup must preserve Export state through `buildLCL` whenever
`featureExport` is enabled. RNG-disabled does not mean extensions-disabled:
`ttEXPORT` still needs the round's export sidecar convergence state when it
applies.
CSF consensus tests model the export sidecar gate directly. Testnet scenarios
under `.testnet/scenarios/export/` cover live-node Export+CE behavior and
Export-only quorum behavior.
## Review Checklist
When changing consensus extension code, check these questions:
- Does this preserve transaction-set equality as the core consensus identity?
- Does every extension wait have a bounded fallback?
- Does non-zero entropy require active-validator quorum alignment?
- Can one bad validator deny entropy to an honest quorum? It must not.
- Can a sub-quorum set produce non-zero entropy? It must not.
- Are quorum calculations using the active validator view, not recent
proposers as the denominator?
- Are sidecar entries typed as sidecars, not pseudo-transactions?
- Are proposal-visible or validation-visible sidecar fields covered by the
relevant signature and duplicate/replay identity?
- Are export signatures verified before they count?
- Does export success require `exportSigSetHash` alignment, not just local
collector quorum?
- Can one bad validator deny Export to an honest quorum? It must not.
- Can timeout select a largest-but-below-quorum export sidecar set? It must not.
- Are CE and Export still independently gated and independently stoppable?


@@ -17,6 +17,7 @@
*/
//==============================================================================
#include <xrpld/app/consensus/ConsensusExtensions.h>
#include <xrpld/app/consensus/RCLConsensus.h>
#include <xrpld/app/consensus/RCLValidations.h>
#include <xrpld/app/ledger/BuildLedger.h>
@@ -27,27 +28,39 @@
#include <xrpld/app/ledger/LocalTxs.h>
#include <xrpld/app/ledger/OpenLedger.h>
#include <xrpld/app/misc/AmendmentTable.h>
#include <xrpld/app/misc/CanonicalTXSet.h>
#include <xrpld/app/misc/HashRouter.h>
#include <xrpld/app/misc/LoadFeeTrack.h>
#include <xrpld/app/misc/NegativeUNLVote.h>
#include <xrpld/app/misc/NetworkOPs.h>
#include <xrpld/app/misc/RuntimeConfig.h>
#include <xrpld/app/misc/Transaction.h>
#include <xrpld/app/misc/TxQ.h>
#include <xrpld/app/misc/ValidatorKeys.h>
#include <xrpld/app/misc/ValidatorList.h>
#include <xrpld/app/tx/apply.h>
#include <xrpld/consensus/Consensus.h>
#include <xrpld/consensus/LedgerTiming.h>
#include <xrpld/overlay/Overlay.h>
#include <xrpld/overlay/predicates.h>
#include <xrpl/basics/random.h>
#include <xrpl/beast/core/LexicalCast.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/crypto/csprng.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/BuildInfo.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/Sign.h>
#include <xrpl/protocol/TxFlags.h>
#include <xrpl/protocol/TxFormats.h>
#include <xrpl/protocol/digest.h>
#include <boost/algorithm/string.hpp>
#include <algorithm>
#include <iomanip>
#include <cstring>
#include <mutex>
#include <random>
namespace ripple {
@@ -57,7 +70,7 @@ RCLConsensus::RCLConsensus(
LedgerMaster& ledgerMaster,
LocalTxs& localTxs,
InboundTransactions& inboundTransactions,
Consensus<Adaptor>::clock_type const& clock,
clock_type const& clock,
ValidatorKeys const& validatorKeys,
beast::Journal journal)
: adaptor_(
@@ -68,11 +81,13 @@ RCLConsensus::RCLConsensus(
inboundTransactions,
validatorKeys,
journal)
, consensus_(clock, adaptor_, journal)
, consensus_(std::make_unique<Consensus<Adaptor>>(clock, adaptor_, journal))
, j_(journal)
{
}
RCLConsensus::~RCLConsensus() = default;
RCLConsensus::Adaptor::Adaptor(
Application& app,
std::unique_ptr<FeeVote>&& feeVote,
@@ -122,6 +137,22 @@ RCLConsensus::Adaptor::Adaptor(
}
}
// --- ConsensusExtensions helpers ---
ConsensusExtensions&
RCLConsensus::Adaptor::ce()
{
return app_.getConsensusExtensions();
}
ConsensusExtensions const&
RCLConsensus::Adaptor::ce() const
{
return app_.getConsensusExtensions();
}
// --- End ConsensusExtensions helpers ---
std::optional<RCLCxLedger>
RCLConsensus::Adaptor::acquireLedger(LedgerHash const& hash)
{
@@ -173,10 +204,14 @@ RCLConsensus::Adaptor::share(RCLCxPeerPos const& peerPos)
prop.set_proposeseq(proposal.proposeSeq());
prop.set_closetime(proposal.closeTime().time_since_epoch().count());
prop.set_currenttxhash(
proposal.position().begin(), proposal.position().size());
// Serialize full ExtendedPosition
Serializer positionData;
proposal.position().add(positionData);
auto const posSlice = positionData.slice();
prop.set_currenttxhash(posSlice.data(), posSlice.size());
prop.set_previousledger(
proposal.prevLedger().begin(), proposal.position().size());
proposal.prevLedger().begin(), proposal.prevLedger().size());
auto const pk = peerPos.publicKey().slice();
prop.set_nodepubkey(pk.data(), pk.size());
@@ -184,6 +219,9 @@ RCLConsensus::Adaptor::share(RCLCxPeerPos const& peerPos)
auto const sig = peerPos.signature();
prop.set_signature(sig.data(), sig.size());
for (auto const& exportSig : peerPos.exportSignatures())
prop.add_exportsignatures(exportSig.data(), exportSig.size());
app_.overlay().relay(prop, peerPos.suppressionID(), peerPos.publicKey());
}
@@ -217,39 +255,52 @@ RCLConsensus::Adaptor::propose(RCLCxPeerPos::Proposal const& proposal)
protocol::TMProposeSet prop;
prop.set_currenttxhash(
proposal.position().begin(), proposal.position().size());
auto wirePosition = proposal.position();
ce().attachExportSignatures(prop, proposal);
if (prop.exportsignatures_size() > 0)
wirePosition.exportSignaturesHash =
proposalExportSignaturesHash(prop.exportsignatures());
// Serialize full ExtendedPosition (includes RNG leaves and export
// signature digest)
Serializer positionData;
wirePosition.add(positionData);
auto const posSlice = positionData.slice();
prop.set_currenttxhash(posSlice.data(), posSlice.size());
prop.set_previousledger(
proposal.prevLedger().begin(), proposal.prevLedger().size());
prop.set_proposeseq(proposal.proposeSeq());
prop.set_closetime(proposal.closeTime().time_since_epoch().count());
prop.set_nodepubkey(
validatorKeys_.keys->publicKey.data(),
validatorKeys_.keys->publicKey.size());
if (!validatorKeys_.keys)
{
JLOG(j_.warn()) << "RCLConsensus::Adaptor::propose: ValidatorKeys "
"not set: \n";
return;
}
auto const& keys = *validatorKeys_.keys;
prop.set_nodepubkey(keys.publicKey.data(), keys.publicKey.size());
auto sig =
signDigest(keys.publicKey, keys.secretKey, proposal.signingHash());
auto sig = signDigest(
validatorKeys_.keys->publicKey,
validatorKeys_.keys->secretKey,
sha512Half(
HashPrefix::proposal,
std::uint32_t(proposal.proposeSeq()),
proposal.closeTime().time_since_epoch().count(),
proposal.prevLedger(),
wirePosition));
prop.set_signature(sig.data(), sig.size());
auto const suppression = proposalUniqueId(
proposal.position(),
wirePosition,
proposal.prevLedger(),
proposal.proposeSeq(),
proposal.closeTime(),
keys.publicKey,
validatorKeys_.keys->publicKey,
sig);
app_.getHashRouter().addSuppression(suppression);
ce().decorateMessage(prop, proposal, wirePosition, sig);
app_.overlay().broadcast(prop);
}
@@ -400,12 +451,16 @@ RCLConsensus::Adaptor::onClose(
// Needed because of the move below.
auto const setHash = initialSet->getHash().as_uint256();
ExtendedPosition pos{setHash};
ce().decoratePosition(pos, prevLedger, proposing);
return Result{
std::move(initialSet),
RCLCxPeerPos::Proposal{
initialLedger->info().parentHash,
RCLCxPeerPos::Proposal::seqJoin,
setHash,
std::move(pos),
closeTime,
app_.timeKeeper().closeTime(),
validatorKeys_.nodeID}};
@@ -443,11 +498,13 @@ RCLConsensus::Adaptor::onAccept(
jtACCEPT,
"acceptLedger",
[=, this, cj = std::move(consensusJson)]() mutable {
//@@start do-accept-freeze-contract
// Note that no lock is held or acquired during this job.
// This is because generic Consensus guarantees that once a ledger
// is accepted, the consensus results and captured-by-reference state
// will not change until startRound is called (which happens via
// endConsensus).
//@@end do-accept-freeze-contract
RclConsensusLogger clog("onAccept", validating, j_);
this->doAccept(
result,
@@ -529,6 +586,17 @@ RCLConsensus::Adaptor::doAccept(
}
}
//@@start auxiliary-pre-build-injection
// Inject consensus entropy pseudo-transaction (if amendment enabled).
// Export-only rounds still need extension state preserved through buildLCL
// so ttEXPORT can observe exportSigSetHash convergence at apply time.
//@@start accept-time-cleanup-disabled
if (ce().rngEnabled())
ce().onPreBuild(retriableTxs, prevLedger.seq() + 1);
else if (!ce().exportEnabled())
ce().clearRngState();
//@@end accept-time-cleanup-disabled
auto built = buildLCL(
prevLedger,
retriableTxs,
@@ -537,6 +605,7 @@ RCLConsensus::Adaptor::doAccept(
closeResolution,
result.roundTime.read(),
failed);
//@@end auxiliary-pre-build-injection
auto const newLCLHash = built.id();
JLOG(j_.debug()) << "Built ledger #" << built.seq() << ": " << newLCLHash;
@@ -729,6 +798,8 @@ RCLConsensus::Adaptor::doAccept(
app_.timeKeeper().adjustCloseTime(offset);
}
ce().onAcceptComplete();
}
void
@@ -826,19 +897,10 @@ RCLConsensus::Adaptor::validate(
validationTime = lastValidationTime_ + 1s;
lastValidationTime_ = validationTime;
if (!validatorKeys_.keys)
{
JLOG(j_.warn()) << "RCLConsensus::Adaptor::validate: ValidatorKeys "
"not set\n";
return;
}
auto const& keys = *validatorKeys_.keys;
auto v = std::make_shared<STValidation>(
lastValidationTime_,
keys.publicKey,
keys.secretKey,
validatorKeys_.keys->publicKey,
validatorKeys_.keys->secretKey,
validatorKeys_.nodeID,
[&](STValidation& v) {
v.setFieldH256(sfLedgerHash, ledger.id());
@@ -900,7 +962,7 @@ RCLConsensus::Adaptor::validate(
handleNewValidation(app_, v, "local");
// Broadcast to all our peers:
// Broadcast validation to all peers.
protocol::TMValidation val;
val.set_validation(serialized.data(), serialized.size());
app_.overlay().broadcast(val);
@@ -923,6 +985,26 @@ RCLConsensus::Adaptor::onModeChange(ConsensusMode before, ConsensusMode after)
censorshipDetector_.reset();
mode_ = after;
ce().setMode(after);
}
ConsensusPhase
RCLConsensus::phase() const
{
return consensus_->phase();
}
bool
RCLConsensus::extensionsBusy() const
{
return consensus_->extensionsBusy();
}
RCLCxLedger::ID
RCLConsensus::prevLedgerID() const
{
std::lock_guard _{mutex_};
return consensus_->prevLedgerID();
}
Json::Value
@@ -931,7 +1013,7 @@ RCLConsensus::getJson(bool full) const
Json::Value ret;
{
std::lock_guard _{mutex_};
ret = consensus_.getJson(full);
ret = consensus_->getJson(full);
}
ret["validating"] = adaptor_.validating();
return ret;
@@ -945,7 +1027,7 @@ RCLConsensus::timerEntry(
try
{
std::lock_guard _{mutex_};
consensus_.timerEntry(now, clog);
consensus_->timerEntry(now, clog);
}
catch (SHAMapMissingNode const& mn)
{
@@ -964,7 +1046,7 @@ RCLConsensus::gotTxSet(NetClock::time_point const& now, RCLTxSet const& txSet)
try
{
std::lock_guard _{mutex_};
consensus_.gotTxSet(now, txSet);
consensus_->gotTxSet(now, txSet);
}
catch (SHAMapMissingNode const& mn)
{
@@ -982,7 +1064,7 @@ RCLConsensus::simulate(
std::optional<std::chrono::milliseconds> consensusDelay)
{
std::lock_guard _{mutex_};
consensus_.simulate(now, consensusDelay);
consensus_->simulate(now, consensusDelay);
}
bool
@@ -991,14 +1073,24 @@ RCLConsensus::peerProposal(
RCLCxPeerPos const& newProposal)
{
std::lock_guard _{mutex_};
return consensus_.peerProposal(now, newProposal);
return consensus_->peerProposal(now, newProposal);
}
//@@start pre-start-round
bool
RCLConsensus::Adaptor::preStartRound(
RCLCxLedger const& prevLgr,
hash_set<NodeID> const& nowTrusted)
{
ce().setRngEnabledThisRound(
prevLgr.ledger_->rules().enabled(featureConsensusEntropy));
ce().setExportEnabledThisRound(
prevLgr.ledger_->rules().enabled(featureExport));
JLOG(j_.trace()) << "RNGGATE: preStartRound prevSeq=" << prevLgr.seq()
<< " rulesEnabled=" << ce().rngEnabled()
<< " exportEnabled=" << ce().exportEnabled();
// We have a key, we do not want out of sync validations after a restart
// and are not amendment blocked.
validating_ = validatorKeys_.keys &&
@@ -1041,9 +1133,19 @@ RCLConsensus::Adaptor::preStartRound(
!nowTrusted.empty())
nUnlVote_.newValidators(prevLgr.seq() + 1, nowTrusted);
bool const proposing = validating_ && synced;
JLOG(j_.info()) << "STARTDIAG: preStartRound"
<< " mode=" << app_.getOPs().strOperatingMode()
<< " synced=" << (synced ? "yes" : "no")
<< " validating=" << (validating_ ? "yes" : "no")
<< " proposing=" << (proposing ? "yes" : "no")
<< " seq=" << (prevLgr.seq() + 1);
// propose only if we're in sync with the network (and validating)
return validating_ && synced;
return proposing;
}
//@@end pre-start-round
bool
RCLConsensus::Adaptor::haveValidated() const
@@ -1081,7 +1183,11 @@ void
RCLConsensus::Adaptor::updateOperatingMode(std::size_t const positions) const
{
if (!positions && app_.getOPs().isFull())
{
JLOG(j_.warn()) << "STARTDIAG: updateOperatingMode demoting"
<< " FULL->CONNECTED positions=" << positions;
app_.getOPs().setMode(OperatingMode::CONNECTED);
}
}
void
@@ -1094,7 +1200,7 @@ RCLConsensus::startRound(
std::unique_ptr<std::stringstream> const& clog)
{
std::lock_guard _{mutex_};
consensus_.startRound(
consensus_->startRound(
now,
prevLgrId,
prevLgr,
@@ -1124,11 +1230,16 @@ RclConsensusLogger::~RclConsensusLogger()
return;
auto const duration = std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start_);
std::stringstream outSs;
outSs << header_ << "duration " << (duration.count() / 1000) << '.'
<< std::setw(3) << std::setfill('0') << (duration.count() % 1000)
<< "s. " << ss_->str();
j_.sink().writeAlways(beast::severities::kInfo, outSs.str());
}
ss_->seekg(0, std::ios::beg);
std::string line;
while (std::getline(*ss_, line, '.'))
{
boost::algorithm::trim(line);
if (!line.empty())
JLOG(j_.debug()) << header_ << line << ".";
}
JLOG(j_.debug()) << header_ << "Total duration: " << duration.count()
<< "ms.";
}
} // namespace ripple

View File

@@ -20,22 +20,26 @@
#ifndef RIPPLE_APP_CONSENSUS_RCLCONSENSUS_H_INCLUDED
#define RIPPLE_APP_CONSENSUS_RCLCONSENSUS_H_INCLUDED
#include <xrpld/app/consensus/ConsensusExtensions.h>
#include <xrpld/app/consensus/RCLCensorshipDetector.h>
#include <xrpld/app/consensus/RCLCxLedger.h>
#include <xrpld/app/consensus/RCLCxPeerPos.h>
#include <xrpld/app/consensus/RCLCxTx.h>
#include <xrpld/app/misc/FeeVote.h>
#include <xrpld/app/misc/NegativeUNLVote.h>
#include <xrpld/consensus/Consensus.h>
#include <xrpld/consensus/ConsensusParms.h>
#include <xrpld/consensus/ConsensusTypes.h>
#include <xrpld/core/JobQueue.h>
#include <xrpld/overlay/Message.h>
#include <xrpld/shamap/SHAMap.h>
#include <xrpl/basics/CountedObject.h>
#include <xrpl/basics/Log.h>
#include <xrpl/beast/clock/abstract_clock.h>
#include <xrpl/beast/utility/Journal.h>
#include <xrpl/protocol/RippleLedgerHash.h>
#include <xrpl/protocol/STValidation.h>
#include <atomic>
#include <chrono>
#include <memory>
#include <mutex>
#include <set>
@@ -44,10 +48,13 @@
namespace ripple {
class CanonicalTXSet;
class InboundTransactions;
class LocalTxs;
class LedgerMaster;
class ValidatorKeys;
template <class Adaptor>
class Consensus;
/** Manages the generic consensus algorithm for use by the RCL.
*/
@@ -91,12 +98,16 @@ class RCLConsensus
RCLCensorshipDetector<TxID, LedgerIndex> censorshipDetector_;
NegativeUNLVote nUnlVote_;
// RNG/Export state has moved to ConsensusExtensions
// (owned by Application, accessible via app_.getConsensusExtensions())
public:
using Ledger_t = RCLCxLedger;
using NodeID_t = NodeID;
using NodeKey_t = PublicKey;
using TxSet_t = RCLTxSet;
using PeerPosition_t = RCLCxPeerPos;
using Position_t = ExtendedPosition;
using Result = ConsensusResult<Adaptor>;
@@ -182,6 +193,14 @@ class RCLConsensus
return parms_;
}
// --- ConsensusExtensions access ---
ConsensusExtensions&
ce();
ConsensusExtensions const&
ce() const;
private:
//---------------------------------------------------------------------
// The following members implement the generic Consensus requirements
@@ -420,6 +439,8 @@ class RCLConsensus
};
public:
using clock_type = beast::abstract_clock<std::chrono::steady_clock>;
//! Constructor
RCLConsensus(
Application& app,
@@ -427,10 +448,12 @@ public:
LedgerMaster& ledgerMaster,
LocalTxs& localTxs,
InboundTransactions& inboundTransactions,
Consensus<Adaptor>::clock_type const& clock,
clock_type const& clock,
ValidatorKeys const& validatorKeys,
beast::Journal journal);
~RCLConsensus();
RCLConsensus(RCLConsensus const&) = delete;
RCLConsensus&
@@ -472,9 +495,26 @@ public:
}
ConsensusPhase
phase() const
phase() const;
//! Whether extensions have pending sub-state work in establish
bool
extensionsBusy() const;
//! Check if hash is a known extension sidecar set (under mutex)
bool
isExtensionSet(uint256 const& hash) const
{
return consensus_.phase();
std::lock_guard _{mutex_};
return adaptor_.ce().isSidecarSet(hash);
}
//! Route acquired extension sidecar set (under mutex)
void
gotExtensionSet(std::shared_ptr<SHAMap> const& map)
{
std::lock_guard _{mutex_};
adaptor_.ce().onAcquiredSidecarSet(map);
}
//! @see Consensus::getJson
@@ -505,11 +545,7 @@ public:
// @see Consensus::prevLedgerID
RCLCxLedger::ID
prevLedgerID() const
{
std::lock_guard _{mutex_};
return consensus_.prevLedgerID();
}
prevLedgerID() const;
//! @see Consensus::simulate
void
@@ -536,7 +572,7 @@ private:
mutable std::recursive_mutex mutex_;
Adaptor adaptor_;
Consensus<Adaptor> consensus_;
std::unique_ptr<Consensus<Adaptor>> consensus_;
beast::Journal const j_;
};

View File

@@ -31,10 +31,12 @@ RCLCxPeerPos::RCLCxPeerPos(
PublicKey const& publicKey,
Slice const& signature,
uint256 const& suppression,
Proposal&& proposal)
Proposal&& proposal,
std::vector<std::string> exportSignatures)
: publicKey_(publicKey)
, suppression_(suppression)
, proposal_(std::move(proposal))
, exportSignatures_(std::move(exportSignatures))
{
// The maximum allowed size of a signature is 72 bytes; we verify
// this elsewhere, but we want to be extra careful here:
@@ -66,15 +68,17 @@ RCLCxPeerPos::getJson() const
uint256
proposalUniqueId(
uint256 const& proposeHash,
ExtendedPosition const& position,
uint256 const& previousLedger,
std::uint32_t proposeSeq,
NetClock::time_point closeTime,
Slice const& publicKey,
Slice const& signature)
{
// This is for suppression/dedup only, NOT for signing.
// Must include all fields that distinguish proposals.
Serializer s(512);
s.addBitString(proposeHash);
position.add(s);
s.addBitString(previousLedger);
s.add32(proposeSeq);
s.add32(closeTime.time_since_epoch().count());

View File

@@ -28,13 +28,294 @@
#include <xrpl/protocol/HashPrefix.h>
#include <xrpl/protocol/PublicKey.h>
#include <xrpl/protocol/SecretKey.h>
#include <xrpl/protocol/Serializer.h>
#include <boost/container/static_vector.hpp>
#include <chrono>
#include <cstdint>
#include <optional>
#include <ostream>
#include <string>
#include <vector>
namespace ripple {
/** Extended position for consensus with RNG entropy support.
Carries the tx-set hash (the core convergence target), RNG set hashes
(agreed via sub-state quorum, not via operator==), and per-validator
leaves (unique to each proposer, piggybacked on proposals).
Critical design:
- operator== compares txSetHash ONLY (sub-states handle the rest)
- add() includes ALL fields for signing (prevents stripping attacks)
*/
struct ExtendedPosition
{
// === Core Convergence Target ===
uint256 txSetHash;
// === Set Hashes (sub-state quorum, not in operator==) ===
std::optional<uint256> commitSetHash;
std::optional<uint256> entropySetHash;
std::optional<uint256> exportSigSetHash;
std::optional<uint256> exportSignaturesHash;
// === Per-Validator Leaves (unique per proposer) ===
std::optional<uint256> myCommitment;
std::optional<uint256> myReveal;
ExtendedPosition() = default;
explicit ExtendedPosition(uint256 const& txSet) : txSetHash(txSet)
{
}
// Implicit conversion for legacy compatibility
operator uint256() const
{
return txSetHash;
}
// Helper to update TxSet while preserving sidecar data
void
updateTxSet(uint256 const& set)
{
txSetHash = set;
}
// TODO: replace operator== with a named method (e.g. txSetMatches())
// so call sites read as intent, not as "full equality". Overloading
// operator== to ignore most fields is surprising and fragile.
//
// CRITICAL: Only compare txSetHash for consensus convergence.
//
// Why not commitSetHash / entropySetHash?
// Nodes transition through sub-states (ConvergingTx → ConvergingCommit
// → ConvergingReveal) at slightly different times. If we included
// commitSetHash here, a node that transitions first would set it,
// making its position "different" from peers who haven't transitioned
// yet — deadlocking haveConsensus() for everyone.
//
// Instead, the sub-state machine in phaseEstablish handles agreement
// on those fields via quorum checks (hasQuorumOfCommits, etc.).
//
// Implications to consider:
// - Two nodes with the same txSetHash but different commitSetHash
// will appear to "agree" from the convergence engine's perspective.
// This is intentional: tx consensus must not be blocked by RNG.
// - A malicious node could propose a different commitSetHash without
// affecting tx convergence. This is safe because commitSetHash
// disagreement is caught by the sub-state quorum checks, and the
// entropy result is verified deterministically from collected reveals.
// - Leaves (myCommitment, myReveal) are also excluded — they are
// per-validator data unique to each proposer.
//@@start rng-extended-position-equality
bool
operator==(ExtendedPosition const& other) const
{
return txSetHash == other.txSetHash;
}
bool
operator!=(ExtendedPosition const& other) const
{
return !(*this == other);
}
// Comparison with uint256 (compares txSetHash only)
bool
operator==(uint256 const& hash) const
{
return txSetHash == hash;
}
bool
operator!=(uint256 const& hash) const
{
return txSetHash != hash;
}
friend bool
operator==(uint256 const& hash, ExtendedPosition const& pos)
{
return pos.txSetHash == hash;
}
friend bool
operator!=(uint256 const& hash, ExtendedPosition const& pos)
{
return pos.txSetHash != hash;
}
//@@end rng-extended-position-equality
// CRITICAL: Include ALL fields for signing (prevents stripping attacks)
//
// Compatibility note:
// - New code accepts both legacy 32-byte tx-set hashes and the extended
// payload with RNG sidecars.
// - Older binaries that only understand a raw uint256 proposal position
// will reject extended payloads as malformed.
// - Therefore ConsensusEntropy requires an all-upgraded validator set
// before activation; this format is backward-compatible, not
// forward-compatible.
//@@start rng-extended-position-serialize
void
add(Serializer& s) const
{
s.addBitString(txSetHash);
// Wire compatibility: if no extensions, emit exactly 32 bytes
// so legacy nodes that expect a plain uint256 work unchanged.
if (!commitSetHash && !entropySetHash && !exportSigSetHash &&
!exportSignaturesHash && !myCommitment && !myReveal)
return;
std::uint8_t flags = 0;
if (commitSetHash)
flags |= 0x01;
if (entropySetHash)
flags |= 0x02;
if (myCommitment)
flags |= 0x04;
if (myReveal)
flags |= 0x08;
if (exportSigSetHash)
flags |= 0x10;
if (exportSignaturesHash)
flags |= 0x20;
s.add8(flags);
if (commitSetHash)
s.addBitString(*commitSetHash);
if (entropySetHash)
s.addBitString(*entropySetHash);
if (myCommitment)
s.addBitString(*myCommitment);
if (myReveal)
s.addBitString(*myReveal);
if (exportSigSetHash)
s.addBitString(*exportSigSetHash);
if (exportSignaturesHash)
s.addBitString(*exportSignaturesHash);
}
//@@end rng-extended-position-serialize
Json::Value
getJson() const
{
Json::Value ret = Json::objectValue;
ret["tx_set"] = to_string(txSetHash);
if (commitSetHash)
ret["commit_set"] = to_string(*commitSetHash);
if (entropySetHash)
ret["entropy_set"] = to_string(*entropySetHash);
if (exportSigSetHash)
ret["export_sig_set"] = to_string(*exportSigSetHash);
if (exportSignaturesHash)
ret["export_signatures"] = to_string(*exportSignaturesHash);
return ret;
}
/** Deserialize from wire format.
Handles both legacy 32-byte hash and new extended format.
Returns nullopt if the payload is malformed (truncated for the
flags advertised).
*/
//@@start rng-extended-position-deserialize
static std::optional<ExtendedPosition>
fromSerialIter(SerialIter& sit, std::size_t totalSize)
{
if (totalSize < 32)
return std::nullopt;
ExtendedPosition pos;
pos.txSetHash = sit.get256();
// Legacy format: exactly 32 bytes
if (totalSize == 32)
return pos;
// Extended format: flags byte + optional uint256 fields
if (sit.empty())
return pos;
std::uint8_t flags = sit.get8();
// Reject unknown flag bits (reduces wire malleability)
if (flags & 0xC0)
return std::nullopt;
// Validate exact byte count for the flagged fields.
// Each flag bit indicates a 32-byte uint256.
int fieldCount = 0;
for (int i = 0; i < 6; ++i)
if (flags & (1 << i))
++fieldCount;
if (sit.getBytesLeft() != static_cast<std::size_t>(fieldCount * 32))
return std::nullopt;
if (flags & 0x01)
pos.commitSetHash = sit.get256();
if (flags & 0x02)
pos.entropySetHash = sit.get256();
if (flags & 0x04)
pos.myCommitment = sit.get256();
if (flags & 0x08)
pos.myReveal = sit.get256();
if (flags & 0x10)
pos.exportSigSetHash = sit.get256();
if (flags & 0x20)
pos.exportSignaturesHash = sit.get256();
return pos;
}
//@@end rng-extended-position-deserialize
};
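// --- Illustrative sketch (not part of this change) --------------------------
// The equality/serialization split documented above, in code form: two
// positions that share a txSetHash converge even when their sidecar fields
// differ, while add()/fromSerialIter() still round-trip every optional field.
// Only types already used in this header are assumed; the function name is
// hypothetical.
inline bool
extendedPositionConvergenceSketch(
    uint256 const& txSet,
    uint256 const& commitSet)
{
    ExtendedPosition a{txSet};
    ExtendedPosition b{txSet};
    b.commitSetHash = commitSet;  // sidecar data differs between the peers

    bool const txConverged = (a == b);  // true: only txSetHash is compared

    Serializer s;
    b.add(s);  // extended wire form: txSetHash + flags byte + flagged fields
    SerialIter sit(s.slice());
    auto const decoded = ExtendedPosition::fromSerialIter(sit, s.slice().size());

    return txConverged && decoded &&
        decoded->commitSetHash == b.commitSetHash;
}
// --- End illustrative sketch -------------------------------------------------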
// For logging/debugging - returns txSetHash as string
inline std::string
to_string(ExtendedPosition const& pos)
{
return to_string(pos.txSetHash);
}
// Stream output for logging
inline std::ostream&
operator<<(std::ostream& os, ExtendedPosition const& pos)
{
return os << pos.txSetHash;
}
/** Hash the raw export-signature blobs carried alongside a proposal.
The resulting digest is embedded in ExtendedPosition and therefore covered
by the normal proposal signature. The raw protobuf field remains outside
consensus equality, but stripping or mutating it invalidates the signed
digest before duplicate suppression.
*/
template <class ExportSignatures>
uint256
proposalExportSignaturesHash(ExportSignatures const& exportSignatures)
{
Serializer s(512);
s.add32(static_cast<std::uint32_t>(exportSignatures.size()));
for (auto const& blob : exportSignatures)
s.addVL(Slice(blob.data(), blob.size()));
return s.getSHA512Half();
}
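// --- Illustrative sketch (not part of this change) --------------------------
// Intended receiver-side use of the digest above: recompute it from the raw
// (unsigned) export-signature blobs on the wire and require it to match the
// digest that was covered by the proposal signature. The container type and
// function name are illustrative; real call sites pass the repeated protobuf
// field directly.
inline bool
exportSignaturesIntactSketch(
    std::vector<std::string> const& receivedBlobs,
    ExtendedPosition const& signedPosition)
{
    if (!signedPosition.exportSignaturesHash)
        return receivedBlobs.empty();
    return proposalExportSignaturesHash(receivedBlobs) ==
        *signedPosition.exportSignaturesHash;
}
// --- End illustrative sketch -------------------------------------------------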
// For hash_append (used in sha512Half and similar)
template <class Hasher>
void
hash_append(Hasher& h, ExtendedPosition const& pos)
{
using beast::hash_append;
// Serialize full position including all fields
Serializer s;
pos.add(s);
hash_append(h, s.slice());
}
/** A peer's signed, proposed position for use in RCLConsensus.
Carries a ConsensusProposal signed by a peer. Provides value semantics
@@ -43,8 +324,9 @@ namespace ripple {
class RCLCxPeerPos
{
public:
//< The type of the proposed position
using Proposal = ConsensusProposal<NodeID, uint256, uint256>;
//< The type of the proposed position (uses ExtendedPosition for RNG
// support)
using Proposal = ConsensusProposal<NodeID, uint256, ExtendedPosition>;
/** Constructor
@@ -60,7 +342,8 @@ public:
PublicKey const& publicKey,
Slice const& signature,
uint256 const& suppress,
Proposal&& proposal);
Proposal&& proposal,
std::vector<std::string> exportSignatures = {});
//! Verify the signing hash of the proposal
bool
@@ -93,6 +376,12 @@ public:
return proposal_;
}
std::vector<std::string> const&
exportSignatures() const
{
return exportSignatures_;
}
//! JSON representation of proposal
Json::Value
getJson() const;
@@ -107,6 +396,7 @@ private:
PublicKey publicKey_;
uint256 suppression_;
Proposal proposal_;
std::vector<std::string> exportSignatures_;
boost::container::static_vector<std::uint8_t, 72> signature_;
template <class Hasher>
@@ -118,7 +408,10 @@ private:
hash_append(h, std::uint32_t(proposal().proposeSeq()));
hash_append(h, proposal().closeTime());
hash_append(h, proposal().prevLedger());
hash_append(h, proposal().position());
// Serialize full ExtendedPosition for hashing
Serializer s;
proposal().position().add(s);
hash_append(h, s.slice());
}
};
@@ -131,7 +424,7 @@ private:
order to validate the signature. If the last closed ledger is left out, then
it is considered as all zeroes for the purposes of signing.
@param proposeHash The hash of the proposed position
@param position The extended position (includes entropy fields)
@param previousLedger The hash of the ledger the proposal is based upon
@param proposeSeq Sequence number of the proposal
@param closeTime Close time of the proposal
@@ -140,7 +433,7 @@ private:
*/
uint256
proposalUniqueId(
uint256 const& proposeHash,
ExtendedPosition const& position,
uint256 const& previousLedger,
std::uint32_t proposeSeq,
NetClock::time_point closeTime,

View File

@@ -139,7 +139,8 @@ RCLValidationsAdaptor::acquire(LedgerHash const& hash)
if (!ledger)
{
JLOG(j_.debug())
// MERGE NOTE (upstream 86ef16dbeb): promoted from debug to warn.
JLOG(j_.warn())
<< "Need validated ledger for preferred ledger analysis " << hash;
Application* pApp = &app_;

View File

@@ -1,4 +1,4 @@
# RCL Consensus
# RCL Consensus
This directory holds the types and classes needed
to connect the generic consensus algorithm to the
@@ -7,7 +7,11 @@ rippled-specific instance of consensus.
* `RCLCxTx` adapts a `SHAMapItem` transaction.
* `RCLCxTxSet` adapts a `SHAMap` to represent a set of transactions.
* `RCLCxLedger` adapts a `Ledger`.
* `RCLConsensus` is implements the requirements of the generic
* `RCLConsensus` implements the requirements of the generic
`Consensus` class by connecting to the rest of the `rippled`
application.
application.
Xahau-specific proposal sidecars, ConsensusEntropy/RNG, and export signature
convergence follow the invariants in
[`ConsensusExtensionsDesign.md`](ConsensusExtensionsDesign.md). Read that note
before changing extension quorum, sidecar sync, or fallback behavior.

View File

@@ -82,6 +82,16 @@ public:
Expected<uint256, HookReturnCode>
etxn_nonce() const;
/// xport APIs
Expected<uint64_t, HookReturnCode>
xport_reserve(uint64_t count) const;
Expected<uint256, HookReturnCode>
xport(Slice const& txBlob) const;
Expected<uint64_t, HookReturnCode>
xport_cancel(uint32_t ticketSeq) const;
/// float APIs
Expected<uint64_t, HookReturnCode>
float_set(int32_t exponent, int64_t mantissa) const;

View File

@@ -145,7 +145,7 @@ struct HookResult
ripple::uint256 const hookNamespace;
std::queue<std::shared_ptr<ripple::Transaction>>
emittedTxn{}; // etx stored here until accept/rollback
emittedTxn{}; // etx stored here until accept/rollback (includes xport)
HookStateMap& stateMap;
uint16_t changedStateCount = 0;
std::map<
@@ -174,6 +174,8 @@ struct HookResult
false; // hook_again allows strong pre-apply to nominate
// additional weak post-apply execution
std::shared_ptr<STObject const> provisionalMeta;
uint64_t rngCallCounter{
0}; // used to ensure consecutive rng calls don't return the same data
};
class HookExecutor;
@@ -202,6 +204,8 @@ struct HookContext
uint16_t ledger_nonce_counter{0};
int64_t expected_etxn_count{-1}; // make this a 64-bit int so the uint32
// from the hookapi can't overflow it
int64_t expected_export_count{-1};
int64_t export_count{0}; // how many xport() calls succeeded
std::map<ripple::uint256, bool> nonce_used{};
uint32_t generation =
0; // used for caching, only generated when txn_generation is called

View File

@@ -3,6 +3,7 @@
#include <xrpld/app/hook/HookAPI.h>
#include <xrpld/app/ledger/OpenLedger.h>
#include <xrpld/app/ledger/TransactionMaster.h>
#include <xrpld/app/tx/detail/ExportLedgerOps.h>
#include <xrpld/app/tx/detail/Import.h>
#include <xrpl/protocol/STParsedJSON.h>
#include <cfenv>
@@ -1223,6 +1224,193 @@ HookAPI::etxn_reserve(uint64_t count) const
return count;
}
Expected<uint64_t, HookReturnCode>
HookAPI::xport_reserve(uint64_t count) const
{
if (hookCtx.expected_export_count > -1)
return Unexpected(ALREADY_SET);
if (count < 1)
return Unexpected(TOO_SMALL);
if (count > hook_api::max_export)
return Unexpected(TOO_BIG);
hookCtx.expected_export_count = count;
// Also reserve emit slots so the wrapper ttEXPORT can flow
// through the normal emitted txn path.
if (hookCtx.expected_etxn_count < 0)
hookCtx.expected_etxn_count = 0;
hookCtx.expected_etxn_count += count;
return count;
}
Expected<uint256, HookReturnCode>
HookAPI::xport(Slice const& txBlob) const
{
auto& applyCtx = hookCtx.applyCtx;
auto& app = applyCtx.app;
auto j = app.journal("View");
auto& view = applyCtx.view();
if (hookCtx.expected_export_count < 0)
return Unexpected(PREREQUISITE_NOT_MET);
if (hookCtx.export_count >= hookCtx.expected_export_count)
return Unexpected(TOO_MANY_EXPORTED_TXN);
// Parse and validate the inner (cross-chain) transaction.
std::shared_ptr<STTx const> innerTx;
try
{
SerialIter sit(txBlob);
innerTx = std::make_shared<STTx const>(sit);
}
catch (std::exception const& e)
{
JLOG(j.trace()) << "HookExport[" << HC_ACC() << "]: Failed "
<< e.what();
return Unexpected(EXPORT_FAILURE);
}
if (auto ter = ExportLedgerOps::validateExportAccount(
*innerTx, hookCtx.result.account, j);
!isTesSuccess(ter))
return Unexpected(EXPORT_FAILURE);
if (auto ter = ExportLedgerOps::validateNetworkID(
*innerTx, app.config().NETWORK_ID, j);
!isTesSuccess(ter))
return Unexpected(EXPORT_FAILURE);
if (auto ter = ExportLedgerOps::validateTicketSequence(*innerTx, j);
!isTesSuccess(ter))
return Unexpected(EXPORT_FAILURE);
// Construct a ttEXPORT wrapping the inner tx, with EmitDetails,
// and push onto the emitted txn queue. This flows through the
// normal emitted txn path (emitted dir → TxQ injection → open
// ledger → retriable Export transactor).
uint32_t const ledgerSeq = view.info().seq;
// Generate a nonce for the emitted ttEXPORT wrapper.
auto nonce = etxn_nonce();
if (!nonce.has_value())
return Unexpected(INTERNAL_ERROR);
// Serialize inner tx as sfExportedTxn object.
Serializer innerSer;
innerTx->add(innerSer);
// Build the ttEXPORT wrapper as an STObject first so we can
// compute the fee, set it, then construct the STTx from the
// final serialised bytes. This avoids mutating the STTx after
// construction (which would leave a stale cached txid — see
// the tefNONDIR_EMIT check in Transactor::preclaim).
//
// The fee field is a fixed 9 bytes regardless of value, so
// patching it on the STObject doesn't change the serialised size.
STObject exportObj(sfGeneric);
{
exportObj.setFieldU16(sfTransactionType, ttEXPORT);
exportObj[sfAccount] = hookCtx.result.account;
exportObj[sfSequence] = 0u;
exportObj.setFieldVL(sfSigningPubKey, Blob{});
exportObj[sfFirstLedgerSequence] = ledgerSeq + 1;
exportObj[sfLastLedgerSequence] = ledgerSeq + 5;
exportObj[sfFee] = STAmount{0};
// sfExportedTxn inner object
SerialIter sit(innerSer.slice());
exportObj.set(std::make_unique<STObject>(sit, sfExportedTxn));
// sfEmitDetails
STObject emitDetails(sfEmitDetails);
emitDetails.setFieldU32(
sfEmitGeneration, static_cast<uint32_t>(etxn_generation()));
{
auto const burdenResult = etxn_burden();
emitDetails.setFieldU64(
sfEmitBurden,
burdenResult ? static_cast<uint64_t>(*burdenResult) : 1ULL);
}
emitDetails.setFieldH256(
sfEmitParentTxnID, applyCtx.tx.getTransactionID());
emitDetails.setFieldH256(sfEmitNonce, *nonce);
emitDetails.setFieldH256(sfEmitHookHash, hookCtx.result.hookHash);
if (hookCtx.result.hasCallback)
emitDetails.setAccountID(sfEmitCallback, hookCtx.result.account);
exportObj.set(std::move(emitDetails));
// Compute fee from serialised size and patch it in.
Serializer feeSer;
exportObj.add(feeSer);
auto feeResult = etxn_fee_base(feeSer.slice());
if (!feeResult)
{
JLOG(j.trace()) << "HookExport[" << HC_ACC()
<< "]: Fee calculation failed for ttEXPORT wrapper";
return Unexpected(EXPORT_FAILURE);
}
exportObj[sfFee] = STAmount{static_cast<uint64_t>(*feeResult)};
}
// Construct the STTx from the finalised STObject bytes.
Serializer exportSer;
exportObj.add(exportSer);
STTx exportStx(SerialIter{exportSer.slice()});
// Preflight the wrapper.
auto preflightResult = ripple::preflight(
app, view.rules(), exportStx, ripple::ApplyFlags::tapPREFLIGHT_EMIT, j);
if (!isTesSuccess(preflightResult.ter))
{
JLOG(j.trace()) << "HookExport[" << HC_ACC()
<< "]: ttEXPORT wrapper preflight failure: "
<< transHuman(preflightResult.ter);
return Unexpected(EXPORT_FAILURE);
}
// Wrap in Transaction and push to emittedTxn queue.
auto stpExport = std::make_shared<STTx const>(std::move(exportStx));
std::string reason;
auto tpTrans = std::make_shared<Transaction>(stpExport, reason, app);
if (tpTrans->getStatus() != NEW)
{
JLOG(j.trace()) << "HookExport[" << HC_ACC()
<< "]: tpTrans->getStatus() != NEW for wrapper";
return Unexpected(EXPORT_FAILURE);
}
// Push onto emittedTxn. The wrapper ttEXPORT flows through the
// normal emitted txn path (emitted dir → TxQ → open ledger →
// retriable Export transactor).
hookCtx.result.emittedTxn.push(tpTrans);
++hookCtx.export_count;
// Return the inner tx hash — this is what the hook author cares
// about (the cross-chain transaction they built).
return innerTx->getTransactionID();
}
Expected<uint64_t, HookReturnCode>
HookAPI::xport_cancel(uint32_t ticketSeq) const
{
auto& app = hookCtx.applyCtx.app;
auto j = app.journal("View");
TER const ter = ExportLedgerOps::cancelShadowTicket(
hookCtx.applyCtx.view(), hookCtx.result.account, ticketSeq, j);
if (!isTesSuccess(ter))
return Unexpected(DOESNT_EXIST);
return ticketSeq;
}
uint32_t
HookAPI::etxn_generation() const
{

View File

@@ -7,6 +7,7 @@
#include <xrpld/app/misc/TxQ.h>
#include <xrpld/app/tx/detail/Import.h>
#include <xrpld/app/tx/detail/NFTokenUtils.h>
#include <xrpld/ledger/View.h>
#include <xrpl/basics/Log.h>
#include <xrpl/basics/Slice.h>
#include <xrpl/protocol/ErrorCodes.h>
@@ -584,7 +585,9 @@ getTransactionalStakeHolders(STTx const& tx, ReadView const& rv)
case ttFEE:
case ttUNL_MODIFY:
case ttEMIT_FAILURE:
case ttUNL_REPORT: {
case ttUNL_REPORT:
case ttEXPORT:
case ttCONSENSUS_ENTROPY: {
break;
}
default: {
@@ -1657,6 +1660,7 @@ hook::finalizeHookResult(
// directory) if we are allowed to
std::vector<std::pair<uint256 /* txnid */, uint256 /* emit nonce */>>
emission_txnid;
std::vector<uint256 /* txnid */> exported_txnid;
if (doEmit)
{
@@ -1691,7 +1695,8 @@ hook::finalizeHookResult(
ptr->add(s);
SerialIter sit(s.slice());
sleEmitted->emplace_back(ripple::STObject(sit, sfEmittedTxn));
sleEmitted->set(
std::make_unique<ripple::STObject>(sit, sfEmittedTxn));
auto page = applyCtx.view().dirInsert(
keylet::emittedDir(), emittedId, [&](SLE::ref sle) {
(*sle)[sfFlags] = lsfEmittedDir;
@@ -1712,6 +1717,12 @@ hook::finalizeHookResult(
}
}
}
// Exported txns now flow through the emitted txn path above
// (xport() pushes a ttEXPORT wrapper onto emittedTxn).
// The export backlog cap is enforced after hook finalization by
// ApplyContext::checkExportEmissionLimit(), so strong and weak hook
// emissions use the same fee-only reset path.
}
bool const fixV2 = applyCtx.view().rules().enabled(fixXahauV2);
@@ -1738,6 +1749,10 @@ hook::finalizeHookResult(
meta.setFieldU16(
sfHookEmitCount,
emission_txnid.size()); // this will never wrap, hard limit
if (applyCtx.view().rules().enabled(featureExport))
{
meta.setFieldU16(sfHookExportCount, exported_txnid.size());
}
meta.setFieldU16(sfHookExecutionIndex, exec_index);
meta.setFieldU16(sfHookStateChangeCount, hookResult.changedStateCount);
meta.setFieldH256(sfHookHash, hookResult.hookHash);
@@ -3029,6 +3044,31 @@ DEFINE_HOOK_FUNCTION(int64_t, etxn_reserve, uint32_t count)
HOOK_TEARDOWN();
}
DEFINE_HOOK_FUNCTION(int64_t, xport_reserve, uint32_t count)
{
HOOK_SETUP(); // populates memory_ctx, memory, memory_length, applyCtx,
// hookCtx on current stack
auto const result = api.xport_reserve(count);
if (!result)
return result.error();
return result.value();
HOOK_TEARDOWN();
}
DEFINE_HOOK_FUNCTION(int64_t, xport_cancel, uint32_t ticket_seq)
{
HOOK_SETUP();
auto const result = api.xport_cancel(ticket_seq);
if (!result)
return result.error();
return result.value();
HOOK_TEARDOWN();
}
// Compute the burden of an emitted transaction based on a number of factors
DEFINE_HOOK_FUNCTION(int64_t, etxn_burden)
{
@@ -4092,6 +4132,177 @@ DEFINE_HOOK_FUNCTION(
HOOK_TEARDOWN();
}
//@@start xport-impl
DEFINE_HOOK_FUNCTION(
int64_t,
xport,
uint32_t write_ptr,
uint32_t write_len,
uint32_t read_ptr,
uint32_t read_len)
{
HOOK_SETUP();
if (NOT_IN_BOUNDS(read_ptr, read_len, memory_length))
return OUT_OF_BOUNDS;
if (NOT_IN_BOUNDS(write_ptr, write_len, memory_length))
return OUT_OF_BOUNDS;
if (write_len < 32)
return TOO_SMALL;
// Delegate to decoupled HookAPI for xport logic
ripple::Slice txBlob{
reinterpret_cast<const void*>(memory + read_ptr), read_len};
auto const res = api.xport(txBlob);
if (!res)
return res.error();
auto const& innerTxHash = *res;
if (innerTxHash.size() > write_len)
return TOO_SMALL;
if (NOT_IN_BOUNDS(write_ptr, innerTxHash.size(), memory_length))
return OUT_OF_BOUNDS;
WRITE_WASM_MEMORY_AND_RETURN(
write_ptr,
innerTxHash.size(),
innerTxHash.data(),
innerTxHash.size(),
memory,
memory_length);
HOOK_TEARDOWN();
}
//@@end xport-impl
// byteCount is clamped to 512 and rounded down to a multiple of 32
inline std::vector<uint8_t>
fairRng(ApplyContext& applyCtx, hook::HookResult& hr, uint32_t byteCount)
{
if (byteCount > 512)
byteCount = 512;
// force the byte count to be a multiple of 32
byteCount &= ~0b11111;
if (byteCount == 0)
return {};
auto& view = applyCtx.view();
auto const sleEntropy = view.peek(ripple::keylet::consensusEntropy());
auto const seq = view.info().seq;
auto const entropySeq =
sleEntropy ? sleEntropy->getFieldU32(sfLedgerSequence) : 0u;
// Allow entropy from current ledger (during close) or previous ledger
// (open ledger / speculative execution). On the real network hooks
// always execute during buildLCL where the entropy pseudo-tx has
// already updated the SLE to the current seq.
// TODO: open-ledger entropy uses previous ledger's entropy, so
// dice/random results will differ between speculative and final
// execution. This needs further thought re: UX implications.
if (!sleEntropy || entropySeq > seq || (seq - entropySeq) > 1 ||
sleEntropy->getFieldU16(sfEntropyCount) < 5)
return {};
// we'll generate bytes in lots of 32
uint256 rndData = sha512Half(
view.info().seq,
applyCtx.tx.getTransactionID(),
hr.otxnAccount,
hr.hookHash,
hr.account,
hr.hookChainPosition,
hr.executeAgainAsWeak ? std::string("weak") : std::string("strong"),
sleEntropy->getFieldH256(sfDigest),
hr.rngCallCounter++);
std::vector<uint8_t> bytesOut;
bytesOut.resize(byteCount);
uint8_t* ptr = bytesOut.data();
while (1)
{
std::memcpy(ptr, rndData.data(), 32);
ptr += 32;
if (ptr - bytesOut.data() >= byteCount)
break;
rndData = sha512Half(rndData);
}
return bytesOut;
}
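// --- Illustrative sketch (not part of this change) --------------------------
// The expansion step used by fairRng() in isolation: a 32-byte seed is
// stretched to N*32 bytes by hash chaining, so every node applying the same
// ledger (and therefore seeing the same seed inputs) derives identical bytes.
// Function name is an assumption.
inline std::vector<uint8_t>
chainExpandSketch(uint256 seed, uint32_t byteCount)
{
    byteCount &= ~0b11111u;  // multiple of 32, exactly as fairRng enforces
    std::vector<uint8_t> out(byteCount);
    for (uint32_t off = 0; off < byteCount; off += 32)
    {
        std::memcpy(out.data() + off, seed.data(), 32);
        seed = sha512Half(seed);  // next 32-byte block
    }
    return out;
}
// --- End illustrative sketch -------------------------------------------------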
DEFINE_HOOK_FUNCTION(int64_t, dice, uint32_t sides)
{
HOOK_SETUP();
if (sides == 0)
return INVALID_ARGUMENT;
auto vec = fairRng(applyCtx, hookCtx.result, 32);
if (vec.empty())
return TOO_LITTLE_ENTROPY;
if (vec.size() != 32)
return INTERNAL_ERROR;
uint32_t value;
std::memcpy(&value, vec.data(), sizeof(uint32_t));
return value % sides;
HOOK_TEARDOWN();
}
DEFINE_HOOK_FUNCTION(int64_t, random, uint32_t write_ptr, uint32_t write_len)
{
HOOK_SETUP();
if (write_len == 0)
return TOO_SMALL;
if (write_len > 512)
return TOO_BIG;
uint32_t required = write_len;
if ((required & ~0b11111) == required)
{
// already a multiple of 32 bytes
}
else
{
// round up
required &= ~0b11111;
required += 32;
}
if (NOT_IN_BOUNDS(write_ptr, write_len, memory_length))
return OUT_OF_BOUNDS;
auto vec = fairRng(applyCtx, hookCtx.result, required);
if (vec.empty())
return TOO_LITTLE_ENTROPY;
WRITE_WASM_MEMORY_AND_RETURN(
write_ptr, write_len, vec.data(), vec.size(), memory, memory_length);
HOOK_TEARDOWN();
}
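// --- Illustrative sketch (not part of this change) --------------------------
// The size handling random() performs above, reduced to arithmetic: the
// caller may request any 1..512 bytes, the request is rounded up to the next
// multiple of 32 for fairRng, and only write_len bytes are copied back out.
// Function name is an assumption.
inline uint32_t
roundUpTo32Sketch(uint32_t writeLen)
{
    return (writeLen + 31u) & ~31u;  // e.g. 1 -> 32, 32 -> 32, 33 -> 64
}
// --- End illustrative sketch -------------------------------------------------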
/*
DEFINE_HOOK_FUNCTION(

View File

@@ -26,6 +26,7 @@
#include <xrpld/nodestore/Database.h>
#include <xrpl/basics/Log.h>
#include <xrpl/protocol/HashPrefix.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/digest.h>
namespace ripple {
@@ -64,6 +65,15 @@ ConsensusTransSetSF::gotNode(
stx->getTransactionID() == nodeHash.as_uint256(),
"ripple::ConsensusTransSetSF::gotNode : transaction hash "
"match");
//@@start rng-pseudo-tx-submission-filtering
// Don't submit pseudo-transactions (consensus entropy, fees,
// amendments, etc.) — they exist as SHAMap entries for
// content-addressed identification but are not real user txns.
if (isPseudoTx(*stx))
return;
//@@end rng-pseudo-tx-submission-filtering
auto const pap = &app_;
app_.getJobQueue().addJob(jtTRANSACTION, "TXS->TXN", [pap, stx]() {
pap->getOPs().submitTransaction(stx);

View File

@@ -23,12 +23,15 @@
#include <xrpld/overlay/Peer.h>
#include <xrpld/shamap/SHAMap.h>
#include <xrpl/beast/clock/abstract_clock.h>
#include <cstdint>
#include <memory>
namespace ripple {
class Application;
enum class InboundSetKind : std::uint8_t { transaction, sidecar };
/** Manages the acquisition and lifetime of transaction sets.
*/
@@ -49,11 +52,15 @@ public:
* @param setHash The transaction set ID (digest of the SHAMap root node).
* @param acquire Whether to fetch the transaction set from the network if
* it is missing.
* @param kind The kind of SHAMap payload to acquire if the set is missing.
* @return The transaction set with ID setHash, or nullptr if it is
* missing.
*/
virtual std::shared_ptr<SHAMap>
getSet(uint256 const& setHash, bool acquire) = 0;
getSet(
uint256 const& setHash,
bool acquire,
InboundSetKind kind = InboundSetKind::transaction) = 0;
/** Add a transaction set from a LedgerData message.
*

View File

@@ -0,0 +1,52 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2012, 2013 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpld/app/ledger/SidecarSetSF.h>
namespace ripple {
SidecarSetSF::SidecarSetSF(NodeCache& nodeCache) : m_nodeCache(nodeCache)
{
}
void
SidecarSetSF::gotNode(
bool fromFilter,
SHAMapHash const& nodeHash,
std::uint32_t,
Blob&& nodeData,
SHAMapNodeType) const
{
if (fromFilter)
return;
m_nodeCache.insert(nodeHash, nodeData);
}
std::optional<Blob>
SidecarSetSF::getNode(SHAMapHash const& nodeHash) const
{
Blob nodeData;
if (m_nodeCache.retrieve(nodeHash, nodeData))
return nodeData;
return std::nullopt;
}
} // namespace ripple

View File

@@ -0,0 +1,57 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2012, 2013 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_APP_LEDGER_SIDECARSETSF_H_INCLUDED
#define RIPPLE_APP_LEDGER_SIDECARSETSF_H_INCLUDED
#include <xrpld/shamap/SHAMapSyncFilter.h>
#include <xrpl/basics/TaggedCache.h>
#include <optional>
namespace ripple {
// Sync filter for sidecar SHAMaps. Sidecar leaves are STObject(sfGeneric)
// payloads, not STTx transactions, so acquisition must not submit them.
// Validation stays with the consensus extension merge step, where the expected
// sidecar kind and active validator view are known.
class SidecarSetSF : public SHAMapSyncFilter
{
public:
using NodeCache = TaggedCache<SHAMapHash, Blob>;
explicit SidecarSetSF(NodeCache& nodeCache);
void
gotNode(
bool fromFilter,
SHAMapHash const& nodeHash,
std::uint32_t ledgerSeq,
Blob&& nodeData,
SHAMapNodeType type) const override;
std::optional<Blob>
getNode(SHAMapHash const& nodeHash) const override;
private:
NodeCache& m_nodeCache;
};
} // namespace ripple
#endif

View File

@@ -25,6 +25,7 @@
#include <xrpld/app/misc/CanonicalTXSet.h>
#include <xrpld/app/tx/apply.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/TxFormats.h>
namespace ripple {
@@ -106,6 +107,47 @@ applyTransactions(
bool certainRetry = true;
std::size_t count = 0;
//@@start rng-entropy-first-application
// CRITICAL: Apply consensus entropy pseudo-tx FIRST before any other
// transactions. This ensures hooks can read entropy during this ledger.
for (auto it = txns.begin(); it != txns.end(); /* manual */)
{
if (it->second->getTxnType() != ttCONSENSUS_ENTROPY)
{
++it;
continue;
}
auto const txid = it->first.getTXID();
JLOG(j.debug()) << "Applying entropy tx FIRST: " << txid;
try
{
auto const result =
applyTransaction(app, view, *it->second, true, tapNONE, j);
if (result == ApplyTransactionResult::Success)
{
++count;
JLOG(j.debug()) << "Entropy tx applied successfully";
}
else
{
failed.insert(txid);
JLOG(j.warn()) << "Entropy tx failed to apply";
}
}
catch (std::exception const& ex)
{
JLOG(j.warn()) << "Entropy tx throws: " << ex.what();
failed.insert(txid);
}
it = txns.erase(it);
break; // Only one entropy tx per ledger
}
//@@end rng-entropy-first-application
// Attempt to apply all of the retriable transactions
for (int pass = 0; pass < LEDGER_TOTAL_PASSES; ++pass)
{

View File

@@ -93,7 +93,10 @@ public:
}
std::shared_ptr<SHAMap>
getSet(uint256 const& hash, bool acquire) override
getSet(
uint256 const& hash,
bool acquire,
InboundSetKind kind = InboundSetKind::transaction) override
{
TransactionAcquire::pointer ta;
@@ -117,7 +120,7 @@ public:
return std::shared_ptr<SHAMap>();
ta = std::make_shared<TransactionAcquire>(
app_, hash, m_peerSetBuilder->build());
app_, hash, m_peerSetBuilder->build(), kind);
auto& obj = m_map[hash];
obj.mAcquire = ta;

View File

@@ -27,12 +27,14 @@ LedgerReplay::LedgerReplay(
std::shared_ptr<Ledger const> replay)
: parent_{std::move(parent)}, replay_{std::move(replay)}
{
//@@start ledger-replay-ordered-txns
for (auto const& item : replay_->txMap())
{
auto txPair = replay_->txRead(item.key()); // non-const so can be moved
auto const txIndex = (*txPair.second)[sfTransactionIndex];
orderedTxns_.emplace(txIndex, std::move(txPair.first));
}
//@@end ledger-replay-ordered-txns
}
LedgerReplay::LedgerReplay(

View File

@@ -120,7 +120,9 @@ OpenLedger::accept(
f(*next, j_);
// Apply local tx
for (auto const& item : locals)
{
app.getTxQ().apply(app, *next, item.second, flags, j_);
}
// If we didn't relay this transaction recently, relay it to all peers
for (auto const& txpair : next->txs)

View File

@@ -20,11 +20,13 @@
#include <xrpld/app/ledger/ConsensusTransSetSF.h>
#include <xrpld/app/ledger/InboundLedgers.h>
#include <xrpld/app/ledger/InboundTransactions.h>
#include <xrpld/app/ledger/SidecarSetSF.h>
#include <xrpld/app/ledger/detail/TransactionAcquire.h>
#include <xrpld/app/main/Application.h>
#include <xrpld/app/misc/NetworkOPs.h>
#include <xrpld/overlay/Overlay.h>
#include <xrpld/overlay/detail/ProtocolMessage.h>
#include <xrpld/shamap/SHAMapSyncFilter.h>
#include <memory>
@@ -40,10 +42,26 @@ enum {
MAX_TIMEOUTS = 20,
};
namespace {
std::unique_ptr<SHAMapSyncFilter>
makeSyncFilter(InboundSetKind kind, Application& app)
{
// Sidecars deliberately reuse candidate tx-set acquisition; the filter only
// changes leaf handling so sidecar STObjects are cached, not submitted.
if (kind == InboundSetKind::sidecar)
return std::make_unique<SidecarSetSF>(app.getTempNodeCache());
return std::make_unique<ConsensusTransSetSF>(app, app.getTempNodeCache());
}
} // namespace
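// --- Illustrative sketch (not part of this change) --------------------------
// How a caller is expected to use the new InboundSetKind plumbing: request a
// sidecar SHAMap by hash and, once it is available locally, hand it to the
// extension router (e.g. RCLConsensus::gotExtensionSet). The function name
// and callback shape are assumptions; the real wiring lives in the consensus
// extension layer.
template <class OnAcquired>
void
acquireSidecarSketch(
    InboundTransactions& inbound,
    uint256 const& sidecarHash,
    OnAcquired&& onAcquired)
{
    if (auto map = inbound.getSet(
            sidecarHash, /*acquire=*/true, InboundSetKind::sidecar))
        onAcquired(map);
}
// --- End illustrative sketch -------------------------------------------------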
TransactionAcquire::TransactionAcquire(
Application& app,
uint256 const& hash,
std::unique_ptr<PeerSet> peerSet)
std::unique_ptr<PeerSet> peerSet,
InboundSetKind kind)
: TimeoutCounter(
app,
hash,
@@ -52,9 +70,15 @@ TransactionAcquire::TransactionAcquire(
app.journal("TransactionAcquire"))
, mHaveRoot(false)
, mPeerSet(std::move(peerSet))
, mSetKind(kind)
{
// Keep sidecar fetch on the same content-addressed SHAMap path as tx sets:
// normal reply limits, peer scoring, charging, and timeout behavior apply.
mMap = std::make_shared<SHAMap>(
SHAMapType::TRANSACTION, hash, app_.getNodeFamily());
kind == InboundSetKind::sidecar ? SHAMapType::SIDECAR
: SHAMapType::TRANSACTION,
hash,
app_.getNodeFamily());
mMap->setUnbacked();
}
@@ -69,7 +93,10 @@ TransactionAcquire::done()
}
else
{
JLOG(journal_.debug()) << "Acquired TX set " << hash_;
JLOG(journal_.debug())
<< "Acquired "
<< (mSetKind == InboundSetKind::sidecar ? "sidecar" : "TX")
<< " set " << hash_;
mMap->setImmutable();
uint256 const& hash(hash_);
@@ -145,8 +172,8 @@ TransactionAcquire::trigger(std::shared_ptr<Peer> const& peer)
}
else
{
ConsensusTransSetSF sf(app_, app_.getTempNodeCache());
auto nodes = mMap->getMissingNodes(256, &sf);
auto sf = makeSyncFilter(mSetKind, app_);
auto nodes = mMap->getMissingNodes(256, sf.get());
if (nodes.empty())
{
@@ -198,7 +225,7 @@ TransactionAcquire::takeNodes(
if (data.empty())
return SHAMapAddNode::invalid();
ConsensusTransSetSF sf(app_, app_.getTempNodeCache());
auto sf = makeSyncFilter(mSetKind, app_);
for (auto const& d : data)
{
@@ -216,7 +243,7 @@ TransactionAcquire::takeNodes(
else
mHaveRoot = true;
}
else if (!mMap->addKnownNode(d.first, d.second, &sf).isGood())
else if (!mMap->addKnownNode(d.first, d.second, sf.get()).isGood())
{
JLOG(journal_.warn()) << "TX acquire got bad non-root node";
return SHAMapAddNode::invalid();

View File

@@ -23,9 +23,12 @@
#include <xrpld/app/main/Application.h>
#include <xrpld/overlay/PeerSet.h>
#include <xrpld/shamap/SHAMap.h>
#include <cstdint>
namespace ripple {
enum class InboundSetKind : std::uint8_t;
// VFALCO TODO rename to PeerTxRequest
// A transaction set we are trying to acquire
class TransactionAcquire final
@@ -39,7 +42,8 @@ public:
TransactionAcquire(
Application& app,
uint256 const& hash,
std::unique_ptr<PeerSet> peerSet);
std::unique_ptr<PeerSet> peerSet,
InboundSetKind kind);
~TransactionAcquire() = default;
SHAMapAddNode
@@ -57,6 +61,7 @@ private:
std::shared_ptr<SHAMap> mMap;
bool mHaveRoot;
std::unique_ptr<PeerSet> mPeerSet;
InboundSetKind mSetKind;
void
onTimer(bool progress, ScopedLockType& peerSetLock) override;

Some files were not shown because too many files have changed in this diff.