Strip JLOG(j_.debug()) calls from buildEntropySet, fetchRngSetIfNeeded,
and finalizeRoundEntropy in CSF Peer.h. These were added for local
debugging and caused CI failures due to output size limits.
Running 3 rounds caused peer 0 to desync on round 2, dropping
prevProposers for the majority on round 3, triggering bootstrap
skip → zero entropy on the last round. The gate works correctly
(logs show aligned=3, peersSeen=3) but the test was checking the
LAST round's entropy, not the round where the gate was exercised.
Run 1 round after warmup — sufficient to exercise the gate.
The entropy convergence deadline was measured from revealPhaseStart_,
which is set when entering ConvergingReveal. By the time the entropy
set is published (after reveal timeout + observation tick), most of
the deadline budget was already spent — leaving insufficient time
for peer alignment.
Add entropyPublishStart_ timestamp set when the entropy set is first
published. All convergence gate deadlines now measure from this
point, giving the full 2x rngREVEAL_TIMEOUT window for peer
proposals to propagate and alignment to be observed.
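The re-based deadline can be sketched as follows. This is an illustrative reduction, not the real consensus code: `ConvergenceGate`, `onEntropyPublished`, and the 1s stand-in for `rngREVEAL_TIMEOUT` are assumptions; only the "measure 2x the reveal timeout from first publish" rule comes from the change above.

```cpp
#include <cassert>
#include <chrono>
#include <optional>

using Clock = std::chrono::steady_clock;

struct ConvergenceGate
{
    std::chrono::milliseconds revealTimeout{1000};  // stand-in for rngREVEAL_TIMEOUT
    std::optional<Clock::time_point> entropyPublishStart_;

    // Called when the entropy set is first published.
    void onEntropyPublished(Clock::time_point now)
    {
        if (!entropyPublishStart_)
            entropyPublishStart_ = now;  // set once; re-publishes keep the original
    }

    // Gate deadline: the full 2x reveal-timeout window, counted from publish
    // rather than from revealPhaseStart_.
    bool pastDeadline(Clock::time_point now) const
    {
        if (!entropyPublishStart_)
            return false;  // nothing published yet, no deadline running
        return now - *entropyPublishStart_ > 2 * revealTimeout;
    }
};
```

The key property: no deadline budget is consumed before the set is actually published.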
When peers have published entropySetHash but none match ours yet
(e.g. a subset peer is the only one seen so far), wait for the
bounded deadline instead of immediately falling back to zero.
Other aligned peers may not have published yet — give them time.
Only fall back to zero if no alignment is observed within the
deadline (2x rngREVEAL_TIMEOUT).
After fetch/merge, if our entropy set hash didn't change, the
conflicting peer had a subset of our data — not a real threat.
Clear the conflict flag so we don't fall back to zero when a peer
simply has fewer reveals than us.
If the hash DID change (merge added data), re-count alignment
with the updated hash before treating it as a real conflict.
This prevents the majority from falling back to zero just because
one peer (e.g. isolated) has a smaller reveal set.
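The subset rule above can be sketched like this. Names and the digest are illustrative stand-ins (the real code hashes a SHAMap); only the rule itself comes from the change: an unchanged post-merge hash means the peer held a subset, so no real conflict exists.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <set>
#include <string>

using RevealSet = std::set<std::string>;

// Stand-in for the real SHAMap hash: any deterministic digest of the set.
std::size_t setHash(RevealSet const& s)
{
    std::size_t h = 0;
    for (auto const& e : s)
        h ^= std::hash<std::string>{}(e) + 0x9e3779b9 + (h << 6) + (h >> 2);
    return h;
}

// Union-merge the peer's reveals into ours. Returns true only if a real
// conflict remains (our hash changed, so alignment must be re-counted
// against the updated hash).
bool mergeAndCheckConflict(RevealSet& ours, RevealSet const& peers)
{
    auto const before = setHash(ours);
    ours.insert(peers.begin(), peers.end());  // monotonic union merge
    return setHash(ours) != before;           // unchanged => peer was a subset
}
```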
The observation tick alone was insufficient — a node could pass the
gate without any peer confirming its entropySetHash. Now the gate
requires at least one tx-converged peer with a matching hash before
accepting non-zero entropy.
Four cases after the observation tick:
1. aligned > 0: peers confirm our hash → proceed with entropy
2. conflict: fetch/merge/rebuild → bounded wait → zero fallback
3. aligned=0, peersSeen=0: no peers published yet → bounded wait →
zero fallback if still no peers at deadline
4. aligned=0, peersSeen>0: peers published but none match → zero
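The four outcomes above reduce to a small decision function. This is a hypothetical sketch (the names `GateAction` and `gateDecision` are not from the codebase); it only encodes the case table listed above.

```cpp
#include <cassert>

enum class GateAction { Proceed, MergeAndWait, WaitForPeers, ZeroEntropy };

GateAction gateDecision(int aligned, int peersSeen, bool conflict)
{
    if (aligned > 0)
        return GateAction::Proceed;       // 1. peers confirm our hash
    if (conflict)
        return GateAction::MergeAndWait;  // 2. fetch/merge/rebuild, bounded wait
    if (peersSeen == 0)
        return GateAction::WaitForPeers;  // 3. nobody published yet, bounded wait
    return GateAction::ZeroEntropy;       // 4. peers published, none match
}
```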
Also:
- CSF finalizeRoundEntropy now uses shouldZeroEntropy() (quorum check)
- Two new TDD tests:
- testRngNoEntropyWithoutPeerAlignment: healthy network must agree
- testRngAlignmentRequiredForNonZeroEntropy: isolated peer must not
produce non-zero entropy that differs from majority
CSF shouldZeroEntropy() now checks reveals < quorumThreshold (80% of
UNL), matching production. MajorRevealLoss test adjusted to verify
majority group agreement rather than requiring full synchronization
(peer 0 may desync when it misses most reveals).
All 15 ConsensusRng tests now pass.
Two fixes addressing the asymmetric-view problem:
1. Convergence gate now forces one observation tick after first
publishing the entropySet before accepting. Previously a node
could publish + accept in the same tick, never seeing a peer's
different hash. The entropySetPublished_ flag ensures at least
one round-trip for proposal propagation.
2. CSF shouldZeroEntropy() now checks quorum threshold (80% of UNL),
matching production behavior. Previously it only checked empty().
Result: PartialReveals test now passes — all 6 peers converge on
the same entropy (count=6) via union merge after the observation tick.
14/15 ConsensusRng tests pass.
The CSF never self-seeded its own reveal into pendingReveals_ because
harvestRngData only processes peer proposals, not self. The real code
handles this in decorateMessage, but the CSF has no equivalent.
Add selfSeedReveal() called from the tick at reveal transition.
Both the real ConsensusExtensions and the CSF Extensions implement it.
The real code now has belt-and-suspenders: tick + decorateMessage.
This fixes CSF peers having N-1 reveals instead of N, which caused
every peer to compute entropy from a different subset.
Add a content-addressed SidecarStore to the CSF, simulating the
InboundTransactions SHAMap fetch pipeline. Tagged entries (commit
or reveal) are published by hash during buildCommitSet/buildEntropySet
and fetched by hash during fetchRngSetIfNeeded, with type-aware
union merge into the correct local pending set.
Also adds debug logging to CSF Extensions for entropy pipeline
troubleshooting.
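A minimal sketch of the store's shape, assuming a string hash key and string entries for brevity (the real pipeline uses SHAMap hashes and serialized entries): publish by hash with a type tag, fetch by hash, and union-merge into the matching pending set.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <utility>

enum class EntryType { Commit, Reveal };

struct SidecarStore
{
    // key: content hash; value: type tag plus the entries themselves
    std::map<std::string, std::pair<EntryType, std::set<std::string>>> store;

    void publish(std::string const& hash, EntryType t, std::set<std::string> entries)
    {
        store.emplace(hash, std::make_pair(t, std::move(entries)));
    }

    // Fetch by hash and union-merge into the pending set of the right type.
    bool fetchInto(std::string const& hash, EntryType want, std::set<std::string>& pending)
    {
        auto it = store.find(hash);
        if (it == store.end() || it->second.first != want)
            return false;  // missing, or type mismatch (commit vs reveal)
        pending.insert(it->second.second.begin(), it->second.second.end());
        return true;
    }
};
```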
Add a bounded pre-accept convergence check for entropySetHash,
closing the gap where two honest validators could accept with
different reveal subsets and compute different entropy (ledger fork).
After publishing the entropy set, the gate:
1. Inspects tx-converged peer positions for conflicting entropySetHash
2. Fetches differing sets via fetchRngSetIfNeeded (union merge)
3. Rebuilds and re-publishes the local entropy set after merge
4. Waits within a bounded window (2x rngREVEAL_TIMEOUT)
5. Falls back to zero entropy if conflict persists past deadline
This follows the same pattern as the existing commitSetHash conflict
handling and exportSigSetHash convergence gate. Union merge ensures
monotonic set growth — honest timing skew resolves quickly, and
hostile hash spam hits the hard deadline and falls back safely.
The "one bad actor shouldn't deny entropy" optimization (supermajority
vote) is deferred to a follow-up patch per codex recommendation.
Three new CSF tests that document expected behavior for the
entropySetHash convergence gate (not yet implemented):
1. testRngEntropyConvergesWithPartialReveals: two groups each drop
one peer's reveal, creating different quorate subsets. Must not
fork — either converge via SHAMap merge or both fall back to zero.
2. testRngEntropyFallbackOnMajorRevealLoss: one peer drops most
reveals (below quorum locally). Network must still agree.
3. testRngSingleByzantineCannotDenyEntropy: one Byzantine peer
(future: forced garbage entropySetHash) should not prevent the
other 80% from producing valid entropy.
Also adds dropRevealFrom_ test knob to CSF Peer::Extensions for
simulating asymmetric reveal delivery.
- Skip addVerifiedSignature in decorateMessage when sigBuf is empty
(sign() threw — don't mark a failed sign as "verified")
- Add XRPL_ASSERT in addVerifiedSignature and addUnverifiedSignature
requiring non-empty signature buffers
- Add XRPL_ASSERT in checkQuorumAndSnapshot verifying that every
entry in the verified set exists in the signatures map with a
non-empty buffer
Enforce the contract: source chain finalizes an export only when it
has a quorum of cryptographically verified multisignatures.
ExportSigCollector changes:
- signatureCount() now counts verified entries only
- checkQuorumAndSnapshot() returns verified-only snapshot
- snapshot() and snapshotWithSigs() return verified-only data
- buildExportSigSet (via snapshot) publishes verified-only entries
- unverifiedSignatures() returns sigs needing verification
- upgradeSignature() promotes unverified to verified
- addStandaloneSignature() marks as verified (no consensus to check)
- All add methods now set firstSeenSeq (fixes stale cleanup bug)
Export::doApply changes:
- Upgrade pass before quorum check: deserializes the inner tx (which
is always available as ctx_.tx), verifies any unverified sigs via
buildMultiSigningData + verify(), upgrades them in the collector
- Then checks quorum on verified-only count
- Assembles blob from verified-only snapshot
This means:
- Unverified sigs (relay ordering) are local cache only
- They don't count toward quorum until upgraded
- SHAMap convergence operates on verified sigs only
- Destination chain verification remains defense-in-depth
SHAMap has no leafCount() method — it was a local variable in
SHAMap.cpp, not a public API. Use std::distance(begin(), end())
on the SHAMap's ForwardRange iterators instead. Cost is O(n) but
the set is bounded by UNL size (~20-35 entries).
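The workaround generalizes to any forward range; a sketch (the template name is illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <set>

// Count entries in a forward range with std::distance. O(n), but fine for
// a set bounded by UNL size (~20-35 entries).
template <typename Range>
std::size_t leafCount(Range const& r)
{
    return static_cast<std::size_t>(std::distance(r.begin(), r.end()));
}
```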
shouldZeroEntropy() and sfEntropyCount no longer fall back to
pendingReveals_. If entropySetMap_ is null, entropy failed — the
pipeline didn't complete, and the map is the only canonical source.
pendingReveals_ is now strictly an internal staging area for the
commit/reveal pipeline. All final entropy decisions flow through
entropySetMap_, which is the consensus-agreed set.
The H2 entropy fix switched the digest computation to entropySetMap_
but shouldZeroEntropy() and sfEntropyCount still used pendingReveals_.
Since pendingReveals_ can diverge from the published entropySetMap_
(late reveals mutate it after the map hash is published), two nodes
agreeing on the same entropySetHash could still build different
ttCONSENSUS_ENTROPY pseudo-transactions.
Now shouldZeroEntropy() checks entropySetMap_ leaf count when the map
is available, and sfEntropyCount uses the map's leaf count. Both
fall back to pendingReveals_ only during pipeline stages before the
map is built.
Replace the ambiguous addSignature/hasSignature API with clearly
named methods that make verification state explicit:
addVerifiedSignature() — sig passed buildMultiSigningData + verify()
addUnverifiedSignature() — trusted source but no multisign check yet
addStandaloneSignature() — pubkey-only for standalone/test mode
hasVerifiedSignature() — only returns true for verified sigs
Unverified sigs (relay ordering fallback) are no longer treated as
verified by the cache. When the same sig is encountered again via a
path that CAN verify (e.g. SHAMap merge after the tx arrives), the
verification runs and upgrades it to verified.
addUnverifiedSignature() won't overwrite a verified sig, preventing
downgrade. SigEntry tracks verified validators in a separate set.
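A reduced sketch of the state-tracking contract, assuming string validator IDs and byte-vector sigs for illustration (the real `SigEntry` is richer): unverified adds never overwrite verified entries, and upgrades are one-way.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

struct SigCache
{
    std::map<std::string, std::vector<unsigned char>> sigs;  // validator -> sig blob
    std::set<std::string> verified;                          // verified validators

    void addVerifiedSignature(std::string const& v, std::vector<unsigned char> sig)
    {
        sigs[v] = std::move(sig);
        verified.insert(v);
    }

    void addUnverifiedSignature(std::string const& v, std::vector<unsigned char> sig)
    {
        if (verified.count(v))
            return;  // never overwrite a verified sig (no downgrade)
        sigs[v] = std::move(sig);
    }

    // Promote an unverified entry once verify() succeeds on another path.
    void upgradeSignature(std::string const& v)
    {
        if (sigs.count(v))
            verified.insert(v);
    }

    bool hasVerifiedSignature(std::string const& v) const
    {
        return verified.count(v) != 0;
    }
};
```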
Revert the hard reject when ttEXPORT is not in the open ledger.
Under relay ordering, a node can receive a proposal with export sigs
before the ttEXPORT tx itself arrives. Dropping these sigs loses a
valid validator contribution for the entire round with no recovery
path until terRETRY_EXPORT on the next round.
Post C1+C2, the proposal-level authentication is sufficient trust:
checkSign() verified the sender holds the private key, and sender
binding verified the embedded pubkey matches. Store the sig and
let the multisign content be verified on the destination chain.
The collector's stale cleanup (256 ledgers) bounds retention.
When the tx IS in the open ledger (common case), the multisign sig
is still fully verified via buildMultiSigningData + verify().
H2: Compute final entropy from the agreed-upon entropySetMap_ SHAMap
rather than from the local pendingReveals_ in-memory map.
Previously, two nodes with different reveal subsets at timeout would
compute different entropy from their local pendingReveals_ maps,
despite both passing haveConsensus() (which only checks txSetHash).
This could cause a ledger fork.
Now the entropy computation reads directly from the entropySetMap_
whose hash was published in proposals and converged via SHAMap
fetch/merge. Nodes that agree on entropySetHash deterministically
produce the same entropy regardless of local pendingReveals_ state.
If entropySetMap_ is null (bootstrap skip, pipeline failure), the
existing shouldZeroEntropy() fallback handles it.
Reject proposals with more than ExportLimits::maxPendingExports (8)
export sig entries. Honest validators attach at most one sig per
pending export, bounded by the same limit. Prevents DoS via
proposals with millions of entries triggering lock contention on
the validator list and collector mutexes.
Add hasSignature() to ExportSigCollector — checks if a verified sig
already exists for a given (txHash, validator) pair. Both the
proposal ingestion path and the SHAMap merge path now check this
before calling verify(), avoiding redundant ed25519 verification
when the same sig arrives via multiple paths.
No external sig cache exists in rippled, so the collector itself
serves as the verification cache: once a sig is stored (always
post-verify), subsequent encounters skip the crypto work.
Three fixes from codex review:
1. Remove unsafe fallback in proposal ingestion path: reject export
sigs when the ttEXPORT tx is not in the open ledger instead of
storing them unverified. The tx must be in the open ledger for
validators to have signed it, so this is not a legitimate case.
2. Add full sig verification to the SHAMap merge path
(onAcquiredSidecarSet): verify each export sig entry against
buildMultiSigningData + verify() before storing in the collector.
Previously this path only checked trusted() on the pubkey,
allowing a malicious UNL validator to publish a sidecar set with
forged sigs for other validators.
3. Close cluster mode bypass: always call checkSign() and gate export
sig harvesting on sigValid, even when cluster() is true. Cluster
trust is for relay/resource charging, not for accepting on-chain
cryptographic artifacts.
C3: Cryptographically verify each export signature blob against the
inner transaction's signing data before storing in the collector.
Looks up the ttEXPORT tx from the open ledger, reconstructs the
signing data via buildMultiSigningData, and calls verify().
If the tx isn't in our open ledger yet (timing/relay), the sig is
stored unverified as a fallback — it can be verified later at the
SHAMap merge path or will be rejected at Export::doApply if invalid.
This runs on the jtPROPOSAL_t job queue thread (not the IO strand
or transactor), so the verify() cost has no impact on consensus
critical path performance.
C2 hardening: validate all export sig blobs before committing any,
preventing partial state if a later blob fails the sender binding
check. Also moves the trusted() check before the loop since senderPK
is constant.
H1: Add checkQuorumAndSnapshot() to ExportSigCollector that performs
the quorum threshold check and signature snapshot under a single lock
acquisition. Export::doApply now uses this instead of separate
signatureCount() + snapshotWithSigs() calls, eliminating the TOCTOU
window where concurrent overlay threads could mutate the collector
between the two operations.
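The TOCTOU fix reduces to a familiar pattern: count and snapshot under one lock acquisition. A sketch with illustrative types (the real collector stores structured sig entries, not strings):

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct Collector
{
    std::mutex m;
    std::vector<std::string> sigs;

    void add(std::string s)
    {
        std::lock_guard<std::mutex> lk(m);
        sigs.push_back(std::move(s));
    }

    // Returns a snapshot only if quorum is met, atomically w.r.t. add():
    // no overlay thread can mutate the set between the count and the copy.
    std::optional<std::vector<std::string>> checkQuorumAndSnapshot(std::size_t quorum)
    {
        std::lock_guard<std::mutex> lk(m);
        if (sigs.size() < quorum)
            return std::nullopt;
        return sigs;  // copied under the same lock as the count
    }
};
```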
C1: Move onTrustedPeerMessage() from the synchronous onMessage(TMProposeSet)
handler into checkPropose(), after checkSign() verifies the proposal's
cryptographic signature. Previously, export sigs were ingested before
signature verification, allowing any peer to inject forged sigs by
spoofing nodepubkey to a trusted validator's key.
C2: Add sender binding in onTrustedPeerMessage() — each export sig
blob's embedded validator pubkey must match the proposal sender's
nodepubkey. Reject the entire proposal's export sigs on any mismatch,
preventing a compromised validator from impersonating other validators
to single-handedly forge quorum.
Move ConsensusExtensionsTick.h from xrpld/app/consensus/ to
xrpld/consensus/ — it's a pure template with no app-layer deps.
Extract Peer::Extensions::onTick() definition into test/csf/PeerTick.h
so Peer.h no longer includes from xrpld/app/.
Eliminates the test.csf > xrpld.app levelization edge.
Add --explain flag to levelization.py for tracing dependency edges.
Replace the duplicated throwaway-STTx + real-STTx pattern with a
single STObject: set all fields including fee=0, serialise to compute
the fee, patch the fee, then serialise once into the final STTx.
20 lines shorter, no duplication.
testXportPayment now asserts the full emitted tx lifecycle:
- hook fires with ACCEPT, emitCount=1, returnCode=0
- sfHookEmissions present with 1 entry
- ltEMITTED_TXN in AffectedNodes
- emitted dir is not empty
- after close, emitted ttEXPORT appears in closed ledger
Also add FOCUSED_TEST env var gate for fast iteration during
development (set FOCUSED_TEST=1 to run only focused_test()).
Move export scenario tests from suite.yml into their own
export-suite.yml file. The defaults already set CE+Export features
so individual test entries no longer need to repeat them.
Add SuiteLogsWithOverrides test utility: a Logs subclass that routes
specified journal partitions to stderr (always visible) while keeping
others on suite_.log (only on failure). Useful for debugging specific
subsystems during test development.
Mutating the fee via const_cast after STTx construction left a stale
cached getTransactionID(). When the emitted ttEXPORT was serialised
into the emitted directory and later deserialised, the round-tripped
txid differed from the original, causing tefNONDIR_EMIT in
Transactor::preclaim (the emitted dir entry was keyed with the stale
hash).
Build a throwaway STTx with fee=0 to calculate the fee size, then
construct the real STTx with the correct fee from the start.
Use sfExportedTxn (OBJECT) instead of sfBlob (VL) for the multisigned
transaction in sfExportResult metadata. This renders as readable JSON
with all fields visible (Account, Signers, etc.) instead of an opaque
hex blob.
Also compute the tx hash directly via getHash(HashPrefix::transactionID)
on the STObject instead of serializing/deserializing through STTx.
Export::doApply now builds the fully multisigned inner tx and stores
it as sfBlob in sfExportResult metadata. In standalone mode, the node
signs directly with its own validator keys (no consensus needed).
Key changes:
- ExportSigCollector stores actual multisign signatures, not just pubkeys
- RCLConsensus proposal attachment computes real multisign sigs over inner tx
- PeerImp harvests variable-length sig entries from proposals
- Export::doApply assembles Signers array, builds multisigned blob first,
then uses its hash for the shadow ticket (getTransactionID includes all
fields including Signers)
- Import skips OperationLimit and signing key checks for export callback
path (sfTicketSequence present) — shadow ticket proves the relationship
- Full Export→XRPL→Import round-trip test: export on Xahau, submit
multisigned blob to XRPL (with matching SignerList), build XPOP,
import back, verify shadow ticket consumed
Shadow tickets store the exported transaction hash. Import now verifies
the XPOP's inner tx hash matches, preventing use of a different XPOP
with the same TicketSequence.
- Validate incoming export sig pubkeys against trusted UNL (PeerImp)
- Fix handleAcquiredRngSet misclassifying export sets when local map is null
- Add stale-entry cleanup (cleanupStale with 256-ledger timeout)
- Gate sig attachment on featureExport amendment (RCLConsensus)
- Fail with tefINTERNAL if ExportResult metadata can't be written
- Make export and cancel mutually exclusive (reject both in preflight)
- Remove dead ExportLimits.h include
Without CE: require unanimity (100% UNL) to avoid non-deterministic
quorum disagreement. With CE: use standard 80% quorum via
calculateQuorumThreshold (SHAMap convergence will ensure agreement).
Standalone/unit tests: require 1 sig.
Replace the old ltEXPORTED_TXN + ttEXPORT_FINALIZE (validation-based
sigs, TxQ injection) approach with a retriable ttEXPORT that collects
validator signatures via TMProposeSet during consensus.
Added:
- terRETRY_EXPORT: keeps tx in retry set across ledger boundaries
- tecEXPORT_EXPIRED (200): LLS expiry frees sequence cleanly
- sfExportResult (OBJECT 98): signed export result in tx metadata
- ExportSigCollector: minimal thread-safe sig tracker
- Proposal sig attachment (RCLConsensus) + harvesting (PeerImp)
- exportSignatures field in TMProposeSet (ripple.proto)
- Metadata plumbing (TxMeta, ApplyViewImpl, ApplyStateTable)
- Hook xport() now emits ttEXPORT via normal emitted txn path
Removed:
- ttEXPORT_FINALIZE (type 90) pseudo-tx and Change::applyExportFinalize
- ltEXPORTED_TXN ledger entry and exportedDir/exportedTxn keylets
- ExportSignatureCollector (replaced by ExportSigCollector)
- TxQ export injection (quorum check + rawTxInsert)
- Validation-based export signing in RCLConsensus
- Application::getExportSignatureCollector
Verified on 5-node testnet: golden path (same-ledger finalization with
ExportResult in metadata), degraded path (tecEXPORT_EXPIRED on sub-quorum),
and hook xport() path (emitted ttEXPORT with shadow ticket creation).
Add sfCancelTicketSequence (UINT32 field 101) to ttEXPORT, allowing
users to cancel shadow tickets via a transaction. Both sfExportedTxn
and sfCancelTicketSequence are optional — at least one must be present.
This allows export, cancel, or both in a single transaction.
Test: create shadow ticket via export, cancel via sfCancelTicketSequence,
verify ticket is gone and owner reserve is freed.
Exported transactions must use TicketSequence (with Sequence=0)
because a bounced tx on the destination chain would jam sequential
sequence numbers. This is enforced in both the hook xport() API
and the Export transactor via ExportLedgerOps::validateTicketSequence().
Adds test: ttEXPORT rejects export without TicketSequence.
Updates existing test hooks to include TicketSequence in exported txns.
When the inner XPOP transaction has sfTicketSequence, Import now
takes the export callback path: consume the shadow ticket via
ExportLedgerOps::cancelShadowTicket() and return. No B2M balance
crediting, no account creation. Hooks fire normally and can inspect
the result via xpop_slot().
The B2M path is unchanged for non-ticket imports.
Also migrates the shadow ticket check in preclaim from the old
hookState namespace approach to keylet::shadowTicket().
Removes the unused shadowTicketNamespace constant.
Introduce shadow tickets for export replay protection:
- ltSHADOW_TICKET ledger entry: account-owned, keyed by
account + ticket sequence. Fields: sfAccount, sfTicketSequence,
sfTransactionHash, sfLedgerSequence, sfOwnerNode.
- ExportLedgerOps::createShadowTicket(): creates shadow ticket
when exported tx has sfTicketSequence. Charges owner reserve.
Called from both hook xport() path and Export transactor.
- ExportLedgerOps::cancelShadowTicket(): deletes shadow ticket,
frees reserve. Used by xport_cancel hook API.
- xport_cancel(ticket_seq) hook API: allows hooks to cancel
shadow tickets for exports that will never get a callback.
- InvariantCheck: add ltSHADOW_TICKET to valid entry types.
- Test: verify shadow ticket creation with correct fields and
owner count bump via ttEXPORT with TicketSequence.
Rename the existing ttEXPORT pseudo-tx to ttEXPORT_FINALIZE (type 90)
to make room for a user-submittable ttEXPORT (type 91).
ttEXPORT allows non-hook users to submit export transactions directly,
creating the same ltEXPORTED_TXN entries that the hook xport() API
creates inline.
Extract shared logic into ExportLedgerOps.h:
- createExportedTxn(): creates ltEXPORTED_TXN, enforces directory cap
- validateNetworkID(): self-target and unconfigured guards
- validateExportAccount(): account ownership check
Both the hook API (HookAPI.cpp) and the Export transactor now call
into ExportLedgerOps, eliminating duplicated validation and ledger
mutation code.
If the node's NETWORK_ID is 0 (default/unconfigured) and the exported
transaction has no sfNetworkID field, we can't distinguish self-targeting
from legitimate cross-chain export. Reject to be safe.
Also adds exportTestConfig() helper and test for the unconfigured case.
Verify that xport() rejects exported transactions whose sfNetworkID
matches the local network. The hook builds a Payment with
NetworkID=21337 (matching the test env), and the guard correctly
returns EXPORT_FAILURE causing tecHOOK_REJECTED.
Also fix log level for the guard rejection to warn (not trace).
Explicitly forbid exported transactions whose sfNetworkID matches the
local network's ID. An exported txn re-executing on its origin chain
could cause exploits or logic issues.
The check is intentionally non-mandatory: XRPL mainnet (the primary
export target) doesn't use NetworkID, so absent = allowed.
Both xrpld.overlay and xrpl.hook depend on xrpl.protocol, so placing
the header there avoids introducing a new xrpld.overlay > xrpl.hook
levelization dependency.
- max_export per hook: 4 → 2
- maxPendingExports: cap exported directory at 8 entries (tecDIR_FULL)
- clamp inbound signature processing in PeerImp to directory cap
The directory cap is the root DoS constraint: each pending export
requires every validator to sign and broadcast every round. Inbound
processing and signing throughput are transitively bounded by it.
Inline comments at all 6 conflict points guiding the maintainer
through the expected merge conflicts when sync-2.5.0 lands:
ledgerMAX_CONSENSUS const, bootstrap params, calculateQuorumThreshold,
effectiveParms+stalled in haveConsensus, DisputedTx::stalled() j/clog
params, and testDisputes placement.
Cherry-pick of ripple/rippled@86ef16dbeb ("Fix: Don't flag consensus
as stalled prematurely (#5627)"). Not yet in any xahau sync branch.
Fixes false stall detection when there are no disputed transactions:
std::ranges::all_of on an empty set is vacuously true, so consensus
was incorrectly flagged as stalled. Adds !result_->disputes.empty()
guard.
Also adds diagnostic logging to DisputedTx::stalled() and the
stall detection path in haveConsensus(), and promotes the
"Need validated ledger" log from debug to warn.
Cherry-pick of ripple/rippled@d22a5057b9 / xahau dd085e5d8 ("Prevent
consensus from getting stuck in the establish phase (#5277)"), resolved
against our RNG pipeline and bootstrap fast-start changes.
Upstream adds three layered anti-stall mechanisms:
- Stateful per-dispute avalanche state machine (init→mid→late→stuck)
- Stall detection: declares consensus when all disputes individually settled
- Hard expiration: clamp(10× prev round, 15s, 120s) wall-clock safety net
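The hard-expiration bound is a direct `std::clamp`; a sketch (function name illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>

using namespace std::chrono_literals;

// Wall-clock safety net: 10x the previous round time, clamped to [15s, 120s].
std::chrono::milliseconds hardExpiration(std::chrono::milliseconds prevRoundTime)
{
    return std::clamp<std::chrono::milliseconds>(10 * prevRoundTime, 15s, 120s);
}
```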
Conflict resolution:
- ConsensusParms.h: kept both avalanche state machine (const members,
avMIN_ROUNDS, avSTALLED_ROUNDS, getNeededWeight) and our bootstrap
params (bootstrapRoundTimeSeed, bootstrapStableRoundsRequired).
ledgerMAX_CONSENSUS left non-const for bootstrap override.
- Consensus.h: pass both stalled flag and effectiveParms to checkConsensus.
Stall check uses original parms, bootstrap override only affects max
consensus timeout.
- Consensus_test.cpp: kept all 12 RNG tests and new testDisputes test.
Use an explicit 5s cap instead of dividing the default 15s.
5s is the sweet spot: long enough for peers to exchange proposals
and converge naturally, short enough to avoid wasted time.
Shorter values (e.g. 3.75s) cause nodes to hit reachedMax before
peers converge, cascading into slower subsequent rounds.
Seed prevRoundTime_ to 3s instead of 15s on first round, override
idle interval to bypass closeTimeResolution (10-30s on early ledgers),
and halve ledgerMAX_CONSENSUS during bootstrap. Auto-disables after 3
consecutive rounds with UNL quorum participation.
Cuts 5-node testnet cold-start from ~28s to ~13s.
Also adds projected-source markers to TxQ, NetworkOPs, and Submit for
the transaction-submission documentation template.
Consolidate the repeated entropy fallback condition
(entropyFailed || no reveals || sub-quorum reveals) into a single
method. Fixes EntropyCount field reporting non-zero when the digest
was correctly zeroed due to sub-quorum reveals.
Sub-quorum reveals (e.g. 3/4 threshold) were producing real entropy,
allowing a minority of validators to disproportionately influence the
output. Both injectEntropyPseudoTx and buildExplicitFinalProposalTxSet
now fall back to zero entropy when reveals < quorumThreshold().
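The consolidated condition can be sketched as one predicate. The 80%-of-UNL figure is from the commits above; the round-up integer math and the names are assumptions for illustration.

```cpp
#include <cassert>
#include <cstddef>

// ceil(0.8 * unlSize) in integer arithmetic (rounding direction assumed).
std::size_t quorumThreshold(std::size_t unlSize)
{
    return (unlSize * 4 + 4) / 5;
}

// Single fallback condition: pipeline failure, no reveals, or sub-quorum
// reveals all force zero entropy.
bool shouldZeroEntropy(bool entropyFailed, std::size_t reveals, std::size_t unlSize)
{
    return entropyFailed || reveals == 0 || reveals < quorumThreshold(unlSize);
}
```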
When prevProposers < quorumThreshold, the network is still converging
and RNG can only produce zero entropy. Skip the commit/reveal pipeline
to avoid PIPELINE_TIMEOUT and conflict-wait delays that compound across
staggered startup rounds.
hasQuorum() and getExportsWithQuorum() were using raw signerMap.size()
which includes unverified signatures. TxQ could inject a ttEXPORT
pseudo-tx that then fails the stricter verified-signature check in
Change::applyExport(). Use verifiedSignatureCount() instead so TxQ
only injects when cryptographically verified quorum is actually met.
Also add cmake plumbing for enhanced logging: link date::date-tz when
available and enable BEAST_ENHANCED_LOGGING for Debug builds.
Quorum fix:
- Rename expectedProposers_ → likelyParticipants_ to clarify role
- Fix commit quorum to 80% of active UNL snapshot (not shrinkable by
recent proposer count, which was allowing 2/3 to pass as quorum)
- hasQuorumOfCommits() now uses simple threshold check only
- Add CSF test: persistent loss does not shrink quorum
Log level cleanup:
- Demote ~30 RNG/STALLDIAG per-peer/per-tick lines from info/debug to
debug/trace across Consensus.h and RCLConsensus.cpp
- Principle: per-peer/per-tick → trace; state transitions → debug;
milestones → info
- Reduces testnet log volume by ~93%
date::current_zone() can throw if the timezone database is unavailable
or misconfigured (e.g. minimal container images). Fall back to UTC
formatting so enhanced logging does not make startup fatal.
Clarify inline that seq=3 publish can carry unchanged txSetHash while still providing extra entropySetHash delivery/fetch opportunity under packet loss or reordering.
Keep explicit final proposal as an opt-in experimental path with implicit mode as default.
Add inline rationale/TBD notes, extend stall diagnostics, and cover runtime-config + CSF txn-path behavior with tests.
Add Builds/levelization/levelization.py for fast local iteration and semantic comparison against canonical shell output via --compare-to.
Keep Builds/levelization/levelization.sh as canonical path, and update levelization workflow to fail if python output diverges from shell-generated results.
Also harden interactive-shell detection in levelization.sh for portability and document local usage in README.
Decouple RCLConsensus.h from Consensus.h by forward-declaring Consensus and storing Consensus<Adaptor> behind std::unique_ptr, moving thin wrappers out-of-line into RCLConsensus.cpp.
Also remove direct RCLConsensus.h include from NetworkOPs.h (forward declare), and add explicit includes in DatagramMonitor.h and ServerDefinitions.cpp to replace transitive dependencies.
Keep RNG fast-path behavior unchanged in Consensus.h; build and ripple.consensus.Consensus remain green.
Keep entropy-set recovery path but elect a deterministic single broadcaster (lowest NodeID among tx-converged participants) instead of every proposer broadcasting entropySetHash.
This lowers steady-state proposal chatter while preserving liveness for lagging peers that need entropy-set fetch/merge.
Expose rng_claim_drop_pct in runtime config (RPC + env) as a clamped 0-100 percentage used by RNG claim-drop testing.
Include RuntimeConfig RPC tests for round-trip and clamping behavior.
Track active RNG round sequence for fetched set validation so lagging observers can merge current-round commit sets instead of rejecting them as closed+1 out-of-round.
Refresh/re-publish commitSetHash after fetch-merge conflicts and publish entropySetHash in ConvergingReveal so peers can recover reveal sets.
Add inline tradeoff notes: extra proposal traffic is accepted to preserve consensus safety/liveness under packet loss or drop injection.
Harden acquired RNG merge paths with strict entry typing, trusted key/node binding, round-sequence gating, reveal-to-commit linkage checks, and stale reveal/proof invalidation on commitment changes.
Adjust proposer expectation logic so non-proposing observers are not counted as expected committers, and add a CSF regression test covering observer self-commit exclusion.
Reject mixed commit/reveal maps, enforce per-entry type checks, bind node identity to trusted validator keys, and gate acquired entries to the active round.
Also verify acquired reveals against stored commitments and clear stale reveal/proof state when commitments change.
Move core xport_reserve and xport implementations from applyHook.cpp
DEFINE_HOOK_FUNCTION wrappers into the decoupled HookAPI class, following
the same pattern used for etxn_reserve and emit.
- Error on unknown message_types instead of silently widening scope
- Make messageCategories optional so per-peer can override global filter
to "all categories" (nullopt=inherit, empty set=explicitly all)
- Clamp send_drop_pct to 0-100% range
- Add STARTDIAG: logging for consensus startup diagnostics
- Add 3 test cases (11 total, 58 assertions)
- Fix convergence regression caused by 2.4.0 merge: replace
stringIsUint256Sized(currenttxhash) with size() < uint256::size()
to accept extended proposals (>32 bytes) containing RNG fields
- Add message_types filter to RuntimeConfig for targeting specific
protocol message categories (proposal, validation, transaction, etc.)
- Add appliesTo() method and messageCategories set to ConfigVals
- Add category name mapping helpers in RPC handler
- Add 2 test cases for message type filtering (8 total)
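The nullopt-versus-empty-set semantics above can be sketched as follows. `ConfigVals`, `appliesTo()`, and `messageCategories` are named in the commit; the body below is an assumption about how the three states behave:

```cpp
#include <optional>
#include <set>
#include <string>

// Hypothetical sketch of the three-state filter described above:
// nullopt = inherit the global filter, empty set = explicitly all
// categories, non-empty set = only the listed categories.
struct ConfigVals
{
    std::optional<std::set<std::string>> messageCategories;

    bool appliesTo(
        std::set<std::string> const& globalFilter,
        std::string const& category) const
    {
        // nullopt: fall back to the global filter (empty global = all)
        auto const& filter =
            messageCategories ? *messageCategories : globalFilter;
        // empty set: applies to every category
        return filter.empty() || filter.count(category) > 0;
    }
};
```

With this shape, a per-peer entry can widen a restrictive global filter back to "all categories" by supplying an explicit empty set, which plain set-inheritance cannot express.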
Add a generic RuntimeConfig service for runtime-configurable parameters,
initially supporting artificial send delays and packet drops for testing
consensus behavior on local testnets.
- RuntimeConfig class with atomic fast-path gate (zero cost when inactive)
- Per-peer targeting via "*" (global) and "ip:port" keys with inheritance
- Pre-merged caching at write time for single-lookup read path
- Admin RPC handler `runtime_config` (set/clear/clear_all/get)
- Env var support: XAHAU_RUNTIME_CONFIG (JSON) or XAHAU_SEND_* vars
- PeerImp::send() integration with delay timer and probabilistic drops
- RPC handler test covering all operations and merge behavior
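The atomic fast-path gate and clamped probabilistic drop can be illustrated with a minimal sketch. The class and member names here are hypothetical; only the behaviors (relaxed-cost gate when inactive, 0-100 clamp, probabilistic drop) come from the commit:

```cpp
#include <atomic>
#include <random>

// Illustrative sketch: a single atomic load guards the read path so the
// feature is near-zero cost when no config is active, and send_drop_pct
// is clamped to 0-100 before being applied as a drop probability.
class RuntimeConfigSketch
{
    std::atomic<bool> active_{false};
    int sendDropPct_ = 0;  // guarded by a write-side lock in real code

public:
    void setDropPct(int pct)
    {
        // clamp to the valid 0-100 range, as the commit describes
        sendDropPct_ = pct < 0 ? 0 : (pct > 100 ? 100 : pct);
        active_.store(sendDropPct_ > 0, std::memory_order_release);
    }

    int dropPct() const
    {
        return sendDropPct_;
    }

    // fast path: one atomic load, then exit, when inactive
    bool shouldDrop(std::mt19937& rng)
    {
        if (!active_.load(std::memory_order_acquire))
            return false;
        std::uniform_int_distribution<int> dist(0, 99);
        return dist(rng) < sendDropPct_;
    }
};
```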
Due to rounding, the LPTokenBalance of the last LP might not match the LP's trustline balance. This was fixed for `AMMWithdraw` in `fixAMMv1_1` by adjusting the LPTokenBalance to be the same as the trustline balance. Since `AMMClawback` also performs a withdrawal, we need to adjust LPTokenBalance in `AMMClawback` as well.
This change includes:
1. Refactored the `verifyAndAdjustLPTokenBalance` function in `AMMUtils`, which both `AMMWithdraw` and `AMMClawback` call to adjust LPTokenBalance.
2. Added the unit test `testLastHolderLPTokenBalance` to cover the scenario.
3. Modified the existing unit tests for `fixAMMClawbackRounding`.
Add single-sign rejection check in Change::applyExport() matching
rippled's multi-sign validation: SigningPubKey must be present but
empty, TxnSignature must not be present.
Fix Export_test.cpp hook to encode an empty VL blob for SigningPubKey
instead of 33 zero bytes (AI slop from export-uvtxn branch).
Extract duplicated (n * 80 + 99) / 100 ceiling quorum formula into shared
calculateQuorumThreshold() in ConsensusParms.h, matching the standard
ValidatorList::calculateQuorum(). Used by ExportSignatureCollector,
Change.cpp, and RCLConsensus.cpp.
Revert Import.cpp quorum from ceiling back to original truncating formula
(totalValidatorCount * 0.8) since Import handles XPOP imports, not the
new Export feature. Added TODO for future upgrade.
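The two quorum formulas contrasted above can be written out directly. The function names follow the commit; the bodies are a sketch of the stated arithmetic:

```cpp
#include <cstddef>

// The shared ceiling formula: ceil(n * 80 / 100) in integer arithmetic.
constexpr std::size_t
calculateQuorumThreshold(std::size_t n)
{
    return (n * 80 + 99) / 100;
}

// The original truncating formula that Import.cpp keeps for XPOP imports.
constexpr std::size_t
truncatingQuorum(std::size_t n)
{
    return static_cast<std::size_t>(n * 0.8);
}
```

The two disagree whenever `n * 0.8` is fractional, e.g. for n = 6 the ceiling formula requires 5 signatures while the truncating formula requires only 4.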
- Remove unnecessary cbak() stubs from ConsensusEntropy test hooks and
recompile WASM (cbak is optional per Guard.h validator)
- Restore RCLCxPeerPos::render() lost during merge (delegates to
ConsensusProposal::render())
- Fix Change.cpp applyAmendment() fixInnerObjTemplate2 reversion:
use STObject::makeInnerObject() and bracket assignment (fbcff932)
- Restore txq-export-quorum-check documentation marker in TxQ.cpp
* Add AMM bid/create/deposit/swap/withdraw/vote invariants:
- Deposit, Withdrawal invariants: `sqrt(asset1Balance * asset2Balance) >= LPTokens`.
- Bid: `sqrt(asset1Balance * asset2Balance) > LPTokens` and the pool balances don't change.
- Create: `sqrt(asset1Balance * asset2Balance) == LPTokens`.
- Swap: `asset1BalanceAfter * asset2BalanceAfter >= asset1BalanceBefore * asset2BalanceBefore`
and `LPTokens` don't change.
- Vote: `LPTokens` and pool balances don't change.
- All AMM and swap transactions: amounts and tokens are greater than zero, except on withdrawal if all tokens
are withdrawn.
* Add AMM deposit and withdraw rounding to ensure AMM invariant:
- On deposit, tokens out are rounded downward and deposit amount is rounded upward.
- On withdrawal, tokens in are rounded upward and withdrawal amount is rounded downward.
* Add Order Book Offer invariant to verify consumed amounts. Consumed amounts are less than the offer.
* Fix Bid validation. `AuthAccount` can't have duplicate accounts or the submitter account.
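The deposit/withdraw invariant stated above can be sketched numerically. The real checks use the ledger's `Number` type with directed rounding; `double` is used here purely for illustration:

```cpp
#include <cmath>

// Sketch of the deposit/withdraw invariant quoted above:
//   sqrt(asset1Balance * asset2Balance) >= LPTokens
// Rounding tokens out down (deposit) and tokens in up (withdrawal)
// keeps the pool on the safe side of this inequality.
bool
holdsDepositWithdrawInvariant(
    double asset1Balance,
    double asset2Balance,
    double lpTokens)
{
    return std::sqrt(asset1Balance * asset2Balance) >= lpTokens;
}
```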
sfHookExportCount was at field code 23, colliding with the mainline
rippled UINT16 range. Move to 98 in the Xahau-reserved range.
Also reorder sfExportedTxn (90) before sfAmountEntry (91) for
consistency.
The merge with origin/dev accidentally reverted 19 XRPL_ASSERT() calls
back to plain assert() and 1 UNREACHABLE() back to assert(0). These
macros provide descriptive diagnostic messages on failure and are the
project convention since the rippled 2.4.0 migration.
Files fixed:
- Consensus.h: 9 XRPL_ASSERT reversions
- RCLConsensus.cpp: 5 XRPL_ASSERT reversions
- BuildLedger.cpp: 3 XRPL_ASSERT reversions
- Change.cpp: 1 UNREACHABLE + 1 XRPL_ASSERT reversion
The merge with origin/dev accidentally stripped all CLOG diagnostic
statements from the consensus code path. This restores the clog
parameter to internal Consensus.h functions (checkLedger, phaseOpen,
closeLedger, updateOurPositions, handleWrongLedger, leaveConsensus,
createDisputes) and re-adds all 46 CLOG statements that provide
per-round diagnostic detail for phase transitions, convergence
progress, dispute tracking, and pause decisions.
Also restores the origin/dev structure of Consensus.cpp by removing
the anonymous-namespace wrapper and forwarding overloads that were
merge artifacts.
Add missing ttEXPORT/ttCONSENSUS_ENTROPY pseudo transaction fields required by runtime logic and ensure corresponding ledger entries carry threading/sequence fields.
Handle ttEXPORT and ttCONSENSUS_ENTROPY in hook stakeholder routing to avoid Unknown transaction type assertion during ledger close.
Handle ltEXPORTED_TXN and ltCONSENSUS_ENTROPY in LedgerEntryTypesMatch so creating/destroying these pseudo-ledger entries does not trigger XRP balance invariant violations.
Resolve the origin/dev post-2.4.0 sync conflicts across the xrpld path migration and macro-based protocol registration changes.
Re-apply export/RNG integration on top of the new structure, including consensus/build plumbing, tx/apply paths, peer ingest, and tests.
Regenerate hook headers and restore a green build via x-run-tests (Export_test build path).
- unify validator trust checks into isExportValidatorTrusted() preferring
UNLReport with local trust fallback
- add last-line-of-defense sig verification in Change::applyExport()
requiring 80% (ceil) verified trusted UNL signatures
- filter untrusted export signatures at ingestion in PeerImp
- fix Import quorum from floor(n*0.8) to ceil(n*80%) matching export side
Single amendment flag for both features. numFeatures 94 → 93.
Exclude featureExportRNG from default test set to prevent
ConsensusEntropy pseudo-tx injection from breaking existing tests.
Resolve 14 conflicts keeping both sides. Renumber TOO_LITTLE_ENTROPY
from -46 to -48 to avoid collision with export error codes.
Fix sfHookExportCount to soeOPTIONAL in InnerObjectFormats (only set
when featureExportRNG is enabled).
Combine multiple related debug log data points into a single
message. Allows quick correlation of events that
previously were either not logged or, if logged, strewn
across multiple lines, making correlation difficult.
The Heartbeat Timer and consensus ledger accept processing
each have this capability.
Also guarantees that log entries will be written if the
node is a validator, regardless of log severity level.
Otherwise, the level of these messages is at INFO severity.
* Add logging for amendment voting decision process
* When counting "received validations" to determine quorum, count the number of validators actually voting, not the total number of possible votes.
The current comment in the example cfg file incorrectly mentions both "may" and "must". This change fixes this comment to clarify that the default port of hosts is 2459 and that specifying it is therefore optional. It further sets the default port to 2459 instead of the legacy 51235.
This change enhances the filtering in the ledger, ledger_data, and account_objects methods by also supporting filtering by the canonical name of the LedgerEntryType using case-insensitive matching.
Rewrites the code so that the lock is not held during the callback. Instead it locks twice, once before and once after. This is safe due to the structure of the code, but the state is nonetheless re-checked after the second lock. This allows mutex_ to be changed back to a regular mutex.
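The lock-twice restructuring described above can be sketched as follows. The class and member names are hypothetical; only the pattern (drop the lock across the callback, re-check state under the second lock) comes from the commit:

```cpp
#include <functional>
#include <mutex>
#include <vector>

// Sketch of the "lock twice" pattern: the lock is released before the
// callback runs, so the callback may safely take mutex_ itself, and the
// assumption made across the unlocked window is re-verified afterwards.
class Worker
{
    std::mutex mutex_;  // can stay a regular (non-recursive) mutex
    std::vector<int> items_;

public:
    void add(int v)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        items_.push_back(v);
    }

    std::size_t size()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return items_.size();
    }

    void process(std::function<void(int)> const& callback)
    {
        int item;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if (items_.empty())
                return;
            item = items_.back();
        }
        // Callback runs without the lock held.
        callback(item);
        {
            std::lock_guard<std::mutex> lock(mutex_);
            // Re-check: the collection may have changed while unlocked.
            if (!items_.empty() && items_.back() == item)
                items_.pop_back();
        }
    }
};
```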
In PeerImp.cpp, if a function is a message handler (onMessage) or is called directly from a message handler, it should use fee_, since the charge function is applied when the handler returns (OnMessageEnd). If the function is not a message handler, such as a job queue item, it should keep calling charge directly.
If the permissioned domains amendment XLS-80 is enabled before credentials XLS-70, then the permissioned domain users will not be able to match any credentials. The changes here prevent the creation of any permissioned domain objects if credentials are not enabled.
Make `simulate` RPC easier to use:
* Prevent the use of `seed`, `secret`, `seed_hex`, and `passphrase` fields (to avoid confusing with the signing methods).
* Add autofilling of the `NetworkID` field.
- Also get the branch name.
- Use rev-parse instead of describe to get a clean hash.
- Return the git hash and branch name in server_info for admin
connections.
- Include git hash and branch name on separate lines in --version.
* Resolves an issue introduced in #5111, which inadvertently removed the
-Wno-maybe-uninitialized compiler option from some xrpl.libxrpl
modules. This resulted in new "may be used uninitialized" build
warnings, first noticed in the "protocol" module. When compiling with
derr=TRUE, those warnings became errors, which made the build fail.
* Github CI actions will build with the assert and werr options turned
on. This will cause CI jobs to fail if a developer introduces a new
compiler warning, or causes an assert to fail in release builds.
* Includes the OS and compiler version in the linux dependencies jobs in
the "check environment" step.
* Translates the `unity` build option into `CMAKE_UNITY_BUILD` setting.
The LEDGER_ENTRY macro now takes an additional parameter, which makes it harder to forget to add the new field to jss.h and to the list of account_objects/ledger_data filters.
Replace Issue in STIssue with Asset. STIssue with MPTIssue is only used in MPT tests.
Will be used in Vault and in transactions with STIssue fields once MPT is integrated into DEX.
* Rename ASSERT to XRPL_ASSERT
* Upgrade to Anthithesis SDK 0.4.4, and use new 0.4.4 features
* automatic cast to bool, like assert
* Add instrumentation workflow to verify build with instrumentation enabled
Adds two CMake functions:
* add_module(library subdirectory): Declares an OBJECT "library" (a CMake abstraction for a collection of object files) with sources from the given subdirectory of the given library, representing a module. Isolates the module's headers by creating a subdirectory in the build directory, e.g. .build/tmp123, that contains just a symlink, e.g. .build/tmp123/basics, to the module's header directory, e.g. include/xrpl/basics, in the source directory, and putting .build/tmp123 (but not include/xrpl) on the include path of the module sources. This prevents the module sources from including headers not explicitly linked to the module in CMake with target_link_libraries.
* target_link_modules(library scope modules...): Links the library target to each of the module targets, and removes their sources from its source list (so they are not compiled and linked twice).
Uses these functions to separate and explicitly link modules in libxrpl:
Level 01: beast
Level 02: basics
Level 03: json, crypto
Level 04: protocol
Level 05: resource, server
* Copy Antithesis SDK version 0.4.0 to directory external/
* Add build option `voidstar` to enable instrumentation with Antithesis SDK
* Define instrumentation macros ASSERT and UNREACHABLE in terms of regular C assert
* Replace asserts with named ASSERT or UNREACHABLE
* Add UNREACHABLE to LogicError
* Document instrumentation macros in CONTRIBUTING.md
- Fix an erroneous high fee penalty that peers could incur for sending
older transactions.
- Update to the fees charged for imposing a load on the server.
- Prevent the relaying of internal pseudo-transactions.
- Before: Pseudo-transactions received from a peer will fail the signature
check, even if they were requested (using TMGetObjectByHash), because
they have no signature. This causes the peer to be charged for an
invalid signature.
- After: Pseudo-transactions are put into the global cache
(TransactionMaster) only. If the transaction is not part of
a TMTransactions batch, the peer is charged an unwanted data fee.
These fees will not be a problem in the normal course of operations,
but should dissuade peers from behaving badly by sending a bunch of
junk.
- Improve logging: include the reason for fees charged to a peer.
Co-authored-by: Ed Hennis <ed@ripple.com>
`STNumber` lets objects and transactions contain multiple fields for
quantities of XRP, IOU, or MPT without duplicating information about the
"issue" (represented by `STIssue`). It is a straightforward serialization of
the `Number` type that uniformly represents those quantities.
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
* 2.2.2 changed the functions acquireAsync and NetworkOPsImp::recvValidation to add an item to a collection under lock, unlock, do some work, then lock again to remove the item. It will deadlock if an exception is thrown while adding the item - before unlocking.
* Replace ScopedUnlock with scope_unlock.
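A minimal sketch of the `scope_unlock` idea referenced above, assuming it is an RAII inversion of `std::unique_lock` (unlock on construction, relock on destruction), which makes the unlocked window exception-safe:

```cpp
#include <mutex>

// Sketch: temporarily release a held std::unique_lock for the lifetime
// of this object. The destructor relocks even if the unlocked work
// throws, so cleanup that needs the lock cannot deadlock.
template <class Mutex>
class scope_unlock
{
    std::unique_lock<Mutex>& lock_;

public:
    explicit scope_unlock(std::unique_lock<Mutex>& lock) : lock_(lock)
    {
        lock_.unlock();
    }
    ~scope_unlock()
    {
        lock_.lock();  // runs during stack unwinding too
    }
    scope_unlock(scope_unlock const&) = delete;
    scope_unlock& operator=(scope_unlock const&) = delete;
};
```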
Move the newest information to the top, i.e., use reverse chronological order within each of the two sections ("API Versions" and "XRP Ledger server versions")
The page_size will soon be made configurable with #5135, making this
re-ordering necessary.
When opening SQLite connection, there are specific pragmas set with
commonPragmas.
In particular, PRAGMA journal_mode creates journal file and locks the
page_size; as of this commit, this sets the page size to the default
value of 4096. Coincidentally, the hardcoded page_size was also 4096, so
no issue was noticed.
Update book_changes RPC to reduce latency, add "validated" field, and accept shortcut strings (current, closed, validated) for ledger_index.
`"validated": true` indicates that the transaction has been included in a validated ledger so the result of the transaction is immutable.
Fix #5033, fix #5034, fix #5035, fix #5036
---------
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
* Retry some failed RPC connections / commands in unit tests
* Remove orphaned `getAccounts` function
Co-authored-by: John Freeman <jfreeman08@gmail.com>
* Add fixNFTokenPageLinks amendment:
It was discovered that under rare circumstances the links between
NFTokenPages could be removed. If this happens, then the
account_objects and account_nfts RPC commands under-report the
NFTokens owned by an account.
The fixNFTokenPageLinks amendment does the following to address
the problem:
- It fixes the underlying problem so no further broken links
should be created.
- It adds Invariants so, if such damage were introduced in the
future, an invariant would stop it.
- It adds a new FixLedgerState transaction that repairs
directories that were damaged in this fashion.
- It adds unit tests for all of it.
* `account_objects` returns an invalid field error if `type` is not supported.
This includes objects an account can't own, or which are unsupported by `account_objects`
* Includes:
* Amendments
* Directory Node
* Fee Settings
* Ledger Hashes
* Negative UNL
The names of the files should reflect the name of the Dir class.
Co-authored-by: Zack Brunson <Zshooter@gmail.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
* fix CTID in tx command returns invalidParams on lowercase hex
* test mixed case and change auto to explicit type
* add header cctype because std::tolower is called
* remove unused local variable
* change test case comment from 'lowercase' to 'mixed case'
---------
Co-authored-by: Zack Brunson <Zshooter@gmail.com>
* Add feature / amendment "InvariantsV1_1"
* Adds invariant AccountRootsDeletedClean:
* Checks that a deleted account doesn't leave any directly
accessible artifacts behind.
* Always tests, but only changes the transaction result if
featureInvariantsV1_1 is enabled.
* Unit tests.
* Resolves #4638
* [FOLD] Review feedback from @gregtatcam:
* Fix unused variable warning
* Improve Invariant test const correctness
* [FOLD] Review feedback from @mvadari:
* Centralize the account keylet function list, and some optimization
* [FOLD] Some structured binding doesn't work in clang
* [FOLD] Review feedback 2 from @mvadari:
* Clean up and clarify some comments.
* [FOLD] Change InvariantsV1_1 to unsupported
* Will allow multiple PRs to be merged over time using the same amendment.
* fixup! [FOLD] Change InvariantsV1_1 to unsupported
* [FOLD] Update and clarify some comments. No code changes.
* Move CMake directory
* Rearrange sources
* Rewrite includes
* Recompute loops
* Fix merge issue and formatting
---------
Co-authored-by: Pretty Printer <cpp@ripple.com>
* fix "account_nfts" with unassociated marker returning issue
* create unit test for fixing nft page invalid marker not returning error
add more test
change test name
create unit test
* [FOLD] accumulated review suggestions
* move BEAST check out of lambda function
---------
Authored-by: Scott Schurr <scott@ripple.com>
* fixInnerObjTemplate2 amendment:
Apply inner object templates to all remaining (non-AMM)
inner objects.
Adds a unit test for applying the template to sfMajorities.
Other remaining inner objects showed no problems having
templates applied.
* Move CMake directory
* Rearrange sources
* Rewrite includes
* Recompute loops
---------
Co-authored-by: Pretty Printer <cpp@ripple.com>
Fix interactions between NFTokenOffers and trust lines.
Since the NFTokenAcceptOffer does not check the trust line that
the issuer receives as a transfer fee in the NFTokenAcceptOffer,
if the issuer deletes the trust line after NFTokenCreateOffer,
the trust line is created for the issuer by the
NFTokenAcceptOffer. That's fixed.
Resolves#4925.
Fixes issue #4937.
The fixReducedOffersV1 amendment fixed certain forms of offer
modification that could lead to blocked order books. Reduced
offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer
(from the perspective of the taker). It turns out that, for
small values, the quality of the reduced offer can be
significantly affected by the rounding mode used during
scaling computations.
Issue #4937 identified an additional code path that modified
offers in a way that could lead to blocked order books. This
commit changes the rounding in that newly located code path so
the quality of the modified offer is never worse than the
quality of the offer as it was originally placed.
It is possible that additional ways of producing blocking
offers will come to light. Therefore there may be a future
need for a V3 amendment.
* Add trap_tx_hash command line option
This new option can be used only if replay is also enabled. It takes a transaction hash from the ledger loaded for replay, and will cause a specific line to be hit in Transactor.cpp, right before the selected transaction is applied.
When rippled initiates a connection to SQLite3, rippled sends a "PRAGMA"
statement defining the maximum number of pages allowed in the database.
Update the max_page_count so it is consistent with the default for newer
versions of SQLite3. Increasing max_page_count is critical for keeping
full history servers online.
Fix #5102
Price Oracle data-series logic uses `unordered_map` to update the Oracle object.
This results in different servers disagreeing on the order of that hash table.
Consequently, the generated ledgers will have different hashes.
The fix uses `map` instead to guarantee the order of the token pairs
in the data-series.
Due to the rounding, LPTokenBalance of the last
Liquidity Provider (LP), might not match this LP's
trustline balance. This fix sets LPTokenBalance on
last LP withdrawal to this LP's LPToken trustline
balance.
Single path AMM offer has to factor in the transfer in rate
when calculating the upper bound quality and the quality function
because single path AMM's offer quality is not constant.
This fix factors in the transfer fee in
BookStep::adjustQualityWithFees().
* Fix AMM offer rounding and low quality LOB offer blocking AMM:
A single-path AMM offer with account offer on DEX, is always generated
starting with the takerPays first, which is rounded up, and then
the takerGets, which is rounded down. This rounding ensures that the pool's
product invariant is maintained. However, when one of the offer's side
is XRP, this rounding can result in the AMM offer having a lower
quality, potentially causing offer generation to fail if the quality
is lower than the account's offer quality.
To address this issue, the proposed fix adjusts the offer generation process
to start with the XRP side first and always rounds it down. This results
in a smaller offer size, improving the offer's quality. Regardless if the offer
has XRP or not, the rounding is done so that the offer size is minimized.
This change still ensures the product invariant, as the other generated
side is the exact result of the swap-in or swap-out equations.
If a liquidity can be provided by both AMM and LOB offer on offer crossing
then AMM offer is generated so that it matches LOB offer quality. If LOB
offer quality is less than limit quality then generated AMM offer quality
is also less than limit quality and the offer doesn't cross. To address
this issue, if LOB quality is better than limit quality then use LOB
quality to generate AMM offer. Otherwise, don't use the quality to generate
AMM offer. In this case, limitOut() function in StrandFlow limits
the out amount to match strand's quality to limit quality and consume
maximum AMM liquidity.
Rounding in the payment engine is causing an assert to sometimes fire
with "dust" amounts. This is causing issues when running debug builds of
rippled. This issue will be addressed, but the assert is no longer
serving its purpose.
The AMM has an invariant for swaps where:
new_balance_1*new_balance_2 >= old_balance_1*old_balance_2
Due to rounding, this invariant could sometimes be violated (although by
very small amounts).
This patch introduces an amendment `fixAMMRounding` that changes the
rounding to always favor the AMM. Doing this should maintain the
invariant.
Co-authored-by: Bronek Kozicki
Co-authored-by: thejohnfreeman
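The swap invariant quoted above, and the "favor the AMM" rounding, can be illustrated with integer stand-ins. The real code operates on the ledger's `Number` type; the swap-out formula below is the standard constant-product equation with the pool-side balance rounded up (so the payout is rounded down), which is an illustrative assumption, not the amendment's exact code:

```cpp
// Invariant: new_balance_1 * new_balance_2 >= old_balance_1 * old_balance_2
bool
swapKeepsInvariant(
    long long oldBal1, long long oldBal2,
    long long newBal1, long long newBal2)
{
    return newBal1 * newBal2 >= oldBal1 * oldBal2;
}

// Constant-product swap-out with the payout rounded in the pool's favor:
// the new pool balance ceil(bal1*bal2 / (bal1+in)) can only overshoot,
// so the product never drops below bal1*bal2.
long long
swapOutRoundedDown(long long bal1, long long bal2, long long in)
{
    long long const newBal2 = (bal1 * bal2 + (bal1 + in) - 1) / (bal1 + in);
    return bal2 - newBal2;
}
```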
It can be difficult to make transaction breaking changes to low level
code because the low level code does not have access to a ledger and the
current activated amendments in that ledger (the "rules"). This patch
adds global access to the current ledger rules as a `std::optional`. If
the optional is not seated, then there is no active transaction.
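A sketch of the pattern described above, assuming a thread-local optional that an RAII guard seats while a transaction is being applied (the guard name and `Rules` body here are placeholders):

```cpp
#include <optional>

struct Rules  // placeholder for the real amendment-rules type
{
    bool someAmendmentEnabled = false;
};

// Seated only while a transaction is being applied on this thread.
thread_local std::optional<Rules> currentRules;

std::optional<Rules> const&
getCurrentTransactionRules()
{
    return currentRules;
}

// RAII guard: seats the optional on entry, clears it on exit, so low
// level code can query the rules without being handed a ledger.
class CurrentTransactionRulesGuard
{
public:
    explicit CurrentTransactionRulesGuard(Rules r)
    {
        currentRules.emplace(r);
    }
    ~CurrentTransactionRulesGuard()
    {
        currentRules.reset();
    }
};
```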
The `rotateWithLock` function holds a lock while it calls a callback
function that's passed in by the caller. This is a problematic design
that needs to be used very carefully. In this case, at least one caller
passed in a callback that eventually relocks the mutex on the same
thread, causing UB (a deadlock was observed). The caller was from
SHAMapStoreImpl, and it called `clearCaches`. This `clearCaches` can
potentially call `fetchNodeObject`, which tried to relock the mutex.
This patch resolves the issue by changing the mutex type to a
`recursive_mutex`. Ideally, the code should be rewritten so it doesn't
hold the mutex during the callback and the mutex should be changed back
to a regular mutex.
Co-authored-by: Ed Hennis <ed@ripple.com>
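The re-entrant path and the `recursive_mutex` workaround can be reduced to a sketch. The class and method names mirror those mentioned in the commit but the bodies are illustrative; with a plain `std::mutex` the test below would deadlock:

```cpp
#include <functional>
#include <mutex>

// With std::recursive_mutex, a callback that re-enters through a public
// locking method on the same thread no longer deadlocks. (As the commit
// notes, the better long-term fix is not to hold the lock across the
// callback at all.)
class Store
{
    std::recursive_mutex mutex_;
    int objects_ = 3;

public:
    void clearCaches()
    {
        // Relocks safely even when called from inside rotateWithLock().
        std::lock_guard<std::recursive_mutex> lock(mutex_);
        objects_ = 0;
    }

    void rotateWithLock(std::function<void()> const& callback)
    {
        std::lock_guard<std::recursive_mutex> lock(mutex_);
        callback();  // may call clearCaches() on this same thread
    }

    int objects()
    {
        std::lock_guard<std::recursive_mutex> lock(mutex_);
        return objects_;
    }
};
```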
This amendment, `fixPreviousTxnID`, adds `PreviousTxnID` and
`PreviousTxnLgrSequence` as fields to all ledger objects that did
not already have them included (`DirectoryNode`, `Amendments`,
`FeeSettings`, `NegativeUNL`, and `AMM`). This makes it much easier
to go through the history of these ledger objects.
When calculating reward shares, the amount should always be rounded
down. If the `fixUniversalNumber` amendment is active, this works
correctly. If it is not active, then the amount is incorrectly rounded
up. This patch introduces an amendment so it will be rounded down.
This fixes a case where a peer can desync under a certain timing
circumstance--if it reaches a certain point in consensus before it receives
proposals.
This was noticed under high transaction volumes. Namely, when we arrive at the
point of deciding whether consensus is reached after minimum establish phase
duration but before having received any proposals. This could be caused by
finishing the previous round slightly faster and/or having some delay in
receiving proposals. Existing behavior arrives at consensus immediately after
the minimum establish duration with no proposals. This causes us to desync
because we then close a non-validated ledger. The change in this PR causes us to
wait for a configured threshold before making the decision to arrive at
consensus with no proposals. This allows validators to catch up and for brief
delays in receiving proposals to be absorbed. There should be no drawback since,
with no proposals coming in, we needn't be in a huge rush to jump ahead.
We do not currently enforce that an incoming peer connection does not have a
remote_endpoint which is already in use (by either an incoming or outgoing
connection) and hence already stored in slots_. If we happen to receive a
connection from such a duplicate remote_endpoint, it will eventually result in a
crash (when disconnecting) or weird behavior (when updating slot state), as a
result of an apparently matching remote_endpoint in slots_ being used by a
different connection.
This amendment fixes an edge case where an empty DID object can be
created. It adds an additional check to ensure that DIDs are
non-empty when created, and returns a `tecEMPTY_DID` error if the DID
would be empty.
* telENV_RPC_FAILED is a new code, reserved exclusively
for unit tests when RPC fails. This will
make those types of errors distinct and easier to test
for when expected and/or diagnose when not.
* Output RPC command result when result is not expected.
We are currently using old version 0.6.2 of `xxhash`, as a verbatim copy and paste of its header file `xxhash.h`. Switch to the more recent version 0.8.2. Since this version is in Conan Center (and properly protects its ABI by keeping the state object incomplete), add it as a Conan requirement. Switch to the SIMD instructions (in the new `XXH3` family) supported by the new version.
This algorithm is about an order of magnitude faster than the existing
algorithm (about 10x faster for encoding and about 15x faster for
decoding - including the double hash for the checksum). The algorithms
use gcc's int128 (fast MS version will have to wait, in the meantime MS
falls back to the slow code).
* It is now an invariant that all constructed Public Keys are valid,
non-empty and contain 33 bytes of data.
* Additionally, the memory footprint of the PublicKey class is reduced.
The size_ data member is declared as static.
* Distinguish and identify the PublisherList retrieved from the local
config file, versus the ones obtained from other validators.
* Fixes#2942
The compilation fails due to an issue in the initializer list
of an optional argument, which holds a vector of pairs.
The code compiles correctly on earlier gcc versions, but fails on gcc 13.
Implement native support for Price Oracles.
A Price Oracle is used to bring real-world data, such as market prices,
onto the blockchain, enabling dApps to access and utilize information
that resides outside the blockchain.
Add Price Oracle functionality:
- OracleSet: create or update the Oracle object
- OracleDelete: delete the Oracle object
To support this functionality add:
- New RPC method, `get_aggregate_price`, to calculate aggregate price for a token pair of the specified oracles
- `ltOracle` object
The `ltOracle` object maintains:
- Oracle Owner's account
- Oracle's metadata
- Up to ten token pairs with the scaled price
- The last update time the token pairs were updated
Add Oracle unit-tests
Add a new RPC / WS call for `server_definitions`, which returns an
SDK-compatible `definitions.json` (binary enum definitions) generated by
the server. This enables clients/libraries to dynamically work with new
fields and features, such as ones that may become available on side
chains. Clients query `server_definitions` on a node from the network
they want to work with, and immediately know how to speak that node's
binary "language", even if new features are added to it in the future
(as long as there are no new serialized types that the software doesn't
know how to serialize/deserialize).
Example:
```js
> {"command": "server_definitions"}
< {
"result": {
"FIELDS": [
[
"Generic",
{
"isSerialized": false,
"isSigningField": false,
"isVLEncoded": false,
"nth": 0,
"type": "Unknown"
}
],
[
"Invalid",
{
"isSerialized": false,
"isSigningField": false,
"isVLEncoded": false,
"nth": -1,
"type": "Unknown"
}
],
[
"ObjectEndMarker",
{
"isSerialized": false,
"isSigningField": true,
"isVLEncoded": false,
"nth": 1,
"type": "STObject"
}
],
...
```
Close #3657
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
A large synthetic offer was not handled correctly in the payment engine.
This patch fixes that issue and introduces a new invariant check while
processing synthetic offers.
* Commit 01c37fe introduced a change to the STTx unit test where a local
"defaultRules" object was created with a temporary inline "presets"
value provided to the ctor. Rules::Impl stores a const ref to the
presets provided to the ctor. This particular call provided an inline
temp variable, which goes out of scope as soon as the object is
created. On Windows, attempting to use the presets (e.g. via the
enabled() function) causes an access violation, which crashes the test
run.
* An audit of the code indicates that all other instances of Rules use
the Application's config.features list, which will have a sufficient
lifetime.
Add `STObject` constructor to explicitly set the inner object template.
This allows certain AMM transactions to apply in the same ledger:
There is no issue if the trading fee is greater than or equal to 0.01%.
If the trading fee is less than 0.01%, then:
- After AMM create, AMM transactions must wait for one ledger to close
(3-5 seconds).
- After one ledger is validated, all AMM transactions succeed, as
appropriate, except for AMMVote.
- The first AMMVote which votes for a 0 trading fee in a ledger will
succeed. Subsequent AMMVote transactions which vote for a 0 trading
fee will wait for the next ledger (3-5 seconds). This behavior repeats
for each ledger.
This has no effect on the ultimate correctness of AMM. This amendment
will allow the transactions described above to succeed as expected, even
if the trading fee is 0 and the transactions are applied within one
ledger (block).
Prior to this commit, `port_grpc` could not be added to the [server]
stanza. Instead of validating gRPC IP/Port/Protocol information in
ServerHandler, validate grpc port info in GRPCServer constructor. This
should not break backwards compatibility.
gRPC-related config info must be in a section (stanza) called
[port_grpc].
* Close #4015 - That was an alternate solution. It was decided that with
relaxed validation, it is not necessary to rename port_grpc.
* Fix #4557
These headers are required in the xrpl Conan package in order for
xbridge witness server (xbwd) to build. This change to libxrpl may help
any dependents of libxrpl. This addition does not change any C++ code.
Without this amendment, an NFTokenAcceptOffer transaction can succeed
even when the NFToken recipient does not have sufficient reserves for
the new NFTokenPage. This allowed accounts to accept NFT sell offers
without having a sufficient reserve. (However, there was no issue in
brokered mode or when a buy offer is involved.)
Instead, the transaction should fail with `tecINSUFFICIENT_RESERVE` as
appropriate. The `fixNFTokenReserve` amendment adds checks in the
NFTokenAcceptOffer transactor to check if the OwnerCount changed. If it
did, then it checks the new reserve requirement.
Fix #4679
Use consistent platform-agnostic library names on all platforms.
Fix an issue that prevents dependents like validator-keys-tool from
linking to libxrpl on Windows.
It is bad practice to change the binary base name depending on the
platform. CMake already manipulates the base name into a final name that
fits the conventions of the platform. Linkers accept base names on the
command line and then look for conventional names on disk.
Clients subscribed to `transactions` over WebSocket are being
disconnected because the traffic exceeds the default `send_queue_limit`
of 100.
This commit changes the default configuration, not the default in code.
Fix #4866
* Add logging for Application.cpp sweep()
* Improve lifetime management of ledger objects (`SLE`s)
* Only store SLE digest in CachedView; get SLEs from CachedSLEs
* Also force release of last ledger used for path finding if there are
no path finding requests to process
* Count more ST objects (derive from `CountedObject`)
* Track CachedView stats in CountedObjects
* Rename the CachedView counters
* Fix the scope of the digest lookup lock
Before this patch, if you asked "is it caching?", the answer was always yes.
Prevent WebSocket connections from trying to close twice.
The issue only occurs in debug builds (assertions are disabled in
release builds, including published packages), and when the WebSocket
connections are unprivileged. The assert (and WRN log) occurs when a
client drives up the resource balance enough to be forcibly disconnected
while there are still messages pending to be sent.
Thanks to @lathanbritz for discovering this issue in #4822.
Workaround for compilation errors with gcc-13 and other compilers
relying on `libstdc++` version 13. This is temporary until the actual fix is
available for us to use: https://github.com/boostorg/beast/pull/2682
Some boost.beast files (which we do use) rely on an old gcc-12 behaviour
where `#include <cstdint>` was not needed even though types from this
header were used. This was broken by a change in libstdc++ version 13:
https://gcc.gnu.org/gcc-13/porting_to.html#header-dep-changes
The necessary fix was implemented in boost.beast, however it is not yet
available. Until it is available, we can use this workaround to enable
compilation of `rippled` with gcc-13, clang-16, etc.
* Promote API version 2 to supported
* Switch command line to API version 1
* Fix LedgerRequestRPC test
* Remove obsolete tx_account method
This method was never implemented; the only parts removed relate to command-line parsing.
* Fix RPCCall test
* Reduce diff size, small test improvements
* Minor fixes
* Support for the mold linker
* [fold] handle case where both mold and gold are installed
* [fold] Use first non-default linker
* Fix TransactionEntry_test
* Fix AccountTx_test
---------
Co-authored-by: seelabs <scott.determan@yahoo.com>
* Remove include <ranges>
* Formatting fix
* Output for subscriptions
* Output from sign, submit etc.
* Output from ledger
* Output from account_tx
* Output from transaction_entry
* Output from tx
* Store close_time_iso in API v2 output
* Add small APIv2 unit test for subscribe
* Add unit test for transaction_entry
* Add unit test for tx
* Remove inLedger from API version 2
* Set ledger_hash and ledger_index
* Move isValidated from RPCHelpers to LedgerMaster
* Store closeTime in LedgerFill
* Time formatting fix
* additional tests for Subscribe unit tests
* Improved comments
* Rename mInLedger to mLedgerIndex
* Minor fixes
* Set ledger_hash on closed ledger, even if not validated
* Update API-CHANGELOG.md
* Add ledger_hash, ledger_index to transaction_entry
* Fix validated and close_time_iso in account_tx
* Fix typos
* Improve getJson for Transaction and STTx
* Minor improvements
* Replace class enum JsonOptions with struct
We may consider turning this into a general-purpose template and using it elsewhere.
* simplify the extraction of transactionID from Transaction object
* Remove obsolete comments
* Unconditionally set validated in account_tx output
* Minor improvements
* Minor fixes
---------
Co-authored-by: Chenna Keshava <ckeshavabs@gmail.com>
The command line API still uses `apiMaximumSupportedVersion`.
The unit test RPCs use `apiMinimumSupportedVersion` if unspecified.
Context:
- #4568
- #4552
Introduce the `fixFillOrKill` amendment.
Fix an edge case occurring when an offer with `tfFillOrKill` set (but
without `tfSell` set) fails to cross an offer with a better rate. If
`tfFillOrKill` is set, then the owner must receive the full TakerPays.
Without this amendment, an offer fails if the entire `TakerGets` is not
spent. With this amendment, when `tfSell` is not set, the entire
`TakerGets` does not have to be spent.
For details about OfferCreate, see: https://xrpl.org/offercreate.html
Fix #4684
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
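The amendment's completion rule can be summarized in a hedged sketch (field and function names are illustrative; the real logic lives in the offer-crossing code):

```cpp
// Sketch of the fill-or-kill completion check described above.
struct Amounts { long long takerPays; long long takerGets; };

bool fillOrKillSatisfied(bool sellFlag,
                         Amounts const& requested,
                         Amounts const& flowed,
                         bool fixFillOrKillEnabled)
{
    if (!fixFillOrKillEnabled || sellFlag)
        // Old rule (and the tfSell case): the entire TakerGets must be spent.
        return flowed.takerGets >= requested.takerGets;
    // New rule without tfSell: the owner must receive the full TakerPays.
    return flowed.takerPays >= requested.takerPays;
}
```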
Remove dependency on `<ranges>` header, since it is not implemented by
all compilers which we want to support.
This code change only affects unit tests.
Resolve https://github.com/XRPLF/rippled/issues/4787
Remove `tx_history` and `ledger_header` methods from API version 2.
Update `RPC::Handler` to allow for methods (or method implementations)
to be API version specific. This partially resolves #4727. We can now
store multiple handlers with the same name, as long as they belong to
different (non-overlapping) API versions. This necessarily impacts the
handler lookup algorithm and its complexity; however, there is no
performance loss on x86_64 architecture, and only minimal performance
loss on arm64 (around 10ns). This design change gives us extra
flexibility in evolving the API in the future, including other parts of
the interface.
In API version 2, `tx_history` and `ledger_header` are no longer
recognised; if they are called, `rippled` will return the error
`unknownCmd`.
Resolve #3638. Resolve #3539.
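The lookup described above can be sketched as a name-to-handlers map where each handler carries a version range (a simplified illustration, not the real `RPC::Handler` table):

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Several handlers may share a name as long as their version ranges
// do not overlap.
struct Handler { unsigned minVer; unsigned maxVer; std::string impl; };

class HandlerTable {
    std::map<std::string, std::vector<Handler>> table_;
public:
    void add(std::string name, Handler h) { table_[std::move(name)].push_back(h); }

    std::optional<Handler>
    find(std::string const& name, unsigned version) const {
        auto const it = table_.find(name);
        if (it == table_.end())
            return std::nullopt;
        for (auto const& h : it->second)
            if (version >= h.minVer && version <= h.maxVer)
                return h;
        return std::nullopt;  // e.g. tx_history under api_version 2 -> unknownCmd
    }
};
```

The inner scan is a short linear walk over at most a handful of ranges, which matches the "no measurable loss on x86_64, ~10ns on arm64" observation.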
Using the "Amount" field in Payment transactions can cause incorrect
interpretation. There continue to be problems from the use of this
field. "Amount" is rarely the correct field to use; instead,
"delivered_amount" (or "DeliveredAmount") should be used.
Rename the "Amount" field to "DeliverMax", a less misleading name. With
api_version: 2, remove the "Amount" field from Payment transactions.
- Input: "DeliverMax" in `tx_json` is an alias for "Amount"
- sign
- submit (in sign-and-submit mode)
- submit_multisigned
- sign_for
- Output: Add "DeliverMax" where transactions are provided by the API
- ledger
- tx
- tx_history
- account_tx
- transaction_entry
- subscribe (transactions stream)
- Output: Remove "Amount" from API version 2
Fix #3484. Fix #3902.
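On the input side, the aliasing might look like the following sketch (illustrative: the real code operates on `Json::Value` and returns an RPC error instead of throwing):

```cpp
#include <map>
#include <stdexcept>
#include <string>

using TxJson = std::map<std::string, std::string>;

// Normalize "DeliverMax" in tx_json to the internal "Amount" field.
void normalizeDeliverMax(TxJson& tx)
{
    auto const it = tx.find("DeliverMax");
    if (it == tx.end())
        return;
    // If both fields are present they must agree (assumed behavior).
    if (tx.count("Amount") && tx["Amount"] != it->second)
        throw std::invalid_argument("DeliverMax and Amount disagree");
    tx["Amount"] = it->second;
    tx.erase("DeliverMax");
}
```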
Implement native support for W3C DIDs.
Add a new ledger object: `DID`.
Add two new transactions:
1. `DIDSet`: create or update the `DID` object.
2. `DIDDelete`: delete the `DID` object.
This meets the requirements specified in the DID v1.0 specification
currently recommended by the W3C Credentials Community Group.
The DID format for the XRP Ledger conforms to W3C DID standards.
The objects can be created and owned by any XRPL account holder.
The transactions can be integrated by any service, wallet, or application.
It might be possible for the server code to indirect through certain
`end()` iterators. While a debug build would catch this problem with
`assert()`s, a release build would crash. If there are problems in this
area in the future, it is best to get a definitive indication of the
nature of the error regardless of whether it's a debug or release build.
To accomplish this, these `assert`s are converted into `LogicError`s
that will produce a reasonable error message when they fire.
The assert is saying that the only reason `pathFinder` would be null is
if the request was aborted (connection dropped, etc.). That's what
`continueCallback()` checks. But that is very clearly not true if you
look at `getPathFinder`, which calls `findPaths`, which can return false
for many reasons.
Fix #4744
P2P link compression is a feature added in 1.6.0 by #3287.
https://xrpl.org/enable-link-compression.html
If the default changes in the future - for example, as currently
proposed by #4387 - the comment will be updated at that time.
Fix #4656
Context: The `DisallowIncoming` amendment provides an option to block
incoming trust lines from reaching your account. The
asfDisallowIncomingTrustline AccountSet Flag, when enabled, prevents any
incoming trust line from being created. However, it was too restrictive:
it would block an issuer from authorizing a trust line, even if the
trust line already exists. Consider:
1. Issuer sets asfRequireAuth on their account.
2. User sets asfDisallowIncomingTrustline on their account.
3. User submits tx to SetTrust to Issuer.
At this point, without `fixDisallowIncomingV1` active, the issuer would
not be able to authorize the trust line.
The `fixDisallowIncomingV1` amendment, once activated, allows an issuer
to authorize a trust line even after the user sets the
asfDisallowIncomingTrustline flag, as long as the trust line already
exists.
When a new transactor is added, there are several places in applySteps
that need to be modified. This patch refactors the code so only one
function needs to be modified.
Make transactions and pseudo-transactions share the same commonFields
again. This regularizes the code in a nice way.
While this technically allows pseudo-transactions to have a
TicketSequence field, pseudo-transactions are only ever constructed by
code paths that don't add such a field, so this is not a transaction
processing change. It may be possible to add a separate check to ensure
TicketSequence (and other fields that don't make sense on
pseudo-transactions) are never added to pseudo-transactions, but that
should not be necessary. (TicketSequence is not the only common field
that can not and does not appear in pseudo-transactions.) Note:
TicketSequence is already documented as a common field.
Related: #4637. Fix #4714.
Update minimum compiler requirement for building the codebase. The
feature "using enum" is required. This feature was introduced in C++20.
Updating the C++ compiler to version 11 or later fixes this error:
```
Building CXX object CMakeFiles/xrpl_core.dir/src/ripple/protocol/impl/STAmount.cpp.o
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp: In lambda function:
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp:1577:15: error: expected nested-name-specifier before 'enum'
1577 | using enum Number::rounding_mode;
| ^~~~
```
Fix #4693
Modify the `XChainBridge` amendment.
Before this patch, two door accounts on the same chain could own
the same bridge spec (of course, one would have to be the issuer and one
would have to be the locker). While this is silly, it does not violate
any bridge invariants. However, on further review, if we allow this then
the `claim` transactions would need to change. Since it's hard to see a
use case for two doors to own the same bridge, this patch disallows
it. (The transaction will return tecDUPLICATE).
Amendment "flapping" (an amendment repeatedly gaining and losing
majority) usually occurs when an amendment is on the verge of gaining
majority, and a validator not in favor of the amendment goes offline or
loses sync. This fix makes two changes:
1. The number of validators in the UNL determines the threshold required
for an amendment to gain majority.
2. The AmendmentTable keeps a record of the most recent Amendment vote
received from each trusted validator (and, with `trustChanged`, stays
up-to-date when the set of trusted validators changes). If no
validation arrives from a given validator, then the AmendmentTable
assumes that the previously-received vote has not changed.
In other words, when missing an `STValidation` from a remote validator,
each server now uses the last vote seen. There is a 24 hour timeout for
recorded validator votes.
These changes do not require an amendment because they do not impact
transaction processing, but only the threshold at which each individual
validator decides to propose an EnableAmendment pseudo-transaction.
Fix #4350
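The vote-recording half of the fix can be sketched as a small book of last-seen votes with a 24-hour expiry (illustrative names; the real state lives in the AmendmentTable):

```cpp
#include <chrono>
#include <cstddef>
#include <map>
#include <string>

using Clock = std::chrono::system_clock;

struct VoteRecord { bool inFavor; Clock::time_point when; };

class VoteBook {
    std::map<std::string, VoteRecord> votes_;
    static constexpr auto timeout = std::chrono::hours{24};
public:
    // Remember the most recent vote seen from each trusted validator.
    void record(std::string const& validator, bool inFavor, Clock::time_point now) {
        votes_[validator] = {inFavor, now};
    }
    // A missing STValidation is treated as "vote unchanged" until the
    // recorded vote ages past the 24-hour timeout.
    std::size_t inFavor(Clock::time_point now) const {
        std::size_t n = 0;
        for (auto const& [name, v] : votes_)
            if (v.inFavor && now - v.when < timeout)
                ++n;
        return n;
    }
};
```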
Fix the Windows build by using `unsigned int` (instead of `uint`).
The error, introduced by #4618, looks something like:
rpc\impl\RPCHelpers.h(299,5): error C2061: syntax error: identifier
'uint' (compiling source file app\ledger\Ledger.cpp)
A few methods, including `book_offers`, take currency codes as
parameters. The XRPL doesn't care if the letters in those codes are
lowercase or uppercase, as long as they come from an alphabet defined
internally. rippled doesn't care either, when they are submitted in a
hex representation. When they are submitted in an ASCII string
representation, rippled, but not XRPL, is more restrictive, preventing
clients from interacting with some currencies already in the XRPL.
This change gets rippled out of the way and lets clients submit currency
codes in ASCII using the full alphabet.
Fixes #4112
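A sketch of the relaxed ASCII check (the alphabet below is an assumption mirroring the one rippled documents; the real validation has additional rules, e.g. around the reserved "XRP" code):

```cpp
#include <string>

// Accept three-character ASCII currency codes from the full alphabet,
// upper- or lowercase.
bool validAsciiCurrency(std::string const& code)
{
    static std::string const alphabet =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789"
        "<>(){}[]|?!@#$%^&*";
    if (code.size() != 3)
        return false;
    for (char const c : code)
        if (alphabet.find(c) == std::string::npos)
            return false;
    return true;
}
```

Previously only a stricter subset was accepted in ASCII form, even though the same codes were fine when submitted as hex.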
Currently, the `BUILD.md` instructions suggest using `.build` as the
build directory, so this change helps to reduce confusion.
An alternative would be to instruct developers to add `/.build/` to
`.git/info/exclude` or to user-level `.gitignore` (although the latter
is very intrusive). However, it is being added here because it is a good
practice to have a sensible default that's consistent with the build
instructions.
gateway_balances
* When `account` does not exist in the ledger, return `actNotFound`
* (Previously, a normal response was returned)
* Fix #4290
* When required field(s) are missing, return `invalidParams`
* (Previously, `invalidHotWallet` was incorrectly returned)
* Fix #4548
channel_authorize
* When the specified `key_type` is invalid, return `badKeyType`
* (Previously, `invalidParams` was returned)
* Fix #4289
Since these are breaking changes, they apply only to API version 2.
Supersedes #4577
Copy the new code to `src/secp256k1` without changes:
`src/secp256k1` is identical to bitcoin-core/secp256k1@acf5c55 (v0.3.2).
We could consider changing to a Git submodule, though that would require
changes to the build instructions because we are not using submodules
anywhere else.
Remove the `verify` and `message` function declarations. The explicit
instantiation requests could not be completed because there were no
implementations for those two member functions. It is helpful that the
Microsoft (MSVC) compiler on Windows appears to be strict when it comes
to template instantiation.
This resolves the warning:
XChainAttestations.h(450): warning C4661: 'bool
ripple::XChainAttestationsBase<ripple::XChainClaimAttestation>::verify(void)
const': no suitable definition provided for explicit template
instantiation request
A bridge connects two blockchains: a locking chain and an issuing
chain (also called a mainchain and a sidechain). Both are independent
ledgers, with their own validators and potentially their own custom
transactions. Importantly, there is a way to move assets from the
locking chain to the issuing chain and a way to return those assets from
the issuing chain back to the locking chain: the bridge. This key
operation is called a cross-chain transfer. A cross-chain transfer is
not a single transaction. It happens on two chains, requires multiple
transactions, and involves an additional server type called a "witness".
A bridge does not exchange assets between two ledgers. Instead, it locks
assets on one ledger (the "locking chain") and represents those assets
with wrapped assets on another chain (the "issuing chain"). A good model
to keep in mind is a box with an infinite supply of wrapped assets.
Putting an asset from the locking chain into the box will release a
wrapped asset onto the issuing chain. Putting a wrapped asset from the
issuing chain back into the box will release one of the existing locking
chain assets back onto the locking chain. There is no other way to get
assets into or out of the box. Note that there is no way for the box to
"run out of" wrapped assets - it has an infinite supply.
Co-authored-by: Gregory Popovitch <greg7mdp@gmail.com>
For the `account_tx` and `noripple_check` methods, perform input
validation for optional parameters such as "binary", "forward",
"strict", "transactions". Previously, when these parameters had invalid
values (e.g. not a bool), no error would be returned. Now, it returns an
`invalidParams` error.
* This updates the behavior to match Clio
(https://github.com/XRPLF/clio).
* Since this is potentially a breaking change, it only applies to
requests specifying api_version: 2.
* Fix #4543.
Minor refactor to `TxFormats.cpp`:
- Rename `commonFields` to `pseudoCommonFields` (since it is the common fields
that all pseudo-transactions need)
- Add a new static variable, `commonFields`, which represents all the common
fields that non-pseudo transactions need (essentially everything that
`pseudoCommonFields` contains, plus `sfTicketSequence`)
This makes it harder to accidentally leave out `sfTicketSequence` in a new
transaction.
- Verify "check", used to retrieve a Check object, is a string.
- Verify "nft_page", used to retrieve an NFT Page, is a string.
- Verify "index", used to retrieve any type of ledger object by its
unique ID, is a string.
- Verify "directory", used to retrieve a DirectoryNode, is a string or
an object.
This change only impacts api_version 2 since it is a breaking change.
https://xrpl.org/ledger_entry.html
Fix #4550
* In namespace ripple, introduces get_name function that takes a
std::thread::native_handle_type and returns a std::string.
* In namespace ripple, introduces get_name function that takes a
std::thread or std::jthread and returns a std::string.
* In namespace ripple::this_thread, introduces get_name function
that takes no parameters and returns the name of the current
thread as a std::string.
* In namespace ripple::this_thread, introduces set_name function
that takes a std::string_view and sets the name of the current
thread.
* Intended to replace the beast utilities setCurrentThreadName
and getCurrentThreadName.
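One plausible implementation of the `this_thread` pair on Linux/glibc, where names are capped at 15 characters plus the terminator (the real code is platform-specific; this covers only the pthreads case):

```cpp
#include <pthread.h>
#include <string>
#include <string_view>

namespace ripple::this_thread {

// Set the current thread's name (truncated to pthread's 15-char limit).
inline void set_name(std::string_view name)
{
    std::string const truncated{name.substr(0, 15)};
    pthread_setname_np(pthread_self(), truncated.c_str());
}

// Return the current thread's name.
inline std::string get_name()
{
    char buf[16] = {};
    pthread_getname_np(pthread_self(), buf, sizeof(buf));
    return buf;
}

}  // namespace ripple::this_thread
```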
Use the most recent versions in ConanCenter.
* Due to a bug in Clang 16, you may get a compile error:
"call to 'async_teardown' is ambiguous"
* A compiler flag workaround is documented in `BUILD.md`.
* At this time, building this with gcc 13 may require editing some files
in `.conan/data`
* A patch to support gcc13 may be added in a later PR.
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
- Update amm_info to fetch AMM by amm account id.
- This is an additional way to retrieve an AMM object.
- Alternatively, AMM can still be fetched by the asset pair as well.
- Add owner directory entry for AMM object.
Context:
- Add back the AMM object directory entry, which was deleted by #4626.
- This fixes `account_objects` for `amm` type.
Modify two error cases in AMMBid transactor to return `tecINTERNAL` to
more clearly indicate that these errors should not be possible unless
operating in unforeseen circumstances. It likely indicates a bug.
The log level has been updated to `fatal()` since it indicates a
(potentially network-wide) unexpected condition when either of these
errors occurs.
Details:
The two specific transaction error cases changed are:
- `tecAMM_BALANCE` - In this case, this error (total LP Tokens
outstanding is lower than the amount to be burned for the bid) is a
subset of the case where the user doesn't have enough LP Tokens to pay
for the bid. When this case is reached, the bidder's LP Tokens balance
has already been checked first. The user's LP Tokens should always be
a subset of total LP Tokens issued, so this should be impossible.
- `tecINSUFFICIENT_PAYMENT` - In this case, the amount to be refunded as
a result of the bid is greater than the price paid for the auction
slot. This should never occur unless something is wrong with the math
for calculating the refund amount.
Both error cases in question are "defense in depth" measures meant to
protect against making things worse if the code has already reached a
state that is supposed to be impossible, likely due to a bug elsewhere.
Such "shouldn't ever occur" checks should use an error code that
categorically indicates a larger problem. This is similar to how
`tecINVARIANT_FAILED` is a warning sign that something went wrong and
likely could've been worse, but since there isn't an Invariant Check
applying here, `tecINTERNAL` is the appropriate error code.
This is "debatably" a transaction processing change since it could
hypothetically change how transactions are processed if there's a bug we
don't know about.
Introduce a new variadic template helper function, `forAllApiVersions`,
that accepts callables to execute a set of functions over a range of
versions - from RPC::apiMinimumSupportedVersion to RPC::apiBetaVersion.
This avoids the duplication of code.
Context: #4552
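The helper can be sketched with a C++17 fold over the callables (version constants are illustrative stand-ins for `RPC::apiMinimumSupportedVersion` and `RPC::apiBetaVersion`):

```cpp
// Invoke every callable once per API version in [min, beta].
constexpr unsigned apiMinimumSupportedVersion = 1;
constexpr unsigned apiBetaVersion = 3;

template <typename... Fs>
void forAllApiVersions(Fs&&... fs)
{
    for (unsigned v = apiMinimumSupportedVersion; v <= apiBetaVersion; ++v)
        (fs(v), ...);  // fold expression: call each callable with v
}
```

A test that previously duplicated its body per version can now pass one lambda and cover the whole range.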
- "Rename" the type `LedgerInfo` to `LedgerHeader` (but leave an alias
for `LedgerInfo` to not yet disturb existing uses). Put it in its own
public header, named after itself, so that it is more easily found.
- Move the type `Fees` and NFT serialization functions into public
(installed) headers.
- Compile the XRPL and gRPC protocol buffers directly into `libxrpl` and
install their headers. Fix the Conan recipe to correctly export these
types.
Addresses change (2) in
https://github.com/XRPLF/XRPL-Standards/discussions/121.
For context: This work supports Clio's dependence on libxrpl. Clio is
just an example consumer. These changes should benefit all current and
future consumers.
---------
Co-authored-by: cyan317 <120398799+cindyyan317@users.noreply.github.com>
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
Fix the libxrpl library target for consumers using Conan.
* Fix installation issues and update includes.
* Update requirements in the Conan package info.
* libxrpl requires openssl::crypto.
(Conan is a software package manager for C++.)
Improve the checking of the path lengths during Payments. Previously,
the code that did the check of the payment path lengths was sometimes
executed, but without any effect. This changes it to only check when it
matters, and to not make unnecessary copies of the path vectors.
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
- Replace custom popcnt16 implementation with std::popcount from C++20
- Maintain compatibility with older compilers and MacOS by providing a
conditional compilation fallback to __builtin_popcount and a lookup
table method
- Move and inline related functions within SHAMapInnerNode for
performance and readability
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
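The conditional-compilation shape described above might look like this sketch (the `popcnt16` name and the exact fallback chosen are assumptions):

```cpp
#include <cstdint>

#if defined(__cplusplus) && __cplusplus >= 202002L && __has_include(<bit>)
#include <bit>
// Preferred: C++20 std::popcount, which compiles to a popcnt instruction
// where available.
inline int popcnt16(std::uint16_t v) { return std::popcount(v); }
#elif defined(__GNUC__) || defined(__clang__)
// Older gcc/clang (and Apple toolchains): compiler builtin.
inline int popcnt16(std::uint16_t v) { return __builtin_popcount(v); }
#else
// Last resort: Kernighan's trick, clearing the lowest set bit per step.
inline int popcnt16(std::uint16_t v)
{
    int n = 0;
    while (v) {
        v &= static_cast<std::uint16_t>(v - 1);
        ++n;
    }
    return n;
}
#endif
```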
When an AMM account is deleted, the owner directory entries must be
deleted in order to ensure consistent ledger state.
* When deleting AMM account:
* Clean up AMM owner dir, linking AMM account and AMM object
* Delete trust lines to AMM
* Disallow `CheckCreate` to AMM accounts
* AMM cannot cash a check
* Constrain entries in AuthAccounts array to be accounts
* AuthAccounts is an array of objects for the AMMBid transaction
* SetTrust (TrustSet): Allow on AMM only for LP tokens
* If the destination is an AMM account and the trust line doesn't
exist, then:
* If the asset is not the AMM LP token, then fail the tx with
`tecNO_PERMISSION`
* If the AMM is in empty state, then fail the tx with `tecAMM_EMPTY`
* This disallows trustlines to AMM in empty state
* Add AMMID to AMM root account
* Remove lsfAMM flag and use sfAMMID instead
* Remove owner dir entry for ltAMM
* Add `AMMDelete` transaction type to handle amortized deletion
* Limit number of trust lines to delete on final withdraw + AMMDelete
* Put AMM in empty state when LPTokens is 0 upon final withdraw
* Add `tfTwoAssetIfEmpty` deposit option in AMM empty state
* Fail all AMM transactions in AMM empty state except special deposit
* Add `tecINCOMPLETE` to indicate that not all AMM trust lines are
deleted (i.e. partial deletion)
* This is handled in Transactor similar to deleted offers
* Fail AMMDelete with `tecINTERNAL` if AMM root account is nullptr
* Don't validate for invalid asset pair in AMMDelete
* AMMWithdraw deletes AMM trust lines and AMM account/object only if the
number of trust lines is less than max
* Current `maxDeletableAMMTrustLines` = 512
* Check no directory left after AMM trust lines are deleted
* Enable partial trustline deletion in AMMWithdraw
* Add `tecAMM_NOT_EMPTY` to fail any transaction that expects an AMM in
empty state
* Clawback considerations
* Disallow clawback out of AMM account
* Disallow AMM create if issuer can claw back
This patch applies to the AMM implementation in #4294.
Acknowledgements:
Richard Holland and Nik Bougalis for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the project code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xrpl.org
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
Add AMM functionality:
- InstanceCreate
- Deposit
- Withdraw
- Governance
- Auctioning
- payment engine integration
To support this functionality, add:
- New RPC method, `amm_info`, to fetch pool and LPT balances
- AMM Root Account
- trust line for each IOU AMM token
- trust line to track Liquidity Provider Tokens (LPT)
- `ltAMM` object
The `ltAMM` object tracks:
- fee votes
- auction slot bids
- AMM tokens pair
- total outstanding tokens balance
- `AMMID` to AMM `RootAccountID` mapping
Add new classes to facilitate AMM integration into the payment engine.
`BookStep` uses these classes to infer if AMM liquidity can be consumed.
The AMM formula implementation uses the new Number class added in #4192.
IOUAmount and STAmount use Number arithmetic.
Add AMM unit tests for all features.
AMM requires the following amendments:
- featureAMM
- fixUniversalNumber
- featureFlowCross
Notes:
- Current trading fee threshold is 1%
- AMM currency is generated by: 0x03 + 152 bits of sha256{cur1, cur2}
- Current max AMM Offers is 30
---------
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Improve error handling for ledger_entry by returning an "invalidParams"
error when one or more request fields are specified incorrectly, or one
or more required fields are missing.
For example, if none of the following fields is provided, then the
API should return an invalidParams error:
* index, account_root, directory, offer, ripple_state, check, escrow,
payment_channel, deposit_preauth, ticket
Prior to this commit, the API returned an "unknownOption" error instead.
Since the error was actually due to invalid parameters, rather than
unknown options, this error was misleading.
Since this is an API breaking change, the "invalidParams" error is only
returned for requests using api_version: 2 and above. To maintain
backward compatibility, the "unknownOption" error is still returned for
api_version: 1.
Related: #4573. Fix #4303.
- Previously, mulDiv had `std::pair<bool, uint64_t>` as the output type.
- This is an error-prone interface as it is easy to ignore when
overflow occurs.
- Using a return type of `std::optional` should decrease the likelihood
of ignoring overflow.
- It also allows for the use of optional::value_or() as a way to
explicitly recover from overflow.
- Include the `limits.h` header (via a preprocessing directive) in order
to satisfy gcc's numeric_limits incomplete-type requirement.
Fix #3495
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
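The interface change can be sketched as follows (a simplified stand-in for the libxrpl function; the 128-bit intermediate is a gcc/clang extension and an assumption about the mechanics, not necessarily the real implementation):

```cpp
#include <cstdint>
#include <optional>

// Compute value * mul / div, signalling overflow (or div == 0) via an
// empty optional instead of a pair<bool, uint64_t> that is easy to ignore.
std::optional<std::uint64_t>
mulDiv(std::uint64_t value, std::uint64_t mul, std::uint64_t div)
{
    if (div == 0)
        return std::nullopt;
    using u128 = unsigned __int128;  // gcc/clang extension
    u128 const result = u128{value} * mul / div;
    if (result > UINT64_MAX)
        return std::nullopt;  // overflow can no longer be silently dropped
    return static_cast<std::uint64_t>(result);
}
```

Callers can recover explicitly, e.g. `mulDiv(a, b, c).value_or(UINT64_MAX)` to saturate on overflow.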
When requesting `account_info` with an invalid `signer_lists` value, the
API should return an "invalidParams" error.
`signer_lists` should have a value of type boolean. If it is not a
boolean, then it is invalid input. The response now indicates that.
* This is an API breaking change, so the change is only reflected for
requests containing `"api_version": 2`
* Fix #4539
- Change enum values to be powers of two to clearly indicate the bitmask
(fix #3417, #4239)
- Replace the bitmask with explicit if-conditions to better indicate the
predicates
- Implement the simplified condition evaluation, removing the complex
bitwise and (&) operator; this is the second solution proposed in Nik
Bougalis's comment on #3417 ("Software does not distinguish between
different Conditions")
This change was tested by performing RPC calls with the commands
server_info, server_state, peers, and validation_info. These commands
worked as expected.
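The two ideas combine into a small pattern (names are illustrative, not the actual enum from #3417):

```cpp
#include <cstdint>

// Powers of two make it obvious each value occupies one bit of the mask.
enum Condition : std::uint32_t {
    haveCloseTimeConsensus = 1u << 0,
    proposing              = 1u << 1,
    validating             = 1u << 2,
};

// An explicit named predicate replaces chained bitwise-and expressions
// scattered through the callers.
inline bool has(std::uint32_t flags, Condition c)
{
    return (flags & c) != 0;
}
```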
Certain inputs for the AccountTx method should return an error. In other
words, an invalid request from a user or client now results in an error
message.
Since this can change the response from the API, it is an API breaking
change. This commit maintains backward compatibility by keeping the
existing behavior for existing requests. When clients specify
"api_version": 2, they will be able to get the updated error messages.
Update unit tests to check the error based on the API version.
* Fix #4288
* Fix #4545
* Commits 0b812cd (#4427) and 11e914f (#4516) conflict. The first added
references to `ServerHandlerImp` in files outside of that class's
organizational unit (which is technically incorrect). The second
removed `ServerHandlerImp`, but was not up to date with develop. This
results in the build failing.
* Fixes the build by changing references to `ServerHandlerImp` to
the more correct `ServerHandler`.
Rename `ServerHandlerImp` to `ServerHandler`. There was no other
ServerHandler definition despite the existence of a header suggesting
that there was.
This resolves a piece of historical confusion in the code, which was
identified during a code review.
The changes in the diff may look more extensive than they actually are.
The contents of `impl/ServerHandlerImp.h` were merged into
`ServerHandler.h`, making the latter file appear to have undergone
significant modifications. However, this is a non-breaking refactor that
only restructures code.
Replace hand-rolled code with std::from_chars for better
maintainability.
The C++ std::from_chars function is intended to be as fast as possible,
so it is unlikely to be slower than the code it replaces. This change is
a net gain because it reduces the amount of hand-rolled code.
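A typical replacement looks like this sketch (the wrapper name is made up; `std::from_chars` itself is standard C++17, locale-independent, and non-throwing):

```cpp
#include <charconv>
#include <optional>
#include <string_view>

// Parse an unsigned decimal integer, rejecting empty input, garbage,
// and trailing characters.
std::optional<unsigned> parseUnsigned(std::string_view s)
{
    unsigned value{};
    auto const [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc{} || ptr != s.data() + s.size())
        return std::nullopt;
    return value;
}
```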
Apply a minor cleanup in `TypedField`:
* Remove a non-working and unused move constructor.
* Constrain the remaining constructor so that it is not generic enough
to be selected as a copy or move constructor.
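The constraint in question follows a standard idiom; a sketch with an illustrative type (not the real `ripple::TypedField`):

```cpp
#include <string>
#include <type_traits>
#include <utility>

template <typename T>
struct Wrapper {
    T value;

    // Forwarding constructor, disabled when U is Wrapper itself so it can
    // never be selected instead of the copy/move constructors.
    template <typename U,
              typename = std::enable_if_t<!std::is_same_v<
                  std::remove_cv_t<std::remove_reference_t<U>>, Wrapper>>>
    explicit Wrapper(U&& u) : value(std::forward<U>(u)) {}

    Wrapper(Wrapper const&) = default;
};
```

Without the `enable_if`, `Wrapper(U&&)` with `U = Wrapper&` is a better match than the copy constructor for non-const lvalues, producing surprising overload resolution.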
Enhance the /crawl endpoint by publishing WebSocket/RPC ports in the
server_info response. The function processing requests to the /crawl
endpoint actually calls server_info internally, so this change enables a
server to advertise its WebSocket/RPC port(s) to peers via the /crawl
endpoint. `grpc` and `peer` ports are included as well.
The new `ports` array contains objects, each containing a `port` for the
listening port (number string), and an array `protocol` listing the
supported protocol(s).
This allows crawlers to build a richer topology without needing to
port-scan nodes. For non-admin users (including peers), the info about
*admin* ports is excluded.
Also increase test coverage for RPC ServerInfo.
Fix #2837.
Curtail the occurrence of order books that are blocked by reduced offers
with the implementation of the fixReducedOffersV1 amendment.
This commit identifies three ways in which offers can be reduced:
1. A new offer can be partially crossed by existing offers, so the new
offer is reduced when placed in the ledger.
2. An in-ledger offer can be partially crossed by a new offer in a
transaction. So the in-ledger offer is reduced by the new offer.
3. An in-ledger offer may be under-funded. In this case the in-ledger
offer is scaled down to match the available funds.
Reduced offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer (from the
perspective of the taker). It turns out that, for small values, the
quality of the reduced offer can be significantly affected by the
rounding mode used during scaling computations.
This commit adjusts some rounding modes so that the quality of a reduced
offer is always at least as good (from the taker's perspective) as the
original offer.
The amendment is titled fixReducedOffersV1 because additional ways of
producing reduced offers may come to light. Therefore, there may be a
future need for a V2 amendment.
* Enable api_version 2, which is currently in beta. It is expected to be
marked stable by the next stable release.
* This does not change any defaults.
* The only existing tests changed were one that set the same flag, which
was now redundant, and a couple that tested versioning explicitly.
Add @@start/@@end comment markers to pseudo-tx submission filtering,
fast-polling, local testnet resource bucketing, and test environment
gating. No logic changes.
Address findings from code review:
- dice(0): add early return with INVALID_ARGUMENT before modulo
operation to prevent undefined behavior
- fromSerialIter: return std::optional to safely reject malformed
payloads (truncated, unknown flag bits, trailing bytes) instead
of throwing
- Update all callers (PeerImp, RCLConsensus, tests) for optional
- Add unit tests for dice(0) error code and 7 malformed wire cases
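The dice(0) guard can be sketched as follows; the error-code name comes from the text above, its numeric value here is illustrative only:

```cpp
#include <cstdint>

// Illustrative stand-in for the real Hook API error constant.
constexpr std::int64_t INVALID_ARGUMENT = -7;

// dice(sides): reject sides == 0 before the modulo, which would
// otherwise be undefined behavior (division by zero).
inline std::int64_t
dice(std::uint64_t entropy, std::uint32_t sides)
{
    if (sides == 0)
        return INVALID_ARGUMENT;  // early return: never reach `% 0`
    return static_cast<std::int64_t>(entropy % sides);
}
```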
Standalone synthetic entropy produces identical dice(6) results for
consecutive calls due to hash collision mod 6. Switch to dice(1000000)
and add diagnostic output for return code debugging.
- ADD_HOOK_FUNCTION for dice/random (was defined+declared but not registered)
- Relax fairRng() seq check to allow previous ledger entropy (open ledger)
- Add hook tests: dice range, random fill, consecutive calls differ
- TODO: open-ledger entropy semantics need further thought
Port the Hook API surface from the tt-rng branch, adapted to use our
commit-reveal consensus entropy (ltCONSENSUS_ENTROPY / sfDigest).
Hook APIs:
- dice(sides): returns random int [0, sides) from consensus entropy
- random(write_ptr, write_len): fills buffer with 1-512 random bytes
Internal fairRng() derives per-execution entropy by hashing: ledger
seq + tx ID + hook hash + account + chain position + execution phase
+ consensus entropy + incrementing call counter. This ensures each
call within a single hook execution returns different values.
Quality gate: fairRng returns empty (TOO_LITTLE_ENTROPY) if fewer
than 5 validators contributed, preventing weak entropy from being
consumed by hooks.
Also adds sfEntropyCount and sfLedgerSequence to the consensus
entropy SLE and pseudo-tx, enabling the freshness and quality
checks needed by the Hook API.
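The per-call derivation can be sketched as below; `mixHash` is a toy stand-in for the real sha512Half-based hashing, and the field list mirrors the inputs named above:

```cpp
#include <cstdint>
#include <functional>

// Toy hash combiner, purely for illustration; the real code hashes the
// serialized fields with sha512Half.
inline std::uint64_t
mixHash(std::uint64_t seed, std::uint64_t v)
{
    return seed ^ (std::hash<std::uint64_t>{}(v) + 0x9e3779b97f4a7c15ULL +
                   (seed << 6) + (seed >> 2));
}

struct FairRng
{
    std::uint64_t ledgerSeq, txId, hookHash, account, chainPos, phase;
    std::uint64_t consensusEntropy;
    std::uint64_t counter = 0;  // increments per call within one execution

    std::uint64_t next()
    {
        std::uint64_t h = consensusEntropy;
        for (auto v : {ledgerSeq, txId, hookHash, account, chainPos, phase})
            h = mixHash(h, v);
        return mixHash(h, counter++);  // counter makes each call distinct
    }
};
```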
setExpectedProposers() now filters incoming proposers against the
on-chain UNL Report, preventing non-UNL nodes from inflating the
expected set and causing unnecessary timeouts.
quorumThreshold() uses expectedProposers_.size() (recent proposers ∩
UNL) when available, falling back to full UNL Report count on cold
boot. This adapts to actual network conditions rather than relying
on a potentially stale UNL Report that over-counts offline validators.
Renamed activeUNLNodeIds_/cacheActiveUNL/isActiveUNLMember to
unlReportNodeIds_/cacheUNLReport/isUNLReportMember to make the
on-chain data source explicit.
When expected proposers don't all arrive before rngPIPELINE_TIMEOUT,
check if we still have quorum (80% of UNL). If so, build the commitSet
with available commits and continue to reveals. Only fall back to zero
entropy when truly below quorum.
Previously any missing expected proposer caused a full timeout with zero
entropy for that round. Now: kill 3 of 20 nodes → one 3s timeout round
per kill but entropy preserved (17/16 quorum met).
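The quorum arithmetic can be sketched like this (ceil(0.8 × n), matching the 17/16 example above; function names are illustrative):

```cpp
#include <cstddef>

// 80% quorum: the minimum number of commits needed to proceed when the
// commit-wait deadline expires instead of falling back to zero entropy.
inline std::size_t
quorumThreshold(std::size_t expectedProposers)
{
    return (expectedProposers * 4 + 4) / 5;  // ceil(0.8 * n)
}

inline bool
haveCommitQuorum(std::size_t commitsSeen, std::size_t expectedProposers)
{
    return commitsSeen >= quorumThreshold(expectedProposers);
}
```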
Wait for commits from last round's proposers (falling back to activeUNL
on cold boot) instead of 80% of UNL. This ensures all nodes build the
commitSet at the same moment with the same entries.
Split proof storage: commitProofs_ (seq=0 only, deterministic) and
proposalProofs_ (latest with reveal, for entropySet). Previously the
proof blob contained whichever proposeSeq was last seen, causing
identical commits to produce different SHAMap hashes across nodes.
20-node testnet: all nodes now produce identical commitSet hashes.
Previously rngPIPELINE_TIMEOUT (3s) was measured from round start,
meaning txSet convergence could eat into the reveal budget. Now reveals
get their own rngREVEAL_TIMEOUT (1.5s) measured from the moment we
enter ConvergingReveal, ensuring consistent time for reveal collection
regardless of how long txSet convergence took.
- Change hasMinimumReveals() to wait for reveals from ALL committers
(pendingCommits_.size()) instead of 80% quorum. The commit set is
deterministic, so we know exactly which reveals to expect.
rngPIPELINE_TIMEOUT remains the safety valve for crash/partition.
Fixes reveal-set non-determinism causing entropy divergence on
15-node testnets.
- Resource manager: preserve port for loopback addresses so local
testnet nodes each get their own resource bucket instead of all
sharing one on 127.0.0.1 (causing rate-limit disconnections).
- Make RNG fast-poll interval configurable via XAHAU_RNG_POLL_MS
env var (default 250ms) for testnet tuning.
When fewer participants are present than the quorum threshold, skip the
RNG commit wait immediately instead of waiting the full pipeline timeout.
Also base the quorum on activeUNLNodeIds_ (UNL Report with fallback)
instead of the full trusted key set, so the denominator reflects who is
actually active on the network.
Add rngPIPELINE_TIMEOUT (3s) to replace ledgerMAX_CONSENSUS (10s) in
the commit/reveal quorum gates. Late-joining nodes enter as
proposing=false and cannot contribute commitments until promoted, so
waiting beyond a few seconds just delays the ZERO-entropy fallback and
penalizes recovery. Add inline comments documenting the late-joiner
constraint and SHAMap sync's role as a dropped-proposal safety net.
Prevent grinding attacks by verifying sha512Half(reveal, pubKey, seq)
matches the stored commitment before accepting a reveal. Also move
cacheActiveUNL() into startRound so non-proposing nodes (exchanges,
block explorers) correctly accept RNG data instead of diverging with
zero entropy.
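The check can be sketched as below; `commitDigest` is an illustrative stand-in for sha512Half over (reveal, pubKey, seq):

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Stand-in for sha512Half(reveal, pubKey, seq); the real code hashes the
// serialized fields with SHA-512/half.
inline std::size_t
commitDigest(
    std::string const& reveal,
    std::string const& pubKey,
    std::uint32_t seq)
{
    return std::hash<std::string>{}(
        reveal + '\0' + pubKey + '\0' + std::to_string(seq));
}

// A reveal is accepted only if it reproduces the stored commitment, so a
// validator cannot grind: it cannot swap in a different reveal after
// seeing other validators' values.
inline bool
acceptReveal(
    std::size_t storedCommitment,
    std::string const& reveal,
    std::string const& pubKey,
    std::uint32_t seq)
{
    return commitDigest(reveal, pubKey, seq) == storedCommitment;
}
```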
During ConvergingTx all RNG data arrives via proposal leaves, so
fetching a peer's commitSet before we've built our own just generates
unnecessary traffic. Only fetch commitSetHash once in ConvergingCommit+,
and entropySetHash once in ConvergingReveal.
Prevents spoofed SHAMap entries by embedding verifiable proof blobs
(proposal signature + metadata) in each commit/reveal entry via sfBlob.
- Store ProposalProof in harvestRngData (peers) and propose() (self)
- serializeProof: pack proposeSeq/closeTime/prevLedger/position/sig
- verifyProof: reconstruct signingHash, verify against public key
- Embed proofs in buildCommitSet/buildEntropySet via sfBlob field
- Verify proofs in handleAcquiredRngSet (both diff and visitLeaves paths)
- Add stall fix: gate ConvergingTx on timeout when commits unavailable
- Clear proposalProofs_ in clearRngState
Cache active UNL NodeIDs per round from UNL Report (in-ledger),
falling back to getTrustedMasterKeys() on fresh testnets.
Reject non-UNL validators at all entry points: harvestRngData,
buildCommitSet, buildEntropySet, and handleAcquiredRngSet.
Build real ephemeral (unbacked) SHAMaps for commitSet and entropySet using
ttCONSENSUS_ENTROPY entries with tfEntropyCommit/tfEntropyReveal flags.
Reuse InboundTransactions pipeline for peer fetch/diff/merge with no new
classes. Encode NodeID in sfAccount to avoid master-vs-signing key mismatch.
Add isPseudoTx guard in ConsensusTransSetSF to prevent pseudo-tx submission.
Route acquired RNG sets via isRngSet/gotRngSet in NetworkOPs mapComplete.
During ConvergingCommit and ConvergingReveal sub-states, poll at 250ms
instead of the default 1s ledgerGRANULARITY. This reduces total RNG
pipeline overhead from ~3s to ~500ms while keeping the normal heartbeat
interval unchanged for all other consensus phases.
Three critical fixes that unblock the RNG commit-reveal pipeline:
- Remove entropy secret regeneration in ConvergingTx->ConvergingCommit
transition that was overwriting the onClose() secret, breaking reveal
verification against the original commitment
- Change ExtendedPosition operator== to compare txSetHash only, preventing
deadlock where nodes transitioning sub-states at different times would
break haveConsensus() for all peers
- Self-seed own commitment and reveal into pending collections so the
node counts toward its own quorum checks
Also adds ExtendedPosition_test with signing, suppression, serialization
round-trip and equality tests, iterator safety fix in BuildLedger, wire
compatibility early-return, and RNG debug logging throughout the pipeline.
- Implement injectEntropyPseudoTx() to combine reveals into final
entropy hash and inject as pseudo-tx into CanonicalTXSet in doAccept()
- Modify BuildLedger applyTransactions() to apply entropy tx FIRST
before all other transactions to prevent front-running
- Remove redundant explicit threading in applyConsensusEntropy() as
sfPreviousTxnID/sfPreviousTxnLgrSeq are set automatically by
ApplyStateTable::threadItem()
- Register ttCONSENSUS_ENTROPY in applySteps.cpp dispatch tables
(preflight, preclaim, calculateBaseFee, apply)
- Add ltCONSENSUS_ENTROPY to InvariantCheck.cpp valid type whitelist
Add protocol definitions for consensus-derived entropy pseudo-transaction:
- ttCONSENSUS_ENTROPY = 105 transaction type
- ltCONSENSUS_ENTROPY = 0x0058 ledger entry type
- keylet::consensusEntropy() singleton keylet (namespace 'X')
- applyConsensusEntropy() handler in Change.cpp
- Added to isPseudoTx() in STTx.cpp
The entropy value is stored in sfDigest field of the singleton ledger object.
This provides the protocol foundation for same-ledger entropy injection.
- Add clearRngState() call in startRoundInternal
- Reset estState_ in closeLedger when entering establish phase
- Implement three-phase RNG checkpoint gating:
- ConvergingTx: wait for quorum commits, build commitSet
- ConvergingCommit: reveal entropy, transition immediately
- ConvergingReveal: wait for reveals or timeout, build entropySet
- Use if constexpr for test framework compatibility
- Serialize full ExtendedPosition in share() and propose()
- Deserialize ExtendedPosition in PeerImp using fromSerialIter()
- Add harvestRngData() to collect commits/reveals from peer proposals
- Conditionally call harvest via if constexpr for test compatibility
Introduce data structures for consensus-derived randomness using
commit-reveal scheme:
- Add ExtendedPosition struct with consensus targets (txSetHash,
commitSetHash, entropySetHash) and pipelined leaves (myCommitment,
myReveal)
- operator== excludes leaves to allow convergence with unique leaves
- add() includes ALL fields to prevent signature stripping attacks
- Add EstablishState enum for sub-phases: ConvergingTx, ConvergingCommit,
ConvergingReveal
- Update Consensus template to use Adaptor::Position_t
- Add Position_t typedef to RCLConsensus::Adaptor and test CSF Peer
This is the foundational data structure work for the RNG implementation.
The gating logic and entropy computation will follow.
- Delete ExportSign.cpp/h transactor (ttEXPORT_SIGN no longer used)
- Remove isUVTx() function and all UVTx checks from Transactor/TxQ
- Remove ttEXPORT_SIGN from TxFormats enum and format definition
- Remove jss::ExportSign
- Move signPendingExports() to ExportSignatureCollector
Export signatures are now collected ephemerally via TMValidation
messages, not via ttEXPORT_SIGN transactions.
- Remove makeExportSignTxns() function (signatures now via TMValidation)
- Simplify ExportSign::doApply() to no-op (ttEXPORT_SIGN kept for protocol)
- Remove sfSigners from ltEXPORTED_TXN format (collected in memory now)
- Remove unused OpenView include and forward declaration
- Remove vestigial comment in TxQ about makeExportSignTxns
Add cryptographic verification of export signatures as they arrive:
- stashTxnData() caches serialized txn for verification
- verifyAndAddSignature() verifies against cached data, rejects invalid
- isSignatureVerified() / verifySignature() for Transactor fallback
- Cleanup methods updated to clear verification cache
Also removes leftover debug std::cerr from OpenView, STObject, and tests.
Validators now sign ALL pending ltEXPORTED_TXN entries every ledger
(not just those from the current ledger). Signatures are cached in
ExportSignatureCollector and re-broadcast until the export is finalized.
Changes:
- Add hasSignatureFrom() and getSignatureFrom() to collector for
checking/retrieving cached signatures
- signPendingExports() now iterates ALL pending exports, uses cached
signature if available, otherwise signs fresh
- Signatures keep broadcasting until ltEXPORTED_TXN is deleted
This ensures:
- Late validators can contribute (sign when they come online)
- Network partitions self-heal (signatures propagate on reconnect)
- Node restarts recover (re-sign from ledger state)
The ltEXPORTED_TXN acts as a "ticket" - signatures only valid while it
exists. No explicit expiry check needed; ledger state is the gatekeeper.
Replace on-ledger ttEXPORT_SIGN transactions with ephemeral signature
collection via TMValidation messages. This eliminates O(n²) metadata
bloat from accumulating signatures on-ledger.
Changes:
- Add ExportSignatureCollector for in-memory signature storage with
quorum tracking (80% UNL threshold)
- Extend TMValidation protobuf with exportSignatures field
- Sign pending exports during validate() and broadcast via validation
- Extract signatures from received TMValidation in PeerImp
- TxQ checks quorum from memory instead of ledger
- Inject ttEXPORT when quorum reached (can be ledger N+1 or N+2)
- Clean up collector after ttEXPORT processed
Includes [EXPORT-TIMING] debug logging for timing analysis.
- Fix xport hook API whitelist to declare 4 args (I32, I32, I32, I32)
instead of 2, matching the actual implementation signature
- Fix TxQ.cpp to use emplace_back with STObject for sfExportedTxn
instead of setFieldVL, since sfExportedTxn is OBJECT type not VL.
The previous code would throw "Wrong field type" at runtime.
Move ttEXPORT_SIGN handling to dedicated ExportSign transactor class,
following the same pattern as ttENTROPY/Entropy from the RNG feature.
UVTxns (signed validator transactions) should not be mixed with
pseudo-transactions in the Change transactor.
- Create ExportSign.h/cpp with preflight, preclaim, doApply
- Route ttEXPORT_SIGN through ExportSign in applySteps.cpp
- Remove UVTx branches from Change transactor
- Add documentation markers to View.h for inUNLReport functions
Port the UNL Validator Transaction (UVTxn) pattern from the RNG feature
to allow validators to submit signed ttEXPORT_SIGN transactions without
requiring a funded account.
Changes:
- Add isUVTx() to identify UVTxn transaction types
- Add inUNLReport() templates to check validator UNLReport membership
- Add getValidationSecretKey() to Application for signing
- Modify Transactor for UVTxn bypasses (fee, seq, signature checks)
- Add makeExportSignTxns() to generate validator signatures
- Hook into RCLConsensus to submit ttEXPORT_SIGN during accept
- Update applySteps.cpp routing for ttEXPORT_SIGN
- Remove direct ttEXPORT_SIGN injection from TxQ::accept
Note: Currently uses Change transactor with UVTx branches.
May refactor to dedicated ExportSign transactor class.
Fixes the use of high and low in variable names, as these are determined by ripple::keylet::line processing.
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
std::endl triggers flush() which calls sync() on the shared log buffer.
Multiple threads racing in sync() cause str()/str("") operations to
corrupt buffer state, leading to crashes and double frees.
Added mutex to serialize access to suite.log, preventing concurrent
sync() calls on the same buffer.
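A minimal sketch of the fix (class and method names hypothetical): every access to the shared buffer, including the `str()`/`str("")` pair, happens under one mutex.

```cpp
#include <mutex>
#include <sstream>
#include <string>

// All writes to the shared suite log go through one mutex, so concurrent
// flushes can no longer interleave inside the underlying string buffer.
class SuiteLog
{
    std::mutex m_;
    std::ostringstream buf_;

public:
    void write(std::string const& line)
    {
        std::lock_guard<std::mutex> lock(m_);  // serialize sync()/str()
        buf_ << line << '\n';
    }

    std::string drain()
    {
        std::lock_guard<std::mutex> lock(m_);
        std::string out = buf_.str();
        buf_.str("");  // safe: no concurrent sync() can observe this
        return out;
    }
};
```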
* Match unit tests on start of test name (#4634)
* For example, without this change, to run the TxQ tests, must specify
`--unittest=TxQ1,TxQ2` on the command line. With this change, can use
`--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
might be useful. If not, the shorter-named test(s) can be renamed. For
example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
up parallel tests by a factor of 5.
* SetHook_test, SetHookTSH_test, XahauGenesis_test
---------
Co-authored-by: Ed Hennis <ed@ripple.com>
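The matching rule above can be sketched as follows (a standalone illustration, not the actual test-runner code):

```cpp
#include <string>
#include <vector>

// Exact match selects only that test and prevents further partial
// matching; otherwise every test whose name starts with the pattern runs.
inline std::vector<std::string>
matchTests(std::vector<std::string> const& names, std::string const& pat)
{
    for (auto const& n : names)
        if (n == pat)
            return {n};  // exact match wins outright

    std::vector<std::string> out;
    for (auto const& n : names)
        if (n.compare(0, pat.size(), pat) == 0)  // prefix match
            out.push_back(n);
    return out;
}
```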
* Add jss fields used by Clio `nft_info`: (#4320)
Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.
Clio can be found here: https://github.com/XRPLF/clio
Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
* Introduce support for a slabbed allocator: (#4218)
When instantiating a large amount of fixed-sized objects on the heap
the overhead that dynamic memory allocation APIs impose will quickly
become significant.
In some cases, allocating a large amount of memory at once and using
a slabbing allocator to carve the large block into fixed-sized units
that are used to service requests for memory will help to reduce
memory fragmentation significantly and, potentially, improve overall
performance.
This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator` but it is
not meant to be a general-purpose allocator.
It should not be used unless profiling and analysis of specific memory
allocation patterns indicates that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.
A helper class, `SlabAllocatorSet<>` simplifies handling of variably
sized objects that benefit from slab allocations.
This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).
Commit 1 of 3 in #4218.
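The carving idea can be sketched with a toy slab; the real `SlabAllocator<>` additionally handles alignment, thread safety, and ownership checks. Slot size must be at least `sizeof(void*)` for the intrusive free list.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy slab: one large block carved into fixed-size slots threaded onto
// an intrusive free list (each free slot stores the next pointer).
class Slab
{
    std::vector<std::uint8_t> block_;
    void* free_ = nullptr;

public:
    Slab(std::size_t slotSize, std::size_t count) : block_(slotSize * count)
    {
        // Push slots in reverse so slot 0 is handed out first.
        for (std::size_t i = count; i-- > 0;)
        {
            void* slot = block_.data() + i * slotSize;
            *static_cast<void**>(slot) = free_;
            free_ = slot;
        }
    }

    void* allocate()
    {
        if (!free_)
            return nullptr;  // slab exhausted; caller falls back
        void* slot = free_;
        free_ = *static_cast<void**>(slot);
        return slot;
    }

    void deallocate(void* slot)
    {
        *static_cast<void**>(slot) = free_;
        free_ = slot;
    }
};
```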
* Optimize `SHAMapItem` and leverage new slab allocator: (#4218)
The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.
Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped around a `std::shared_ptr` meant
that an instantiation might result in up to three separate
memory allocations.
This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.
In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.
Commit 2 of 3 in #4218.
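The key point is that the reference count lives inside the object, so a handle needs no separate control-block allocation. A minimal hand-rolled analogue of `boost::intrusive_ptr`, for illustration only:

```cpp
#include <atomic>

// Base class carrying the embedded reference count.
struct Counted
{
    std::atomic<int> refs{0};
};

struct Item : Counted
{
    int v = 5;
};

template <class T>
class IntrusivePtr
{
    T* p_ = nullptr;

public:
    IntrusivePtr() = default;
    explicit IntrusivePtr(T* p) : p_(p)
    {
        if (p_)
            p_->refs.fetch_add(1, std::memory_order_relaxed);
    }
    IntrusivePtr(IntrusivePtr const& o) : IntrusivePtr(o.p_) {}
    ~IntrusivePtr()
    {
        // Last owner deletes; the count lives inside the object itself.
        if (p_ && p_->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p_;
    }
    T* get() const { return p_; }
    int useCount() const { return p_ ? p_->refs.load() : 0; }
};
```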
* Avoid using std::shared_ptr when not necessary: (#4218)
The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.
The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocation was avoided by using stack-based
alternatives.
Commit 3 of 3 in #4218.
* Prevent replay attacks with NetworkID field: (#4370)
Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.
The new field must be used when the server is using a network id > 1024.
To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.
Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.
The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.
This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`
To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html
Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html
In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.
A `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.
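The decision logic described above can be sketched like this; return strings stand in for the actual TER codes, and 21337 below is just an example network id:

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Legacy chains (network id <= 1024) reject sfNetworkID; newer chains
// require it and it must match the node's configured network.
inline std::string
checkNetworkID(
    std::uint32_t nodeNetworkID,
    std::optional<std::uint32_t> txNetworkID)
{
    if (nodeNetworkID <= 1024)  // Mainnet, Testnet, Devnet, hooks-testnet
    {
        if (txNetworkID)
            return "telNETWORK_ID_MAKES_TX_NON_CANONICAL";
        return "tesSUCCESS";
    }
    if (!txNetworkID)
        return "telREQUIRES_NETWORK_ID";
    if (*txNetworkID != nodeNetworkID)
        return "telWRONG_NETWORK";
    return "tesSUCCESS";
}
```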
* Fix the fix for std::result_of (#4496)
Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.
This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
* Define `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
1.77 with a newer compiler.
* Use quorum specified via command line: (#4489)
If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.
Fix #4488.
---------
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
* Fix errors for Clang 16: (#4501)
Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).
- `std::{u,bi}nary_function` were removed in C++17. They were empty
classes with a few associated types. We already have conditional code
to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type
inherits from a binary function class in the standard library, e.g.
`std::equal_to`, and this causes an error when the type privately
inherits from such a class. Change these instances to public
inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
inherit directly from an empty class than from
`beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
compilation of assertions. Add a `[[maybe_unused]]` annotation to
suppress it.
- As a drive-by clean-up, remove commented code.
See related work in #4486.
* Fix typo (#4508)
* fix!: Prevent API from accepting seed or public key for account (#4404)
The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.
Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.
Resolves: #3329, #3330, #4337
BREAKING CHANGE: Remove non-strict account parsing (#3330)
* Add nftoken_id, nftoken_ids, offer_id fields for NFTokens (#4447)
Three new fields are added to the `Tx` responses for NFTs:
1. `nftoken_id`: This field is included in the `Tx` responses for
`NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
`NFTokenID` for the `NFToken` that was modified on the ledger by the
transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
`NFTokenCancelOffer`. This field provides a list of all the
`NFTokenID`s for the `NFToken`s that were modified on the ledger by
the transaction.
3. `offer_id`: This field is included in the `Tx` response for
`NFTokenCreateOffer` transactions and shows the OfferID of the
`NFTokenOffer` created.
The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.
* Ensure that switchover vars are initialized before use: (#4527)
Global variables in different TUs are initialized in an undefined order.
At least one global variable was accessing a global switchover variable.
This caused the switchover variable to be accessed in an uninitialized
state.
Since the switchover is always explicitly set before transaction
processing, this bug cannot affect transaction processing, but could
affect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.
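The standard remedy for this class of bug is construct-on-first-use: a function-local static is initialized on first call, so no other global can observe it uninitialized regardless of TU initialization order. A generic sketch (the name is hypothetical):

```cpp
#include <atomic>

// Function-local statics are initialized on first use (and thread-safely
// since C++11), sidestepping the undefined cross-TU initialization order.
inline std::atomic<bool>&
switchoverVar()
{
    static std::atomic<bool> flag{true};
    return flag;
}
```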
* Move faulty assert (#4533)
This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.
The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.
Note that asserts are normally checked only in debug builds, so release
packages should not be affected.
Introduced in: #4319 (66627b26cf)
* Fix unaligned load and stores: (#4528) (#4531)
Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.
For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).
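The substitution looks roughly like this: the cast-based access is UB when the pointer is misaligned, while `memcpy` compiles down to the same single load/store on x86 and ARM.

```cpp
#include <cstdint>
#include <cstring>

// Before: *reinterpret_cast<std::uint32_t const*>(p) — UB if p is not
// 4-byte aligned. After: memcpy, with no performance penalty.
inline std::uint32_t
loadUnaligned32(unsigned char const* p)
{
    std::uint32_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}

inline void
storeUnaligned32(unsigned char* p, std::uint32_t v)
{
    std::memcpy(p, &v, sizeof(v));
}
```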
* Add missing includes for gcc 13.1: (#4555)
gcc 13.1 failed to compile due to missing headers. This patch adds the
needed headers.
* Trivial: add comments for NFToken-related invariants (#4558)
* fix node size estimation (#4536)
Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation for the total amount
of RAM available, which was previously returned in bytes, but is now
being returned in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.
* fix: remove redundant moves (#4565)
- Resolve gcc compiler warning:
AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
- The std::move() operation on trivially copyable types may generate a
compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.
* Revert "Fix the fix for std::result_of (#4496)"
This reverts commit cee8409d60.
* Revert "Fix typo (#4508)"
This reverts commit 2956f14de8.
* clang
* [fold] bad merge
* [fold] fix bad merge
- add back filter for ripple state on account_channels
- add back network id test (env auto adds network id in xahau)
* [fold] fix build error
---------
Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: Nik Bougalis <nikb@bougalis.net>
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Co-authored-by: John Freeman <jfreeman08@gmail.com>
Co-authored-by: Mark Travis <mtrippled@users.noreply.github.com>
Co-authored-by: solmsted <steven.olm@gmail.com>
Co-authored-by: drlongle <drlongle@gmail.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
Co-authored-by: Scott Determan <scott.determan@yahoo.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Scott Schurr <scott@ripple.com>
Co-authored-by: Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
* CI on `jshooks` branch
* CI Split jobs with prev job dependency
* No multi branch worker in parallel
---------
Co-authored-by: Denis Angell <dangell@transia.co>
* fix ns delete owner count
* add a new success code and refactor success checks, limit ns delete operations to 256 entries per txn
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
fixXahauV2
* refactor tsh
* add uritoken mint/cancel tsh
* add flags to hookexections meta and nonce to hookemissions meta
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
* add features to `server_definitions`
* clang-format
* Update RPCCall.cpp
* only return features without params
* clang-format
* include features in hashed value
* clang-format
* rework features addition to server_definitions to be cached at flag ledgers
* fix clang, duplicate hash key
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
* modify the way server_version int is built to include build number
* clang
* Update BuildInfo_test.cpp
* clang-format
* Update BuildInfo_test.cpp
* clang-format
* Update BuildInfo_test.cpp
---------
Co-authored-by: Denis Angell <dangell@transia.co>
* make account starting seq the parent close time to prevent replay attacks in reset networks
* add tests for activation
---------
Co-authored-by: Denis Angell <dangell@transia.co>
* add issuer != source test
* add destination can be genesis test
* add issuer != source test
* add wrong network error code
* add wrong network id rpc test
* clang-format
* add `'` separator to hex constants
fix bad separator
clang-format
* remove impossible validation
add parentheses
* pre-increment `it`
* add leading 0
* use boost_regex
* add leading 0
* `const` after variable declaration
* clang-format
* add extra check on std::optional
* move guard
* add test
* update test to match
* rename `txnIDfromIndex` -> `txnIdFromIndex`
* update test to match guard
* fix test naming
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
* If there is any impact to the public API methods (HTTP / WebSocket), please update https://github.com/xrplf/rippled/blob/develop/API-CHANGELOG.md
* Update API-CHANGELOG.md and add the change directly in this PR by pushing to your PR branch.
* libxrpl: See https://github.com/XRPLF/rippled/blob/develop/docs/build/depend.md
* Peer Protocol: See https://xrpl.org/peer-protocol.html
-->
- [ ] Public API: New feature (new methods and/or new fields)
- [ ] Public API: Breaking change (in general, breaking changes should only impact the next api_version)
- [ ] `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)
- [ ] Peer protocol change (must be backward compatible or bump the peer protocol version)
<!--
## Before / After
If relevant, use this section for an English description of the change at a technical level.
If this change affects an API, examples should be included here.
For performance-impacting changes, please provide these details:
1. Is this a new feature, bug fix, or improvement to existing functionality?
2. What behavior/functionality does the change impact?
3. In what processing can the impact be measured? Be as specific as possible - e.g. RPC client call, payment transaction that involves LOB, AMM, caching, DB operations, etc.
4. Does this change affect concurrent processing - e.g. does it involve acquiring locks, multi-threaded processing, or async processing?
-->
<!--
This changelog is intended to list all updates to the [public API methods](https://xrpl.org/public-api-methods.html).
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
#### Removed methods
In API version 2, the following deprecated methods are no longer available: (https://github.com/XRPLF/rippled/pull/4759)
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
#### Modifications to JSON transaction element in V2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. (https://github.com/XRPLF/rippled/pull/4775)
This helps to unify the JSON serialization format of transactions. (https://github.com/XRPLF/clio/issues/722, https://github.com/XRPLF/rippled/issues/4727)
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
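To make the new layout concrete, an API version 2 `tx` response might be shaped like the sketch below. All values are placeholders; only the field names and their positions (the `tx_json`/`meta` elements with `hash`, `ledger_index`, `ledger_hash`, `close_time_iso`, and `validated` as siblings) reflect the changes described above:

```json
{
  "result": {
    "tx_json": {
      "TransactionType": "Payment",
      "Account": "rEXAMPLE..."
    },
    "meta": {
      "TransactionResult": "tesSUCCESS"
    },
    "hash": "ABCDEF0123456789...",
    "ledger_index": 1000000,
    "ledger_hash": "0123456789ABCDEF...",
    "close_time_iso": "2024-01-09T00:00:00Z",
    "validated": true
  }
}
```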
#### Modification to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. (https://github.com/XRPLF/rippled/pull/4733)
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: (https://github.com/XRPLF/rippled/pull/4733)
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
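The output-side rules above can be sketched as follows (placeholder amounts, fields abbreviated for clarity):

```
# API version 1 output: both fields present, with identical values
{ "TransactionType": "Payment", "DeliverMax": "1000000", "Amount": "1000000", ... }

# API version 2 output: DeliverMax only
{ "TransactionType": "Payment", "DeliverMax": "1000000", ... }
```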
#### Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) (https://github.com/XRPLF/rippled/pull/3770)
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. (https://github.com/XRPLF/rippled/pull/4585)
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
#### Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min`, `ledger_index_max`, and `ledger_index` returns `invalidParams` because if you use `ledger_index_min` or `ledger_index_max`, then it does not make sense to also specify `ledger_index`. In API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4571)
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. (https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579)
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/issues/4288)
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
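For instance, a request that combines a ledger range with a specific ledger — such as the sketch below, with a placeholder account address — receives an `invalidParams` error under API version 2, whereas API version 1 returned no error:

```json
{
  "method": "account_tx",
  "params": [{
    "api_version": 2,
    "account": "rEXAMPLEaddressXXXXXXXXXXXXXXXXXXX",
    "ledger_index_min": -1,
    "ledger_index": "validated"
  }]
}
```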
#### Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
## API Version 1
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
### Inconsistency: server_info - network_id
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.4.0
As of 2025-01-28, version 2.4.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.4.0
- `ledger_entry`: `state` is added as an alias for `ripple_state`.
- `ledger_entry`: Enables case-insensitive filtering by canonical name in addition to case-sensitive filtering by RPC name.
- `validators`: Added new field `validator_list_threshold` in response.
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate)
- Signing methods autofill fees better and properly handle transactions that don't have a base fee, and will also autofill the `NetworkID` field.
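As a sketch of the `simulate` RPC mentioned above (per the linked XLS-0069d draft; account addresses and amount are placeholders), a dry run submits an unsigned transaction and returns its hypothetical result without applying it:

```json
{
  "method": "simulate",
  "params": [{
    "tx_json": {
      "TransactionType": "Payment",
      "Account": "rEXAMPLEsenderXXXXXXXXXXXXXXXXXXXX",
      "Destination": "rEXAMPLEreceiverXXXXXXXXXXXXXXXXXX",
      "Amount": "1000000"
    }
  }]
}
```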
## XRP Ledger server version 2.3.0
[Version 2.3.0](https://github.com/XRPLF/rippled/releases/tag/2.3.0) was released on Nov 25, 2024.
### Breaking changes in 2.3.0
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Additions and bugfixes in 2.3.0
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
## XRP Ledger server version 2.2.0
[Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive):
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.0
[Version 2.0.0](https://github.com/XRPLF/rippled/releases/tag/2.0.0) was released on Jan 9, 2024. The following additions are non-breaking (because they are purely additive):
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting).
## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive).
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. (https://github.com/XRPLF/rippled/pull/4427)
- `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port.
- This allows crawlers to build a more detailed topology without needing to port-scan nodes.
- (For peers and other non-admin clients, the info about admin ports is excluded.)
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). (https://github.com/XRPLF/rippled/pull/4553)
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback` (https://github.com/XRPLF/rippled/pull/4617)
- Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well.
- Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks.
- Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields:
- `Account`: The issuer of the asset being clawed back. Must also be the sender of the transaction.
- `Amount`: The amount being clawed back, with the `Amount.issuer` being the token holder's address.
- Adds `tfWithdrawAll` and `tfOneAssetWithdrawAll` flags, which allow a trader to specify different field combinations for `AMMDeposit` and `AMMWithdraw` transactions.
- Adds new transaction result codes:
  - `tecUNFUNDED_AMM`: Insufficient balance to fund the AMM. The account does not have funds for liquidity provision.
  - `tecAMM_BALANCE`: AMM has an invalid balance. Calculated balances are greater than the current pool balances.
  - `tecAMM_FAILED`: AMM transaction failed, due to a processing failure.
  - `tecAMM_INVALID_TOKENS`: AMM invalid LP tokens. Invalid input values, format, or calculated values.
  - `tecAMM_EMPTY`: AMM is in an empty state. The transaction requires an AMM in a non-empty state (LP tokens > 0).
  - `tecAMM_NOT_EMPTY`: AMM is not in an empty state. The transaction requires an AMM in an empty state (LP tokens == 0).
  - `tecAMM_ACCOUNT`: AMM account. Clawback of an AMM account.
  - `tecINCOMPLETE`: Some work was completed, but more submissions are required to finish. `AMMDelete` partially deletes the trust lines.
## XRP Ledger server version 1.11.0
[Version 1.11.0](https://github.com/XRPLF/rippled/releases/tag/1.11.0) was released on Jun 20, 2023.
### Breaking changes in 1.11
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. (https://github.com/XRPLF/rippled/pull/4291)
- The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`.
- Removed the acceptance of seeds or public keys in place of account addresses. (https://github.com/XRPLF/rippled/pull/4404)
- This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network).
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. (https://github.com/XRPLF/rippled/pull/4398)
- If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406))
- There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence.
- This is potentially a breaking change if clients have logic for determining whether an account can be deleted.
- NetworkID
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. (https://github.com/XRPLF/rippled/pull/4370)
- This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher.
- The field must be omitted for Mainnet, so there is no change for Mainnet users.
- There are three new local error codes:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`: a `NetworkID` is present but the chain's network ID is less than 1025. Remove the field from the transaction, and try again.
- `telREQUIRES_NETWORK_ID`: a `NetworkID` is required, but is not present. Add the field to the transaction, and try again.
- `telWRONG_NETWORK`: a `NetworkID` is specified, but it is for a different network. Submit the transaction to a different server which is connected to the correct network.
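On a sidechain whose network ID is 1025 or higher, the field sits alongside the other transaction common fields. The sketch below uses placeholder addresses, amount, and a hypothetical network ID of 1025:

```json
{
  "TransactionType": "Payment",
  "Account": "rEXAMPLEsenderXXXXXXXXXXXXXXXXXXXX",
  "Destination": "rEXAMPLEreceiverXXXXXXXXXXXXXXXXXX",
  "Amount": "1000000",
  "NetworkID": 1025
}
```

Submitting this same transaction to a Mainnet server would fail with `telNETWORK_ID_MAKES_TX_NON_CANONICAL`, since Mainnet transactions must omit the field.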
### Additions and bug fixes in 1.11
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. (https://github.com/XRPLF/rippled/pull/4447)
- Added an `account_flags` object to the `account_info` method response. (https://github.com/XRPLF/rippled/pull/4459)
- Added `NFTokenPages` to the `account_objects` RPC. (https://github.com/XRPLF/rippled/pull/4352)
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. (https://github.com/XRPLF/rippled/pull/4361)
- If the `XRPFees` feature is enabled, the `fee_ref` field will be
removed from the [ledger subscription stream](https://xrpl.org/subscribe.html#ledger-stream), because it will no longer
have any meaning.
# Unit tests for API changes
The following information is useful to developers contributing to this project:
The purpose of unit tests is to catch bugs and prevent regressions. In general, it often makes sense to create a test function when there is a breaking change to the API. For APIs that have changed in a new API version, the tests should be modified so that both the prior version and the new version are properly tested.
To take one example: for `account_info` version 1, WebSocket and JSON-RPC behavior should be tested. The latest API version, i.e. API version 2, should be tested over WebSocket, JSON-RPC, and command line.
> These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md).
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the tagged releases.
For the latest release candidate, choose the `release` branch.
```
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
git checkout develop
```
## Minimum Requirements
See [System Requirements](https://xrpl.org/system-requirements.html).
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 2.x](https://conan.io/downloads)
- [CMake 3.16](https://cmake.org/download/)
`xahaud` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 11 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
### Linux
The Ubuntu operating system has received the highest level of
quality assurance, testing, and support.
Here are [sample instructions for setting up a C++ development environment on Linux](./docs/build/environment.md#linux).
### Mac
Many xahaud engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on macOS](./docs/build/environment.md#macos).
### Windows
We don't recommend Windows for `xahaud` production at this time. As of
November 2025, Ubuntu has the highest level of quality assurance, testing,
and support.
Windows developers should use Visual Studio 2019. `xahaud` isn't
compatible with [Boost](https://www.boost.org/) 1.78 or 1.79, and Conan
can't build earlier Boost versions.
## Steps
### Set Up Conan
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python, Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
These instructions assume a basic familiarity with Conan and CMake.
If you are unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official [Getting Started][3] walkthrough.
You'll need at least one Conan profile:
```
conan profile detect --force
```
Update the compiler settings:
For Conan 2, you can edit the profile directly at `~/.conan2/profiles/default`,
or use the Conan CLI. Ensure C++20 is set:
```
conan profile show
```
Look for `compiler.cppstd=20` in the output. If it's not set, edit the profile:
```
# Edit ~/.conan2/profiles/default and ensure these settings exist:
[settings]
compiler.cppstd=20
```
Configure Conan (1.x only) to use recipe revisions:
```
conan config set general.revisions_enabled=1
```
**Linux** developers will commonly have a default Conan profile that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
# In ~/.conan2/profiles/default, ensure:
[settings]
compiler.libcxx=libstdc++11
```
Ensure inter-operability between `boost::string_view` and `std::string_view` types:
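One way to do this — assuming the `BOOST_BEAST_USE_STD_STRING_VIEW` define that rippled's build docs have historically used for this purpose; adjust for your setup — is to add the flag to your default profile's `[conf]` section:

```
# In ~/.conan2/profiles/default:
[conf]
tools.build:cxxflags=["-DBOOST_BEAST_USE_STD_STRING_VIEW"]
```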
If you have other flags in the `conf.tools.build` or `env.CXXFLAGS` sections, make sure to retain the existing flags and append the new ones. You can check them with:
```
conan profile show default
```
**Windows** developers may need to use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
Windows developers must also build `xahaud` and its dependencies for the x64
architecture.
```
# In ~/.conan2/profiles/default, ensure:
[settings]
arch=x86_64
```
### Multiple compilers
```
# In ~/.conan2/profiles/default, add under [conf] section:
```