Compare commits


3 Commits

Author SHA1 Message Date
Vito
1c2be05cdc fix: Amendment-gate hybrid (1+r)^n-1 evaluator on fixCleanup3_2_0
Commit 26f82a2a1 swapped `computePaymentFactor` from the cancellation-prone
`power(1+r, n) - 1` to `computePowerMinusOneHybrid`, but did not
amendment-gate the swap. Because the change alters the bit-pattern of
amortization output at near-zero rates, applying it without a gate would
fork consensus on any ledger with a long-running near-zero-rate loan.

Gate the hybrid path on `fixCleanup3_2_0`. When the amendment is not
enabled, `computePaymentFactor` falls back to the original direct
subtraction, preserving pre-fix output bit-exactly. When enabled, the
hybrid path is used and the near-zero-rate regression tests
(`testBugInterestDueDeltaCrash`, `testFullLifecycleVaultPnLNearZeroRate`)
pass.

Threads `Rules const&` from the public lending API down to
`computePaymentFactor`:

- `computePaymentFactor`, `loanPeriodicPayment`,
  `loanPrincipalFromPeriodicPayment` (leaves)
- `computeTheoreticalLoanState`, `computeFullPayment`,
  `computePaymentComponents` (mid-level)
- `computeLoanProperties` (both overloads), `tryOverpayment`,
  `doOverpayment` (top-level)

Public callers (`LoanSet::doApply`, `loanMakePayment`) pass
`view.rules()`. Tests pass `env.current()->rules()`.
2026-04-29 12:17:59 +02:00
Vito
de7a14a28b Merge remote-tracking branch 'origin/develop' into tapanito/lending-negative-interest 2026-04-29 12:03:17 +02:00
Vito
26f82a2a16 fix: Numerically-stable (1+r)^n-1 in computePaymentFactor
At near-zero `periodicRate`, the direct subtraction
`power(1 + r, n) - 1` suffers catastrophic cancellation: `(1+r)^n`
rounds to a value very close to 1 and the subtraction discards most
of Number's 19-digit large-mantissa precision. The resulting
amortization factor is inaccurate enough that
`loanPrincipalFromPeriodicPayment` returns a principal greater than
`periodicPayment * paymentsRemaining`, which propagates into
`computeTheoreticalLoanState` as a negative `interestDue` and fires
the `interest due delta not greater than outstanding` assertion in
`computePaymentComponents` (testBugInterestDueDeltaCrash).

The same numerical defect causes systematic underpayment of
interest in release builds at the bug regime. On a $1B loan with
the test parameters, closed-form charges only $0.0588 of interest
versus the mathematically correct $0.3805 (verified independently
via 50-digit Decimal arithmetic). Linear scaling: ~$321 underpaid
per $1T of principal.

Replace `raisedRate - 1` with a hybrid evaluator:

- When `r * paymentsRemaining >= 1e-9`, use the closed-form
  `power(1+r, n) - 1`. At Number's 19-digit mantissa this still
  retains ~10 sig digits post-subtraction and is ~30-500x faster
  than the binomial expansion.
- Below the threshold, expand `(1+r)^n - 1 = sum C(n,k) r^k` with
  early termination once terms fall below Number precision.

The hybrid takes the closed-form path at every rate covered by
existing fixtures, so output is bit-identical to pre-fix code at
moderate rates (no fixture drift).

Also drops the now-unused `computeRaisedRate` from the lending
helpers (its only caller was `computePaymentFactor`).

Test coverage:

- testComputePowerMinusOne / testComputePowerMinusOneHybrid:
  direct unit tests for both new helpers, including property
  checks against `(1+r)^2 = 1 + 2r + r^2` and `(1+r)^3 = 1 + 3r +
  3r^2 + r^3`, threshold-boundary verification at exactly
  r*n = 1e-9, and large-n early-termination.
- testLoanPrincipalFromPeriodicPaymentNearZeroRate: regression
  guard, asserts `principal <= payment * n` at near-zero rate.
- testComputeTheoreticalLoanStateNearZeroRate: regression guard,
  asserts `interestDue >= 0` and `principalOutstanding <=
  valueOutstanding`.
- testBugInterestDueDeltaCrash: end-to-end reproduction of the
  original assertion abort, now passes cleanly.
- testFullLifecycleVaultPnLNearZeroRate: integration test running
  a $1B loan to completion, verifies the vault collects the
  economically-correct interest matching the 50-digit Decimal
  reference within sub-microcent tolerance, plus self-consistency
  (vault gain == TVO - principal at LoanSet) and conservation
  (borrower outflow == vault gain + broker gain).
2026-04-27 18:43:00 +02:00
39 changed files with 887 additions and 976 deletions

View File

@@ -93,6 +93,7 @@ test.core > xrpl.basics
test.core > xrpl.core
test.core > xrpld.core
test.core > xrpl.json
test.core > xrpl.protocol
test.core > xrpl.rdb
test.core > xrpl.server
test.csf > xrpl.basics

View File

@@ -11,7 +11,6 @@
#include <limits>
#include <stdexcept>
#include <string>
#include <string_view>
#include <type_traits>
#include <vector>
@@ -232,11 +231,4 @@ makeSlice(std::basic_string<char, Traits, Alloc> const& s)
return Slice(s.data(), s.size());
}
template <class Traits>
Slice
makeSlice(std::basic_string_view<char, Traits> s)
{
return Slice(s.data(), s.size());
}
} // namespace xrpl

View File

@@ -184,6 +184,7 @@ checkLoanGuards(
LoanState
computeTheoreticalLoanState(
Rules const& rules,
Number const& periodicPayment,
Number const& periodicRate,
std::uint32_t const paymentRemaining,
@@ -353,6 +354,7 @@ struct LoanStateDeltas
Expected<std::pair<LoanPaymentParts, LoanProperties>, TER>
tryOverpayment(
Rules const& rules,
Asset const& asset,
std::int32_t loanScale,
ExtendedPaymentComponents const& overpaymentComponents,
@@ -363,11 +365,17 @@ tryOverpayment(
TenthBips16 const managementFeeRate,
beast::Journal j);
Number
computeRaisedRate(Number const& periodicRate, std::uint32_t paymentsRemaining);
[[nodiscard]] Number
computePowerMinusOne(Number const& periodicRate, std::uint32_t paymentsRemaining);
Number
computePaymentFactor(Number const& periodicRate, std::uint32_t paymentsRemaining);
[[nodiscard]] Number
computePowerMinusOneHybrid(Number const& periodicRate, std::uint32_t paymentsRemaining);
[[nodiscard]] Number
computePaymentFactor(
Rules const& rules,
Number const& periodicRate,
std::uint32_t paymentsRemaining);
std::pair<Number, Number>
computeInterestAndFeeParts(
@@ -378,12 +386,14 @@ computeInterestAndFeeParts(
Number
loanPeriodicPayment(
Rules const& rules,
Number const& principalOutstanding,
Number const& periodicRate,
std::uint32_t paymentsRemaining);
Number
loanPrincipalFromPeriodicPayment(
Rules const& rules,
Number const& periodicPayment,
Number const& periodicRate,
std::uint32_t paymentsRemaining);
@@ -415,6 +425,7 @@ computeOverpaymentComponents(
PaymentComponents
computePaymentComponents(
Rules const& rules,
Asset const& asset,
std::int32_t scale,
Number const& totalValueOutstanding,
@@ -438,6 +449,7 @@ operator+(LoanState const& lhs, detail::LoanStateDeltas const& rhs);
LoanProperties
computeLoanProperties(
Rules const& rules,
Asset const& asset,
Number const& principalOutstanding,
TenthBips32 interestRate,
@@ -448,6 +460,7 @@ computeLoanProperties(
LoanProperties
computeLoanProperties(
Rules const& rules,
Asset const& asset,
Number const& principalOutstanding,
Number const& periodicRate,

View File

@@ -246,15 +246,7 @@ message TMGetObjectByHash {
message TMLedgerNode {
required bytes nodedata = 1;
// Used when protocol version <2.3. Not set for ledger base data.
optional bytes nodeid = 2;
// Used when protocol version >=2.3. Neither value is set for ledger base data.
oneof reference {
bytes id = 3; // Set for inner nodes.
uint32 depth = 4; // Set for leaf nodes.
}
optional bytes nodeid = 2; // missing for ledger base data
}
enum TMLedgerInfoType {

View File

@@ -16,7 +16,6 @@
#include <set>
#include <stack>
#include <tuple>
#include <vector>
namespace xrpl {
@@ -74,22 +73,6 @@ enum class SHAMapState {
See https://en.wikipedia.org/wiki/Merkle_tree
*/
/** Holds a SHAMap node's identity, serialized data, and leaf status.
Used by getNodeFat to return node data for peer synchronization.
*/
struct SHAMapNodeData
{
SHAMapNodeID nodeID;
Blob data;
bool isLeaf;
SHAMapNodeData(SHAMapNodeID const& id, Blob d, bool leaf)
: nodeID(id), data(std::move(d)), isLeaf(leaf)
{
}
};
class SHAMap
{
private:
@@ -102,7 +85,7 @@ private:
/** The sequence of the ledger that this map references, if any. */
std::uint32_t ledgerSeq_ = 0;
SHAMapTreeNodePtr root_;
intr_ptr::SharedPtr<SHAMapTreeNode> root_;
mutable SHAMapState state_;
SHAMapType const type_;
bool backed_ = true; // Map is backed by the database
@@ -267,10 +250,10 @@ public:
std::vector<std::pair<SHAMapNodeID, uint256>>
getMissingNodes(int maxNodes, SHAMapSyncFilter* filter);
[[nodiscard]] bool
bool
getNodeFat(
SHAMapNodeID const& wanted,
std::vector<SHAMapNodeData>& data,
std::vector<std::pair<SHAMapNodeID, Blob>>& data,
bool fatLeaves,
std::uint32_t depth) const;
@@ -297,42 +280,10 @@ public:
void
serializeRoot(Serializer& s) const;
/** Add a root node to the SHAMap during synchronization.
*
* This function is used when receiving the root node of a SHAMap from a peer during ledger
* synchronization. The node must already have been deserialized.
*
* @param hash The expected hash of the root node.
* @param rootNode A deserialized root node to add.
* @param filter Optional sync filter to track received nodes.
* @return Status indicating whether the node was useful, duplicate, or invalid.
*
* @note This function expects the rootNode to be a valid, deserialized SHAMapTreeNode. The
* caller is responsible for deserialization and basic validation before calling this
* function.
*/
SHAMapAddNode
addRootNode(SHAMapHash const& hash, SHAMapTreeNodePtr rootNode, SHAMapSyncFilter const* filter);
/** Add a known node at a specific position in the SHAMap during synchronization.
*
* This function is used when receiving nodes from peers during ledger synchronization. The node
* is inserted at the position specified by nodeID. The node must already have been
* deserialized.
*
* @param nodeID The position in the tree where this node belongs.
* @param treeNode A deserialized tree node to add.
* @param filter Optional sync filter to track received nodes.
* @return Status indicating whether the node was useful, duplicate, or invalid.
*
* @note This function expects that the caller has already validated that the nodeID is
* consistent with the node's content.
*/
addRootNode(SHAMapHash const& hash, Slice const& rootNode, SHAMapSyncFilter* filter);
SHAMapAddNode
addKnownNode(
SHAMapNodeID const& nodeID,
SHAMapTreeNodePtr treeNode,
SHAMapSyncFilter const* filter);
addKnownNode(SHAMapNodeID const& nodeID, Slice const& rawNode, SHAMapSyncFilter* filter);
// status functions
void
@@ -375,32 +326,36 @@ public:
invariants() const;
private:
using SharedPtrNodeStack = std::stack<std::pair<SHAMapTreeNodePtr, SHAMapNodeID>>;
using SharedPtrNodeStack =
std::stack<std::pair<intr_ptr::SharedPtr<SHAMapTreeNode>, SHAMapNodeID>>;
using DeltaRef =
std::pair<boost::intrusive_ptr<SHAMapItem const>, boost::intrusive_ptr<SHAMapItem const>>;
// tree node cache operations
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
cacheLookup(SHAMapHash const& hash) const;
void
canonicalize(SHAMapHash const& hash, SHAMapTreeNodePtr&) const;
canonicalize(SHAMapHash const& hash, intr_ptr::SharedPtr<SHAMapTreeNode>&) const;
// database operations
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
fetchNodeFromDB(SHAMapHash const& hash) const;
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
fetchNodeNT(SHAMapHash const& hash) const;
SHAMapTreeNodePtr
fetchNodeNT(SHAMapHash const& hash, SHAMapSyncFilter const* filter) const;
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
fetchNodeNT(SHAMapHash const& hash, SHAMapSyncFilter* filter) const;
intr_ptr::SharedPtr<SHAMapTreeNode>
fetchNode(SHAMapHash const& hash) const;
SHAMapTreeNodePtr
checkFilter(SHAMapHash const& hash, SHAMapSyncFilter const* filter) const;
intr_ptr::SharedPtr<SHAMapTreeNode>
checkFilter(SHAMapHash const& hash, SHAMapSyncFilter* filter) const;
/** Update hashes up to the root */
void
dirtyUp(SharedPtrNodeStack& stack, uint256 const& target, SHAMapTreeNodePtr terminal);
dirtyUp(
SharedPtrNodeStack& stack,
uint256 const& target,
intr_ptr::SharedPtr<SHAMapTreeNode> terminal);
/** Walk towards the specified id, returning the node. Caller must check
if the return is nullptr, and if not, if the node->peekItem()->key() ==
@@ -422,21 +377,25 @@ private:
preFlushNode(intr_ptr::SharedPtr<Node> node) const;
/** write and canonicalize modified node */
SHAMapTreeNodePtr
writeNode(NodeObjectType t, SHAMapTreeNodePtr node) const;
intr_ptr::SharedPtr<SHAMapTreeNode>
writeNode(NodeObjectType t, intr_ptr::SharedPtr<SHAMapTreeNode> node) const;
// returns the first item at or below this node
SHAMapLeafNode*
firstBelow(SHAMapTreeNodePtr node, SharedPtrNodeStack& stack, int branch = 0) const;
firstBelow(intr_ptr::SharedPtr<SHAMapTreeNode>, SharedPtrNodeStack& stack, int branch = 0)
const;
// returns the last item at or below this node
SHAMapLeafNode*
lastBelow(SHAMapTreeNodePtr node, SharedPtrNodeStack& stack, int branch = branchFactor) const;
lastBelow(
intr_ptr::SharedPtr<SHAMapTreeNode> node,
SharedPtrNodeStack& stack,
int branch = branchFactor) const;
// helper function for firstBelow and lastBelow
SHAMapLeafNode*
belowHelper(
SHAMapTreeNodePtr node,
intr_ptr::SharedPtr<SHAMapTreeNode> node,
SharedPtrNodeStack& stack,
int branch,
std::tuple<int, std::function<bool(int)>, std::function<void(int&)>> const& loopParams)
@@ -448,19 +407,20 @@ private:
descend(SHAMapInnerNode*, int branch) const;
SHAMapTreeNode*
descendThrow(SHAMapInnerNode*, int branch) const;
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
descend(SHAMapInnerNode&, int branch) const;
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
descendThrow(SHAMapInnerNode&, int branch) const;
// Descend with filter
// If pending, callback is called as if it called fetchNodeNT
using descendCallback = std::function<void(SHAMapTreeNodePtr, SHAMapHash const&)>;
using descendCallback =
std::function<void(intr_ptr::SharedPtr<SHAMapTreeNode>, SHAMapHash const&)>;
SHAMapTreeNode*
descendAsync(
SHAMapInnerNode* parent,
int branch,
SHAMapSyncFilter const* filter,
SHAMapSyncFilter* filter,
bool& pending,
descendCallback&&) const;
@@ -469,11 +429,11 @@ private:
SHAMapInnerNode* parent,
SHAMapNodeID const& parentID,
int branch,
SHAMapSyncFilter const* filter) const;
SHAMapSyncFilter* filter) const;
// Non-storing
// Does not hook the returned node to its parent
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
descendNoStore(SHAMapInnerNode&, int branch) const;
/** If there is only one leaf below this node, get its contents */
@@ -535,10 +495,10 @@ private:
// nodes we may have acquired from deferred reads
using DeferredNode = std::tuple<
SHAMapInnerNode*, // parent node
SHAMapNodeID, // parent node ID
int, // branch
SHAMapTreeNodePtr>; // node
SHAMapInnerNode*, // parent node
SHAMapNodeID, // parent node ID
int, // branch
intr_ptr::SharedPtr<SHAMapTreeNode>>; // node
int deferred_;
std::mutex deferLock_;
@@ -564,7 +524,7 @@ private:
gmn_ProcessDeferredReads(MissingNodes&);
// fetch from DB helper function
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
finishFetch(SHAMapHash const& hash, std::shared_ptr<NodeObject> const& object) const;
};

View File

@@ -27,7 +27,7 @@ public:
{
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
clone(std::uint32_t cowid) const final
{
return intr_ptr::make_shared<SHAMapAccountStateLeafNode>(item_, cowid, hash_);

View File

@@ -87,7 +87,7 @@ public:
void
partialDestructor() override;
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
clone(std::uint32_t cowid) const override;
SHAMapNodeType
@@ -121,19 +121,19 @@ public:
getChildHash(int m) const;
void
setChild(int m, SHAMapTreeNodePtr child);
setChild(int m, intr_ptr::SharedPtr<SHAMapTreeNode> child);
void
shareChild(int m, SHAMapTreeNodePtr const& child);
shareChild(int m, intr_ptr::SharedPtr<SHAMapTreeNode> const& child);
SHAMapTreeNode*
getChildPointer(int branch);
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
getChild(int branch);
SHAMapTreeNodePtr
canonicalizeChild(int branch, SHAMapTreeNodePtr node);
intr_ptr::SharedPtr<SHAMapTreeNode>
canonicalizeChild(int branch, intr_ptr::SharedPtr<SHAMapTreeNode> node);
// sync functions
bool
@@ -161,10 +161,10 @@ public:
void
invariants(bool is_root = false) const override;
static SHAMapTreeNodePtr
static intr_ptr::SharedPtr<SHAMapTreeNode>
makeFullInner(Slice data, SHAMapHash const& hash, bool hashValid);
static SHAMapTreeNodePtr
static intr_ptr::SharedPtr<SHAMapTreeNode>
makeCompressedInner(Slice data);
};

View File

@@ -166,6 +166,4 @@ private:
makeTransactionWithMeta(Slice data, SHAMapHash const& hash, bool hashValid);
};
using SHAMapTreeNodePtr = intr_ptr::SharedPtr<SHAMapTreeNode>;
} // namespace xrpl

View File

@@ -26,7 +26,7 @@ public:
{
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
clone(std::uint32_t cowid) const final
{
return intr_ptr::make_shared<SHAMapTxLeafNode>(item_, cowid, hash_);

View File

@@ -27,7 +27,7 @@ public:
{
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
clone(std::uint32_t cowid) const override
{
return intr_ptr::make_shared<SHAMapTxPlusMetaLeafNode>(item_, cowid, hash_);

View File

@@ -11,5 +11,5 @@ using TreeNodeCache = TaggedCache<
SHAMapTreeNode,
/*IsKeyCache*/ false,
intr_ptr::SharedWeakUnionPtr<SHAMapTreeNode>,
SHAMapTreeNodePtr>;
intr_ptr::SharedPtr<SHAMapTreeNode>>;
} // namespace xrpl

View File

@@ -148,7 +148,7 @@ public:
/** Get the number of elements in each array and a pointer to the start
of each array.
*/
[[nodiscard]] std::tuple<std::uint8_t, SHAMapHash*, SHAMapTreeNodePtr*>
[[nodiscard]] std::tuple<std::uint8_t, SHAMapHash*, intr_ptr::SharedPtr<SHAMapTreeNode>*>
getHashesAndChildren() const;
/** Get the `hashes` array */
@@ -156,7 +156,7 @@ public:
getHashes() const;
/** Get the `children` array */
[[nodiscard]] SHAMapTreeNodePtr*
[[nodiscard]] intr_ptr::SharedPtr<SHAMapTreeNode>*
getChildren() const;
/** Call the `f` callback for all 16 (branchFactor) branches - even if

View File

@@ -25,7 +25,8 @@ static_assert(
// Terminology: A chunk is the memory being allocated from a block. A block
// contains multiple chunks. This is the terminology the boost documentation
// uses. Pools use "Simple Segregated Storage" as their storage format.
constexpr size_t elementSizeBytes = (sizeof(SHAMapHash) + sizeof(SHAMapTreeNodePtr));
constexpr size_t elementSizeBytes =
(sizeof(SHAMapHash) + sizeof(intr_ptr::SharedPtr<SHAMapTreeNode>));
constexpr size_t blockSizeBytes = kilobytes(512);
@@ -362,7 +363,8 @@ inline TaggedPointer::TaggedPointer(
// keep
new (&dstHashes[dstIndex]) SHAMapHash{srcHashes[srcIndex]};
new (&dstChildren[dstIndex]) SHAMapTreeNodePtr{std::move(srcChildren[srcIndex])};
new (&dstChildren[dstIndex])
intr_ptr::SharedPtr<SHAMapTreeNode>{std::move(srcChildren[srcIndex])};
++dstIndex;
++srcIndex;
}
@@ -373,7 +375,7 @@ inline TaggedPointer::TaggedPointer(
if (dstIsDense)
{
new (&dstHashes[dstIndex]) SHAMapHash{};
new (&dstChildren[dstIndex]) SHAMapTreeNodePtr{};
new (&dstChildren[dstIndex]) intr_ptr::SharedPtr<SHAMapTreeNode>{};
++dstIndex;
}
}
@@ -381,7 +383,7 @@ inline TaggedPointer::TaggedPointer(
{
// add
new (&dstHashes[dstIndex]) SHAMapHash{};
new (&dstChildren[dstIndex]) SHAMapTreeNodePtr{};
new (&dstChildren[dstIndex]) intr_ptr::SharedPtr<SHAMapTreeNode>{};
++dstIndex;
if (srcIsDense)
{
@@ -394,7 +396,7 @@ inline TaggedPointer::TaggedPointer(
if (dstIsDense)
{
new (&dstHashes[dstIndex]) SHAMapHash{};
new (&dstChildren[dstIndex]) SHAMapTreeNodePtr{};
new (&dstChildren[dstIndex]) intr_ptr::SharedPtr<SHAMapTreeNode>{};
++dstIndex;
}
if (srcIsDense)
@@ -411,7 +413,7 @@ inline TaggedPointer::TaggedPointer(
for (int i = dstIndex; i < dstNumAllocated; ++i)
{
new (&dstHashes[i]) SHAMapHash{};
new (&dstChildren[i]) SHAMapTreeNodePtr{};
new (&dstChildren[i]) intr_ptr::SharedPtr<SHAMapTreeNode>{};
}
*this = std::move(dst);
}
@@ -431,7 +433,7 @@ inline TaggedPointer::TaggedPointer(
// allocate hashes and children, but do not run constructors
TaggedPointer newHashesAndChildren{RawAllocateTag{}, toAllocate};
SHAMapHash *newHashes, *oldHashes;
SHAMapTreeNodePtr *newChildren, *oldChildren;
intr_ptr::SharedPtr<SHAMapTreeNode>*newChildren, *oldChildren;
std::uint8_t newNumAllocated;
// structured bindings can't be captured in c++ 17; use tie instead
std::tie(newNumAllocated, newHashes, newChildren) = newHashesAndChildren.getHashesAndChildren();
@@ -442,7 +444,8 @@ inline TaggedPointer::TaggedPointer(
// new arrays are dense, old arrays are sparse
iterNonEmptyChildIndexes(isBranch, [&](auto branchNum, auto indexNum) {
new (&newHashes[branchNum]) SHAMapHash{oldHashes[indexNum]};
new (&newChildren[branchNum]) SHAMapTreeNodePtr{std::move(oldChildren[indexNum])};
new (&newChildren[branchNum])
intr_ptr::SharedPtr<SHAMapTreeNode>{std::move(oldChildren[indexNum])};
});
// Run the constructors for the remaining elements
for (int i = 0; i < SHAMapInnerNode::branchFactor; ++i)
@@ -450,7 +453,7 @@ inline TaggedPointer::TaggedPointer(
if ((1 << i) & isBranch)
continue;
new (&newHashes[i]) SHAMapHash{};
new (&newChildren[i]) SHAMapTreeNodePtr{};
new (&newChildren[i]) intr_ptr::SharedPtr<SHAMapTreeNode>{};
}
}
else
@@ -460,14 +463,14 @@ inline TaggedPointer::TaggedPointer(
iterNonEmptyChildIndexes(isBranch, [&](auto branchNum, auto indexNum) {
new (&newHashes[curCompressedIndex]) SHAMapHash{oldHashes[indexNum]};
new (&newChildren[curCompressedIndex])
SHAMapTreeNodePtr{std::move(oldChildren[indexNum])};
intr_ptr::SharedPtr<SHAMapTreeNode>{std::move(oldChildren[indexNum])};
++curCompressedIndex;
});
// Run the constructors for the remaining elements
for (int i = curCompressedIndex; i < newNumAllocated; ++i)
{
new (&newHashes[i]) SHAMapHash{};
new (&newChildren[i]) SHAMapTreeNodePtr{};
new (&newChildren[i]) intr_ptr::SharedPtr<SHAMapTreeNode>{};
}
}
@@ -481,7 +484,7 @@ inline TaggedPointer::TaggedPointer(std::uint8_t numChildren)
for (std::size_t i = 0; i < numAllocated; ++i)
{
new (&hashes[i]) SHAMapHash{};
new (&children[i]) SHAMapTreeNodePtr{};
new (&children[i]) intr_ptr::SharedPtr<SHAMapTreeNode>{};
}
}
@@ -519,13 +522,14 @@ TaggedPointer::isDense() const
return (tp_ & tagMask) == boundaries.size() - 1;
}
[[nodiscard]] inline std::tuple<std::uint8_t, SHAMapHash*, SHAMapTreeNodePtr*>
[[nodiscard]] inline std::tuple<std::uint8_t, SHAMapHash*, intr_ptr::SharedPtr<SHAMapTreeNode>*>
TaggedPointer::getHashesAndChildren() const
{
auto const [tag, ptr] = decode();
auto const hashes = reinterpret_cast<SHAMapHash*>(ptr);
std::uint8_t numAllocated = boundaries[tag];
auto const children = reinterpret_cast<SHAMapTreeNodePtr*>(hashes + numAllocated);
auto const children =
reinterpret_cast<intr_ptr::SharedPtr<SHAMapTreeNode>*>(hashes + numAllocated);
return {numAllocated, hashes, children};
};
@@ -535,7 +539,7 @@ TaggedPointer::getHashes() const
return reinterpret_cast<SHAMapHash*>(tp_ & ptrMask);
};
[[nodiscard]] inline SHAMapTreeNodePtr*
[[nodiscard]] inline intr_ptr::SharedPtr<SHAMapTreeNode>*
TaggedPointer::getChildren() const
{
auto [unused1, unused2, result] = getHashesAndChildren();

View File

@@ -110,14 +110,90 @@ LoanStateDeltas::nonNegative()
managementFee = numZero;
}
/* Computes (1 + periodicRate)^paymentsRemaining for amortization calculations.
/* Computes (1 + r)^n - 1 accurately even for near-zero r, where direct
* subtraction of `power(1 + r, n) - 1` suffers catastrophic cancellation.
*
* Equation (5) from XLS-66 spec, Section A-2 Equation Glossary
* The binomial expansion gives
* (1 + r)^n - 1 = sum_{k=1}^{n} C(n,k) r^k
* = nr + C(n,2) r^2 + ... + r^n
* which is a sum of positive terms when r >= 0, avoiding cancellation.
* Each term is computed from the previous via
* term_{k+1} = term_k * r * (n - k) / (k + 1)
*
* The loop terminates early once the next term is below Number precision.
*
* Precondition: r >= 0. Negative rates would produce alternating-sign
* terms; the early-termination check (next == sum) could exit before the
* series stabilizes, yielding an incorrect result. The lending protocol
* derives rates from `TenthBips32` (unsigned), so this is always met in
* production paths.
*/
Number
computeRaisedRate(Number const& periodicRate, std::uint32_t paymentsRemaining)
computePowerMinusOne(Number const& periodicRate, std::uint32_t paymentsRemaining)
{
return power(1 + periodicRate, paymentsRemaining);
XRPL_ASSERT_PARTS(
periodicRate >= beast::zero,
"xrpl::detail::computePowerMinusOne",
"periodicRate is non-negative");
if (paymentsRemaining == 0 || periodicRate == beast::zero)
return numZero;
// k = 1 term: C(n, 1) * r = n * r
Number term = paymentsRemaining * periodicRate;
Number sum = term;
for (std::uint32_t k = 1; k < paymentsRemaining; ++k)
{
// term_{k+1} from term_k: multiply by r * (n - k) / (k + 1)
term = term * periodicRate * (paymentsRemaining - k) / (k + 1);
Number const next = sum + term;
// adding this term fell below Number's precision
if (next == sum)
break;
sum = next;
}
return sum;
}
/* Hybrid evaluator of (1 + r)^n - 1.
*
* The closed-form `power(1 + r, n) - 1` loses sig digits to cancellation
* when `r * n` is small: the result `~r*n` sits well below the `1` that
* dominates `(1+r)^n`, so most of Number's stored precision is consumed
* by the leading `1`. The lending code path uses Number's large-mantissa
* range (mantissa in `[10^18, 10^19)` — 19 significant digits).
*
* Repeated squaring in `power(...)` contributes roughly `log2(n)` ULPs of
* error at the scale of `(1+r)^n` (~1 for small `r*n`), so the absolute
* error after the subtraction is around `log2(n) * 1e-18`. To retain at
* least ~10 significant digits of `(1+r)^n - 1`, we need
* `r*n >= log2(n) * 1e-18 * 1e9 ~ 1e-9` across realistic `n`. A threshold
* of `1e-9` preserves the closed-form path for any rate the lending code
* actually sees in practice (fixtures at moderate rates are bit-exact),
* while routing the pathological near-zero regime through the binomial
* expansion where cancellation is severe.
*/
Number
computePowerMinusOneHybrid(Number const& periodicRate, std::uint32_t paymentsRemaining)
{
XRPL_ASSERT_PARTS(
periodicRate >= beast::zero,
"xrpl::detail::computePowerMinusOneHybrid",
"periodicRate is non-negative");
if (paymentsRemaining == 0 || periodicRate == beast::zero)
return numZero;
// Threshold 1e-9 retains ~10 sig digits of (1+r)^n - 1 against
// Number's 19-digit mantissa: the leading "1" of (1+r)^n consumes
// ~log10(1/(r*n)) digits before the subtraction. Above this point
// closed form is accurate and ~30-500x faster than the binomial
// expansion.
Number const cancellationThreshold{1, -9};
if (paymentsRemaining * periodicRate >= cancellationThreshold)
return power(1 + periodicRate, paymentsRemaining) - 1;
return computePowerMinusOne(periodicRate, paymentsRemaining);
}
/* Computes the payment factor used in standard amortization formulas.
@@ -126,7 +202,10 @@ computeRaisedRate(Number const& periodicRate, std::uint32_t paymentsRemaining)
* Equation (6) from XLS-66 spec, Section A-2 Equation Glossary
*/
Number
computePaymentFactor(Number const& periodicRate, std::uint32_t paymentsRemaining)
computePaymentFactor(
Rules const& rules,
Number const& periodicRate,
std::uint32_t paymentsRemaining)
{
if (paymentsRemaining == 0)
return numZero;
@@ -135,7 +214,19 @@ computePaymentFactor(Number const& periodicRate, std::uint32_t paymentsRemaining
if (periodicRate == beast::zero)
return Number{1} / paymentsRemaining;
Number const raisedRate = computeRaisedRate(periodicRate, paymentsRemaining);
if (rules.enabled(fixCleanup3_2_0))
{
Number const raisedRateMinusOne =
computePowerMinusOneHybrid(periodicRate, paymentsRemaining);
Number const raisedRate = 1 + raisedRateMinusOne;
return (periodicRate * raisedRate) / raisedRateMinusOne;
}
// Pre-fixCleanup3_2_0: direct subtraction `(1+r)^n - 1` suffers
// catastrophic cancellation at near-zero rates. Retained for
// amendment-gated bit-exact pre-fix behavior.
Number const raisedRate = power(1 + periodicRate, paymentsRemaining);
return (periodicRate * raisedRate) / (raisedRate - 1);
}
@@ -147,6 +238,7 @@ computePaymentFactor(Number const& periodicRate, std::uint32_t paymentsRemaining
*/
Number
loanPeriodicPayment(
Rules const& rules,
Number const& principalOutstanding,
Number const& periodicRate,
std::uint32_t paymentsRemaining)
@@ -158,7 +250,7 @@ loanPeriodicPayment(
if (periodicRate == beast::zero)
return principalOutstanding / paymentsRemaining;
return principalOutstanding * computePaymentFactor(periodicRate, paymentsRemaining);
return principalOutstanding * computePaymentFactor(rules, periodicRate, paymentsRemaining);
}
/* Reverse-calculates principal from periodic payment amount.
@@ -168,6 +260,7 @@ loanPeriodicPayment(
*/
Number
loanPrincipalFromPeriodicPayment(
Rules const& rules,
Number const& periodicPayment,
Number const& periodicRate,
std::uint32_t paymentsRemaining)
@@ -178,7 +271,7 @@ loanPrincipalFromPeriodicPayment(
if (periodicRate == 0)
return periodicPayment * paymentsRemaining;
return periodicPayment / computePaymentFactor(periodicRate, paymentsRemaining);
return periodicPayment / computePaymentFactor(rules, periodicRate, paymentsRemaining);
}
/*
@@ -402,6 +495,7 @@ doPayment(
*/
Expected<std::pair<LoanPaymentParts, LoanProperties>, TER>
tryOverpayment(
Rules const& rules,
Asset const& asset,
std::int32_t loanScale,
ExtendedPaymentComponents const& overpaymentComponents,
@@ -414,7 +508,7 @@ tryOverpayment(
{
// Calculate what the loan state SHOULD be theoretically (at full precision)
auto const theoreticalState = computeTheoreticalLoanState(
periodicPayment, periodicRate, paymentRemaining, managementFeeRate);
rules, periodicPayment, periodicRate, paymentRemaining, managementFeeRate);
// Calculate the accumulated rounding errors. These need to be preserved
// across the re-amortization to maintain consistency with the loan's
@@ -432,6 +526,7 @@ tryOverpayment(
// recalculates the periodic payment, total value, and management fees
// for the remaining payment schedule.
auto newLoanProperties = computeLoanProperties(
rules,
asset,
newTheoreticalPrincipal,
periodicRate,
@@ -445,9 +540,12 @@ tryOverpayment(
// Calculate what the new loan state should be with the new periodic payment
// including rounding errors
auto const newTheoreticalState =
computeTheoreticalLoanState(
newLoanProperties.periodicPayment, periodicRate, paymentRemaining, managementFeeRate) +
auto const newTheoreticalState = computeTheoreticalLoanState(
rules,
newLoanProperties.periodicPayment,
periodicRate,
paymentRemaining,
managementFeeRate) +
errors;
JLOG(j.debug()) << "new theoretical value: " << newTheoreticalState.valueOutstanding
@@ -578,6 +676,7 @@ tryOverpayment(
template <class NumberProxy>
Expected<LoanPaymentParts, TER>
doOverpayment(
Rules const& rules,
Asset const& asset,
std::int32_t loanScale,
ExtendedPaymentComponents const& overpaymentComponents,
@@ -606,6 +705,7 @@ doOverpayment(
// Attempt to re-amortize the loan with the overpayment applied.
// This modifies the temporary copies, leaving the proxies unchanged.
auto const ret = tryOverpayment(
rules,
asset,
loanScale,
overpaymentComponents,
@@ -819,8 +919,8 @@ computeFullPayment(
// Calculate the theoretical principal based on the payment schedule.
// This theoretical (unrounded) value is used to compute interest and
// penalties accurately.
Number const theoreticalPrincipalOutstanding =
loanPrincipalFromPeriodicPayment(periodicPayment, periodicRate, paymentRemaining);
Number const theoreticalPrincipalOutstanding = loanPrincipalFromPeriodicPayment(
view.rules(), periodicPayment, periodicRate, paymentRemaining);
// Full payment interest includes both accrued interest (time since last
// payment) and prepayment penalty (for closing early).
@@ -924,6 +1024,7 @@ PaymentComponents::trackedInterestPart() const
*/
PaymentComponents
computePaymentComponents(
Rules const& rules,
Asset const& asset,
std::int32_t scale,
Number const& totalValueOutstanding,
@@ -961,7 +1062,7 @@ computePaymentComponents(
// Calculate what the loan state SHOULD be after this payment (the target).
// This is computed at full precision using the theoretical amortization.
LoanState const trueTarget = computeTheoreticalLoanState(
periodicPayment, periodicRate, paymentRemaining - 1, managementFeeRate);
rules, periodicPayment, periodicRate, paymentRemaining - 1, managementFeeRate);
// Round the target to the loan's scale to match how actual loan values
// are stored.
@@ -1370,6 +1471,7 @@ computeFullPaymentInterest(
*/
LoanState
computeTheoreticalLoanState(
Rules const& rules,
Number const& periodicPayment,
Number const& periodicRate,
std::uint32_t const paymentRemaining,
@@ -1387,8 +1489,8 @@ computeTheoreticalLoanState(
// Equation (30) from XLS-66 spec, Section A-2 Equation Glossary
Number const totalValueOutstanding = periodicPayment * paymentRemaining;
Number const principalOutstanding =
detail::loanPrincipalFromPeriodicPayment(periodicPayment, periodicRate, paymentRemaining);
Number const principalOutstanding = detail::loanPrincipalFromPeriodicPayment(
rules, periodicPayment, periodicRate, paymentRemaining);
// Equation (31) from XLS-66 spec, Section A-2 Equation Glossary
Number const interestOutstandingGross = totalValueOutstanding - principalOutstanding;
@@ -1478,6 +1580,7 @@ computeManagementFee(
*/
LoanProperties
computeLoanProperties(
Rules const& rules,
Asset const& asset,
Number const& principalOutstanding,
TenthBips32 interestRate,
@@ -1489,6 +1592,7 @@ computeLoanProperties(
auto const periodicRate = loanPeriodicRate(interestRate, paymentInterval);
XRPL_ASSERT(interestRate == 0 || periodicRate > 0, "xrpl::computeLoanProperties : valid rate");
return computeLoanProperties(
rules,
asset,
principalOutstanding,
periodicRate,
@@ -1507,6 +1611,7 @@ computeLoanProperties(
*/
LoanProperties
computeLoanProperties(
Rules const& rules,
Asset const& asset,
Number const& principalOutstanding,
Number const& periodicRate,
@@ -1515,7 +1620,7 @@ computeLoanProperties(
std::int32_t minimumScale)
{
auto const periodicPayment =
detail::loanPeriodicPayment(principalOutstanding, periodicRate, paymentsRemaining);
detail::loanPeriodicPayment(rules, principalOutstanding, periodicRate, paymentsRemaining);
auto const [totalValueOutstanding, loanScale] = [&]() {
// only round up if there should be interest
@@ -1562,10 +1667,10 @@ computeLoanProperties(
// Compute the parts for the first payment. Ensure that the
// principal payment will actually change the principal.
auto const startingState = computeTheoreticalLoanState(
periodicPayment, periodicRate, paymentsRemaining, managementFeeRate);
rules, periodicPayment, periodicRate, paymentsRemaining, managementFeeRate);
auto const firstPaymentState = computeTheoreticalLoanState(
periodicPayment, periodicRate, paymentsRemaining - 1, managementFeeRate);
rules, periodicPayment, periodicRate, paymentsRemaining - 1, managementFeeRate);
// The unrounded principal part needs to be large enough to affect
// the principal. What to do if not is left to the caller
@@ -1722,6 +1827,7 @@ loanMakePayment(
// payment is late or regular
detail::ExtendedPaymentComponents periodic{
detail::computePaymentComponents(
view.rules(),
asset,
loanScale,
totalValueOutstandingProxy,
@@ -1830,6 +1936,7 @@ loanMakePayment(
periodic = detail::ExtendedPaymentComponents{
detail::computePaymentComponents(
view.rules(),
asset,
loanScale,
totalValueOutstandingProxy,
@@ -1890,6 +1997,7 @@ loanMakePayment(
// change
auto periodicPaymentProxy = loan->at(sfPeriodicPayment);
if (auto const overResult = detail::doOverpayment(
view.rules(),
asset,
loanScale,
overpaymentComponents,

View File

@@ -97,7 +97,10 @@ SHAMap::snapShot(bool isMutable) const
}
void
SHAMap::dirtyUp(SharedPtrNodeStack& stack, uint256 const& target, SHAMapTreeNodePtr child)
SHAMap::dirtyUp(
SharedPtrNodeStack& stack,
uint256 const& target,
intr_ptr::SharedPtr<SHAMapTreeNode> child)
{
// walk the tree up from the leaf through the inner nodes to the root_
// update hashes and links
@@ -162,7 +165,7 @@ SHAMap::findKey(uint256 const& id) const
return leaf;
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::fetchNodeFromDB(SHAMapHash const& hash) const
{
XRPL_ASSERT(backed_, "xrpl::SHAMap::fetchNodeFromDB : is backed");
@@ -170,7 +173,7 @@ SHAMap::fetchNodeFromDB(SHAMapHash const& hash) const
return finishFetch(hash, obj);
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::finishFetch(SHAMapHash const& hash, std::shared_ptr<NodeObject> const& object) const
{
XRPL_ASSERT(backed_, "xrpl::SHAMap::finishFetch : is backed");
@@ -205,8 +208,8 @@ SHAMap::finishFetch(SHAMapHash const& hash, std::shared_ptr<NodeObject> const& o
}
// See if a sync filter has a node
SHAMapTreeNodePtr
SHAMap::checkFilter(SHAMapHash const& hash, SHAMapSyncFilter const* filter) const
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::checkFilter(SHAMapHash const& hash, SHAMapSyncFilter* filter) const
{
if (auto nodeData = filter->getNode(hash))
{
@@ -231,8 +234,8 @@ SHAMap::checkFilter(SHAMapHash const& hash, SHAMapSyncFilter const* filter) cons
// Get a node without throwing
// Used on maps where missing nodes are expected
SHAMapTreeNodePtr
SHAMap::fetchNodeNT(SHAMapHash const& hash, SHAMapSyncFilter const* filter) const
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::fetchNodeNT(SHAMapHash const& hash, SHAMapSyncFilter* filter) const
{
auto node = cacheLookup(hash);
if (node)
@@ -254,7 +257,7 @@ SHAMap::fetchNodeNT(SHAMapHash const& hash, SHAMapSyncFilter const* filter) cons
return node;
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::fetchNodeNT(SHAMapHash const& hash) const
{
auto node = cacheLookup(hash);
@@ -266,7 +269,7 @@ SHAMap::fetchNodeNT(SHAMapHash const& hash) const
}
// Throw if the node is missing
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::fetchNode(SHAMapHash const& hash) const
{
auto node = fetchNodeNT(hash);
@@ -288,10 +291,10 @@ SHAMap::descendThrow(SHAMapInnerNode* parent, int branch) const
return ret;
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::descendThrow(SHAMapInnerNode& parent, int branch) const
{
SHAMapTreeNodePtr ret = descend(parent, branch);
intr_ptr::SharedPtr<SHAMapTreeNode> ret = descend(parent, branch);
if (!ret && !parent.isEmptyBranch(branch))
Throw<SHAMapMissingNode>(type_, parent.getChildHash(branch));
@@ -306,7 +309,7 @@ SHAMap::descend(SHAMapInnerNode* parent, int branch) const
if ((ret != nullptr) || !backed_)
return ret;
SHAMapTreeNodePtr node = fetchNodeNT(parent->getChildHash(branch));
intr_ptr::SharedPtr<SHAMapTreeNode> node = fetchNodeNT(parent->getChildHash(branch));
if (!node)
return nullptr;
@@ -314,10 +317,10 @@ SHAMap::descend(SHAMapInnerNode* parent, int branch) const
return node.get();
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::descend(SHAMapInnerNode& parent, int branch) const
{
SHAMapTreeNodePtr node = parent.getChild(branch);
intr_ptr::SharedPtr<SHAMapTreeNode> node = parent.getChild(branch);
if (node || !backed_)
return node;
@@ -331,10 +334,10 @@ SHAMap::descend(SHAMapInnerNode& parent, int branch) const
// Gets the node that would be hooked to this branch,
// but doesn't hook it up.
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::descendNoStore(SHAMapInnerNode& parent, int branch) const
{
SHAMapTreeNodePtr ret = parent.getChild(branch);
intr_ptr::SharedPtr<SHAMapTreeNode> ret = parent.getChild(branch);
if (!ret && backed_)
ret = fetchNode(parent.getChildHash(branch));
return ret;
@@ -345,7 +348,7 @@ SHAMap::descend(
SHAMapInnerNode* parent,
SHAMapNodeID const& parentID,
int branch,
SHAMapSyncFilter const* filter) const
SHAMapSyncFilter* filter) const
{
XRPL_ASSERT(parent->isInner(), "xrpl::SHAMap::descend : valid parent input");
XRPL_ASSERT(
@@ -358,7 +361,7 @@ SHAMap::descend(
if (child == nullptr)
{
auto const& childHash = parent->getChildHash(branch);
SHAMapTreeNodePtr childNode = fetchNodeNT(childHash, filter);
intr_ptr::SharedPtr<SHAMapTreeNode> childNode = fetchNodeNT(childHash, filter);
if (childNode)
{
@@ -374,7 +377,7 @@ SHAMapTreeNode*
SHAMap::descendAsync(
SHAMapInnerNode* parent,
int branch,
SHAMapSyncFilter const* filter,
SHAMapSyncFilter* filter,
bool& pending,
descendCallback&& callback) const
{
@@ -431,7 +434,7 @@ SHAMap::unshareNode(intr_ptr::SharedPtr<Node> node, SHAMapNodeID const& nodeID)
SHAMapLeafNode*
SHAMap::belowHelper(
SHAMapTreeNodePtr node,
intr_ptr::SharedPtr<SHAMapTreeNode> node,
SharedPtrNodeStack& stack,
int branch,
std::tuple<int, std::function<bool(int)>, std::function<void(int&)>> const& loopParams) const
@@ -476,7 +479,8 @@ SHAMap::belowHelper(
return nullptr;
}
SHAMapLeafNode*
SHAMap::lastBelow(SHAMapTreeNodePtr node, SharedPtrNodeStack& stack, int branch) const
SHAMap::lastBelow(intr_ptr::SharedPtr<SHAMapTreeNode> node, SharedPtrNodeStack& stack, int branch)
const
{
auto init = branchFactor - 1;
auto cmp = [](int i) { return i >= 0; };
@@ -485,7 +489,8 @@ SHAMap::lastBelow(SHAMapTreeNodePtr node, SharedPtrNodeStack& stack, int branch)
return belowHelper(node, stack, branch, {init, cmp, incr});
}
SHAMapLeafNode*
SHAMap::firstBelow(SHAMapTreeNodePtr node, SharedPtrNodeStack& stack, int branch) const
SHAMap::firstBelow(intr_ptr::SharedPtr<SHAMapTreeNode> node, SharedPtrNodeStack& stack, int branch)
const
{
auto init = 0;
auto cmp = [](int i) { return i <= branchFactor; };
@@ -694,8 +699,10 @@ SHAMap::delItem(uint256 const& id)
SHAMapNodeType const type = leaf->getType();
using TreeNodeType = intr_ptr::SharedPtr<SHAMapTreeNode>;
// What gets attached to the end of the chain (For now, nothing, since we deleted the leaf)
SHAMapTreeNodePtr prevNode;
TreeNodeType prevNode;
while (!stack.empty())
{
@@ -721,7 +728,7 @@ SHAMap::delItem(uint256 const& id)
// no children below this branch
//
// Note: This is unnecessary due to the std::move above but left here for safety
prevNode = SHAMapTreeNodePtr{};
prevNode = TreeNodeType{};
}
else if (bc == 1)
{
@@ -734,7 +741,7 @@ SHAMap::delItem(uint256 const& id)
{
if (!node->isEmptyBranch(i))
{
node->setChild(i, SHAMapTreeNodePtr{});
node->setChild(i, TreeNodeType{});
break;
}
}
@@ -930,8 +937,8 @@ SHAMap::fetchRoot(SHAMapHash const& hash, SHAMapSyncFilter* filter)
@note The node must have already been unshared by having the caller
first call SHAMapTreeNode::unshare().
*/
SHAMapTreeNodePtr
SHAMap::writeNode(NodeObjectType t, SHAMapTreeNodePtr node) const
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::writeNode(NodeObjectType t, intr_ptr::SharedPtr<SHAMapTreeNode> node) const
{
XRPL_ASSERT(node->cowid() == 0, "xrpl::SHAMap::writeNode : valid input node");
XRPL_ASSERT(backed_, "xrpl::SHAMap::writeNode : is backed");
@@ -1148,7 +1155,7 @@ SHAMap::dump(bool hash) const
JLOG(journal_.info()) << leafCount << " resident leaves";
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMap::cacheLookup(SHAMapHash const& hash) const
{
auto ret = f_.getTreeNodeCache()->fetch(hash.as_uint256());
@@ -1157,7 +1164,7 @@ SHAMap::cacheLookup(SHAMapHash const& hash) const
}
void
SHAMap::canonicalize(SHAMapHash const& hash, SHAMapTreeNodePtr& node) const
SHAMap::canonicalize(SHAMapHash const& hash, intr_ptr::SharedPtr<SHAMapTreeNode>& node) const
{
XRPL_ASSERT(backed_, "xrpl::SHAMap::canonicalize : is backed");
XRPL_ASSERT(node->cowid() == 0, "xrpl::SHAMap::canonicalize : valid node input");

View File

@@ -261,7 +261,7 @@ SHAMap::walkMap(std::vector<SHAMapMissingNode>& missingNodes, int maxMissing) co
{
if (!node->isEmptyBranch(i))
{
SHAMapTreeNodePtr const nextNode = descendNoStore(*node, i);
intr_ptr::SharedPtr<SHAMapTreeNode> const nextNode = descendNoStore(*node, i);
if (nextNode)
{
@@ -286,7 +286,7 @@ SHAMap::walkMapParallel(std::vector<SHAMapMissingNode>& missingNodes, int maxMis
return false;
using StackEntry = intr_ptr::SharedPtr<SHAMapInnerNode>;
std::array<SHAMapTreeNodePtr, 16> topChildren;
std::array<intr_ptr::SharedPtr<SHAMapTreeNode>, 16> topChildren;
{
auto const& innerRoot = intr_ptr::static_pointer_cast<SHAMapInnerNode>(root_);
for (int i = 0; i < 16; ++i)
@@ -331,7 +331,8 @@ SHAMap::walkMapParallel(std::vector<SHAMapMissingNode>& missingNodes, int maxMis
{
if (node->isEmptyBranch(i))
continue;
SHAMapTreeNodePtr const nextNode = descendNoStore(*node, i);
intr_ptr::SharedPtr<SHAMapTreeNode> const nextNode =
descendNoStore(*node, i);
if (nextNode)
{

View File

@@ -37,7 +37,7 @@ SHAMapInnerNode::~SHAMapInnerNode() = default;
void
SHAMapInnerNode::partialDestructor()
{
SHAMapTreeNodePtr* children = nullptr;
intr_ptr::SharedPtr<SHAMapTreeNode>* children = nullptr;
// structured bindings can't be captured in c++ 17; use tie instead
std::tie(std::ignore, std::ignore, children) = hashesAndChildren_.getHashesAndChildren();
iterNonEmptyChildIndexes([&](auto branchNum, auto indexNum) { children[indexNum].reset(); });
@@ -69,7 +69,7 @@ SHAMapInnerNode::getChildIndex(int i) const
return hashesAndChildren_.getChildIndex(isBranch_, i);
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapInnerNode::clone(std::uint32_t cowid) const
{
auto const branchCount = getBranchCount();
@@ -79,7 +79,7 @@ SHAMapInnerNode::clone(std::uint32_t cowid) const
p->isBranch_ = isBranch_;
p->fullBelowGen_ = fullBelowGen_;
SHAMapHash *cloneHashes = nullptr, *thisHashes = nullptr;
SHAMapTreeNodePtr *cloneChildren = nullptr, *thisChildren = nullptr;
intr_ptr::SharedPtr<SHAMapTreeNode>* cloneChildren = nullptr, *thisChildren = nullptr;
// structured bindings can't be captured in c++ 17; use tie instead
std::tie(std::ignore, cloneHashes, cloneChildren) =
p->hashesAndChildren_.getHashesAndChildren();
@@ -118,7 +118,7 @@ SHAMapInnerNode::clone(std::uint32_t cowid) const
return p;
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapInnerNode::makeFullInner(Slice data, SHAMapHash const& hash, bool hashValid)
{
// A full inner node is serialized as 16 256-bit hashes, back to back:
@@ -153,7 +153,7 @@ SHAMapInnerNode::makeFullInner(Slice data, SHAMapHash const& hash, bool hashVali
return ret;
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapInnerNode::makeCompressedInner(Slice data)
{
// A compressed inner node is serialized as a series of 33 byte chunks,
@@ -207,7 +207,7 @@ void
SHAMapInnerNode::updateHashDeep()
{
SHAMapHash* hashes = nullptr;
SHAMapTreeNodePtr* children = nullptr;
intr_ptr::SharedPtr<SHAMapTreeNode>* children = nullptr;
// structured bindings can't be captured in c++ 17; use tie instead
std::tie(std::ignore, hashes, children) = hashesAndChildren_.getHashesAndChildren();
iterNonEmptyChildIndexes([&](auto branchNum, auto indexNum) {
@@ -265,7 +265,7 @@ SHAMapInnerNode::getString(SHAMapNodeID const& id) const
// We are modifying an inner node
void
SHAMapInnerNode::setChild(int m, SHAMapTreeNodePtr child)
SHAMapInnerNode::setChild(int m, intr_ptr::SharedPtr<SHAMapTreeNode> child)
{
XRPL_ASSERT(
(m >= 0) && (m < branchFactor), "xrpl::SHAMapInnerNode::setChild : valid branch input");
@@ -307,7 +307,7 @@ SHAMapInnerNode::setChild(int m, SHAMapTreeNodePtr child)
// finished modifying, now make shareable
void
SHAMapInnerNode::shareChild(int m, SHAMapTreeNodePtr const& child)
SHAMapInnerNode::shareChild(int m, intr_ptr::SharedPtr<SHAMapTreeNode> const& child)
{
XRPL_ASSERT(
(m >= 0) && (m < branchFactor), "xrpl::SHAMapInnerNode::shareChild : valid branch input");
@@ -337,7 +337,7 @@ SHAMapInnerNode::getChildPointer(int branch)
return hashesAndChildren_.getChildren()[index].get();
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapInnerNode::getChild(int branch)
{
XRPL_ASSERT(
@@ -364,8 +364,8 @@ SHAMapInnerNode::getChildHash(int m) const
return zeroSHAMapHash;
}
SHAMapTreeNodePtr
SHAMapInnerNode::canonicalizeChild(int branch, SHAMapTreeNodePtr node)
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapInnerNode::canonicalizeChild(int branch, intr_ptr::SharedPtr<SHAMapTreeNode> node)
{
XRPL_ASSERT(
branch >= 0 && branch < branchFactor,

View File

@@ -129,9 +129,7 @@ selectBranch(SHAMapNodeID const& id, uint256 const& hash)
SHAMapNodeID
SHAMapNodeID::createID(int depth, uint256 const& key)
{
XRPL_ASSERT(
depth >= 0 && depth <= SHAMap::leafDepth,
"xrpl::SHAMapNodeID::createID : valid branch input");
XRPL_ASSERT((depth >= 0) && (depth < 65), "xrpl::SHAMapNodeID::createID : valid branch input");
return SHAMapNodeID(depth, key & depthMask(depth));
}

View File

@@ -66,7 +66,7 @@ SHAMap::visitNodes(std::function<bool(SHAMapTreeNode&)> const& function) const
{
if (!node->isEmptyBranch(pos))
{
SHAMapTreeNodePtr const child = descendNoStore(*node, pos);
intr_ptr::SharedPtr<SHAMapTreeNode> const child = descendNoStore(*node, pos);
if (!function(*child))
return;
@@ -204,7 +204,8 @@ SHAMap::gmn_ProcessNodes(MissingNodes& mn, MissingNodes::StackEntry& se)
branch,
mn.filter_,
pending,
[node, nodeID, branch, &mn](SHAMapTreeNodePtr found, SHAMapHash const&) {
[node, nodeID, branch, &mn](
intr_ptr::SharedPtr<SHAMapTreeNode> found, SHAMapHash const&) {
// a read completed asynchronously
std::unique_lock<std::mutex> const lock{mn.deferLock_};
mn.finishedReads_.emplace_back(node, nodeID, branch, std::move(found));
@@ -267,7 +268,8 @@ SHAMap::gmn_ProcessDeferredReads(MissingNodes& mn)
int complete = 0;
while (complete != mn.deferred_)
{
std::tuple<SHAMapInnerNode*, SHAMapNodeID, int, SHAMapTreeNodePtr> deferredNode;
std::tuple<SHAMapInnerNode*, SHAMapNodeID, int, intr_ptr::SharedPtr<SHAMapTreeNode>>
deferredNode;
{
std::unique_lock<std::mutex> lock{mn.deferLock_};
@@ -415,7 +417,7 @@ SHAMap::getMissingNodes(int max, SHAMapSyncFilter* filter)
bool
SHAMap::getNodeFat(
SHAMapNodeID const& wanted,
std::vector<SHAMapNodeData>& data,
std::vector<std::pair<SHAMapNodeID, Blob>>& data,
bool fatLeaves,
std::uint32_t depth) const
{
@@ -461,7 +463,7 @@ SHAMap::getNodeFat(
// Add this node to the reply
s.erase();
node->serializeForWire(s);
data.emplace_back(nodeID, s.getData(), node->isLeaf());
data.emplace_back(nodeID, s.getData());
if (node->isInner())
{
@@ -491,7 +493,7 @@ SHAMap::getNodeFat(
// Just include this node
s.erase();
childNode->serializeForWire(s);
data.emplace_back(childID, s.getData(), childNode->isLeaf());
data.emplace_back(childID, s.getData());
}
}
}
@@ -509,18 +511,8 @@ SHAMap::serializeRoot(Serializer& s) const
}
SHAMapAddNode
SHAMap::addRootNode(
SHAMapHash const& hash,
SHAMapTreeNodePtr rootNode,
SHAMapSyncFilter const* filter)
SHAMap::addRootNode(SHAMapHash const& hash, Slice const& rootNode, SHAMapSyncFilter* filter)
{
XRPL_ASSERT(rootNode, "xrpl::SHAMap::addRootNode : non-null root node");
if (!rootNode)
{
JLOG(journal_.error()) << "Null node received";
return SHAMapAddNode::invalid();
}
// we already have a root_ node
if (root_->getHash().isNonZero())
{
@@ -530,16 +522,14 @@ SHAMap::addRootNode(
}
XRPL_ASSERT(cowid_ >= 1, "xrpl::SHAMap::addRootNode : valid cowid");
if (rootNode->getHash() != hash)
{
JLOG(journal_.warn()) << "Corrupt node received";
auto node = SHAMapTreeNode::makeFromWire(rootNode);
if (!node || node->getHash() != hash)
return SHAMapAddNode::invalid();
}
if (backed_)
canonicalize(hash, rootNode);
canonicalize(hash, node);
root_ = std::move(rootNode);
root_ = node;
if (root_->isLeaf())
clearSynching();
@@ -556,23 +546,9 @@ SHAMap::addRootNode(
}
SHAMapAddNode
SHAMap::addKnownNode(
SHAMapNodeID const& nodeID,
SHAMapTreeNodePtr treeNode,
SHAMapSyncFilter const* filter)
SHAMap::addKnownNode(SHAMapNodeID const& node, Slice const& rawNode, SHAMapSyncFilter* filter)
{
XRPL_ASSERT(!nodeID.isRoot(), "xrpl::SHAMap::addKnownNode : valid node input");
if (nodeID.isRoot())
{
JLOG(journal_.error()) << "Root node received";
return SHAMapAddNode::invalid();
}
XRPL_ASSERT(treeNode, "xrpl::SHAMap::addKnownNode : non-null tree node");
if (!treeNode)
{
JLOG(journal_.error()) << "Null node received";
return SHAMapAddNode::invalid();
}
XRPL_ASSERT(!node.isRoot(), "xrpl::SHAMap::addKnownNode : valid node input");
if (!isSynching())
{
@@ -586,14 +562,14 @@ SHAMap::addKnownNode(
while (currNode->isInner() &&
!safe_downcast<SHAMapInnerNode*>(currNode)->isFullBelow(generation) &&
(currNodeID.getDepth() < nodeID.getDepth()))
(currNodeID.getDepth() < node.getDepth()))
{
int const branch = selectBranch(currNodeID, nodeID.getNodeID());
int const branch = selectBranch(currNodeID, node.getNodeID());
XRPL_ASSERT(branch >= 0, "xrpl::SHAMap::addKnownNode : valid branch");
auto inner = safe_downcast<SHAMapInnerNode*>(currNode);
if (inner->isEmptyBranch(branch))
{
JLOG(journal_.warn()) << "Add known node for empty branch" << nodeID;
JLOG(journal_.warn()) << "Add known node for empty branch" << node;
return SHAMapAddNode::invalid();
}
@@ -609,44 +585,67 @@ SHAMap::addKnownNode(
if (currNode != nullptr)
continue;
if (childHash != treeNode->getHash())
auto newNode = SHAMapTreeNode::makeFromWire(rawNode);
if (!newNode || childHash != newNode->getHash())
{
JLOG(journal_.warn()) << "Corrupt node received";
return SHAMapAddNode::invalid();
}
// In rare cases, a node can still be corrupt even after hash
// validation. For leaf nodes, we perform an additional check to
// ensure the node's position in the tree is consistent with its
// content to prevent inconsistencies that could
// propagate further down the line.
if (newNode->isLeaf())
{
auto const& actualKey =
safe_downcast<SHAMapLeafNode const*>(newNode.get())->peekItem()->key();
// Validate that this leaf belongs at the target position
auto const expectedNodeID = SHAMapNodeID::createID(node.getDepth(), actualKey);
if (expectedNodeID.getNodeID() != node.getNodeID())
{
JLOG(journal_.debug())
<< "Leaf node position mismatch: "
<< "expected=" << expectedNodeID.getNodeID() << ", actual=" << node.getNodeID();
return SHAMapAddNode::invalid();
}
}
// Inner nodes must be at a level strictly less than 64
// but leaf nodes (while notionally at level 64) can be
// at any depth up to and including 64:
if ((currNodeID.getDepth() > leafDepth) ||
(treeNode->isInner() && currNodeID.getDepth() == leafDepth))
(newNode->isInner() && currNodeID.getDepth() == leafDepth))
{
// Map is provably invalid
state_ = SHAMapState::Invalid;
return SHAMapAddNode::useful();
}
if (currNodeID != nodeID)
if (currNodeID != node)
{
// Either this node is broken or we didn't request it (yet)
JLOG(journal_.warn()) << "unable to hook node " << nodeID;
JLOG(journal_.warn()) << "unable to hook node " << node;
JLOG(journal_.info()) << " stuck at " << currNodeID;
JLOG(journal_.info()) << "got depth=" << nodeID.getDepth()
JLOG(journal_.info()) << "got depth=" << node.getDepth()
<< ", walked to= " << currNodeID.getDepth();
return SHAMapAddNode::useful();
}
if (backed_)
canonicalize(childHash, treeNode);
canonicalize(childHash, newNode);
treeNode = prevNode->canonicalizeChild(branch, std::move(treeNode));
newNode = prevNode->canonicalizeChild(branch, std::move(newNode));
if (filter != nullptr)
{
Serializer s;
treeNode->serializeWithPrefix(s);
newNode->serializeWithPrefix(s);
filter->gotNode(
false, childHash, ledgerSeq_, std::move(s.modData()), treeNode->getType());
false, childHash, ledgerSeq_, std::move(s.modData()), newNode->getType());
}
return SHAMapAddNode::useful();

View File

@@ -25,7 +25,7 @@
namespace xrpl {
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapTreeNode::makeTransaction(Slice data, SHAMapHash const& hash, bool hashValid)
{
auto item = make_shamapitem(sha512Half(HashPrefix::transactionID, data), data);
@@ -36,7 +36,7 @@ SHAMapTreeNode::makeTransaction(Slice data, SHAMapHash const& hash, bool hashVal
return intr_ptr::make_shared<SHAMapTxLeafNode>(std::move(item), 0);
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapTreeNode::makeTransactionWithMeta(Slice data, SHAMapHash const& hash, bool hashValid)
{
Serializer s(data.data(), data.size());
@@ -60,7 +60,7 @@ SHAMapTreeNode::makeTransactionWithMeta(Slice data, SHAMapHash const& hash, bool
return intr_ptr::make_shared<SHAMapTxPlusMetaLeafNode>(std::move(item), 0);
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapTreeNode::makeAccountState(Slice data, SHAMapHash const& hash, bool hashValid)
{
Serializer s(data.data(), data.size());
@@ -87,7 +87,7 @@ SHAMapTreeNode::makeAccountState(Slice data, SHAMapHash const& hash, bool hashVa
return intr_ptr::make_shared<SHAMapAccountStateLeafNode>(std::move(item), 0);
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapTreeNode::makeFromWire(Slice rawNode)
{
if (rawNode.empty())
@@ -118,7 +118,7 @@ SHAMapTreeNode::makeFromWire(Slice rawNode)
Throw<std::runtime_error>("wire: Unknown type (" + std::to_string(type) + ")");
}
SHAMapTreeNodePtr
intr_ptr::SharedPtr<SHAMapTreeNode>
SHAMapTreeNode::makeFromPrefix(Slice rawNode, SHAMapHash const& hash)
{
if (rawNode.size() < 4)

View File

@@ -418,6 +418,7 @@ LoanSet::doApply()
auto const paymentTotal = tx[~sfPaymentTotal].value_or(defaultPaymentTotal);
auto const properties = computeLoanProperties(
view.rules(),
vaultAsset,
principalRequested,
interestRate,

View File

@@ -1,350 +0,0 @@
#include <xrpld/app/ledger/detail/LedgerNodeHelpers.h>
#include <xrpl/basics/IntrusivePointer.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/beast/unit_test/suite.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/shamap/SHAMap.h>
#include <xrpl/shamap/SHAMapAccountStateLeafNode.h>
#include <xrpl/shamap/SHAMapInnerNode.h>
#include <xrpl/shamap/SHAMapItem.h>
#include <xrpl/shamap/SHAMapTreeNode.h>
#include <boost/smart_ptr/intrusive_ptr.hpp>
#include <xrpl.pb.h>
#include <bit>
#include <cstdint>
#include <string>
namespace xrpl::tests {
class LedgerNodeHelpers_test : public beast::unit_test::suite
{
static boost::intrusive_ptr<SHAMapItem>
makeTestItem(std::uint32_t seed)
{
Serializer s;
s.add32(seed);
s.add32(seed + 1);
s.add32(seed + 2);
return make_shamapitem(s.getSHA512Half(), s.slice());
}
static std::string
serializeNode(SHAMapTreeNodePtr const& node)
{
Serializer s;
node->serializeForWire(s);
auto const slice = s.slice();
return std::string(std::bit_cast<char const*>(slice.data()), slice.size());
}
void
testValidateLedgerNode()
{
// In the tests below the validity of the content of the node data and ID fields is not
// checked - only that the fields have values when expected. The content of the fields is
// verified in the other tests in this file.
testcase("validateLedgerNode");
// Invalid: missing all fields.
{
protocol::TMLedgerNode const node;
BEAST_EXPECT(!validateLedgerNode(node));
}
// Invalid: missing `nodedata` field.
{
protocol::TMLedgerNode node;
node.set_nodeid("test_nodeid");
BEAST_EXPECT(!validateLedgerNode(node));
}
// Invalid: missing `nodedata` field.
{
protocol::TMLedgerNode node;
node.set_id("test_nodeid");
BEAST_EXPECT(!validateLedgerNode(node));
}
// Invalid: missing `nodedata` field.
{
protocol::TMLedgerNode node;
node.set_depth(1);
BEAST_EXPECT(!validateLedgerNode(node));
}
// Valid: legacy `nodeid` field.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_nodeid("test_nodeid");
BEAST_EXPECT(validateLedgerNode(node));
}
// Invalid: has both legacy `nodeid` and new `id` fields.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_nodeid("test_nodeid");
node.set_id("test_nodeid");
BEAST_EXPECT(!validateLedgerNode(node));
}
// Invalid: has both legacy `nodeid` and new `depth` fields.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_nodeid("test_nodeid");
node.set_depth(5);
BEAST_EXPECT(!validateLedgerNode(node));
}
// Valid: new `id` field.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_id("test_id");
BEAST_EXPECT(validateLedgerNode(node));
}
// Valid: new `depth` field.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_depth(5);
BEAST_EXPECT(validateLedgerNode(node));
}
// Valid: `depth` at minimum depth.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_depth(0);
BEAST_EXPECT(validateLedgerNode(node));
}
// Valid: `depth` at arbitrary depth between minimum and maximum.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_depth(10);
BEAST_EXPECT(validateLedgerNode(node));
}
// Valid: `depth` at maximum depth.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_depth(SHAMap::leafDepth);
BEAST_EXPECT(validateLedgerNode(node));
}
// Invalid: `depth` is greater than maximum depth.
{
protocol::TMLedgerNode node;
node.set_nodedata("test_data");
node.set_depth(SHAMap::leafDepth + 1);
BEAST_EXPECT(!validateLedgerNode(node));
}
}
void
testGetTreeNode()
{
testcase("getTreeNode");
// Valid: inner node. It must have at least one child for `serializeNode` to work.
{
auto const innerNode = intr_ptr::make_shared<SHAMapInnerNode>(1);
auto const childNode = intr_ptr::make_shared<SHAMapInnerNode>(1);
innerNode->setChild(0, childNode);
auto const innerData = serializeNode(innerNode);
auto const result = getTreeNode(innerData);
BEAST_EXPECT(result && result->isInner());
}
// Valid: leaf node.
{
auto const leafItem = makeTestItem(12345);
auto const leafNode = intr_ptr::make_shared<SHAMapAccountStateLeafNode>(leafItem, 1);
auto const leafData = serializeNode(leafNode);
auto const result = getTreeNode(leafData);
BEAST_EXPECT(result && result->isLeaf());
}
// Invalid: empty data.
{
auto const result = getTreeNode("");
BEAST_EXPECT(!result);
}
// Invalid: garbage data.
{
auto const result = getTreeNode("invalid");
BEAST_EXPECT(!result);
}
// Invalid: truncated data.
{
auto const leafItem = makeTestItem(54321);
auto const leafNode = intr_ptr::make_shared<SHAMapAccountStateLeafNode>(leafItem, 1);
// Truncate the data to trigger an exception in SHAMapTreeNode::makeAccountState when
// the data is used to deserialize the node.
uint256 const tag;
auto const leafData = serializeNode(leafNode).substr(0, tag.bytes - 1);
auto const result = getTreeNode(leafData);
BEAST_EXPECT(!result);
}
}
void
testGetSHAMapNodeID()
{
testcase("getSHAMapNodeID");
{
// Tests using inner nodes at various depths.
auto const innerNode = intr_ptr::make_shared<SHAMapInnerNode>(1);
auto const childNode = intr_ptr::make_shared<SHAMapInnerNode>(1);
innerNode->setChild(0, childNode);
auto const innerData = serializeNode(innerNode);
// Valid: legacy `nodeid` field at arbitrary depth.
{
auto const innerDepth = 3;
auto const innerID = SHAMapNodeID::createID(innerDepth, uint256{});
protocol::TMLedgerNode node;
node.set_nodedata(innerData);
node.set_nodeid(innerID.getRawString());
auto const result = getSHAMapNodeID(node, innerNode);
BEAST_EXPECT(result == innerID);
}
// Valid: new `id` field at minimum depth.
{
auto const innerDepth = 0;
auto const innerID = SHAMapNodeID::createID(innerDepth, uint256{});
protocol::TMLedgerNode node;
node.set_nodedata(innerData);
node.set_id(innerID.getRawString());
auto const result = getSHAMapNodeID(node, innerNode);
BEAST_EXPECT(result == innerID);
}
// Invalid: new `depth` field should not be used for inner nodes.
{
protocol::TMLedgerNode node;
node.set_nodedata(innerData);
node.set_depth(10);
auto const result = getSHAMapNodeID(node, innerNode);
BEAST_EXPECT(!result);
}
}
{
// Tests using leaf nodes at various depths.
auto const leafItem = makeTestItem(12345);
auto const leafNode = intr_ptr::make_shared<SHAMapAccountStateLeafNode>(leafItem, 1);
auto const leafData = serializeNode(leafNode);
auto const leafKey = leafItem->key();
// Valid: legacy `nodeid` field at arbitrary depth.
{
auto const leafDepth = 5;
auto const leafID = SHAMapNodeID::createID(leafDepth, leafKey);
protocol::TMLedgerNode ledgerNode;
ledgerNode.set_nodedata(leafData);
ledgerNode.set_nodeid(leafID.getRawString());
auto const result = getSHAMapNodeID(ledgerNode, leafNode);
BEAST_EXPECT(result == leafID);
}
// Invalid: new `id` field should not be used for leaf nodes.
{
auto const leafDepth = 5;
auto const leafID = SHAMapNodeID::createID(leafDepth, leafKey);
protocol::TMLedgerNode ledgerNode;
ledgerNode.set_nodedata(leafData);
ledgerNode.set_id(leafID.getRawString());
auto const result = getSHAMapNodeID(ledgerNode, leafNode);
BEAST_EXPECT(!result);
}
// Valid: new `depth` field at minimum depth.
{
auto const leafDepth = 0;
auto const leafID = SHAMapNodeID::createID(leafDepth, leafKey);
protocol::TMLedgerNode node;
node.set_nodedata(leafData);
node.set_depth(leafDepth);
auto const result = getSHAMapNodeID(node, leafNode);
BEAST_EXPECT(result == leafID);
}
// Valid: new `depth` field at arbitrary depth between minimum and maximum.
{
auto const leafDepth = 10;
auto const leafID = SHAMapNodeID::createID(leafDepth, leafKey);
protocol::TMLedgerNode ledgerNode;
ledgerNode.set_nodedata(leafData);
ledgerNode.set_depth(leafDepth);
auto const result = getSHAMapNodeID(ledgerNode, leafNode);
BEAST_EXPECT(result == leafID);
}
// Valid: new `depth` field at maximum depth.
// Note that we do not test a depth greater than the maximum depth, because the proto
// message is assumed to have been validated by the time the getSHAMapNodeID function is
// called.
{
auto const leafDepth = SHAMap::leafDepth;
auto const leafID = SHAMapNodeID::createID(leafDepth, leafKey);
protocol::TMLedgerNode node;
node.set_nodedata(leafData);
node.set_depth(leafDepth);
auto const result = getSHAMapNodeID(node, leafNode);
BEAST_EXPECT(result == leafID);
}
// Invalid: legacy `nodeid` field where the node ID is inconsistent with the key.
{
auto const otherItem = makeTestItem(54321);
auto const otherNode =
intr_ptr::make_shared<SHAMapAccountStateLeafNode>(otherItem, 1);
auto const otherData = serializeNode(otherNode);
auto const otherKey = otherItem->key();
auto const otherDepth = 1;
auto const otherID = SHAMapNodeID::createID(otherDepth, otherKey);
protocol::TMLedgerNode ledgerNode;
ledgerNode.set_nodedata(otherData);
ledgerNode.set_nodeid(otherID.getRawString());
auto const result = getSHAMapNodeID(ledgerNode, leafNode);
BEAST_EXPECT(!result);
}
}
}
public:
void
run() override
{
testValidateLedgerNode();
testGetTreeNode();
testGetSHAMapNodeID();
}
};
BEAST_DEFINE_TESTSUITE(LedgerNodeHelpers, app, xrpl);
} // namespace xrpl::tests

View File

@@ -17,63 +17,13 @@ namespace xrpl::test {
class LendingHelpers_test : public beast::unit_test::suite
{
void
testComputeRaisedRate()
{
using namespace jtx;
using namespace xrpl::detail;
struct TestCase
{
std::string name;
Number periodicRate;
std::uint32_t paymentsRemaining;
Number expectedRaisedRate;
};
auto const testCases = std::vector<TestCase>{
{
.name = "Zero payments remaining",
.periodicRate = Number{5, -2},
.paymentsRemaining = 0,
.expectedRaisedRate = Number{1}, // (1 + r)^0 = 1
},
{
.name = "One payment remaining",
.periodicRate = Number{5, -2},
.paymentsRemaining = 1,
.expectedRaisedRate = Number{105, -2},
}, // 1.05^1
{
.name = "Multiple payments remaining",
.periodicRate = Number{5, -2},
.paymentsRemaining = 3,
.expectedRaisedRate = Number{1157625, -6},
}, // 1.05^3
{
.name = "Zero periodic rate",
.periodicRate = Number{0},
.paymentsRemaining = 5,
.expectedRaisedRate = Number{1}, // (1 + 0)^5 = 1
}};
for (auto const& tc : testCases)
{
testcase("computeRaisedRate: " + tc.name);
auto const computedRaisedRate =
computeRaisedRate(tc.periodicRate, tc.paymentsRemaining);
BEAST_EXPECTS(
computedRaisedRate == tc.expectedRaisedRate,
"Raised rate mismatch: expected " + to_string(tc.expectedRaisedRate) + ", got " +
to_string(computedRaisedRate));
}
}
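The raised rate in the table above is a plain compound factor. A minimal double-precision sketch for reference (an analog only: the production `computeRaisedRate` operates on xrpl's 19-digit `Number` type and its `power` helper, not `std::pow`):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Double-precision analog of computeRaisedRate: (1 + r)^n.
// Hypothetical stand-in for illustration; the production code uses
// Number, not IEEE doubles.
double raisedRate(double periodicRate, std::uint32_t paymentsRemaining)
{
    return std::pow(1.0 + periodicRate, paymentsRemaining);
}
```

It reproduces the cases above: any rate raised to zero payments gives 1, and `raisedRate(0.05, 3)` gives 1.157625.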
void
testComputePaymentFactor()
{
using namespace jtx;
using namespace xrpl::detail;
Env const env{*this};
auto const& rules = env.current()->rules();
struct TestCase
{
std::string name;
@@ -114,7 +64,7 @@ class LendingHelpers_test : public beast::unit_test::suite
testcase("computePaymentFactor: " + tc.name);
auto const computedPaymentFactor =
computePaymentFactor(tc.periodicRate, tc.paymentsRemaining);
computePaymentFactor(rules, tc.periodicRate, tc.paymentsRemaining);
BEAST_EXPECTS(
computedPaymentFactor == tc.expectedPaymentFactor,
"Payment factor mismatch: expected " + to_string(tc.expectedPaymentFactor) +
@@ -127,6 +77,8 @@ class LendingHelpers_test : public beast::unit_test::suite
{
using namespace jtx;
using namespace xrpl::detail;
Env const env{*this};
auto const& rules = env.current()->rules();
struct TestCase
{
@@ -172,8 +124,8 @@ class LendingHelpers_test : public beast::unit_test::suite
{
testcase("loanPeriodicPayment: " + tc.name);
auto const computedPeriodicPayment =
loanPeriodicPayment(tc.principalOutstanding, tc.periodicRate, tc.paymentsRemaining);
auto const computedPeriodicPayment = loanPeriodicPayment(
rules, tc.principalOutstanding, tc.periodicRate, tc.paymentsRemaining);
BEAST_EXPECTS(
computedPeriodicPayment == tc.expectedPeriodicPayment,
"Periodic payment mismatch: expected " + to_string(tc.expectedPeriodicPayment) +
@@ -186,6 +138,8 @@ class LendingHelpers_test : public beast::unit_test::suite
{
using namespace jtx;
using namespace xrpl::detail;
Env const env{*this};
auto const& rules = env.current()->rules();
struct TestCase
{
@@ -232,7 +186,7 @@ class LendingHelpers_test : public beast::unit_test::suite
testcase("loanPrincipalFromPeriodicPayment: " + tc.name);
auto const computedPrincipalOutstanding = loanPrincipalFromPeriodicPayment(
tc.periodicPayment, tc.periodicRate, tc.paymentsRemaining);
rules, tc.periodicPayment, tc.periodicRate, tc.paymentsRemaining);
BEAST_EXPECTS(
computedPrincipalOutstanding == tc.expectedPrincipalOutstanding,
"Principal outstanding mismatch: expected " +
@@ -241,6 +195,251 @@ class LendingHelpers_test : public beast::unit_test::suite
}
}
void
testComputePowerMinusOne()
{
using namespace jtx;
using namespace xrpl::detail;
// Edge cases.
{
testcase("computePowerMinusOne: zero rate returns zero");
BEAST_EXPECT(computePowerMinusOne(0, 5) == 0);
}
{
testcase("computePowerMinusOne: zero paymentsRemaining returns zero");
Number const fivePercent{5, -2};
BEAST_EXPECT(computePowerMinusOne(fivePercent, 0) == 0);
}
// (1.05)^3 - 1 = 0.157625, computed independently by hand.
{
testcase("computePowerMinusOne: standard case (1.05)^3 - 1 = 0.157625");
Number const r{5, -2};
Number const expected{157625, -6};
BEAST_EXPECT(computePowerMinusOne(r, 3) == expected);
}
// (1+1)^1 - 1 = 1.
{
testcase("computePowerMinusOne: r=1, n=1");
BEAST_EXPECT(computePowerMinusOne(1, 1) == 1);
}
// Property check at near-zero rate (the bug regime): for n=2 the
// mathematical identity is `(1+r)^2 - 1 = 2r + r^2`. We compute
// `2r + r^2` by direct multiplication in Number arithmetic — a
// path that doesn't share any code with the binomial loop — and
// assert the two paths agree.
{
testcase("computePowerMinusOne: near-zero rate matches independent 2r + r^2");
// r = 1 TenthBips32 over 600s payment interval, computed
// independently below using xrpl::detail::loanPeriodicRate.
Number const r = loanPeriodicRate(TenthBips32{1}, 600);
Number const independentExpected = 2 * r + r * r; // (1+r)^2 - 1
BEAST_EXPECT(computePowerMinusOne(r, 2) == independentExpected);
}
// Same property at n=3: (1+r)^3 - 1 = 3r + 3r^2 + r^3.
{
testcase("computePowerMinusOne: near-zero rate matches independent 3r + 3r^2 + r^3");
Number const r = loanPeriodicRate(TenthBips32{1}, 600);
Number const independentExpected = 3 * r + 3 * r * r + r * r * r;
BEAST_EXPECT(computePowerMinusOne(r, 3) == independentExpected);
}
// Larger-n stress test for the loop's early-termination logic.
// At very small r the binomial terms decrease by a factor of
// ~r*(n-k)/(k+1) per step, so even at n=1000 the loop should
// terminate in a small handful of iterations. Cross-check the
// result against the hybrid (which dispatches to this same
// binomial path when r*n < 1e-9).
{
testcase("computePowerMinusOne: large n, early termination matches hybrid output");
// r*n = 1e-10 and 1e-12 — both clearly below the 1e-9 threshold.
Number const r1{1, -13};
std::uint32_t const n1 = 1'000;
Number const r2{1, -15};
std::uint32_t const n2 = 1'000;
BEAST_EXPECT(computePowerMinusOne(r1, n1) == computePowerMinusOneHybrid(r1, n1));
BEAST_EXPECT(computePowerMinusOne(r2, n2) == computePowerMinusOneHybrid(r2, n2));
BEAST_EXPECT(computePowerMinusOne(r1, n1) > 0);
BEAST_EXPECT(computePowerMinusOne(r2, n2) > 0);
}
}
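The early-termination reasoning in the comments above can be sketched in ordinary doubles. This is a hypothetical analog, not the production `computePowerMinusOne` (which operates on the 19-digit `Number` type); doubles carry roughly 16 digits, so the same cancellation appears, just at a different threshold:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// (1+r)^n - 1 as the binomial sum C(n,1)r + C(n,2)r^2 + ... .
// Each term is the previous one times r*(n-k)/(k+1), so at tiny r the
// loop exits after a handful of iterations once terms underflow to zero.
double binomialPowMinusOne(double r, std::uint32_t n)
{
    double term = static_cast<double>(n) * r;  // k = 1 term: C(n,1) * r
    double sum = 0.0;
    for (std::uint32_t k = 1; k <= n && term != 0.0; ++k)
    {
        sum += term;
        term *= r * static_cast<double>(n - k) / static_cast<double>(k + 1);
    }
    return sum;
}
```

At r = 1e-10, n = 2 the closed form `std::pow(1 + r, 2) - 1` keeps only about half of a double's digits, while the binomial sum returns 2r + r^2 at full precision, which is the same gap the hybrid closes for `Number`.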
// Direct tests of `computePowerMinusOneHybrid`. Verifies the dispatcher
// picks the right branch and produces the right result on each side
// of the threshold.
void
testComputePowerMinusOneHybrid()
{
using namespace jtx;
using namespace xrpl::detail;
// Above threshold (r * n >= 1e-9): hybrid must agree with the closed
// form `power(1+r, n) - 1` exactly (it is the closed form).
{
testcase("computePowerMinusOneHybrid: r*n >= 1e-9 uses closed form (bit-exact match)");
struct AboveThreshold
{
std::string name;
Number r;
std::uint32_t n;
};
auto const cases = std::vector<AboveThreshold>{
{"r=5%, n=3", Number{5, -2}, 3},
{"r=0.1%, n=1000", Number{1, -3}, 1'000},
{"r=1e-7, n=100 (above threshold by 10x)", Number{1, -7}, 100},
};
for (auto const& tc : cases)
{
Number const closed = power(1 + tc.r, tc.n) - 1;
Number const hybrid = computePowerMinusOneHybrid(tc.r, tc.n);
BEAST_EXPECTS(
hybrid == closed,
tc.name + ": closed=" + to_string(closed) + ", hybrid=" + to_string(hybrid));
}
}
// Below threshold (r * n < 1e-9): hybrid must agree with
// `computePowerMinusOne` (the binomial expansion). At this regime
// the closed form is provably wrong (cancellation); we verify the
// dispatcher routes to the binomial path.
{
testcase(
"computePowerMinusOneHybrid: r*n < 1e-9 uses binomial expansion (bit-exact match)");
struct BelowThreshold
{
std::string name;
Number r;
std::uint32_t n;
};
auto const cases = std::vector<BelowThreshold>{
// bug regime: r = 1 TenthBips32 over 600s payment interval
// → r ≈ 1.9e-10, r*n ≈ 3.8e-10 < 1e-9.
{"bug regime: r~1.9e-10, n=2", loanPeriodicRate(TenthBips32{1}, 600), 2},
{"r=1e-12, n=100", Number{1, -12}, 100},
};
for (auto const& tc : cases)
{
Number const binom = computePowerMinusOne(tc.r, tc.n);
Number const hybrid = computePowerMinusOneHybrid(tc.r, tc.n);
BEAST_EXPECTS(
hybrid == binom,
tc.name + ": binom=" + to_string(binom) + ", hybrid=" + to_string(hybrid));
}
}
// Edge cases.
{
testcase("computePowerMinusOneHybrid: edge cases");
Number const fivePercent{5, -2};
BEAST_EXPECT(computePowerMinusOneHybrid(0, 100) == 0);
BEAST_EXPECT(computePowerMinusOneHybrid(fivePercent, 0) == 0);
BEAST_EXPECT(computePowerMinusOneHybrid(0, 0) == 0);
}
// Threshold boundary: r*n = 1e-9 exactly. Hybrid uses `>=` against
// the threshold, so this case must take the closed-form branch.
// We also verify that the binomial path agrees with the closed
// form to high precision at this crossover — confirming the
// threshold is placed where both paths give "adequate" answers.
{
testcase("computePowerMinusOneHybrid: threshold boundary r*n = 1e-9");
// Construct exactly r*n = 1e-9 with two distinct (r, n) pairs.
struct Boundary
{
std::string name;
Number r;
std::uint32_t n;
};
auto const cases = std::vector<Boundary>{
{"r=1e-9, n=1", Number{1, -9}, 1},
{"r=1e-12, n=1000", Number{1, -12}, 1'000},
};
for (auto const& tc : cases)
{
Number const closed = power(1 + tc.r, tc.n) - 1;
Number const hybrid = computePowerMinusOneHybrid(tc.r, tc.n);
Number const binom = computePowerMinusOne(tc.r, tc.n);
// At exact threshold, hybrid must take closed-form path:
// bit-exact match with closed.
BEAST_EXPECTS(
hybrid == closed,
tc.name + ": hybrid should equal closed at threshold; got hybrid=" +
to_string(hybrid) + ", closed=" + to_string(closed));
// Closed-form and binomial must agree at the threshold to
// within Number's post-subtraction precision (~10 sig
// digits of `r*n = 1e-9`, i.e. ~1e-19 absolute error); the
// tolerance below allows one extra order of margin.
Number const tolerance{1, -18};
Number const diff = abs(closed - binom);
BEAST_EXPECTS(
diff < tolerance,
tc.name + ": closed and binomial diverge at threshold by " + to_string(diff));
}
}
}
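For IEEE floating point, the standard library already offers a cancellation-free route to the same quantity: `(1+r)^n - 1 = expm1(n * log1p(r))`. An aside with an assumption attached: xrpl's `Number` type presumably lacks `expm1`/`log1p`, which would be why the hybrid falls back to a binomial expansion instead. Sketch in doubles:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Cancellation-free (1+r)^n - 1 for IEEE doubles. log1p keeps the
// small-argument precision that forming 1+r would destroy, and expm1
// avoids the final subtraction of 1 entirely.
double powMinusOneStable(double r, std::uint32_t n)
{
    return std::expm1(static_cast<double>(n) * std::log1p(r));
}
```

At r = 1e-10, n = 2 this matches the exact 2r + r^2 essentially to the last digit, where the naive `std::pow(1 + r, 2) - 1` does not.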
// Regression: at near-zero rate, `loanPrincipalFromPeriodicPayment`
// must satisfy `principal <= periodicPayment * paymentsRemaining` for
// any non-negative rate. The naive closed-form path violated this
// bound due to catastrophic cancellation in `(1+r)^n - 1`.
void
testLoanPrincipalFromPeriodicPaymentNearZeroRate()
{
testcase("loanPrincipalFromPeriodicPayment: principal <= payment*n at near-zero rate");
using namespace jtx;
using namespace xrpl::detail;
Env const env{*this};
auto const& rules = env.current()->rules();
// Inputs from the bug reproduction in Loan_test.cpp:
// InterestRate = 1 TenthBips32 (0.001 % per year),
// PaymentInterval = 600 s, principal = 100, 3 payments.
// periodicRate is ~1.9e-10.
auto const periodicRate = loanPeriodicRate(TenthBips32{1}, 600);
auto const periodicPayment = loanPeriodicPayment(rules, 100, periodicRate, 3);
for (auto const n : {3u, 2u, 1u})
{
auto const computed =
loanPrincipalFromPeriodicPayment(rules, periodicPayment, periodicRate, n);
auto const upperBound = periodicPayment * Number{n};
BEAST_EXPECTS(
computed <= upperBound,
"n=" + std::to_string(n) + ": payment*n=" + to_string(upperBound) +
", principal=" + to_string(computed));
}
}
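The bound being tested follows from the annuity formula: principal = payment * (1 - (1+r)^-n) / r, and (1 - (1+r)^-n)/r < n for every r > 0, so the principal can never exceed payment * n. A double-precision sketch using the expm1/log1p route (an analog with an illustrative signature; the production code uses `Number` and the hybrid payment factor):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Annuity principal from a fixed periodic payment:
//   principal = payment * (1 - (1+r)^-n) / r    (r > 0)
//   principal = payment * n                     (r == 0 limit)
// Computing 1 - (1+r)^-n via expm1/log1p avoids the cancellation that
// inflated the factor above n in the buggy closed-form path.
double principalFromPayment(double payment, double r, std::uint32_t n)
{
    if (r == 0.0)
        return payment * static_cast<double>(n);
    double const factor =
        -std::expm1(-static_cast<double>(n) * std::log1p(r)) / r;
    return payment * factor;
}
```

For the bug-regime inputs (r near 1.9e-10, n = 3) the factor evaluates to 3 - 6r plus smaller terms, strictly below 3, so `principal <= payment * n` holds with room to spare.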
// Regression: `computeTheoreticalLoanState` must produce a non-negative
// `interestDue` for any non-negative rate. Pre-fix, near-zero rates
// produced a negative `interestDue` because `(1+r)^n - 1` lost most of
// its precision to cancellation.
void
testComputeTheoreticalLoanStateNearZeroRate()
{
testcase("computeTheoreticalLoanState: non-negative interestDue at near-zero rate");
using namespace jtx;
using namespace xrpl::detail;
Env const env{*this};
auto const& rules = env.current()->rules();
auto const periodicRate = loanPeriodicRate(TenthBips32{1}, 600);
auto const periodicPayment = loanPeriodicPayment(rules, 100, periodicRate, 3);
auto const state =
computeTheoreticalLoanState(rules, periodicPayment, periodicRate, 2, TenthBips32{0});
BEAST_EXPECT(state.principalOutstanding <= state.valueOutstanding);
BEAST_EXPECT(state.interestDue >= 0);
BEAST_EXPECT(state.managementFeeDue == 0);
}
void
testComputeOverpaymentComponents()
{
@@ -606,6 +805,7 @@ class LendingHelpers_test : public beast::unit_test::suite
asset, loanScale, overpaymentAmount, TenthBips32(0), TenthBips32(0), managementFeeRate);
auto const loanProperties = computeLoanProperties(
env.current()->rules(),
asset,
loanPrincipal,
loanInterestRate,
@@ -615,6 +815,7 @@ class LendingHelpers_test : public beast::unit_test::suite
loanScale);
auto const ret = tryOverpayment(
env.current()->rules(),
asset,
loanScale,
overpaymentComponents,
@@ -697,6 +898,7 @@ class LendingHelpers_test : public beast::unit_test::suite
managementFeeRate);
auto const loanProperties = computeLoanProperties(
env.current()->rules(),
asset,
loanPrincipal,
loanInterestRate,
@@ -706,6 +908,7 @@ class LendingHelpers_test : public beast::unit_test::suite
loanScale);
auto const ret = tryOverpayment(
env.current()->rules(),
asset,
loanScale,
overpaymentComponents,
@@ -790,6 +993,7 @@ class LendingHelpers_test : public beast::unit_test::suite
managementFeeRate);
auto const loanProperties = computeLoanProperties(
env.current()->rules(),
asset,
loanPrincipal,
loanInterestRate,
@@ -799,6 +1003,7 @@ class LendingHelpers_test : public beast::unit_test::suite
loanScale);
auto const ret = tryOverpayment(
env.current()->rules(),
asset,
loanScale,
overpaymentComponents,
@@ -889,6 +1094,7 @@ class LendingHelpers_test : public beast::unit_test::suite
managementFeeRate);
auto const loanProperties = computeLoanProperties(
env.current()->rules(),
asset,
loanPrincipal,
loanInterestRate,
@@ -898,6 +1104,7 @@ class LendingHelpers_test : public beast::unit_test::suite
loanScale);
auto const ret = tryOverpayment(
env.current()->rules(),
asset,
loanScale,
overpaymentComponents,
@@ -996,6 +1203,7 @@ class LendingHelpers_test : public beast::unit_test::suite
managementFeeRate);
auto const loanProperties = computeLoanProperties(
env.current()->rules(),
asset,
loanPrincipal,
loanInterestRate,
@@ -1005,6 +1213,7 @@ class LendingHelpers_test : public beast::unit_test::suite
loanScale);
auto const ret = tryOverpayment(
env.current()->rules(),
asset,
loanScale,
overpaymentComponents,
@@ -1103,6 +1312,7 @@ class LendingHelpers_test : public beast::unit_test::suite
managementFeeRate);
auto const loanProperties = computeLoanProperties(
env.current()->rules(),
asset,
loanPrincipal,
loanInterestRate,
@@ -1112,6 +1322,7 @@ class LendingHelpers_test : public beast::unit_test::suite
loanScale);
auto const ret = tryOverpayment(
env.current()->rules(),
asset,
loanScale,
overpaymentComponents,
@@ -1199,8 +1410,11 @@ public:
testLoanLatePaymentInterest();
testLoanPeriodicPayment();
testLoanPrincipalFromPeriodicPayment();
testComputeRaisedRate();
testLoanPrincipalFromPeriodicPaymentNearZeroRate();
testComputePaymentFactor();
testComputePowerMinusOne();
testComputePowerMinusOneHybrid();
testComputeTheoreticalLoanStateNearZeroRate();
testComputeOverpaymentComponents();
testComputeInterestAndFeeParts();
}

View File

@@ -715,6 +715,7 @@ protected:
auto const total = loanParams.payTotal.value_or(LoanSet::defaultPaymentTotal);
auto const feeRate = brokerParams.managementFeeRate;
auto const props = computeLoanProperties(
env.current()->rules(),
asset,
principal,
interest,
@@ -920,6 +921,7 @@ protected:
state.totalValue, state.principalOutstanding, state.managementFeeOutstanding);
{
auto const raw = computeTheoreticalLoanState(
env.current()->rules(),
state.periodicPayment,
periodicRate,
state.paymentRemaining,
@@ -963,6 +965,7 @@ protected:
std::size_t totalPaymentsMade = 0;
xrpl::LoanState currentTrueState = computeTheoreticalLoanState(
env.current()->rules(),
state.periodicPayment,
periodicRate,
state.paymentRemaining,
@@ -992,6 +995,7 @@ protected:
validateBorrowerBalance();
// Compute the expected principal amount
auto const paymentComponents = xrpl::detail::computePaymentComponents(
env.current()->rules(),
broker.asset.raw(),
state.loanScale,
state.totalValue,
@@ -1012,6 +1016,7 @@ protected:
paymentComponents.trackedManagementFeeDelta);
xrpl::LoanState const nextTrueState = computeTheoreticalLoanState(
env.current()->rules(),
state.periodicPayment,
periodicRate,
state.paymentRemaining - 1,
@@ -1409,6 +1414,7 @@ protected:
auto state = getCurrentState(env, broker, keylet, verifyLoanStatus);
auto const loanProperties = computeLoanProperties(
env.current()->rules(),
broker.asset.raw(),
state.principalOutstanding,
state.interestRate,
@@ -2537,6 +2543,7 @@ protected:
{
auto const raw = computeTheoreticalLoanState(
env.current()->rules(),
state.periodicPayment,
periodicRate,
state.paymentRemaining,
@@ -2574,6 +2581,7 @@ protected:
std::size_t totalPaymentsMade = 0;
xrpl::LoanState currentTrueState = computeTheoreticalLoanState(
env.current()->rules(),
state.periodicPayment,
periodicRate,
state.paymentRemaining,
@@ -2583,6 +2591,7 @@ protected:
{
// Compute the expected principal amount
auto const paymentComponents = xrpl::detail::computePaymentComponents(
env.current()->rules(),
broker.asset.raw(),
state.loanScale,
state.totalValue,
@@ -2600,6 +2609,7 @@ protected:
", periodic payment: " + to_string(roundedPeriodicPayment));
xrpl::LoanState const nextTrueState = computeTheoreticalLoanState(
env.current()->rules(),
state.periodicPayment,
periodicRate,
state.paymentRemaining - 1,
@@ -5500,7 +5510,11 @@ protected:
auto const periodicRate = loanPeriodicRate(interestRateValue, state.paymentInterval);
auto const rawLoanState = computeTheoreticalLoanState(
state.periodicPayment, periodicRate, state.paymentRemaining, managementFeeRate);
env.current()->rules(),
state.periodicPayment,
periodicRate,
state.paymentRemaining,
managementFeeRate);
auto const parentCloseTime = env.current()->parentCloseTime();
auto const startDateSeconds =
@@ -5729,6 +5743,7 @@ protected:
auto state = getCurrentState(env, broker, loanKeylet);
Number const periodicRate = loanPeriodicRate(state.interestRate, state.paymentInterval);
auto const components = xrpl::detail::computePaymentComponents(
env.current()->rules(),
asset.raw(),
state.loanScale,
state.totalValue,
@@ -5762,7 +5777,10 @@ protected:
// schedule
auto const fullPaymentInterest = computeFullPaymentInterest(
xrpl::detail::loanPrincipalFromPeriodicPayment(
after.periodicPayment, periodicRate2, after.paymentRemaining),
env.current()->rules(),
after.periodicPayment,
periodicRate2,
after.paymentRemaining),
periodicRate2,
env.current()->parentCloseTime(),
after.paymentInterval,
@@ -5795,7 +5813,10 @@ protected:
auto const prevClamped = std::min(after.previousPaymentDate, nowSecs);
auto const fullPaymentInterestClamped = computeFullPaymentInterest(
xrpl::detail::loanPrincipalFromPeriodicPayment(
after.periodicPayment, periodicRate2, after.paymentRemaining),
env.current()->rules(),
after.periodicPayment,
periodicRate2,
after.paymentRemaining),
periodicRate2,
env.current()->parentCloseTime(),
after.paymentInterval,
@@ -7198,6 +7219,188 @@ protected:
attemptWithdrawShares(depositorB, sharesLpB, tesSUCCESS);
}
// A near-zero interest rate (1 TenthBips = 0.001%) on a 100 USD loan
// produces total interest of ~6 units at loanScale -9. Numerical error
// in the amortization formula pushes the theoretical principal above
// the theoretical value, producing a negative theoretical interest.
// The payment delta then exceeds the actual outstanding interest,
// violating XRPL_ASSERT_PARTS in computePaymentComponents.
void
testBugInterestDueDeltaCrash()
{
testcase("bug: LoanPay asserts 'interest due delta' on near-zero rate");
using namespace jtx;
using namespace std::chrono_literals;
Env env(*this, all);
Account const issuer{"issuer"};
Account const lender{"lender"};
Account const borrower{"borrower"};
env.fund(XRP(1'000'000), issuer, lender, borrower);
env.close();
env(fset(issuer, asfDefaultRipple));
env.close();
PrettyAsset const iouAsset = issuer["USD"];
env(trust(lender, iouAsset(1'000'000'000)));
env(trust(borrower, iouAsset(1'000'000'000)));
env(pay(issuer, lender, iouAsset(5'000'000)));
env(pay(issuer, borrower, iouAsset(5'000'000)));
env.close();
BrokerParameters const brokerParams{
.vaultDeposit = 1'000'000,
.debtMax = 1'000'000,
.coverRateMin = TenthBips32{0},
.coverDeposit = 0,
.managementFeeRate = TenthBips16{0},
.coverRateLiquidation = TenthBips32{0}};
BrokerInfo const broker{createVaultAndBroker(env, iouAsset, lender, brokerParams)};
using namespace loan;
auto const loanSetFee = fee(env.current()->fees().base * 2);
Number const principalRequest{100};
auto createJson = env.json(
set(borrower, broker.brokerID, principalRequest),
fee(loanSetFee),
json(sfCounterpartySignature, Json::objectValue));
createJson["InterestRate"] = 1; // minimum non-zero rate
createJson["PaymentTotal"] = 3;
createJson["PaymentInterval"] = 600;
auto const brokerStateBefore = env.le(keylet::loanbroker(broker.brokerID));
auto const loanSequence = brokerStateBefore->at(sfLoanSequence);
auto const keylet = keylet::loan(broker.brokerID, loanSequence);
createJson = env.json(createJson, sig(sfCounterpartySignature, lender));
env(createJson, ter(tesSUCCESS));
env.close();
// For principal=100, n=3 the amortization schedule produces a
// periodic payment ≈ 33.33 USD. We pay 35 USD, which is more than
// one period's worth — enough for the LoanPay path to enter
// computePaymentComponents and reach the assertion that fires
// when the bug is present. With the fix, the tx applies cleanly.
env(pay(borrower, keylet.key, iouAsset(35)), ter(tesSUCCESS));
env.close();
}
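The rate arithmetic behind this reproduction, written out. Two assumptions about the production helper are baked in, both implied by the numbers elsewhere in this diff: 1 TenthBips is 0.001% = 1e-5 as a fraction, and `loanPeriodicRate` scales the annual rate by paymentInterval over a 365-day year:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical sketch of the periodic-rate computation.
// annualRateTenthBips = 1 means 0.1 basis point = 0.001% = 1e-5 per year.
double periodicRate(std::uint32_t annualRateTenthBips, std::uint32_t intervalSeconds)
{
    constexpr double secondsPerYear = 365.0 * 24.0 * 60.0 * 60.0;  // 31'536'000
    double const annualFraction = annualRateTenthBips * 1e-5;
    return annualFraction * intervalSeconds / secondsPerYear;
}
```

`periodicRate(1, 600)` is about 1.9026e-10, and with n = 3 payments r*n is about 5.7e-10, below the 1e-9 hybrid threshold: exactly the regime this test exercises.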
// Integration test: full lifecycle of a $1B loan in the bug regime.
// Verifies that the vault collects the economically-correct interest
// income and that conservation holds at the trust-line level.
//
// Pre-fix (closed-form `power(1+r, n) - 1`): vault collected only
// ~$0.058 per $1B due to cancellation of `(1+r)^n - 1` at r*n ~ 5.7e-10.
// Post-fix (hybrid binomial path): vault collects ~$0.38 per $1B,
// matching the value computed independently with arbitrary-precision
// Decimal arithmetic.
void
testFullLifecycleVaultPnLNearZeroRate()
{
testcase("integration: full loan lifecycle, vault interest at near-zero rate");
using namespace jtx;
using namespace jtx::loan;
using namespace std::chrono_literals;
Env env(*this, all);
Account const issuer{"issuer"};
Account const lender{"lender"};
Account const borrower{"borrower"};
env.fund(XRP(1'000'000), issuer, lender, borrower);
env.close();
env(fset(issuer, asfDefaultRipple));
env.close();
PrettyAsset const iouAsset = issuer["USD"];
STAmount const trustLimit{iouAsset.raw(), Number{1, 17}};
env(trust(lender, trustLimit));
env(trust(borrower, trustLimit));
env.close();
env(pay(issuer, lender, iouAsset(5'000'000'000LL)));
env(pay(issuer, borrower, iouAsset(5'000'000'000LL)));
env.close();
auto usdBalance = [&](Account const& a) {
return env.balance(a, iouAsset.raw().get<Issue>()).value();
};
STAmount const borrowerStartBal = usdBalance(borrower);
BrokerParameters const brokerParams{
.vaultDeposit = Number{2, 9},
.debtMax = Number{0},
.coverRateMin = TenthBips32{0},
.coverDeposit = 0,
.managementFeeRate = TenthBips16{0},
.coverRateLiquidation = TenthBips32{0}};
BrokerInfo const broker{createVaultAndBroker(env, iouAsset, lender, brokerParams)};
auto const vaultBefore = env.le(broker.vaultKeylet());
BEAST_EXPECT(vaultBefore);
Number const vaultAvailableBefore = vaultBefore->at(sfAssetsAvailable);
// Loan: $1B principal, 3 payments, 600s interval, rate=1 TenthBips32.
auto const loanSetFee = fee(env.current()->fees().base * 2);
Number const principalRequest{1, 9};
auto createJson = env.json(
set(borrower, broker.brokerID, principalRequest),
fee(loanSetFee),
json(sfCounterpartySignature, Json::objectValue));
createJson["InterestRate"] = 1;
createJson["PaymentTotal"] = 3;
createJson["PaymentInterval"] = 600;
auto const brokerStateBefore = env.le(keylet::loanbroker(broker.brokerID));
auto const loanSequence = brokerStateBefore->at(sfLoanSequence);
auto const loanKeylet = keylet::loan(broker.brokerID, loanSequence);
createJson = env.json(createJson, sig(sfCounterpartySignature, lender));
env(createJson, ter(tesSUCCESS));
env.close();
auto const loanSle = env.le(loanKeylet);
BEAST_EXPECT(loanSle);
Number const expectedTotalInterest =
loanSle->at(sfTotalValueOutstanding) - loanSle->at(sfPrincipalOutstanding);
env(pay(borrower, loanKeylet.key, iouAsset(1'500'000'000LL)), ter(tesSUCCESS));
env.close();
auto const vaultAfter = env.le(broker.vaultKeylet());
Number const vaultAvailableAfter = vaultAfter->at(sfAssetsAvailable);
Number const vaultGain = vaultAvailableAfter - vaultAvailableBefore;
STAmount const borrowerEndBal = usdBalance(borrower);
STAmount const borrowerNetOut = borrowerStartBal - borrowerEndBal;
// Self-consistency: vault gained exactly the expected interest
// computed at LoanSet, and the borrower's outflow matches.
BEAST_EXPECT(vaultGain == expectedTotalInterest);
BEAST_EXPECT(Number(borrowerNetOut) == expectedTotalInterest);
// Mathematical correctness: the total interest for this loan
// configuration is 0.38051750382930729983, calculated
// independently using 50-digit Decimal arithmetic (no
// cancellation possible at that precision). At Number's 19-digit
// mantissa this rounds to 0.38051750382930729 — the literal
// below. The vault's actual gain must agree to within
// sub-microcent precision.
Number const decimalReference{38051750382930729LL, -17};
Number const tolerance{1, -6}; // 1e-6 USD = sub-microcent
Number const error = abs(vaultGain - decimalReference);
BEAST_EXPECTS(
error < tolerance,
"vault gain " + to_string(vaultGain) + " differs from Decimal reference " +
to_string(decimalReference) + " by " + to_string(error) + " — exceeds tolerance " +
to_string(tolerance));
}
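The Decimal reference value can be cross-checked by series expansion. For n = 3 the total interest is 3*payment - P = P * (6r^2 + 8r^3 + 3r^4) / ((1+r)^3 - 1), which expands to P * (2r + (2/3)r^2 + O(r^3)); at r near 1.9e-10 the first two terms already give 0.3805175038293073. A double-precision sketch (the series itself is exact algebra; treating plain doubles as adequate is the only assumption, since no near-equal subtraction occurs on this path):

```cpp
#include <cassert>
#include <cmath>

// Total interest of a 3-payment amortized loan at periodic rate r:
//   interest = 3*payment - P = P * (2r + (2/3) r^2 + O(r^3)).
// The series form never subtracts two nearly-equal quantities, so plain
// doubles recover all the digits that matter at near-zero rates.
double threePaymentInterest(double principal, double r)
{
    return principal * (2.0 * r + (2.0 / 3.0) * r * r);
}
```

With P = 1e9 and r = 6e-3 / 31'536'000 this lands on 0.38051750382930729..., agreeing with the 50-digit Decimal reference quoted above to far tighter than the test's 1e-6 tolerance.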
public:
void
run() override
@@ -7206,6 +7409,10 @@ public:
testLoanPayLateFullPaymentBypassesPenalties();
testLoanCoverMinimumRoundingExploit();
#endif
testBugInterestDueDeltaCrash();
testFullLifecycleVaultPnLNearZeroRate();
testWithdrawReflectsUnrealizedLoss();
testInvalidLoanSet();

View File

@@ -7,6 +7,7 @@
#include <xrpl/basics/BasicConfig.h>
#include <xrpl/beast/unit_test/suite.h>
#include <xrpl/beast/utility/temp_dir.h>
#include <xrpl/protocol/SystemParameters.h> // IWYU pragma: keep
#include <xrpl/server/Port.h>
#include <boost/filesystem/operations.hpp>

View File

@@ -63,8 +63,8 @@ public:
negotiateProtocolVersion("RTXP/1.2, XRPL/2.0, XRPL/2.1") == make_protocol(2, 1));
BEAST_EXPECT(negotiateProtocolVersion("XRPL/2.2") == make_protocol(2, 2));
BEAST_EXPECT(
negotiateProtocolVersion("RTXP/1.2, XRPL/2.3, XRPL/2.4, XRPL/999.999") ==
make_protocol(2, 3));
negotiateProtocolVersion("RTXP/1.2, XRPL/2.2, XRPL/2.3, XRPL/999.999") ==
make_protocol(2, 2));
BEAST_EXPECT(negotiateProtocolVersion("XRPL/999.999, WebSocket/1.0") == std::nullopt);
BEAST_EXPECT(negotiateProtocolVersion("") == std::nullopt);
}

View File

@@ -1,6 +1,7 @@
#include <test/shamap/common.h>
#include <test/unit_test/SuiteJournal.h>
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/SHAMapHash.h>
#include <xrpl/basics/Slice.h>
#include <xrpl/basics/base_uint.h>
@@ -114,17 +115,14 @@ public:
destination.setSynching();
{
std::vector<SHAMapNodeData> a;
std::vector<std::pair<SHAMapNodeID, Blob>> a;
BEAST_EXPECT(source.getNodeFat(SHAMapNodeID(), a, rand_bool(eng_), rand_int(eng_, 2)));
unexpected(a.empty(), "NodeSize");
auto node = SHAMapTreeNode::makeFromWire(makeSlice(a[0].data));
if (!node)
fail("", __FILE__, __LINE__);
BEAST_EXPECT(
destination.addRootNode(source.getHash(), std::move(node), nullptr).isGood());
BEAST_EXPECT(destination.addRootNode(source.getHash(), makeSlice(a[0].second), nullptr)
.isGood());
}
do
@@ -138,7 +136,7 @@ public:
break;
// get as many nodes as possible based on this information
std::vector<SHAMapNodeData> b;
std::vector<std::pair<SHAMapNodeID, Blob>> b;
for (auto& it : nodesMissing)
{
@@ -160,10 +158,8 @@ public:
// Don't use BEAST_EXPECT here b/c it will be called a
// non-deterministic number of times and the number of tests run
// should be deterministic
auto node = SHAMapTreeNode::makeFromWire(makeSlice(b[i].data));
if (!node)
fail("", __FILE__, __LINE__);
if (!destination.addKnownNode(b[i].nodeID, std::move(node), nullptr).isUseful())
if (!destination.addKnownNode(b[i].first, makeSlice(b[i].second), nullptr)
.isUseful())
fail("", __FILE__, __LINE__);
}
} while (true);

View File

@@ -9,7 +9,6 @@
#include <mutex>
#include <set>
#include <string_view>
#include <utility>
namespace xrpl {
@@ -132,16 +131,16 @@ private:
processData(std::shared_ptr<Peer> peer, protocol::TMLedgerData& data);
bool
takeHeader(std::string_view data);
takeHeader(std::string const& data);
void
receiveNode(protocol::TMLedgerData& packet, SHAMapAddNode& san);
receiveNode(protocol::TMLedgerData& packet, SHAMapAddNode&);
bool
takeTxRootNode(std::string_view data, SHAMapAddNode& san);
takeTxRootNode(Slice const& data, SHAMapAddNode&);
bool
takeAsRootNode(std::string_view data, SHAMapAddNode& san);
takeAsRootNode(Slice const& data, SHAMapAddNode&);
std::vector<uint256>
neededTxHashes(int max, SHAMapSyncFilter* filter) const;

View File

@@ -4,7 +4,6 @@
#include <xrpld/app/ledger/InboundLedgers.h>
#include <xrpld/app/ledger/LedgerMaster.h>
#include <xrpld/app/ledger/TransactionStateSF.h>
#include <xrpld/app/ledger/detail/LedgerNodeHelpers.h>
#include <xrpld/app/ledger/detail/TimeoutCounter.h>
#include <xrpld/app/main/Application.h>
#include <xrpld/overlay/Message.h>
@@ -43,8 +42,8 @@
#include <mutex>
#include <random>
#include <sstream>
#include <stdexcept>
#include <string>
#include <string_view>
#include <tuple>
#include <unordered_map>
#include <utility>
@@ -794,7 +793,7 @@ InboundLedger::filterNodes(
*/
// data must not have hash prefix
bool
InboundLedger::takeHeader(std::string_view data)
InboundLedger::takeHeader(std::string const& data)
{
// Return value: true=normal, false=bad data
JLOG(journal_.trace()) << "got header acquiring ledger " << hash_;
@@ -882,31 +881,20 @@ InboundLedger::receiveNode(protocol::TMLedgerData& packet, SHAMapAddNode& san)
 {
     auto const f = filter.get();
-    for (auto const& ledgerNode : packet.nodes())
+    for (auto const& node : packet.nodes())
     {
-        auto treeNode = getTreeNode(ledgerNode.nodedata());
-        if (!treeNode)
-        {
-            JLOG(journal_.warn()) << "Got invalid node data";
-            san.incInvalid();
-            return;
-        }
-        auto const nodeID = getSHAMapNodeID(ledgerNode, treeNode);
+        auto const nodeID = deserializeSHAMapNodeID(node.nodeid());
         if (!nodeID)
-        {
-            JLOG(journal_.warn()) << "Got invalid node id";
-            san.incInvalid();
-            return;
-        }
+            throw std::runtime_error("data does not properly deserialize");
         if (nodeID->isRoot())
         {
-            san += map.addRootNode(rootHash, std::move(treeNode), f);
+            san += map.addRootNode(rootHash, makeSlice(node.nodedata()), f);
         }
         else
         {
-            san += map.addKnownNode(*nodeID, std::move(treeNode), f);
+            san += map.addKnownNode(*nodeID, makeSlice(node.nodedata()), f);
         }
         if (!san.isGood())
@@ -946,7 +934,7 @@ InboundLedger::receiveNode(protocol::TMLedgerData& packet, SHAMapAddNode& san)
     Call with a lock
 */
 bool
-InboundLedger::takeAsRootNode(std::string_view data, SHAMapAddNode& san)
+InboundLedger::takeAsRootNode(Slice const& data, SHAMapAddNode& san)
 {
     if (failed_ || mHaveState)
     {
@@ -962,17 +950,9 @@ InboundLedger::takeAsRootNode(std::string_view data, SHAMapAddNode& san)
     // LCOV_EXCL_STOP
     }
-    auto treeNode = getTreeNode(data);
-    if (!treeNode)
-    {
-        JLOG(journal_.warn()) << "Got invalid node data";
-        san.incInvalid();
-        return false;
-    }
     AccountStateSF filter(mLedger->stateMap().family().db(), app_.getLedgerMaster());
-    san += mLedger->stateMap().addRootNode(
-        SHAMapHash{mLedger->header().accountHash}, std::move(treeNode), &filter);
+    san +=
+        mLedger->stateMap().addRootNode(SHAMapHash{mLedger->header().accountHash}, data, &filter);
     return san.isGood();
 }
@@ -980,7 +960,7 @@ InboundLedger::takeAsRootNode(std::string_view data, SHAMapAddNode& san)
     Call with a lock
 */
 bool
-InboundLedger::takeTxRootNode(std::string_view data, SHAMapAddNode& san)
+InboundLedger::takeTxRootNode(Slice const& data, SHAMapAddNode& san)
 {
     if (failed_ || mHaveTransactions)
     {
@@ -996,17 +976,8 @@ InboundLedger::takeTxRootNode(std::string_view data, SHAMapAddNode& san)
     // LCOV_EXCL_STOP
     }
-    auto treeNode = getTreeNode(data);
-    if (!treeNode)
-    {
-        JLOG(journal_.warn()) << "Got invalid node data";
-        san.incInvalid();
-        return false;
-    }
     TransactionStateSF filter(mLedger->txMap().family().db(), app_.getLedgerMaster());
-    san += mLedger->txMap().addRootNode(
-        SHAMapHash{mLedger->header().txHash}, std::move(treeNode), &filter);
+    san += mLedger->txMap().addRootNode(SHAMapHash{mLedger->header().txHash}, data, &filter);
     return san.isGood();
 }
@@ -1103,13 +1074,13 @@ InboundLedger::processData(std::shared_ptr<Peer> peer, protocol::TMLedgerData& p
     }
     if (!mHaveState && (packet.nodes().size() > 1) &&
-        !takeAsRootNode(packet.nodes(1).nodedata(), san))
+        !takeAsRootNode(makeSlice(packet.nodes(1).nodedata()), san))
     {
         JLOG(journal_.warn()) << "Included AS root invalid";
     }
     if (!mHaveTransactions && (packet.nodes().size() > 2) &&
-        !takeTxRootNode(packet.nodes(2).nodedata(), san))
+        !takeTxRootNode(makeSlice(packet.nodes(2).nodedata()), san))
     {
         JLOG(journal_.warn()) << "Included TX root invalid";
     }
@@ -1140,13 +1111,13 @@ InboundLedger::processData(std::shared_ptr<Peer> peer, protocol::TMLedgerData& p
     ScopedLockType const sl(mtx_);
-    // Verify nodes are complete
-    for (auto const& ledgerNode : packet.nodes())
+    // Verify node IDs and data are complete
+    for (auto const& node : packet.nodes())
     {
-        if (!validateLedgerNode(ledgerNode))
+        if (!node.has_nodeid() || !node.has_nodedata())
         {
-            JLOG(journal_.warn()) << "Got malformed ledger node";
-            peer->charge(Resource::feeMalformedRequest, "ledgerNode");
+            JLOG(journal_.warn()) << "Got bad node";
+            peer->charge(Resource::feeMalformedRequest, "ledger_data bad node");
             return -1;
         }
     }

View File

@@ -2,13 +2,13 @@
 #include <xrpld/app/ledger/InboundLedger.h>
 #include <xrpld/app/ledger/LedgerMaster.h>
-#include <xrpld/app/ledger/detail/LedgerNodeHelpers.h>
 #include <xrpld/app/main/Application.h>
 #include <xrpld/overlay/PeerSet.h>
 #include <xrpl/basics/Blob.h>
 #include <xrpl/basics/DecayingSample.h>
 #include <xrpl/basics/Log.h>
+#include <xrpl/basics/Slice.h>
 #include <xrpl/basics/UnorderedContainers.h>
 #include <xrpl/basics/base_uint.h>
 #include <xrpl/basics/scope.h>
@@ -248,20 +248,23 @@ public:
         Serializer s;
         try
         {
-            for (auto const& ledgerNode : packet_ptr->nodes())
+            for (int i = 0; i < packet_ptr->nodes().size(); ++i)
             {
-                if (!validateLedgerNode(ledgerNode))
+                auto const& node = packet_ptr->nodes(i);
+                if (!node.has_nodeid() || !node.has_nodedata())
                     return;
-                auto const treeNode = getTreeNode(ledgerNode.nodedata());
-                if (!treeNode)
+                auto newNode = SHAMapTreeNode::makeFromWire(makeSlice(node.nodedata()));
+                if (!newNode)
                     return;
                 s.erase();
-                treeNode->serializeWithPrefix(s);
+                newNode->serializeWithPrefix(s);
                 app_.getLedgerMaster().addFetchPack(
-                    treeNode->getHash().as_uint256(), std::make_shared<Blob>(s.begin(), s.end()));
+                    newNode->getHash().as_uint256(), std::make_shared<Blob>(s.begin(), s.end()));
             }
         }
         catch (std::exception const&) // NOLINT(bugprone-empty-catch)

View File

@@ -1,11 +1,11 @@
 #include <xrpld/app/ledger/InboundTransactions.h>
-#include <xrpld/app/ledger/detail/LedgerNodeHelpers.h>
 #include <xrpld/app/ledger/detail/TransactionAcquire.h>
 #include <xrpld/app/main/Application.h>
 #include <xrpld/overlay/PeerSet.h>
 #include <xrpl/basics/Log.h>
+#include <xrpl/basics/Slice.h>
 #include <xrpl/basics/UnorderedContainers.h>
 #include <xrpl/beast/insight/Collector.h>
 #include <xrpl/protocol/RippleLedgerHash.h>
@@ -14,7 +14,6 @@
 #include <xrpl/shamap/SHAMap.h>
 #include <xrpl/shamap/SHAMapMissingNode.h>
 #include <xrpl/shamap/SHAMapNodeID.h>
-#include <xrpl/shamap/SHAMapTreeNode.h>
 #include <xrpl.pb.h>
@@ -145,38 +144,29 @@ public:
             return;
         }
-        std::vector<std::pair<SHAMapNodeID, SHAMapTreeNodePtr>> data;
+        std::vector<std::pair<SHAMapNodeID, Slice>> data;
         data.reserve(packet.nodes().size());
-        for (auto const& ledgerNode : packet.nodes())
+        for (auto const& node : packet.nodes())
         {
-            if (!validateLedgerNode(ledgerNode))
+            if (!node.has_nodeid() || !node.has_nodedata())
             {
-                JLOG(j_.warn()) << "Got malformed ledger node";
-                peer->charge(Resource::feeMalformedRequest, "ledgerNode");
+                peer->charge(Resource::feeMalformedRequest, "ledger_data");
                 return;
             }
-            auto treeNode = getTreeNode(ledgerNode.nodedata());
-            if (!treeNode)
+            auto const id = deserializeSHAMapNodeID(node.nodeid());
+            if (!id)
             {
-                JLOG(j_.warn()) << "Got invalid node data";
-                peer->charge(Resource::feeInvalidData, "node_data");
+                peer->charge(Resource::feeInvalidData, "ledger_data");
                 return;
             }
-            auto const nodeID = getSHAMapNodeID(ledgerNode, treeNode);
-            if (!nodeID)
-            {
-                JLOG(j_.warn()) << "Got invalid node id";
-                peer->charge(Resource::feeInvalidData, "node_id");
-                return;
-            }
-            data.emplace_back(*nodeID, std::move(treeNode));
+            data.emplace_back(*id, makeSlice(node.nodedata()));
         }
-        if (!ta->takeNodes(std::move(data), peer).isUseful())
+        if (!ta->takeNodes(data, peer).isUseful())
             peer->charge(Resource::feeUselessData, "ledger_data not useful");
     }

View File

@@ -1,91 +0,0 @@
#include <xrpld/app/ledger/detail/LedgerNodeHelpers.h>
#include <xrpl/basics/Slice.h>
#include <xrpl/basics/safe_cast.h>
#include <xrpl/beast/utility/instrumentation.h>
#include <xrpl/shamap/SHAMap.h>
#include <xrpl/shamap/SHAMapLeafNode.h>
#include <xrpl/shamap/SHAMapNodeID.h>
#include <xrpl/shamap/SHAMapTreeNode.h>
#include <xrpl.pb.h>
#include <exception>
#include <optional>
#include <string_view>
namespace xrpl {
bool
validateLedgerNode(protocol::TMLedgerNode const& ledgerNode)
{
if (!ledgerNode.has_nodedata())
return false;
if (ledgerNode.has_nodeid())
return !ledgerNode.has_id() && !ledgerNode.has_depth();
return ledgerNode.has_id() ||
(ledgerNode.has_depth() && ledgerNode.depth() <= SHAMap::leafDepth);
}
SHAMapTreeNodePtr
getTreeNode(std::string_view data)
{
auto const slice = makeSlice(data);
try
{
return SHAMapTreeNode::makeFromWire(slice);
}
catch (std::exception const&)
{
return {};
}
}
std::optional<SHAMapNodeID>
getSHAMapNodeID(protocol::TMLedgerNode const& ledgerNode, SHAMapTreeNodePtr const& treeNode)
{
if (ledgerNode.has_id() || ledgerNode.has_depth())
{
if (treeNode->isInner())
{
if (!ledgerNode.has_id())
return std::nullopt;
return deserializeSHAMapNodeID(ledgerNode.id());
}
if (treeNode->isLeaf())
{
if (!ledgerNode.has_depth())
return std::nullopt;
auto const key =
safe_downcast<SHAMapLeafNode const*>(treeNode.get())->peekItem()->key();
return SHAMapNodeID::createID(ledgerNode.depth(), key);
}
UNREACHABLE("xrpl::getSHAMapNodeID : tree node is neither inner nor leaf");
return std::nullopt;
}
if (!ledgerNode.has_nodeid())
return std::nullopt;
auto nodeID = deserializeSHAMapNodeID(ledgerNode.nodeid());
if (!nodeID.has_value())
return std::nullopt;
if (treeNode->isLeaf())
{
auto const key = safe_downcast<SHAMapLeafNode const*>(treeNode.get())->peekItem()->key();
auto const expected_id = SHAMapNodeID::createID(static_cast<int>(nodeID->getDepth()), key);
if (nodeID->getNodeID() != expected_id.getNodeID())
return std::nullopt;
}
return nodeID;
}
} // namespace xrpl

View File

@@ -1,72 +0,0 @@
-#pragma once
-#include <xrpl/basics/IntrusivePointer.h>
-#include <xrpl/shamap/SHAMapNodeID.h>
-#include <xrpl/shamap/SHAMapTreeNode.h>
-#include <optional>
-#include <string_view>
-namespace protocol {
-class TMLedgerNode;
-} // namespace protocol
-namespace xrpl {
-/**
- * @brief Validates a ledger node proto message.
- *
- * This function checks whether a ledger node has the expected fields (for non-ledger base data):
- * - The node must have `nodedata`.
- * - If the legacy `nodeid` field is present then the new `id` and `depth` fields must not be
- *   present.
- * - If the new `id` or `depth` fields are present (it is a oneof field, so only one of the two can
- *   be set) then the legacy `nodeid` must not be present.
- * - If the `depth` field is present then it must be between 0 and SHAMap::leafDepth (inclusive).
- *
- * @param ledgerNode The ledger node to validate.
- * @return true if the ledger node has the expected fields, false otherwise.
- */
-[[nodiscard]] bool
-validateLedgerNode(protocol::TMLedgerNode const& ledgerNode);
-/**
- * @brief Deserializes a SHAMapTreeNode from wire format data.
- *
- * This function attempts to create a SHAMapTreeNode from the provided data string. If the data is
- * malformed or deserialization fails, the function returns a nullptr instead of throwing an
- * exception.
- *
- * @param data The serialized node data in wire format.
- * @return The deserialized tree node if successful, or a nullptr if deserialization fails.
- */
-[[nodiscard]] SHAMapTreeNodePtr
-getTreeNode(std::string_view data);
-/**
- * @brief Extracts or reconstructs the SHAMapNodeID from a ledger node proto message.
- *
- * This function retrieves the SHAMapNodeID for a tree node, with behavior that depends on which
- * field is set and the node type (inner vs. leaf).
- *
- * When the legacy `nodeid` field is set in the message:
- * - For all nodes: Deserializes the node ID from the field.
- * - For leaf nodes: Validates that the node ID is consistent with the leaf's key.
- *
- * When the new `id` or `depth` field is set in the message:
- * - For inner nodes: Deserializes the node ID from the `id` field.
- * - For leaf nodes: Reconstructs the node ID using both the depth from the `depth` field and the
- *   key from the leaf node's item.
- * Note that root nodes may be inner nodes or leaf nodes.
- *
- * @param ledgerNode The validated protocol message containing the ledger node data.
- * @param treeNode The deserialized tree node (inner or leaf node).
- * @return An optional containing the node ID if extraction/reconstruction succeeds, or std::nullopt
- * if the required fields are missing or validation fails.
- * @note This function expects that the caller has already validated the ledger node by calling the
- * `validateLedgerNode` function and obtained a valid tree node by calling `getTreeNode`.
- */
-[[nodiscard]] std::optional<SHAMapNodeID>
-getSHAMapNodeID(protocol::TMLedgerNode const& ledgerNode, SHAMapTreeNodePtr const& treeNode);
-} // namespace xrpl

View File

@@ -7,13 +7,13 @@
 #include <xrpld/overlay/PeerSet.h>
 #include <xrpl/basics/Log.h>
+#include <xrpl/basics/Slice.h>
 #include <xrpl/basics/base_uint.h>
 #include <xrpl/core/Job.h>
 #include <xrpl/server/NetworkOPs.h>
 #include <xrpl/shamap/SHAMap.h>
 #include <xrpl/shamap/SHAMapAddNode.h>
 #include <xrpl/shamap/SHAMapMissingNode.h>
-#include <xrpl/shamap/SHAMapTreeNode.h>
 #include <xrpl.pb.h>
@@ -173,7 +173,7 @@ TransactionAcquire::trigger(std::shared_ptr<Peer> const& peer)
 SHAMapAddNode
 TransactionAcquire::takeNodes(
-    std::vector<std::pair<SHAMapNodeID, SHAMapTreeNodePtr>> data,
+    std::vector<std::pair<SHAMapNodeID, Slice>> const& data,
     std::shared_ptr<Peer> const& peer)
 {
     ScopedLockType const sl(mtx_);
View File

@@ -21,8 +21,8 @@ public:
     SHAMapAddNode
     takeNodes(
-        std::vector<std::pair<SHAMapNodeID, SHAMapTreeNodePtr>> data,
-        std::shared_ptr<Peer> const& peer);
+        std::vector<std::pair<SHAMapNodeID, Slice>> const& data,
+        std::shared_ptr<Peer> const&);
     void
     init(int startPeers);

View File

@@ -17,7 +17,6 @@ enum class ProtocolFeature {
     ValidatorListPropagation,
     ValidatorList2Propagation,
     LedgerReplay,
-    LedgerNodeDepth,
 };
 /** Represents a peer connection in the overlay. */

View File

@@ -61,7 +61,6 @@
 #include <xrpl/server/Handoff.h>
 #include <xrpl/server/LoadFeeTrack.h>
 #include <xrpl/server/NetworkOPs.h>
-#include <xrpl/shamap/SHAMap.h>
 #include <xrpl/shamap/SHAMapNodeID.h>
 #include <xrpl/tx/apply.h>
@@ -566,8 +565,6 @@ PeerImp::supportsFeature(ProtocolFeature f) const
             return protocol_ >= make_protocol(2, 1);
         case ProtocolFeature::ValidatorList2Propagation:
             return protocol_ >= make_protocol(2, 2);
-        case ProtocolFeature::LedgerNodeDepth:
-            return protocol_ >= make_protocol(2, 3);
         case ProtocolFeature::LedgerReplay:
             return ledgerReplayEnabled_;
     }
@@ -1614,8 +1611,7 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMGetLedger> const& m)
         }
     }
-    // Verify and parse ledger node IDs
-    std::vector<SHAMapNodeID> nodeIDs;
+    // Verify ledger node IDs
     if (itype != protocol::liBASE)
     {
         if (m->nodeids_size() <= 0)
@@ -1624,16 +1620,13 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMGetLedger> const& m)
             return;
         }
-        nodeIDs.reserve(m->nodeids_size());
         for (auto const& nodeId : m->nodeids())
         {
-            auto parsed = deserializeSHAMapNodeID(nodeId);
-            if (!parsed)
+            if (deserializeSHAMapNodeID(nodeId) == std::nullopt)
             {
                 badData("Invalid SHAMap node ID");
                 return;
             }
-            nodeIDs.push_back(std::move(*parsed));
         }
     }
@@ -1656,11 +1649,10 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMGetLedger> const& m)
     // Queue a job to process the request
     std::weak_ptr<PeerImp> const weak = shared_from_this();
-    app_.getJobQueue().addJob(
-        jtLEDGER_REQ, "RcvGetLedger", [weak, m, nodeIDs = std::move(nodeIDs)]() mutable {
-            if (auto peer = weak.lock())
-                peer->processLedgerRequest(m, std::move(nodeIDs));
-        });
+    app_.getJobQueue().addJob(jtLEDGER_REQ, "RcvGetLedger", [weak, m]() {
+        if (auto peer = weak.lock())
+            peer->processLedgerRequest(m);
+    });
 }
 void
@@ -3369,9 +3361,7 @@ PeerImp::getTxSet(std::shared_ptr<protocol::TMGetLedger> const& m) const
 }
 void
-PeerImp::processLedgerRequest(
-    std::shared_ptr<protocol::TMGetLedger> const& m,
-    std::vector<SHAMapNodeID> nodeIDs)
+PeerImp::processLedgerRequest(std::shared_ptr<protocol::TMGetLedger> const& m)
 {
     // Do not resource charge a peer responding to a relay
     if (!m->has_requestcookie())
@@ -3456,25 +3446,26 @@ PeerImp::processLedgerRequest(
     }
     // Add requested node data to reply
-    if (!nodeIDs.empty())
+    if (m->nodeids_size() > 0)
     {
         std::uint32_t const defaultDepth = isHighLatency() ? 2 : 1;
         auto const queryDepth{m->has_querydepth() ? m->querydepth() : defaultDepth};
-        std::vector<SHAMapNodeData> data;
-        auto const useLedgerNodeDepth = supportsFeature(ProtocolFeature::LedgerNodeDepth);
-        for (auto const& nodeID : nodeIDs)
+        std::vector<std::pair<SHAMapNodeID, Blob>> data;
+        for (int i = 0;
+             i < m->nodeids_size() && ledgerData.nodes_size() < Tuning::softMaxReplyNodes;
+             ++i)
         {
-            if (ledgerData.nodes_size() >= Tuning::softMaxReplyNodes)
-                break;
+            auto const shaMapNodeId{deserializeSHAMapNodeID(m->nodeids(i))};
             data.clear();
             data.reserve(Tuning::softMaxReplyNodes);
             try
             {
-                if (map->getNodeFat(nodeID, data, fatLeaves, queryDepth))
+                // NOLINTNEXTLINE(bugprone-unchecked-optional-access) nodeids checked in onGetLedger
+                if (map->getNodeFat(*shaMapNodeId, data, fatLeaves, queryDepth))
                 {
                     JLOG(p_journal_.trace())
                         << "processLedgerRequest: getNodeFat got " << data.size() << " nodes";
@@ -3483,26 +3474,9 @@ PeerImp::processLedgerRequest(
                 {
                     if (ledgerData.nodes_size() >= Tuning::hardMaxReplyNodes)
                         break;
                     protocol::TMLedgerNode* node{ledgerData.add_nodes()};
-                    node->set_nodedata(d.data.data(), d.data.size());
-                    // When the LedgerNodeDepth protocol feature is not supported by the peer,
-                    // we always set the `nodeid` field. However, when it is supported then we
-                    // set the `id` field for inner nodes and the `depth` field for leaf nodes.
-                    if (!useLedgerNodeDepth)
-                    {
-                        node->set_nodeid(d.nodeID.getRawString());
-                    }
-                    else if (d.isLeaf)
-                    {
-                        node->set_depth(d.nodeID.getDepth());
-                    }
-                    else
-                    {
-                        node->set_id(d.nodeID.getRawString());
-                    }
+                    node->set_nodeid(d.first.getRawString());
+                    node->set_nodedata(d.second.data(), d.second.size());
                 }
             }
             else
@@ -3541,7 +3515,7 @@ PeerImp::processLedgerRequest(
                 info += ", no hash specified";
             JLOG(p_journal_.warn())
-                << "processLedgerRequest: getNodeFat with nodeId " << nodeID
+                << "processLedgerRequest: getNodeFat with nodeId " << *shaMapNodeId
                 << " and ledger info type " << info << " throws exception: " << e.what();
         }
     }

View File

@@ -14,7 +14,6 @@
 #include <xrpl/protocol/STTx.h>
 #include <xrpl/protocol/STValidation.h>
 #include <xrpl/resource/Fees.h>
-#include <xrpl/shamap/SHAMapNodeID.h>
 #include <boost/circular_buffer.hpp>
 #include <boost/endian/conversion.hpp>
@@ -793,9 +792,7 @@ private:
     getTxSet(std::shared_ptr<protocol::TMGetLedger> const& m) const;
     void
-    processLedgerRequest(
-        std::shared_ptr<protocol::TMGetLedger> const& m,
-        std::vector<SHAMapNodeID> nodeIDs);
+    processLedgerRequest(std::shared_ptr<protocol::TMGetLedger> const& m);
 };
 //------------------------------------------------------------------------------

View File

@@ -28,7 +28,6 @@ namespace xrpl {
 constexpr ProtocolVersion const supportedProtocolList[]{
     {2, 1},
     {2, 2},
-    {2, 3},
 };
 // This ugly construct ensures that supportedProtocolList is sorted in strictly