Compare commits

...

7 Commits

Author SHA1 Message Date
Denis Angell
b418b416b0 add claude 2026-02-06 19:31:07 +01:00
Ayaz Salikhov
2305bc98a4 chore: Remove CODEOWNERS (#6337) 2026-02-06 11:39:23 -05:00
Bart
677758b1cc perf: Remove unnecessary caches (#5439)
This change removes the cache in `DatabaseNodeImp` and simplifies the caching logic in `SHAMapStoreImp`. As NuDB and RocksDB already use internal caches, the additional caches in the code provide little value and may even be unnecessary, as preliminary performance analyses also confirmed.
2026-02-06 09:42:35 -05:00
Bart
25d7c2c4ec chore: Restore unity builds (#6328)
In certain cases, such as when modifying headers used by many compilation units, a unity build is slower than a regular build with `ccache` enabled. A unity build also has the benefit that it can detect issues such as macro redefinitions within the group of files compiled together as a unit. This change therefore restores the ability to perform unity builds. However, instead of running every configuration both with and without unity enabled, it is now enabled for only a single configuration to keep computational use low.

While restoring this code, it became clear that two configurations currently have coverage enabled, because the check does not specifically target Debian Bookworm and therefore also applies to Debian Trixie. This has been fixed in this change as well.
2026-02-06 14:12:45 +00:00
Bart
0a626d95f4 refactor: Update secp256k1 to 0.7.1 (#6331)
The latest secp256k1 release, 0.7.1, contains bug fixes that we may benefit from; see https://github.com/bitcoin-core/secp256k1/blob/master/CHANGELOG.md.
2026-02-05 16:45:57 +00:00
Niq Dudfield
6006c281e2 fix: Increment sequence when accepting new manifests (#6059)
The `ManifestCache::applyManifest` function was returning early without incrementing `seq_`. `OverlayImpl` uses this sequence to identify and invalidate a cached `TMManifests` message, which is exchanged with peers on connection. Depending on network size, startup sequencing, and topology, this can cause syncing issues. This change therefore increments `seq_` when a new manifest is accepted.
2026-02-05 10:40:27 -05:00
Vito Tumas
e79673cf40 fix typo in LendingHelpers unit-test (#6215) 2026-02-05 10:23:44 +00:00
24 changed files with 741 additions and 299 deletions

54
.claude/CLAUDE.md Normal file
View File

@@ -0,0 +1,54 @@
# Workflow Orchestration
## 1. Plan Mode Default
- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately - don't keep pushing
- Use plan mode for verification steps, not just building
- Write detailed specs upfront to reduce ambiguity
## 2. Subagent Strategy
- Use subagents liberally to keep main context window clean
- Offload research, exploration, and parallel analysis to subagents
- For complex problems, throw compute at it via parallelizing
- One task per subagent for focused execution
## 3. Self-Improvement Loop
- After ANY correction from the user: update `tasks/lessons.md` with the pattern
- Write rules for yourself that prevent the same mistake
- Ruthlessly iterate on these rules until the mistake rate drops
- Review lessons at session start for the relevant project
## 4. Verification Before Done
- Never mark a task complete without proving it works
- Diff behavior between main and your changes when relevant
- Ask yourself: "Would a staff engineer approve this?"
- Run tests, check logs, demonstrate correctness
## 5. Demand Elegance (Balanced)
- For non-trivial changes: pause and ask "is there a more elegant way?"
- If a fix feels hacky: "Know now, do later, implement the elegant solution"
- Skip this for simple, obvious fixes - don't over-engineer
- Challenge your own first solution
## 6. Autonomous Bug Fixing
- When given a bug report: just fix it. Don't ask for hand-holding
- Point at logs, errors, failing tests - then resolve them
- Zero context switching required from the user
- Go fix failing CI tests without being told how
# Task Management
1. **Plan First**: Write plan to `tasks/todo.md` with checkable items
2. **Verify Plans**: Check in before starting implementation
3. **Track Progress**: Mark items complete as you go
4. **Explain Changes**: High-level summary at each step
5. **Document Results**: Add review section to `tasks/todo.md`
6. **Capture Lessons**: Update `tasks/lessons.md` after corrections
# Core Principles
**Simplicity First**: Make every change as simple as possible. Impact minimal code.
**No Boilerplate**: Skip documentation. No README/docs bloat. Test only if requested.
**Minimal Impact**: Changes should only touch what's necessary. Avoid introducing bugs.

530
.claude/skills/amendment.md Normal file
View File

@@ -0,0 +1,530 @@
# XRPL Amendment Creation Skill
This skill guides you through creating XRPL amendments, whether for brand new features or fixes/extensions to existing functionality.
## Amendment Types
There are two main types of amendments:
1. **New Feature Amendment** (`feature{Name}`) - Entirely new functionality
2. **Fix/Extension Amendment** (`fix{Name}`) - Modifications to existing functionality
## Step 1: Determine Amendment Type
Ask the user:
- Is this a **brand new feature** (new transaction type, ledger entry, or capability)?
- Or is this a **fix or extension** to existing functionality?
---
## For NEW FEATURE Amendments
### Checklist:
#### 1. Feature Definition in features.macro
**ONLY FILE TO EDIT:** `include/xrpl/protocol/detail/features.macro`
- [ ] Add to TOP of features.macro (reverse chronological order):
```
XRPL_FEATURE(YourFeatureName, Supported::no, VoteBehavior::DefaultNo)
```
- [ ] This creates the variable `featureYourFeatureName` automatically
- [ ] Follow naming convention: Use the feature name WITHOUT the "feature" prefix
- [ ] Examples: `Batch` → `featureBatch`, `LendingProtocol` → `featureLendingProtocol`
#### 2. Support Status
- [ ] Start with `Supported::no` during development
- [ ] Change to `Supported::yes` when ready for network voting
- [ ] Use `VoteBehavior::DefaultNo` (validators must explicitly vote for it)
#### 3. Code Implementation
- [ ] Implement new functionality (transaction type, ledger entry, etc.)
- [ ] Add feature gate check in preflight:
```cpp
if (!ctx.rules.enabled(feature{Name}))
{
return temDISABLED;
}
```
#### 4. Disable Route Handling
- [ ] Ensure transaction returns `temDISABLED` when amendment is disabled
- [ ] Implement early rejection in the preflight/preclaim phase (see the preclaim sketch below)
- [ ] Add appropriate error messages
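A minimal preclaim-phase sketch of the same gate, kept in this document's placeholder style; the transactor name and the `PreclaimContext` members are assumed from rippled's usual transactor layout and are not taken from this diff:
```cpp
// Hedged sketch only: reject early in preclaim while the amendment is off.
// {Name} is a placeholder, as elsewhere in this skill.
TER
{Name}Transactor::preclaim(PreclaimContext const& ctx)
{
    if (!ctx.view.rules().enabled(feature{Name}))
        return temDISABLED;

    // Amendment is active: continue with the normal preclaim checks.
    return tesSUCCESS;
}
```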
#### 5. Test Implementation
Create comprehensive test suite with this structure:
```cpp
class {FeatureName}_test : public beast::unit_test::suite
{
public:
void testEnable(FeatureBitset features)
{
testcase("enabled");
// Test with feature DISABLED
{
auto const amendNoFeature = features - feature{Name};
Env env{*this, amendNoFeature};
env(transaction, ter(temDISABLED));
}
// Test with feature ENABLED
{
Env env{*this, features};
env(transaction, ter(tesSUCCESS));
// Validate new functionality works
}
}
void testPreflight(FeatureBitset features)
{
testcase("preflight");
// Test malformed transaction validation
}
void testPreclaim(FeatureBitset features)
{
testcase("preclaim");
// Test signature and claim phase validation
}
void testWithFeats(FeatureBitset features)
{
testEnable(features);
testPreflight(features);
testPreclaim(features);
// Add feature-specific tests
}
void run() override
{
using namespace test::jtx;
auto const all = supported_amendments();
testWithFeats(all);
}
};
```
#### 6. Test Coverage Requirements
- [ ] Test amendment DISABLED state (returns `temDISABLED`)
- [ ] Test amendment ENABLED state (returns `tesSUCCESS`)
- [ ] Test malformed transactions
- [ ] Test signature validation
- [ ] Test edge cases specific to feature
- [ ] Test amendment transition behavior
#### 7. Documentation
- [ ] Create specification document (XLS_{NAME}.md)
- [ ] Document new transaction types, ledger entries, or capabilities
- [ ] Create test plan document
- [ ] Document expected behavior when enabled/disabled
---
## For FIX/EXTENSION Amendments
### Checklist:
#### 1. Fix Definition in features.macro
**ONLY FILE TO EDIT:** `include/xrpl/protocol/detail/features.macro`
- [ ] Add to TOP of features.macro (reverse chronological order):
```
XRPL_FIX(YourFixName, Supported::no, VoteBehavior::DefaultNo)
```
- [ ] This creates the variable `fixYourFixName` automatically
- [ ] Follow naming convention: Use the fix name WITHOUT the "fix" prefix (it's added automatically)
- [ ] Examples: `TokenEscrowV1` → `fixTokenEscrowV1`, `DirectoryLimit` → `fixDirectoryLimit`
- [ ] Start with `Supported::no` during development, change to `Supported::yes` when ready
#### 2. Backward Compatibility Implementation
**Critical**: Use `rules().enabled()` with if/else to preserve existing functionality
```cpp
// Check if the fix is enabled; use a distinct local name so it does not
// shadow the fix{Name} amendment constant
bool const fixEnabled = view.rules().enabled(fix{Name});
// Conditional logic based on amendment state
if (fixEnabled)
{
// NEW behavior with fix applied
// This is the corrected/improved logic
}
else
{
// OLD behavior (legacy path)
// Preserve original functionality for backward compatibility
}
```
**Alternative pattern with ternary operator:**
```cpp
auto& viewToUse = sb.rules().enabled(fix{Name}) ? sb : legacyView;
```
#### 3. Multiple Fix Versions Pattern
For iterative fixes, use version checking:
```cpp
bool const fixV1 = rv.rules().enabled(fixXahauV1);
bool const fixV2 = rv.rules().enabled(fixXahauV2);
switch (transactionType)
{
case TYPE_1:
if (fixV1) {
// Behavior with V1 fix
} else {
// Legacy behavior
}
break;
case TYPE_2:
if (fixV2) {
// Behavior with V2 fix
} else if (fixV1) {
// Behavior with only V1
} else {
// Legacy behavior
}
break;
}
```
#### 4. Test Both Paths
Always test BOTH enabled and disabled states:
```cpp
void testFix(FeatureBitset features)
{
testcase("fix behavior");
for (bool withFix : {false, true})
{
auto const amend = withFix ? features : features - fix{Name};
Env env{*this, amend};
// Setup test scenario
env.fund(XRP(1000), alice);
env.close();
if (!withFix)
{
// Test OLD behavior (before fix)
env(operation, ter(expectedErrorWithoutFix));
// Verify old behavior is preserved
}
else
{
// Test NEW behavior (after fix)
env(operation, ter(expectedErrorWithFix));
// Verify fix works correctly
}
}
}
```
#### 5. Security Fix Pattern
For security-critical fixes (like fixBatchInnerSigs):
```cpp
// Test vulnerability exists WITHOUT fix
{
auto const amendNoFix = features - fix{Name};
Env env{*this, amendNoFix};
// Demonstrate vulnerability
// Expected: Validity::Valid (WRONG - vulnerable!)
BEAST_EXPECT(result == Validity::Valid);
}
// Test vulnerability is FIXED WITH amendment
{
Env env{*this, features};
// Demonstrate fix
// Expected: Validity::SigBad (CORRECT - protected!)
BEAST_EXPECT(result == Validity::SigBad);
}
```
#### 6. Test Coverage Requirements
- [ ] Test fix DISABLED (legacy behavior preserved)
- [ ] Test fix ENABLED (new behavior applied)
- [ ] Test amendment transition
- [ ] For security fixes: demonstrate vulnerability without fix
- [ ] For security fixes: demonstrate protection with fix
- [ ] Test edge cases that triggered the fix
- [ ] Test combinations with other amendments
#### 7. Documentation
- [ ] Document what was broken/suboptimal
- [ ] Document the fix applied
- [ ] Document backward compatibility behavior
- [ ] Create test summary showing both paths
---
## Best Practices for All Amendments
### 1. Naming Conventions
- New features: `feature{DescriptiveName}` (e.g., `featureBatch`, `featureHooks`)
- Fixes: `fix{IssueDescription}` (e.g., `fixBatchInnerSigs`, `fixNSDelete`)
- Use CamelCase without underscores
### 2. Feature Flag Checking
```cpp
// At the point where behavior diverges:
bool const amendmentEnabled = env.current()->rules().enabled(feature{Name});
// Or in view/rules context:
if (!rv.rules().enabled(feature{Name}))
return {}; // or legacy behavior
```
### 3. Error Codes
- New features when disabled: `temDISABLED`
- Fixes may return different validation errors based on state
- Document all error code changes
### 4. Test Structure Template
```cpp
class Amendment_test : public beast::unit_test::suite
{
public:
// Core tests
void testEnable(FeatureBitset features); // Enable/disable states
void testPreflight(FeatureBitset features); // Validation
void testPreclaim(FeatureBitset features); // Claim phase
// Feature-specific tests
void test{SpecificScenario1}(FeatureBitset features);
void test{SpecificScenario2}(FeatureBitset features);
// Master orchestrator
void testWithFeats(FeatureBitset features)
{
testEnable(features);
testPreflight(features);
testPreclaim(features);
test{SpecificScenario1}(features);
test{SpecificScenario2}(features);
}
void run() override
{
using namespace test::jtx;
auto const all = supported_amendments();
testWithFeats(all);
}
};
BEAST_DEFINE_TESTSUITE(Amendment, app, ripple);
```
### 5. Documentation Files
Create these files:
- **Specification**: `XLS_{FEATURE_NAME}.md` - Technical specification
- **Test Plan**: `{FEATURE}_COMPREHENSIVE_TEST_PLAN.md` - Test strategy
- **Test Summary**: `{FEATURE}_TEST_IMPLEMENTATION_SUMMARY.md` - Test results
- **Review Findings**: `{FEATURE}_REVIEW_FINDINGS.md` (if applicable)
### 6. Amendment Transition Testing
Test the moment an amendment activates:
```cpp
void testAmendmentTransition(FeatureBitset features)
{
testcase("amendment transition");
// Start with amendment disabled
auto const amendNoFeature = features - feature{Name};
Env env{*this, amendNoFeature};
// Perform operations in disabled state
env(operation1, ter(temDISABLED));
// Enable amendment mid-test (if testing mechanism supports it)
// Verify state transitions correctly
// Perform operations in enabled state
env(operation2, ter(tesSUCCESS));
}
```
### 7. Cache Behavior
If amendment affects caching (like fixBatchInnerSigs):
- [ ] Test cache behavior without fix
- [ ] Test cache behavior with fix
- [ ] Document cache invalidation requirements
### 8. Multi-Amendment Combinations
Test interactions with other amendments:
```cpp
void testMultipleAmendments(FeatureBitset features)
{
// Test all combinations
for (bool withFeature1 : {false, true})
for (bool withFeature2 : {false, true})
{
auto amend = features;
if (!withFeature1) amend -= feature1;
if (!withFeature2) amend -= feature2;
Env env{*this, amend};
// Test interaction behavior
}
}
```
### 9. Performance Considerations
- [ ] Minimize runtime checks (cache the `rules().enabled()` result if it is used multiple times; see the sketch after this list)
- [ ] Avoid nested feature checks where possible
- [ ] Document performance impact
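A small sketch of the caching point above, in the document's placeholder style; the variable name and helper calls are illustrative only:
```cpp
// Evaluate the amendment check once and reuse the result, rather than
// calling rules().enabled() at every decision point.
bool const featureEnabled = view.rules().enabled(feature{Name});

if (featureEnabled)
    applyNewBehavior();        // placeholder helper

if (featureEnabled && needsExtraCheck)
    applyAdditionalRule();     // placeholder helper
```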
### 10. Code Review Checklist
- [ ] Both enabled/disabled paths are tested
- [ ] Backward compatibility is preserved (for fixes)
- [ ] Error codes are appropriate
- [ ] Documentation is complete
- [ ] Security implications are considered
- [ ] Cache behavior is correct
- [ ] Edge cases are covered
---
## Common Patterns Reference
### Pattern: New Transaction Type
```cpp
// In transactor code:
TER doApply() override
{
if (!ctx_.view().rules().enabled(feature{Name}))
return temDISABLED;
// New transaction logic here
return tesSUCCESS;
}
```
### Pattern: New Ledger Entry Type
```cpp
// In ledger entry creation:
if (!view.rules().enabled(feature{Name}))
return temDISABLED;
auto const sle = std::make_shared<SLE>(ltNEW_TYPE, keylet);
view.insert(sle);
```
### Pattern: Behavioral Fix
```cpp
// At decision point:
bool const useFix = view.rules().enabled(fix{Name});
if (useFix)
{
// Corrected behavior
return performCorrectValidation();
}
else
{
// Legacy behavior (preserved for compatibility)
return performLegacyValidation();
}
```
### Pattern: View Selection
```cpp
// Select which view to use based on amendment:
auto& applyView = sb.rules().enabled(feature{Name}) ? newView : legacyView;
```
---
## Example Workflows
### Workflow 1: Creating a New Feature Amendment
1. User requests: "Add a new ClaimReward transaction type"
2. Skill asks: "What should the amendment be called? (e.g., BalanceRewards - without 'feature' prefix)"
3. Add to features.macro:
```
XRPL_FEATURE(BalanceRewards, Supported::no, VoteBehavior::DefaultNo)
```
4. Implement transaction with `temDISABLED` gate using `featureBalanceRewards` (see the sketch after this workflow)
5. Create test suite with testEnable, testPreflight, testPreclaim
6. Run tests with amendment enabled and disabled
7. When ready, update to `Supported::yes` in features.macro
8. Create specification document
9. Review checklist
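A minimal sketch of step 4 for the `ClaimReward`/`BalanceRewards` example above; the `PreflightContext` members and the `preflight1`/`preflight2` helpers are assumed from rippled's usual transactor layout and are not part of this diff:
```cpp
// Hedged sketch only: gate the new transaction on featureBalanceRewards.
NotTEC
ClaimReward::preflight(PreflightContext const& ctx)
{
    if (!ctx.rules.enabled(featureBalanceRewards))
        return temDISABLED;

    if (auto const ret = preflight1(ctx); !isTesSuccess(ret))
        return ret;

    // Feature-specific field validation would go here.

    return preflight2(ctx);
}
```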
### Workflow 2: Creating a Fix Amendment
1. User requests: "Fix the signature validation bug in batch transactions"
2. Skill asks: "What should the fix be called? (e.g., BatchInnerSigs - without 'fix' prefix)"
3. Add to features.macro:
```
XRPL_FIX(BatchInnerSigs, Supported::no, VoteBehavior::DefaultNo)
```
4. Implement fix with if/else using `fixBatchInnerSigs` to preserve old behavior
5. Create test demonstrating vulnerability without fix
6. Create test showing fix works when enabled
7. When ready, update to `Supported::yes` in features.macro
8. Document both code paths
9. Review checklist
---
## Quick Reference: File Locations
- **Amendment definitions (ONLY place to add)**: `include/xrpl/protocol/detail/features.macro`
- **Feature.h (auto-generated, DO NOT EDIT)**: `include/xrpl/protocol/Feature.h`
- **Feature.cpp (auto-generated, DO NOT EDIT)**: `src/libxrpl/protocol/Feature.cpp`
- **Test files**: `src/test/app/` or `src/test/protocol/`
- **Specifications**: Project root (e.g., `XLS_SMART_CONTRACTS.md`)
- **Test plans**: Project root (e.g., `BATCH_COMPREHENSIVE_TEST_PLAN.md`)
## How the Macro System Works
The amendment system uses C preprocessor macros:
1. **features.macro** - Single source of truth (ONLY file you edit):
```
XRPL_FEATURE(Batch, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX(TokenEscrowV1, Supported::yes, VoteBehavior::DefaultNo)
```
2. **Feature.h** - Auto-generated declarations from macro:
```cpp
extern uint256 const featureBatch;
extern uint256 const fixTokenEscrowV1;
```
3. **Feature.cpp** - Auto-generated registrations from macro:
```cpp
uint256 const featureBatch = registerFeature("Batch", ...);
uint256 const fixTokenEscrowV1 = registerFeature("fixTokenEscrowV1", ...);
```
**DO NOT** modify Feature.h or Feature.cpp directly - they are populated from features.macro automatically.
---
## When to Use This Skill
Invoke this skill when:
- Creating a new XRPL amendment
- Adding a new transaction type or ledger entry
- Fixing existing XRPL functionality
- Need guidance on amendment best practices
- Setting up amendment tests
- Reviewing amendment implementation
The skill will guide you through the appropriate workflow based on amendment type.

8
.github/CODEOWNERS vendored
View File

@@ -1,8 +0,0 @@
# Allow anyone to review any change by default.
*
# Require the rpc-reviewers team to review changes to the rpc code.
include/xrpl/protocol/ @xrplf/rpc-reviewers
src/libxrpl/protocol/ @xrplf/rpc-reviewers
src/xrpld/rpc/ @xrplf/rpc-reviewers
src/xrpld/app/misc/ @xrplf/rpc-reviewers

View File

@@ -196,11 +196,22 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
# Enable code coverage for Debian Bookworm using GCC 15 in Debug on
# linux/amd64
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
f"{os['distro_name']}-{os['distro_version']}" == "debian-bookworm"
and f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0 {cmake_args}"
cmake_args = f"{cmake_args} -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0"
# Enable unity build for Ubuntu Jammy using GCC 12 in Debug on
# linux/amd64.
if (
f"{os['distro_name']}-{os['distro_version']}" == "ubuntu-jammy"
and f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"{cmake_args} -Dunity=ON"
# Generate a unique name for the configuration, e.g. macos-arm64-debug
# or debian-bookworm-gcc-12-amd64-release.
@@ -217,6 +228,8 @@ def generate_strategy_matrix(all: bool, config: Config) -> list:
config_name += f"-{build_type.lower()}"
if "-Dcoverage=ON" in cmake_args:
config_name += "-coverage"
if "-Dunity=ON" in cmake_args:
config_name += "-unity"
# Add the configuration to the list, with the most unique fields first,
# so that they are easier to identify in the GitHub Actions UI, as long

2
.gitignore vendored
View File

@@ -69,5 +69,3 @@ DerivedData
# AI tools.
/.augment
/.claude
/CLAUDE.md

View File

@@ -575,10 +575,16 @@ See [Sanitizers docs](./docs/build/sanitizers.md) for more details.
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
[Unity builds][5] may be faster for the first build (at the cost of much more
memory) since they concatenate sources into fewer translation units. Non-unity
builds may be faster for incremental builds, and can be helpful for detecting
`#include` omissions.
## Troubleshooting
### Conan
@@ -645,6 +651,7 @@ If you want to experiment with a new package, follow these steps:
[1]: https://github.com/conan-io/conan-center-index/issues/13168
[2]: https://en.cppreference.com/w/cpp/compiler_support/20
[3]: https://docs.conan.io/en/latest/getting_started.html
[5]: https://en.wikipedia.org/wiki/Unity_build
[6]: https://github.com/boostorg/beast/issues/2648
[7]: https://github.com/boostorg/beast/issues/2661
[gcovr]: https://gcovr.com/en/stable/getting-started.html

View File

@@ -940,23 +940,7 @@
#
# path Location to store the database
#
# Optional keys
#
# cache_size Size of cache for database records. Default is 16384.
# Setting this value to 0 will use the default value.
#
# cache_age Length of time in minutes to keep database records
# cached. Default is 5 minutes. Setting this value to
# 0 will use the default value.
#
# Note: if neither cache_size nor cache_age is
# specified, the cache for database records will not
# be created. If only one of cache_size or cache_age
# is specified, the cache will be created using the
# default value for the unspecified parameter.
#
# Note: the cache will not be created if online_delete
# is specified.
# Optional keys for NuDB and RocksDB:
#
# fast_load Boolean. If set, load the last persisted ledger
# from disk upon process start before syncing to
@@ -964,8 +948,6 @@
# if sufficient IOPS capacity is available.
# Default 0.
#
# Optional keys for NuDB or RocksDB:
#
# earliest_seq The default is 32570 to match the XRP ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.

View File

@@ -4,7 +4,12 @@
include(target_protobuf_sources)
# Protocol buffers cannot participate in a unity build,
# because all the generated sources
# define a bunch of `static const` variables with the same names,
# so we just build them as a separate library.
add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(xrpl.libpb xrpl/proto LANGUAGE cpp IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/xrpl.proto)

View File

@@ -30,6 +30,14 @@ if (tests)
endif ()
endif ()
option(unity "Creates a build using UNITY support in cmake." OFF)
if (unity)
if (NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif ()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif ()
if (is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif ()

View File

@@ -6,7 +6,7 @@
"sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1765850149.926",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1765850149.46",
"snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1765850147.878",
"secp256k1/0.7.0#0fda78daa3b864deb8a2fbc083398356%1770226294.524",
"secp256k1/0.7.1#3a61e95e220062ef32c48d019e9c81f7%1770306721.686",
"rocksdb/10.5.1#4a197eca381a3e5ae8adf8cffa5aacd0%1765850186.86",
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1765850148.103",
"protobuf/6.32.1#f481fd276fc23a33b85a3ed1e898b693%1765850161.038",

View File

@@ -23,6 +23,7 @@ class Xrpl(ConanFile):
"shared": [True, False],
"static": [True, False],
"tests": [True, False],
"unity": [True, False],
"xrpld": [True, False],
}
@@ -32,7 +33,7 @@ class Xrpl(ConanFile):
"libarchive/3.8.1",
"nudb/2.0.9",
"openssl/3.5.5",
"secp256k1/0.7.0",
"secp256k1/0.7.1",
"soci/4.0.3",
"zlib/1.3.1",
]
@@ -54,6 +55,7 @@ class Xrpl(ConanFile):
"shared": False,
"static": True,
"tests": False,
"unity": False,
"xrpld": False,
"date/*:header_only": True,
"ed25519/*:shared": False,
@@ -166,6 +168,7 @@ class Xrpl(ConanFile):
tc.variables["rocksdb"] = self.options.rocksdb
tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
tc.variables["static"] = self.options.static
tc.variables["unity"] = self.options.unity
tc.variables["xrpld"] = self.options.xrpld
tc.generate()

View File

@@ -133,10 +133,6 @@ public:
std::uint32_t ledgerSeq,
std::function<void(std::shared_ptr<NodeObject> const&)>&& callback);
/** Remove expired entries from the positive and negative caches. */
virtual void
sweep() = 0;
/** Gather statistics pertaining to read and write activities.
*
* @param obj Json object reference into which to place counters.

View File

@@ -23,32 +23,6 @@ public:
beast::Journal j)
: Database(scheduler, readThreads, config, j), backend_(std::move(backend))
{
std::optional<int> cacheSize, cacheAge;
if (config.exists("cache_size"))
{
cacheSize = get<int>(config, "cache_size");
if (cacheSize.value() < 0)
{
Throw<std::runtime_error>("Specified negative value for cache_size");
}
}
if (config.exists("cache_age"))
{
cacheAge = get<int>(config, "cache_age");
if (cacheAge.value() < 0)
{
Throw<std::runtime_error>("Specified negative value for cache_age");
}
}
if (cacheSize != 0 || cacheAge != 0)
{
cache_ = std::make_shared<TaggedCache<uint256, NodeObject>>(
"DatabaseNodeImp", cacheSize.value_or(0), std::chrono::minutes(cacheAge.value_or(0)), stopwatch(), j);
}
XRPL_ASSERT(
backend_,
"xrpl::NodeStore::DatabaseNodeImp::DatabaseNodeImp : non-null "
@@ -103,13 +77,7 @@ public:
std::uint32_t ledgerSeq,
std::function<void(std::shared_ptr<NodeObject> const&)>&& callback) override;
void
sweep() override;
private:
// Cache for database objects. This cache is not always initialized. Check
// for null before using.
std::shared_ptr<TaggedCache<uint256, NodeObject>> cache_;
// Persistent key/value storage
std::shared_ptr<Backend> backend_;

View File

@@ -55,9 +55,6 @@ public:
void
sync() override;
void
sweep() override;
private:
std::shared_ptr<Backend> writableBackend_;
std::shared_ptr<Backend> archiveBackend_;

View File

@@ -85,8 +85,7 @@ registerSSLCerts(boost::asio::ssl::context& ctx, boost::system::error_code& ec,
// There is a very unpleasant interaction between <wincrypt> and
// openssl x509 types (namely the former has macros that stomp
// on the latter), these undefs allow this TU to be safely used in
// unity builds without messing up subsequent TUs. Although we
// no longer use unity builds, leaving the undefs here does no harm.
// unity builds without messing up subsequent TUs.
#if BOOST_OS_WINDOWS
#undef X509_NAME
#undef X509_EXTENSIONS

View File

@@ -10,11 +10,6 @@ DatabaseNodeImp::store(NodeObjectType type, Blob&& data, uint256 const& hash, st
auto obj = NodeObject::createObject(type, std::move(data), hash);
backend_->store(obj);
if (cache_)
{
// After the store, replace a negative cache entry if there is one
cache_->canonicalize(hash, obj, [](std::shared_ptr<NodeObject> const& n) { return n->getType() == hotDUMMY; });
}
}
void
@@ -23,77 +18,36 @@ DatabaseNodeImp::asyncFetch(
std::uint32_t ledgerSeq,
std::function<void(std::shared_ptr<NodeObject> const&)>&& callback)
{
if (cache_)
{
std::shared_ptr<NodeObject> obj = cache_->fetch(hash);
if (obj)
{
callback(obj->getType() == hotDUMMY ? nullptr : obj);
return;
}
}
Database::asyncFetch(hash, ledgerSeq, std::move(callback));
}
void
DatabaseNodeImp::sweep()
{
if (cache_)
cache_->sweep();
}
std::shared_ptr<NodeObject>
DatabaseNodeImp::fetchNodeObject(uint256 const& hash, std::uint32_t, FetchReport& fetchReport, bool duplicate)
{
std::shared_ptr<NodeObject> nodeObject = cache_ ? cache_->fetch(hash) : nullptr;
std::shared_ptr<NodeObject> nodeObject = nullptr;
Status status;
if (!nodeObject)
try
{
JLOG(j_.trace()) << "fetchNodeObject " << hash << ": record not " << (cache_ ? "cached" : "found");
Status status;
try
{
status = backend_->fetch(hash.data(), &nodeObject);
}
catch (std::exception const& e)
{
JLOG(j_.fatal()) << "fetchNodeObject " << hash << ": Exception fetching from backend: " << e.what();
Rethrow();
}
switch (status)
{
case ok:
if (cache_)
{
if (nodeObject)
cache_->canonicalize_replace_client(hash, nodeObject);
else
{
auto notFound = NodeObject::createObject(hotDUMMY, {}, hash);
cache_->canonicalize_replace_client(hash, notFound);
if (notFound->getType() != hotDUMMY)
nodeObject = notFound;
}
}
break;
case notFound:
break;
case dataCorrupt:
JLOG(j_.fatal()) << "fetchNodeObject " << hash << ": nodestore data is corrupted";
break;
default:
JLOG(j_.warn()) << "fetchNodeObject " << hash << ": backend returns unknown result " << status;
break;
}
status = backend_->fetch(hash.data(), &nodeObject);
}
else
catch (std::exception const& e)
{
JLOG(j_.trace()) << "fetchNodeObject " << hash << ": record found in cache";
if (nodeObject->getType() == hotDUMMY)
nodeObject.reset();
JLOG(j_.fatal()) << "fetchNodeObject " << hash << ": Exception fetching from backend: " << e.what();
Rethrow();
}
switch (status)
{
case ok:
case notFound:
break;
case dataCorrupt:
JLOG(j_.fatal()) << "fetchNodeObject " << hash << ": nodestore data is corrupted";
break;
default:
JLOG(j_.warn()) << "fetchNodeObject " << hash << ": backend returns unknown result " << status;
break;
}
if (nodeObject)
@@ -105,66 +59,36 @@ DatabaseNodeImp::fetchNodeObject(uint256 const& hash, std::uint32_t, FetchReport
std::vector<std::shared_ptr<NodeObject>>
DatabaseNodeImp::fetchBatch(std::vector<uint256> const& hashes)
{
std::vector<std::shared_ptr<NodeObject>> results{hashes.size()};
using namespace std::chrono;
auto const before = steady_clock::now();
std::unordered_map<uint256 const*, size_t> indexMap;
std::vector<uint256 const*> cacheMisses;
uint64_t hits = 0;
uint64_t fetches = 0;
std::vector<uint256 const*> batch{};
batch.reserve(hashes.size());
for (size_t i = 0; i < hashes.size(); ++i)
{
auto const& hash = hashes[i];
// See if the object already exists in the cache
auto nObj = cache_ ? cache_->fetch(hash) : nullptr;
++fetches;
if (!nObj)
{
// Try the database
indexMap[&hash] = i;
cacheMisses.push_back(&hash);
}
else
{
results[i] = nObj->getType() == hotDUMMY ? nullptr : nObj;
// It was in the cache.
++hits;
}
batch.push_back(&hash);
}
JLOG(j_.debug()) << "fetchBatch - cache hits = " << (hashes.size() - cacheMisses.size())
<< " - cache misses = " << cacheMisses.size();
auto dbResults = backend_->fetchBatch(cacheMisses).first;
for (size_t i = 0; i < dbResults.size(); ++i)
// Get the node objects that match the hashes from the backend. To protect
// against the backends returning fewer or more results than expected, the
// container is resized to the number of hashes.
auto results = backend_->fetchBatch(batch).first;
XRPL_ASSERT(
results.size() == hashes.size() || results.empty(),
"number of output objects either matches number of input hashes or is empty");
results.resize(hashes.size());
for (size_t i = 0; i < results.size(); ++i)
{
auto nObj = std::move(dbResults[i]);
size_t index = indexMap[cacheMisses[i]];
auto const& hash = hashes[index];
if (nObj)
{
// Ensure all threads get the same object
if (cache_)
cache_->canonicalize_replace_client(hash, nObj);
}
else
if (!results[i])
{
JLOG(j_.error()) << "fetchBatch - "
<< "record not found in db or cache. hash = " << strHex(hash);
if (cache_)
{
auto notFound = NodeObject::createObject(hotDUMMY, {}, hash);
cache_->canonicalize_replace_client(hash, notFound);
if (notFound->getType() != hotDUMMY)
nObj = std::move(notFound);
}
<< "record not found in db. hash = " << strHex(hashes[i]);
}
results[index] = std::move(nObj);
}
auto fetchDurationUs = std::chrono::duration_cast<std::chrono::microseconds>(steady_clock::now() - before).count();
updateFetchMetrics(fetches, hits, fetchDurationUs);
updateFetchMetrics(hashes.size(), 0, fetchDurationUs);
return results;
}

View File

@@ -93,12 +93,6 @@ DatabaseRotatingImp::store(NodeObjectType type, Blob&& data, uint256 const& hash
storeStats(1, nObj->getData().size());
}
void
DatabaseRotatingImp::sweep()
{
// nothing to do
}
std::shared_ptr<NodeObject>
DatabaseRotatingImp::fetchNodeObject(uint256 const& hash, std::uint32_t, FetchReport& fetchReport, bool duplicate)
{

View File

@@ -592,20 +592,18 @@ class LendingHelpers_test : public beast::unit_test::suite
auto const periodicRate = loanPeriodicRate(loanInterestRate, paymentInterval);
Number const overpaymentAmount{50};
ExtendedPaymentComponents const overpaymentComponents = computeOverpaymentComponents(
auto const overpaymentComponents = computeOverpaymentComponents(
asset, loanScale, overpaymentAmount, TenthBips32(0), TenthBips32(0), managementFeeRate);
auto const loanProperites = computeLoanProperties(
auto const loanProperties = computeLoanProperties(
asset, loanPrincipal, loanInterestRate, paymentInterval, paymentsRemaining, managementFeeRate, loanScale);
Number const periodicPayment = loanProperites.periodicPayment;
auto const ret = tryOverpayment(
asset,
loanScale,
overpaymentComponents,
loanProperites.loanState,
periodicPayment,
loanProperties.loanState,
loanProperties.periodicPayment,
periodicRate,
paymentsRemaining,
managementFeeRate,
@@ -636,20 +634,20 @@ class LendingHelpers_test : public beast::unit_test::suite
// =========== VALIDATE STATE CHANGES ===========
BEAST_EXPECTS(
loanProperites.loanState.interestDue - newState.interestDue == 0,
loanProperties.loanState.interestDue - newState.interestDue == 0,
" interest change mismatch: expected 0, got " +
to_string(loanProperites.loanState.interestDue - newState.interestDue));
to_string(loanProperties.loanState.interestDue - newState.interestDue));
BEAST_EXPECTS(
loanProperites.loanState.managementFeeDue - newState.managementFeeDue == 0,
loanProperties.loanState.managementFeeDue - newState.managementFeeDue == 0,
" management fee change mismatch: expected 0, got " +
to_string(loanProperites.loanState.managementFeeDue - newState.managementFeeDue));
to_string(loanProperties.loanState.managementFeeDue - newState.managementFeeDue));
BEAST_EXPECTS(
actualPaymentParts.principalPaid ==
loanProperites.loanState.principalOutstanding - newState.principalOutstanding,
loanProperties.loanState.principalOutstanding - newState.principalOutstanding,
" principalPaid mismatch: expected " +
to_string(loanProperites.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(loanProperties.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(actualPaymentParts.principalPaid));
}
@@ -672,7 +670,7 @@ class LendingHelpers_test : public beast::unit_test::suite
std::uint32_t const paymentsRemaining = 10;
auto const periodicRate = loanPeriodicRate(loanInterestRate, paymentInterval);
ExtendedPaymentComponents const overpaymentComponents = computeOverpaymentComponents(
auto const overpaymentComponents = computeOverpaymentComponents(
asset,
loanScale,
Number{50, 0},
@@ -680,17 +678,15 @@ class LendingHelpers_test : public beast::unit_test::suite
TenthBips32(10'000), // 10% overpayment fee
managementFeeRate);
auto const loanProperites = computeLoanProperties(
auto const loanProperties = computeLoanProperties(
asset, loanPrincipal, loanInterestRate, paymentInterval, paymentsRemaining, managementFeeRate, loanScale);
Number const periodicPayment = loanProperites.periodicPayment;
auto const ret = tryOverpayment(
asset,
loanScale,
overpaymentComponents,
loanProperites.loanState,
periodicPayment,
loanProperties.loanState,
loanProperties.periodicPayment,
periodicRate,
paymentsRemaining,
managementFeeRate,
@@ -721,21 +717,21 @@ class LendingHelpers_test : public beast::unit_test::suite
// =========== VALIDATE STATE CHANGES ===========
// With no Loan interest, interest outstanding should not change
BEAST_EXPECTS(
loanProperites.loanState.interestDue - newState.interestDue == 0,
loanProperties.loanState.interestDue - newState.interestDue == 0,
" interest change mismatch: expected 0, got " +
to_string(loanProperites.loanState.interestDue - newState.interestDue));
to_string(loanProperties.loanState.interestDue - newState.interestDue));
// With no Loan management fee, management fee due should not change
BEAST_EXPECTS(
loanProperites.loanState.managementFeeDue - newState.managementFeeDue == 0,
loanProperties.loanState.managementFeeDue - newState.managementFeeDue == 0,
" management fee change mismatch: expected 0, got " +
to_string(loanProperites.loanState.managementFeeDue - newState.managementFeeDue));
to_string(loanProperties.loanState.managementFeeDue - newState.managementFeeDue));
BEAST_EXPECTS(
actualPaymentParts.principalPaid ==
loanProperites.loanState.principalOutstanding - newState.principalOutstanding,
loanProperties.loanState.principalOutstanding - newState.principalOutstanding,
" principalPaid mismatch: expected " +
to_string(loanProperites.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(loanProperties.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(actualPaymentParts.principalPaid));
}
@@ -758,7 +754,7 @@ class LendingHelpers_test : public beast::unit_test::suite
std::uint32_t const paymentsRemaining = 10;
auto const periodicRate = loanPeriodicRate(loanInterestRate, paymentInterval);
ExtendedPaymentComponents const overpaymentComponents = computeOverpaymentComponents(
auto const overpaymentComponents = computeOverpaymentComponents(
asset,
loanScale,
Number{50, 0},
@@ -766,17 +762,15 @@ class LendingHelpers_test : public beast::unit_test::suite
TenthBips32(0), // 0% overpayment fee
managementFeeRate);
auto const loanProperites = computeLoanProperties(
auto const loanProperties = computeLoanProperties(
asset, loanPrincipal, loanInterestRate, paymentInterval, paymentsRemaining, managementFeeRate, loanScale);
Number const periodicPayment = loanProperites.periodicPayment;
auto const ret = tryOverpayment(
asset,
loanScale,
overpaymentComponents,
loanProperites.loanState,
periodicPayment,
loanProperties.loanState,
loanProperties.periodicPayment,
periodicRate,
paymentsRemaining,
managementFeeRate,
@@ -812,22 +806,22 @@ class LendingHelpers_test : public beast::unit_test::suite
// =========== VALIDATE STATE CHANGES ===========
BEAST_EXPECTS(
actualPaymentParts.principalPaid ==
loanProperites.loanState.principalOutstanding - newState.principalOutstanding,
loanProperties.loanState.principalOutstanding - newState.principalOutstanding,
" principalPaid mismatch: expected " +
to_string(loanProperites.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(loanProperties.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(actualPaymentParts.principalPaid));
BEAST_EXPECTS(
actualPaymentParts.valueChange == newState.interestDue - loanProperites.loanState.interestDue,
actualPaymentParts.valueChange == newState.interestDue - loanProperties.loanState.interestDue,
" valueChange mismatch: expected " +
to_string(newState.interestDue - loanProperites.loanState.interestDue) + ", got " +
to_string(newState.interestDue - loanProperties.loanState.interestDue) + ", got " +
to_string(actualPaymentParts.valueChange));
// With no Loan management fee, management fee due should not change
BEAST_EXPECTS(
loanProperites.loanState.managementFeeDue - newState.managementFeeDue == 0,
loanProperties.loanState.managementFeeDue - newState.managementFeeDue == 0,
" management fee change mismatch: expected 0, got " +
to_string(loanProperites.loanState.managementFeeDue - newState.managementFeeDue));
to_string(loanProperties.loanState.managementFeeDue - newState.managementFeeDue));
}
void
@@ -849,7 +843,7 @@ class LendingHelpers_test : public beast::unit_test::suite
std::uint32_t const paymentsRemaining = 10;
auto const periodicRate = loanPeriodicRate(loanInterestRate, paymentInterval);
ExtendedPaymentComponents const overpaymentComponents = computeOverpaymentComponents(
auto const overpaymentComponents = computeOverpaymentComponents(
asset,
loanScale,
Number{50, 0},
@@ -857,17 +851,15 @@ class LendingHelpers_test : public beast::unit_test::suite
TenthBips32(0), // 0% overpayment fee
managementFeeRate);
auto const loanProperites = computeLoanProperties(
auto const loanProperties = computeLoanProperties(
asset, loanPrincipal, loanInterestRate, paymentInterval, paymentsRemaining, managementFeeRate, loanScale);
Number const periodicPayment = loanProperites.periodicPayment;
auto const ret = tryOverpayment(
asset,
loanScale,
overpaymentComponents,
loanProperites.loanState,
periodicPayment,
loanProperties.loanState,
loanProperties.periodicPayment,
periodicRate,
paymentsRemaining,
managementFeeRate,
@@ -904,26 +896,26 @@ class LendingHelpers_test : public beast::unit_test::suite
// =========== VALIDATE STATE CHANGES ===========
BEAST_EXPECTS(
actualPaymentParts.principalPaid ==
loanProperites.loanState.principalOutstanding - newState.principalOutstanding,
loanProperties.loanState.principalOutstanding - newState.principalOutstanding,
" principalPaid mismatch: expected " +
to_string(loanProperites.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(loanProperties.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(actualPaymentParts.principalPaid));
// The change in interest is equal to the value change sans the
// overpayment interest
BEAST_EXPECTS(
actualPaymentParts.valueChange - actualPaymentParts.interestPaid ==
newState.interestDue - loanProperites.loanState.interestDue,
newState.interestDue - loanProperties.loanState.interestDue,
" valueChange mismatch: expected " +
to_string(
newState.interestDue - loanProperites.loanState.interestDue + actualPaymentParts.interestPaid) +
newState.interestDue - loanProperties.loanState.interestDue + actualPaymentParts.interestPaid) +
", got " + to_string(actualPaymentParts.valueChange));
// With no Loan management fee, management fee due should not change
BEAST_EXPECTS(
loanProperites.loanState.managementFeeDue - newState.managementFeeDue == 0,
loanProperties.loanState.managementFeeDue - newState.managementFeeDue == 0,
" management fee change mismatch: expected 0, got " +
to_string(loanProperites.loanState.managementFeeDue - newState.managementFeeDue));
to_string(loanProperties.loanState.managementFeeDue - newState.managementFeeDue));
}
void
@@ -947,7 +939,7 @@ class LendingHelpers_test : public beast::unit_test::suite
std::uint32_t const paymentsRemaining = 10;
auto const periodicRate = loanPeriodicRate(loanInterestRate, paymentInterval);
ExtendedPaymentComponents const overpaymentComponents = computeOverpaymentComponents(
auto const overpaymentComponents = computeOverpaymentComponents(
asset,
loanScale,
Number{50, 0},
@@ -955,17 +947,15 @@ class LendingHelpers_test : public beast::unit_test::suite
TenthBips32(0), // 0% overpayment fee
managementFeeRate);
auto const loanProperites = computeLoanProperties(
auto const loanProperties = computeLoanProperties(
asset, loanPrincipal, loanInterestRate, paymentInterval, paymentsRemaining, managementFeeRate, loanScale);
Number const periodicPayment = loanProperites.periodicPayment;
auto const ret = tryOverpayment(
asset,
loanScale,
overpaymentComponents,
loanProperites.loanState,
periodicPayment,
loanProperties.loanState,
loanProperties.periodicPayment,
periodicRate,
paymentsRemaining,
managementFeeRate,
@@ -1004,23 +994,23 @@ class LendingHelpers_test : public beast::unit_test::suite
// =========== VALIDATE STATE CHANGES ===========
BEAST_EXPECTS(
actualPaymentParts.principalPaid ==
loanProperites.loanState.principalOutstanding - newState.principalOutstanding,
loanProperties.loanState.principalOutstanding - newState.principalOutstanding,
" principalPaid mismatch: expected " +
to_string(loanProperites.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(loanProperties.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(actualPaymentParts.principalPaid));
// Note that the management fee value change is not captured, as this
// value is not needed to correctly update the Vault state.
BEAST_EXPECTS(
(newState.managementFeeDue - loanProperites.loanState.managementFeeDue == Number{-20592, -5}),
(newState.managementFeeDue - loanProperties.loanState.managementFeeDue == Number{-20592, -5}),
" management fee change mismatch: expected " + to_string(Number{-20592, -5}) + ", got " +
to_string(newState.managementFeeDue - loanProperites.loanState.managementFeeDue));
to_string(newState.managementFeeDue - loanProperties.loanState.managementFeeDue));
BEAST_EXPECTS(
actualPaymentParts.valueChange - actualPaymentParts.interestPaid ==
newState.interestDue - loanProperites.loanState.interestDue,
newState.interestDue - loanProperties.loanState.interestDue,
" valueChange mismatch: expected " +
to_string(newState.interestDue - loanProperites.loanState.interestDue) + ", got " +
to_string(newState.interestDue - loanProperties.loanState.interestDue) + ", got " +
to_string(actualPaymentParts.valueChange - actualPaymentParts.interestPaid));
}
@@ -1043,7 +1033,7 @@ class LendingHelpers_test : public beast::unit_test::suite
std::uint32_t const paymentsRemaining = 10;
auto const periodicRate = loanPeriodicRate(loanInterestRate, paymentInterval);
ExtendedPaymentComponents const overpaymentComponents = computeOverpaymentComponents(
auto const overpaymentComponents = computeOverpaymentComponents(
asset,
loanScale,
Number{50, 0},
@@ -1051,17 +1041,15 @@ class LendingHelpers_test : public beast::unit_test::suite
TenthBips32(10'000), // 10% overpayment fee
managementFeeRate);
auto const loanProperites = computeLoanProperties(
auto const loanProperties = computeLoanProperties(
asset, loanPrincipal, loanInterestRate, paymentInterval, paymentsRemaining, managementFeeRate, loanScale);
Number const periodicPayment = loanProperites.periodicPayment;
auto const ret = tryOverpayment(
asset,
loanScale,
overpaymentComponents,
loanProperites.loanState,
periodicPayment,
loanProperties.loanState,
loanProperties.periodicPayment,
periodicRate,
paymentsRemaining,
managementFeeRate,
@@ -1101,23 +1089,23 @@ class LendingHelpers_test : public beast::unit_test::suite
BEAST_EXPECTS(
actualPaymentParts.principalPaid ==
loanProperites.loanState.principalOutstanding - newState.principalOutstanding,
loanProperties.loanState.principalOutstanding - newState.principalOutstanding,
" principalPaid mismatch: expected " +
to_string(loanProperites.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(loanProperties.loanState.principalOutstanding - newState.principalOutstanding) + ", got " +
to_string(actualPaymentParts.principalPaid));
// Note that the management fee value change is not captured, as this
// value is not needed to correctly update the Vault state.
BEAST_EXPECTS(
(newState.managementFeeDue - loanProperites.loanState.managementFeeDue == Number{-18304, -5}),
(newState.managementFeeDue - loanProperties.loanState.managementFeeDue == Number{-18304, -5}),
" management fee change mismatch: expected " + to_string(Number{-18304, -5}) + ", got " +
to_string(newState.managementFeeDue - loanProperites.loanState.managementFeeDue));
to_string(newState.managementFeeDue - loanProperties.loanState.managementFeeDue));
BEAST_EXPECTS(
actualPaymentParts.valueChange - actualPaymentParts.interestPaid ==
newState.interestDue - loanProperites.loanState.interestDue,
newState.interestDue - loanProperties.loanState.interestDue,
" valueChange mismatch: expected " +
to_string(newState.interestDue - loanProperites.loanState.interestDue) + ", got " +
to_string(newState.interestDue - loanProperties.loanState.interestDue) + ", got " +
to_string(actualPaymentParts.valueChange - actualPaymentParts.interestPaid));
}

View File

@@ -827,8 +827,13 @@ public:
// applyManifest should accept new manifests with
// higher sequence numbers
auto const seq0 = cache.sequence();
BEAST_EXPECT(cache.applyManifest(clone(s_a0)) == ManifestDisposition::accepted);
BEAST_EXPECT(cache.sequence() > seq0);
auto const seq1 = cache.sequence();
BEAST_EXPECT(cache.applyManifest(clone(s_a0)) == ManifestDisposition::stale);
BEAST_EXPECT(cache.sequence() == seq1);
BEAST_EXPECT(cache.applyManifest(clone(s_a1)) == ManifestDisposition::accepted);
BEAST_EXPECT(cache.applyManifest(clone(s_a1)) == ManifestDisposition::stale);

View File

@@ -490,19 +490,8 @@ public:
Env env(*this, envconfig(onlineDelete));
/////////////////////////////////////////////////////////////
// Create the backend. Normally, SHAMapStoreImp handles all these
// details
auto nscfg = env.app().config().section(ConfigSection::nodeDatabase());
// Provide default values:
if (!nscfg.exists("cache_size"))
nscfg.set(
"cache_size", std::to_string(env.app().config().getValueFor(SizedItem::treeCacheSize, std::nullopt)));
if (!nscfg.exists("cache_age"))
nscfg.set(
"cache_age", std::to_string(env.app().config().getValueFor(SizedItem::treeCacheAge, std::nullopt)));
// Create NodeStore with two backends to allow online deletion of data.
// Normally, SHAMapStoreImp handles all these details.
NodeStoreScheduler scheduler(env.app().getJobQueue());
std::string const writableDb = "write";
@@ -510,9 +499,8 @@ public:
auto writableBackend = makeBackendRotating(env, scheduler, writableDb);
auto archiveBackend = makeBackendRotating(env, scheduler, archiveDb);
// Create NodeStore with two backends to allow online deletion of
// data
constexpr int readThreads = 4;
auto nscfg = env.app().config().section(ConfigSection::nodeDatabase());
auto dbr = std::make_unique<NodeStore::DatabaseRotatingImp>(
scheduler,
readThreads,

View File

@@ -908,10 +908,6 @@ public:
JLOG(m_journal.debug()) << "MasterTransaction sweep. Size before: " << oldMasterTxSize
<< "; size after: " << masterTxCache.size();
}
{
// Does not appear to have an associated cache.
getNodeStore().sweep();
}
{
std::size_t const oldLedgerMasterCacheSize = getLedgerMaster().getFetchPackCacheSize();

View File

@@ -130,14 +130,6 @@ std::unique_ptr<NodeStore::Database>
SHAMapStoreImp::makeNodeStore(int readThreads)
{
auto nscfg = app_.config().section(ConfigSection::nodeDatabase());
// Provide default values:
if (!nscfg.exists("cache_size"))
nscfg.set("cache_size", std::to_string(app_.config().getValueFor(SizedItem::treeCacheSize, std::nullopt)));
if (!nscfg.exists("cache_age"))
nscfg.set("cache_age", std::to_string(app_.config().getValueFor(SizedItem::treeCacheAge, std::nullopt)));
std::unique_ptr<NodeStore::Database> db;
if (deleteInterval_)
@@ -226,8 +218,6 @@ SHAMapStoreImp::run()
LedgerIndex lastRotated = state_db_.getState().lastRotated;
netOPs_ = &app_.getOPs();
ledgerMaster_ = &app_.getLedgerMaster();
fullBelowCache_ = &(*app_.getNodeFamily().getFullBelowCache());
treeNodeCache_ = &(*app_.getNodeFamily().getTreeNodeCache());
if (advisoryDelete_)
canDelete_ = state_db_.getCanDelete();
@@ -490,16 +480,19 @@ void
SHAMapStoreImp::clearCaches(LedgerIndex validatedSeq)
{
ledgerMaster_->clearLedgerCachePrior(validatedSeq);
fullBelowCache_->clear();
// Also clear the FullBelowCache so its generation counter is bumped.
// This prevents stale "full below" markers from persisting across
// backend rotation/online deletion and interfering with SHAMap sync.
app_.getNodeFamily().getFullBelowCache()->clear();
}
void
SHAMapStoreImp::freshenCaches()
{
if (freshenCache(*treeNodeCache_))
return;
if (freshenCache(app_.getMasterTransaction().getCache()))
if (freshenCache(*app_.getNodeFamily().getTreeNodeCache()))
return;
freshenCache(app_.getMasterTransaction().getCache());
}
void

View File

@@ -93,8 +93,6 @@ private:
// as of run() or before
NetworkOPs* netOPs_ = nullptr;
LedgerMaster* ledgerMaster_ = nullptr;
FullBelowCache* fullBelowCache_ = nullptr;
TreeNodeCache* treeNodeCache_ = nullptr;
static constexpr auto nodeStoreName_ = "NodeStore";

View File

@@ -459,6 +459,10 @@ ManifestCache::applyManifest(Manifest m)
auto masterKey = m.masterKey;
map_.emplace(std::move(masterKey), std::move(m));
// Something has changed. Keep track of it.
seq_++;
return ManifestDisposition::accepted;
}