Compare commits

...

108 Commits

Author SHA1 Message Date
Olek
7bdf5fa8b8 Fix build.md wamr version (#5535) 2025-07-04 00:48:03 +05:30
Olek
65b0b976d9 Sync error codes (#5527)
* Sync error codes
2025-07-02 17:33:39 -04:00
Elliot.
a0d275feec chore: Clear CODEOWNERS (#5528) 2025-07-02 10:39:57 -07:00
Mayukha Vadari
ece3a8d7be Merge branch 'develop' into develop4 2025-06-30 21:33:30 +05:30
Olek
463acf51b5 preflight checks for wasm (#5517) 2025-06-30 09:34:38 -04:00
Olek
1cd16fab87 Host-functions perf test fixes (#5514) 2025-06-27 09:59:28 -04:00
Vlad
e18f27f5f7 test: switch some unit tests to doctest (#5383)
This change moves some tests out of the unit tests that are compiled into the rippled binary and into the doctest framework.
2025-06-26 19:35:31 +00:00
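
As an illustration of the framework this commit moves toward, below is a minimal, self-contained doctest test case; the helper function and test name are hypothetical and not taken from the rippled sources.

```
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include <doctest/doctest.h>

#include <cctype>
#include <string>

// Hypothetical helper under test; not part of rippled.
static std::string
toUpper(std::string s)
{
    for (auto& c : s)
        c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    return s;
}

TEST_CASE("toUpper converts ASCII letters")
{
    CHECK(toUpper("xrpl") == "XRPL");
    CHECK(toUpper("") == "");
}
```

A test like this builds into its own small executable and can be driven by CTest, rather than being compiled into the rippled binary.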
Jingchen
df6daf0d8f Add XRPL_ABANDON and use it to abandon OwnerPaysFee (#5510) 2025-06-26 12:09:05 -04:00
Olek
add55c4f33 Host functions gas cost for wasm_runtime interface (#5500) 2025-06-25 14:04:04 +00:00
Jingchen
e9d46f0bfc Remove OwnerPaysFee as it's never fully supported (#5435)
The OwnerPaysFee amendment was never fully supported, and this change removes the feature to the extent possible.
2025-06-24 18:56:58 +00:00
Bart
42fd74b77b Removes release notes from codebase (#5508) 2025-06-24 13:10:00 -04:00
tequ
c55ea56c5e Add nftoken_id, nftoken_ids, offer_id to meta for transaction stream (#5230) 2025-06-24 09:02:22 -04:00
Michael Legleux
1e01cd34f7 Set version to 2.5.0 2025-06-23 10:13:01 -07:00
Alex Kremer
e2fa5c1b7c chore: Change libXRPL check conan remote to dev (#5482)
This change aligns the Conan remote used by the libXRPL Clio compatibility check workflow with the recent changes applied to Clio.
2025-06-20 17:02:16 +00:00
Ed Hennis
fc0984d286 Require a message on "Application::signalStop" (#5255)
This change adds a message parameter to Application::signalStop for extra context.
2025-06-20 16:24:34 +00:00
Valentin Balaschenko
8b3dcd41f7 refactor: Change getNodeFat Missing Node State Tree error into warning (#5455) 2025-06-20 15:44:42 +00:00
Denis Angell
8f2f5310e2 Fix: Improve error handling in Batch RPC response (#5503) 2025-06-18 17:46:45 -04:00
Olek
51a9c0ff59 Host function gas cost (#5488)
* Update Wamr to 2.3.1
* Add gas cost per host-function
* Fix windows build
* Fix wasm test
* Add no import test
2025-06-12 15:54:49 -04:00
Michael Legleux
edb4f0342c Set version to 2.5.0-rc2 2025-06-11 17:10:45 -07:00
yinyiqian1
ea17abb92a fix: Ensure delegate tests do not silently fail with batch (#5476)
The tests in `Delegate_test.cpp` that ensure `tfInnerBatchTxn` won't block delegated transactions silently fail. This change removes these cases from that file and adds them to `Batch_test.cpp` instead, where they do not silently fail because the batch delegate results are explicitly checked there. Moving them to that file also avoids refactoring many helper functions.
2025-06-11 13:21:24 +08:00
Mayukha Vadari
35a40a8e62 fix: Improve multi-sign usage of simulate (#5479)
This change allows users to submit simulate requests from a multi-sign account without needing to specify the accounts that are doing the multi-signing, and fixes an error with simulate that allowed double-"signed" transactions (where both single-sign and multi-sign public keys are provided).
2025-06-10 14:47:27 +08:00
Ed Hennis
d494bf45b2 refactor: Collapse some split log messages into one (#5347)
Multi-line log messages are hard to work with. Writing this handful of related messages as one message should make the log a tiny bit easier to manage.
2025-06-06 16:01:02 +00:00
Mayukha Vadari
6e8a5f0f4e fix: make host function traces easier to use, fix get_NFT bug (#5466)
Co-authored-by: Olek <115580134+oleks-rip@users.noreply.github.com>
2025-06-05 14:24:13 -04:00
Mayukha Vadari
8a33702f26 fix merge issues 2025-06-05 12:38:26 -04:00
Mayukha Vadari
a072d49802 Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop3.5 2025-06-05 11:51:53 -04:00
Mayukha Vadari
a0aeeb8e07 Merge remote-tracking branch 'upstream/develop' into develop3.5 2025-06-05 11:50:38 -04:00
Vlad
8bf4a5cbff chore: Remove external project build cores division (#5475)
The CMake statements that make it seem as if the number of cores used to build external project dependencies is halved don't actually do anything. This change removes these statements.
2025-06-05 13:37:30 +00:00
Denis Angell
58c2c82a30 fix: Amendment-guard TokenEscrow preclaim and expand tests (#5473)
This change amendment-guards the preclaim for `TokenEscrow`, as well as expands tests to increase code coverage.
2025-06-05 12:54:45 +00:00
Olek
383b225690 Fix processing nonexistent field (#5467) 2025-06-04 17:32:11 -04:00
Mayukha Vadari
ace2247800 Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop3.5 2025-06-04 14:15:17 -04:00
Michael Legleux
11edaa441d Set version to 2.5.0-rc1 (#5472) 2025-06-04 17:55:23 +00:00
yinyiqian1
a5e953b191 fix: Add tecNO_DELEGATE_PERMISSION and fix flags (#5465)
* Adds `tecNO_DELEGATE_PERMISSION` for unauthorized transactions sent by a delegated account.
* Returns `tecNO_TARGET` instead of `terNO_ACCOUNT` for the `DelegateSet` transaction if the delegated account does not exist.
* Fixes `tfFullyCanonicalSig` and `tfInnerBatchTxn` blocking transactions issue by adding `tfUniversal` in the permission related masks in `txFlags.h`
2025-06-03 22:20:29 +00:00
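
A rough sketch of the flag-mask issue from the last bullet, under the assumption of simplified constants: the flag values and mask names below are illustrative, not the actual definitions in `txFlags.h`. If a permission-related mask omits `tfUniversal`, a transaction carrying `tfFullyCanonicalSig` or `tfInnerBatchTxn` appears to set unknown flags and gets blocked.

```
#include <cstdint>
#include <iostream>

// Illustrative values only; not the real constants from txFlags.h.
constexpr std::uint32_t tfFullyCanonicalSig = 0x80000000u;
constexpr std::uint32_t tfInnerBatchTxn     = 0x40000000u;
constexpr std::uint32_t tfUniversal = tfFullyCanonicalSig | tfInnerBatchTxn;

constexpr std::uint32_t tfSomePermissionFlag = 0x00010000u;

// Masks of all flags a hypothetical transaction type accepts.
constexpr std::uint32_t tfPermissionMaskOld = tfSomePermissionFlag;
constexpr std::uint32_t tfPermissionMaskNew = tfUniversal | tfSomePermissionFlag;

// Any bit outside the mask counts as an "unknown" flag.
bool hasUnknownFlags(std::uint32_t txFlags, std::uint32_t mask)
{
    return (txFlags & ~mask) != 0;
}

int main()
{
    std::uint32_t const flags = tfFullyCanonicalSig | tfSomePermissionFlag;
    std::cout << "old mask rejects: " << hasUnknownFlags(flags, tfPermissionMaskOld) << '\n';  // 1
    std::cout << "new mask rejects: " << hasUnknownFlags(flags, tfPermissionMaskNew) << '\n';  // 0
}
```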
Mark Travis
506ae12a8c Increase network i/o capacity (#5464)
The change increases the default network I/O worker thread pool size from 2 to 6. This will improve stability, as worker thread saturation correlates with desyncs, particularly on high-traffic peers, such as hubs.
2025-06-03 21:33:09 +00:00
Ayaz Salikhov
0310c5cbe0 fix: Specify transitive_headers when building with Conan 2 (#5462)
To be able to consume `rippled` in Conan 2, the recipe should specify transitive_headers for external libraries that are present in the exported header files. This change remains compatible with Conan 1, where this flag was not present.
2025-06-03 17:33:32 +00:00
Denis Angell
053e1af7ff Add support for XLS-85 Token Escrow (#5185)
- Specification: https://github.com/XRPLF/XRPL-Standards/pull/272
- Amendment: `TokenEscrow`
- Enables escrowing of IOU and MPT tokens in addition to native XRP.
- Allows accounts to lock issued tokens (IOU/MPT) in escrow objects, with support for freeze, authorization, and transfer rates.
- Adds new ledger fields (`sfLockedAmount`, `sfIssuerNode`, etc.) to track locked balances for IOU and MPT escrows.
- Updates EscrowCreate, EscrowFinish, and EscrowCancel transaction logic to support IOU and MPT assets, including proper handling of trustlines and MPT authorization, transfer rates, and locked balances.
- Enforces invariant checks for escrowed IOU/MPT amounts.
- Extends GatewayBalances RPC to report locked (escrowed) balances.
2025-06-03 12:51:55 -04:00
Vlad
7e24adbdd0 fix: Address NFT interactions with trustlines (#5297)
The changes focus on fixing NFT transactions that bypass the trustline authorization requirement, and on a potential invariant violation when interacting with deep-frozen trustlines.
2025-06-02 16:13:20 +00:00
Gregory Tsipenyuk
621df422a7 fix: Add AMMv1_3 amendment (#5203)
* Add AMM bid/create/deposit/swap/withdraw/vote invariants:
  - Deposit, Withdrawal invariants: `sqrt(asset1Balance * asset2Balance) >= LPTokens`.
  - Bid: `sqrt(asset1Balance * asset2Balance) > LPTokens` and the pool balances don't change.
  - Create: `sqrt(asset1Balance * asset2Balance) == LPTokens`.
  - Swap: `asset1BalanceAfter * asset2BalanceAfter >= asset1BalanceBefore * asset2BalanceBefore`
     and `LPTokens` don't change.
  - Vote: `LPTokens` and pool balances don't change.
  - All AMM and swap transactions: amounts and tokens are greater than zero, except on withdrawal if all tokens
    are withdrawn.
* Add AMM deposit and withdraw rounding to ensure AMM invariant:
  - On deposit, tokens out are rounded downward and deposit amount is rounded upward.
  - On withdrawal, tokens in are rounded upward and withdrawal amount is rounded downward.
* Add Order Book Offer invariant to verify consumed amounts. Consumed amounts are less than the offer.
* Fix Bid validation. `AuthAccount` can't have duplicate accounts or the submitter account.
2025-06-02 09:52:10 -04:00
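
A small numeric sketch of the deposit/withdrawal invariant listed above, assuming plain doubles; the real checks use rippled's arbitrary-precision amount types and the rounding rules described in the commit.

```
#include <cassert>
#include <cmath>

// After a deposit or withdrawal, the geometric mean of the pool balances
// must not drop below the outstanding LP tokens.
bool ammInvariantHolds(double asset1Balance, double asset2Balance, double lpTokens)
{
    return std::sqrt(asset1Balance * asset2Balance) >= lpTokens;
}

int main()
{
    // On create, sqrt(a * b) == LPTokens exactly.
    assert(ammInvariantHolds(100.0, 400.0, 200.0));
    // Rounding tokens out downward and the deposit amount upward keeps the
    // invariant after a deposit.
    assert(ammInvariantHolds(110.0, 440.0, 219.0));
}
```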
Olek
6a6fed5dce More hostfunctions (#5451)
* Bug fixes:
- Fix bugs found during schedule table tests
- Add more tests
- Add parameters passing for runEscrowWasm function

* Add new host-functions
 fix wamr logging
 add runtime passing through HF
 fix runEscrowWasm interface

* Improve logs

* Fix logging bug

* Set 4k limit for update_data HF

* allHF wasm module fixes
2025-05-30 19:01:27 -04:00
Shawn Xie
0a34b5c691 Add support for XLS-81 Permissioned DEX (#5404)
Modified transactions:
- OfferCreate
- Payment

Modified RPCs:
- book_changes
- subscribe
- book_offers
- ripple_path_find
- path_find

Spec: https://github.com/XRPLF/XRPL-Standards/pull/281
2025-05-30 13:24:48 -04:00
Matt Mankins
e0bc3dd51f docs: update example keyserver host in SECURITY.md (#5460) 2025-05-30 08:46:08 -04:00
Bronek Kozicki
dacecd24ba Fix unit build error (#5459)
This change fixes a build error caused by a `using namespace` statement inside a namespace scope.
2025-05-29 20:53:31 +00:00
Mayukha Vadari
1f8aece8cd feat: add a GasUsed parameter to the metadata (#5456) 2025-05-29 16:36:55 -04:00
Mayukha Vadari
6c6f8cd4f9 Merge remote-tracking branch 'upstream/develop' into develop3 2025-05-29 13:05:11 -04:00
Mayukha Vadari
05105743e9 chore[tests]: improve env.meta usage (#5457)
This commit changes the ledger close in env.meta to be conditional on whether it has already been closed (i.e. whether the current ledger has any transactions in it). This change makes it a bit easier to use, as it will still work if you close the ledger outside of this usage. Previously, if you accidentally closed the ledger outside of the meta function, it would segfault, and it was incredibly difficult to debug.
2025-05-29 16:28:09 +00:00
Mayukha Vadari
fb1311e013 uncomment???? 2025-05-28 14:00:50 -04:00
Mayukha Vadari
ce31acf030 debug comments 2025-05-28 13:48:38 -04:00
Bronek Kozicki
9e1fe9a85e Fix: Improve handling of expired credentials in VaultDeposit (#5452)
This change returns `tecEXPIRED` from VaultDeposit to allow the Transactor to remove the expired credentials.
2025-05-28 10:28:18 -04:00
Vito Tumas
d71ce51901 feat: improve squelching configuration (#5438)
This commit introduces the following changes:
* Renames the `vp_enable` config option to `vp_base_squelch_enable`; it enables squelching for validators.
* Removes `vp_squelch` config option which was used to configure whether to send squelch messages to peers or not. With this flag removed, if squelching is enabled, squelch messages will be sent. This was an option used for debugging.
* Introduces a temporary `vp_base_squelch_max_trusted_peers` config option to change the max number of peers who are selected as validator message sources. This is a temporary option, which will be removed once a good value is found.
* Adds a traffic counter to count the number of times peers ignored squelch messages and kept sending messages for squelched validators.
* Moves the decision whether squelching is enabled and ready into Slot.h.
2025-05-28 06:30:03 -04:00
Mayukha Vadari
31ad5ac63b Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop3 2025-05-27 18:29:41 -04:00
Michael Legleux
be668ee26d chore: Update CPP ref source (#5453) 2025-05-27 20:46:25 +00:00
Bart
cae5294b4e chore: Rename docs job (#5398) 2025-05-27 20:03:23 +00:00
Elliot.
cd777f79ef docs: add -j $(nproc) to BUILD.md (#5288)
This improves build times.
2025-05-27 19:11:13 +00:00
Valentin Balaschenko
8b9e21e3f5 docs: Update build instructions for Ubuntu 22.04+ (#5292) 2025-05-27 18:32:25 +00:00
Mayukha Vadari
1ede0bdec4 fix: fix fixtures (#5445) 2025-05-23 17:37:14 -04:00
Denis Angell
2a61aee562 Add Batch feature (XLS-56) (#5060)
- Specification: [XRPLF/XRPL-Standards 56](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-0056d-batch/README.md)
- Amendment: `Batch`
- Implements execution of multiple transactions within a single batch transaction with four execution modes: `tfAllOrNothing`, `tfOnlyOne`, `tfUntilFailure`, and `tfIndependent`.
- Enables atomic multi-party transactions where multiple accounts can participate in a single batch, with up to 8 inner transactions and 8 batch signers per batch transaction.
- Inner transactions use `tfInnerBatchTxn` flag with zero fees, no signature, and empty signing public key.
- Inner transactions are applied after the outer batch succeeds via the `applyBatchTransactions` function in apply.cpp.
- The network layer prevents relaying transactions with the `tfInnerBatchTxn` flag; each peer applies inner transactions locally from the batch.
- Batch transactions are excluded from AccountDelegate permissions but inner transactions retain full delegation support.
- Metadata includes `ParentBatchID` linking inner transactions to their containing batch for traceability and auditing.
- Extended STTx with batch-specific signature verification methods and added protocol structures (`sfRawTransactions`, `sfBatchSigners`).
2025-05-23 19:53:53 +00:00
Mayukha Vadari
aef32ead2c better WASM logging to match rippled (#5395)
* basic logging

* pass in Journal

* log level based on journal level

* clean up

* attempt at adding WAMR logging properly

* improve logline

* maybe_unused

* fix

* fix

* fix segfault

* add test
2025-05-23 10:31:02 -04:00
Bronek Kozicki
40ce8a8833 fix: Fix pseudo-account ID calculation (#5447)
Before #5224, the pseudo-account ID was calculated using a prefix expressed as `std::uint16_t`. The refactoring that moved the pseudo-account ID calculation to View.cpp had accidentally changed the prefix type to `int` (derived from `auto i = 0`), which in turn changed the length of the input to `sha512Half` from 2 bytes to 4, altering the result.

As a result, the function calculated a different pseudo-account ID after the refactoring, breaking the ledger. This impacts AMMCreate, even when the `SingleAssetVault` amendment is not active. This change restores the prefix type to `std::uint16_t`.
2025-05-23 14:05:36 +00:00
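
To illustrate the bug with made-up serialization code (not the actual View.cpp logic): hashing a 2-byte prefix versus a 4-byte prefix feeds different input to `sha512Half`, so the resulting pseudo-account ID differs.

```
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

// Serialize a prefix value into the byte buffer that would be hashed.
template <class T>
std::vector<unsigned char> serializePrefix(T prefix)
{
    std::vector<unsigned char> out(sizeof(T));
    std::memcpy(out.data(), &prefix, sizeof(T));
    return out;
}

int main()
{
    auto a = serializePrefix(static_cast<std::uint16_t>(0));  // 2 bytes, as before the refactor
    auto b = serializePrefix(static_cast<int>(0));            // 4 bytes, the accidental change
    std::cout << a.size() << " vs " << b.size() << " bytes hashed\n";  // different hash input lengths
}
```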
Bronek Kozicki
7713ff8c5c Add codecov badge, raise .codecov.yml thresholds (#5428) 2025-05-22 14:43:41 +00:00
Olek
70371a4344 Fix initializer list initialization for GCC-15 (#5443) 2025-05-21 13:28:18 -04:00
Mayukha Vadari
5b43ec7f73 refactor: switch function name from ready to finish (#5430) 2025-05-20 16:12:19 -04:00
Bronek Kozicki
e514de76ed Add single asset vault (XLS-65d) (#5224)
- Specification: XRPLF/XRPL-Standards#239
- Amendment: `SingleAssetVault`
- Implements a vault feature used to store a fungible asset (XRP, IOU, or MPT, but not NFT) and to receive shares in the vault (an MPT) in exchange.
- A vault can be private or public.
- A private vault can use permissioned domains, subject to the `PermissionedDomains` amendment.
- Shares can be exchanged back into asset with `VaultWithdraw`.
- Permissions on the asset in the vault are transitively applied on shares in the vault.
- Issuer of the asset in the vault can clawback with `VaultClawback`.
- Extended `MPTokenIssuance` with `DomainID`, used by the permissioned domain on the vault shares.

Co-authored-by: John Freeman <jfreeman08@gmail.com>
2025-05-20 14:06:41 -04:00
Bart
dd62cfcc22 fix: Update path in CODEOWNERS (#5440) 2025-05-20 15:24:07 +00:00
Michael Legleux
09690f1b38 Set version to 2.5.0-b1 2025-05-18 20:39:18 +01:00
Valentin Balaschenko
380ba9f1c1 Fix: Resolve slow test on macOS pipeline (#5392)
Using std::barrier performs extremely poorly (~1 hour vs ~1 minute to run the test suite) in certain macOS environments.
To unblock our macOS CI pipeline, std::barrier has been replaced with a custom mutex-based barrier (Barrier) that significantly improves performance without compromising correctness.
2025-05-16 10:31:51 +00:00
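
A minimal sketch of a reusable mutex-and-condition-variable barrier of the kind described above; the class and its interface here are illustrative, not the actual `Barrier` added to the test suite.

```
#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

class Barrier
{
    std::mutex mtx_;
    std::condition_variable cv_;
    std::size_t const count_;
    std::size_t waiting_ = 0;
    std::size_t generation_ = 0;

public:
    explicit Barrier(std::size_t count) : count_(count) {}

    void arrive_and_wait()
    {
        std::unique_lock lock(mtx_);
        auto const gen = generation_;
        if (++waiting_ == count_)
        {
            ++generation_;  // release everyone and reset for reuse
            waiting_ = 0;
            cv_.notify_all();
        }
        else
        {
            cv_.wait(lock, [&] { return gen != generation_; });
        }
    }
};

int main()
{
    Barrier barrier(4);
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([&] { barrier.arrive_and_wait(); });
    for (auto& t : threads)
        t.join();
    std::cout << "all threads passed the barrier\n";
}
```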
brettmollin
c3e9380fb4 fix: Update validators-example.txt fix xrplf example URL (#5384) 2025-05-16 09:49:14 +00:00
Jingchen
e3ebc253fa fix: Ensure that coverage file generation is atomic. (#5426)
When unit tests run in parallel, multiple threads can write into the same coverage file and corrupt it, and then gcovr won't be able to parse the corrupted file. This change adds -fprofile-update=atomic as instructed by https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68080.
2025-05-12 14:54:01 +00:00
Bart
c6c7c84355 Configure CODEOWNERS for changes to RPC code (#5266)
To ensure changes to any RPC-related code are compatible with other services, such as Clio, the RPC team will be required to review them.
2025-05-12 12:42:03 +00:00
yinyiqian1
28f50cb7cf fix: enable LedgerStateFix for delegation (#5427) 2025-05-10 10:36:11 -04:00
Olek
1e9ff88a00 Fix CI build issues
* Mac build fix
* Windows build fix
* Windows instruction counter fix
2025-05-08 12:39:37 -04:00
Vito Tumas
3e152fec74 refactor: use east const convention (#5409)
This change refactors the codebase to use the "east const convention", and adds a clang-format rule to follow this convention.
2025-05-08 11:00:42 +00:00
yinyiqian1
2db2791805 Add PermissionDelegation feature (#5354)
This change implements the account permission delegation described in XLS-75d, see https://github.com/XRPLF/XRPL-Standards/pull/257.

* Introduces transaction-level and granular permissions that can be delegated to other accounts.
* Adds `DelegateSet` transaction to grant specified permissions to another account.
* Adds `ltDelegate` ledger object to maintain the permission list for delegating/delegated account pair.
* Adds an optional `Delegate` field in common fields, allowing a delegated account to send transactions on behalf of the delegating account within the granted permission scope. The `Account` field remains the delegating account; the `Delegate` field specifies the delegated account. The transaction is signed by the delegated account.
2025-05-08 06:14:02 -04:00
Vito Tumas
9ec2d7f8ff Enable passive squelching (#5358)
This change updates the squelching logic to accept squelch messages for untrusted validators. As a result, servers will also squelch untrusted validator messages, reducing the duplicate traffic they generate.

In particular:
* Updates squelch message handling logic to squelch messages for all validators, not only trusted ones.
* Updates the logic to send squelch messages to peers that don't squelch themselves
* Increases the threshold for the number of messages that a peer has to deliver to consider it as a candidate for validator messages.
2025-05-02 11:01:45 -04:00
Mayukha Vadari
bb9bb5f5c5 Merge branch 'ripple/smart-escrow' into develop2 2025-05-01 18:44:06 -04:00
Mayukha Vadari
c533abd8b6 Update size and compute cap defaults (#5417) 2025-05-01 18:41:51 -04:00
Olek
bb9bc764bc Switch to WAMR (#5416)
* Switch to WAMR
2025-05-01 18:02:06 -04:00
Ed Hennis
4a084ce34c Improve transaction relay logic (#4985)
Combines four related changes:
1. "Decrease `shouldRelay` limit to 30s." Pretty self-explanatory. Currently, the limit is 5 minutes, by which point the `HashRouter` entry could have expired, making this transaction look brand new (and thus causing it to be relayed back to peers which have sent it to us recently).
2. "Give a transaction more chances to be retried." This will put a transaction into `LedgerMaster`'s held transactions if the transaction gets a `ter`, `tel`, or `tef` result. The old behavior was just `ter`.
     * Additionally, to prevent a transaction from being repeatedly held indefinitely, it must meet some extra conditions. (Documented in a comment in the code.)
3. "Pop all transactions with sequential sequences, or tickets." When a transaction is processed successfully, currently, one held transaction for the same account (if any) will be popped out of the held transactions list, and queued up for the next transaction batch. This change pops all transactions for the account, but only if they have sequential sequences (for non-ticket transactions) or use a ticket. This issue was identified from interactions with @mtrippled's #4504, which was merged, but unfortunately reverted later by #4852. When the batches were spaced out, it could potentially take a very long time for a large number of held transactions for an account to get processed through. However, whether batched or not, this change will help get held transactions cleared out, particularly if a missing earlier transaction is what held them up.
4. "Process held transactions through existing NetworkOPs batching." In the current processing, at the end of each consensus round, all held transactions are directly applied to the open ledger, then the held list is reset. This bypasses all of the logic in `NetworkOPs::apply` which, among other things, broadcasts successful transactions to peers. This means that the transaction may not get broadcast to peers for a really long time (5 minutes in the current implementation, or 30 seconds with this first commit). If the node is a bottleneck (either due to network configuration, or because the transaction was submitted locally), the transaction may not be seen by any other nodes or validators before it expires or causes other problems.
2025-05-01 13:58:18 -04:00
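
The selection rule in item 3 can be sketched as follows (illustrative only; the real logic lives in `LedgerMaster` and operates on full transaction objects): starting just past the last applied sequence, keep releasing held transactions while each one either uses a ticket or continues the sequence run without a gap.

```
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

struct HeldTx
{
    std::optional<std::uint32_t> seq;  // empty if the transaction uses a ticket
};

std::vector<HeldTx>
popReleasable(std::vector<HeldTx> const& held, std::uint32_t lastAppliedSeq)
{
    std::vector<HeldTx> released;
    std::uint32_t next = lastAppliedSeq + 1;
    for (auto const& tx : held)
    {
        if (!tx.seq)                 // ticketed: sequence-independent, release it
            released.push_back(tx);
        else if (*tx.seq == next)    // continues the run with no gap
        {
            released.push_back(tx);
            ++next;
        }
        else
            break;                   // gap found: stop, later transactions stay held
    }
    return released;
}

int main()
{
    std::vector<HeldTx> held{{5}, {6}, {std::nullopt}, {9}};
    std::cout << popReleasable(held, 4).size() << '\n';  // 3: seq 5, seq 6, and the ticket
}
```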
Mayukha Vadari
b4b53a6cb7 Merge branch 'ripple/smart-escrow' into develop2 2025-04-29 15:25:54 -04:00
Mayukha Vadari
9c0204906c fix reference fee tests 2025-04-29 15:25:00 -04:00
Mayukha Vadari
4670b373c1 try to fix tests 2025-04-29 14:10:27 -04:00
Mayukha Vadari
f03b5883bd More host functions (#5411)
* getNFT

* escrow keylet

* account keylet

* credential keylet

* oracle keylet

* hook everything in

* fix stuff
2025-04-29 12:39:12 -04:00
Mayukha Vadari
f8b2fe4dd5 fix imports 2025-04-28 17:43:15 -04:00
Mayukha Vadari
be4a0c9c2b Merge remote-tracking branch 'upstream/ripple/smart-escrow' into develop2 2025-04-28 17:14:28 -04:00
Vito Tumas
3502df2174 fix: Replaces random endpoint resolution with sequential (#5365)
This change addresses an issue where `rippled` attempts to connect to an IPv6 address, even when the local network lacks IPv6 support, resulting in a "Network is unreachable" error.

The fix replaces the custom endpoint selection logic with `boost::async_connect`, which sequentially attempts to connect to available endpoints until one succeeds or all fail.
2025-04-28 15:38:55 -04:00
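
A minimal sketch of the sequential-connect approach using `boost::asio::async_connect`, which walks a list of resolved endpoints in order until one succeeds, so an unreachable IPv6 endpoint is simply skipped; the host, port, and surrounding setup are placeholders rather than rippled's actual peer-connection code.

```
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    namespace asio = boost::asio;
    asio::io_context io;
    asio::ip::tcp::resolver resolver(io);
    asio::ip::tcp::socket socket(io);

    // Placeholder peer; resolution may yield both IPv6 and IPv4 endpoints.
    auto endpoints = resolver.resolve("example.com", "51235");

    // Tries each endpoint in turn until one connects or all fail.
    asio::async_connect(
        socket,
        endpoints,
        [](boost::system::error_code ec, asio::ip::tcp::endpoint ep) {
            if (!ec)
                std::cout << "connected to " << ep << '\n';
            else
                std::cout << "all endpoints failed: " << ec.message() << '\n';
        });

    io.run();
}
```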
Vlad
fa1e25abef chore: Small clarification to lsfDefaultRipple comment (#5410) 2025-04-25 15:21:27 +00:00
Denis Angell
217ba8dd4d fix: CTID to use correct ledger_index (#5408) 2025-04-24 10:24:10 -04:00
Mayukha Vadari
f37d52d8e9 Set up fees for WASM processing (#5393)
* set up fields

* throw error if allowance is too high

* votable gas price

* fix comments

* hook everything together

* make test less flaky (hopefully)

* fix other tests

* fix some tests

* fix tests

* clean up

* add more tests

* uncomment other tests

* respond to comments

* fix build

* respond to comments
2025-04-24 08:47:13 -04:00
Ed Hennis
405f4613d8 chore: Run CI on PRs that are Ready or have the "DraftRunCI" label (#5400)
- Avoids costly overhead for idle PRs where the CI results don't add any
  value.
2025-04-11 22:20:59 +00:00
Mayukha Vadari
cba512068b refactor: Clean up test logging to make it easier to search (#5396)
This PR replaces the word `failed` with `failure` in any test names and renames some test files to fix MSVC warnings, so that it is easier to search through the test output to find tests that failed.
2025-04-11 09:07:42 +00:00
Valentin Balaschenko
1c99ea23d1 Temporary disable automatic triggering macOS pipeline (#5397)
We temporarily disable running unit tests on macOS on the CI pipeline while we are investigating the delays.
2025-04-10 21:58:29 +02:00
Denis Angell
c4308b216f fix: Adds CTID to RPC tx and updates error (#4738)
This change fixes a number of issues involved with CTID:
* CTID is not present on all RPC tx transactions.
* rpcWRONG_NETWORK is missing from ErrorCodes.cpp
2025-04-10 12:38:52 +00:00
Wietse Wind
aafd2d8525 Fix: admin RPC webhook queue limit removal and timeout reduction (#5163)
When using subscribe at the admin RPC port to send webhooks for the transaction stream to a backend, on large(r) ledgers the endpoint receives fewer HTTP POSTs with TX information than the number of transactions in the ledger. This change removes the hardcoded queue length to avoid dropping TX notifications for the admin-only command. In addition, the per-request TTL for outgoing RPC HTTP calls has been reduced from 10 minutes to 30 seconds.
2025-04-10 06:37:24 +00:00
Denis Angell
a574ec6023 fix: fixPayChanV1 (#4717)
This change introduces a new fix amendment (`fixPayChanV1`) that prevents the creation of new `PaymentChannelCreate` transactions with a `CancelAfter` time less than the current ledger time. It piggybacks on fix1571.

Once the amendment is activated, creating a new `PaymentChannel` will require that if you specify the `CancelAfter` time/value, that value must be greater than or equal to the current ledger time.

Currently users can create a payment channel where the `CancelAfter` time is before the current ledger time. This results in the payment channel being immediately closed on the next PaymentChannel transaction.
2025-04-09 22:08:44 +00:00
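
A simplified sketch of the amendment-gated check described above, with made-up types and a placeholder result code; the actual transactor uses rippled's `TER` codes and ledger close-time accessors.

```
#include <cstdint>
#include <iostream>
#include <optional>

enum class Result { success, rejected };

// With the fix active, a CancelAfter earlier than the current ledger time is
// rejected up front instead of producing an immediately closable channel.
Result
checkPayChanCreate(
    std::optional<std::uint32_t> cancelAfter,
    std::uint32_t currentLedgerTime,
    bool fixPayChanV1Enabled)
{
    if (fixPayChanV1Enabled && cancelAfter && *cancelAfter < currentLedgerTime)
        return Result::rejected;
    return Result::success;
}

int main()
{
    std::cout << (checkPayChanCreate(100, 200, true) == Result::rejected) << '\n';  // 1
    std::cout << (checkPayChanCreate(300, 200, true) == Result::success) << '\n';   // 1
}
```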
Mayukha Vadari
e429455f4d refactor(trivial): reorganize ledger entry tests and helper functions (#5376)
This PR splits out `ledger_entry` tests into its own file (`LedgerEntry_test.cpp`) and alphabetizes the helper functions in `LedgerEntry.cpp`. These commits were split out of #5237 to make that PR a little more manageable, since these basic trivial changes are most of the diff. There is no code change, just moving code around.
2025-04-09 17:02:03 +00:00
Vito Tumas
7692eeb9a0 Instrument proposal, validation and transaction messages (#5348)
Adds metric counters for the following P2P message types:

* Untrusted proposal and validation messages
* Duplicate proposal, validation and transaction messages
2025-04-09 15:33:17 +02:00
Bronek Kozicki
a099f5a804 Remove UNREACHABLE from NetworkOPsImp::processTrustedProposal (#5387)
It’s possible for this to happen legitimately if a set of peers, including a validator, are connected in a cycle, and the latency and message processing time between those peers is significantly less than the latency between the validator and the last peer. It’s unlikely in the real world, but obviously easy to simulate with Antithesis.
2025-04-08 14:43:34 +00:00
Michael Legleux
ca0bc767fe fix: Use the build image from ghcr.io (#5390)
The CI pipelines have been constantly hitting Docker Hub's public rate limits since we increased the number of jobs we run. This change switches over to images hosted in GitHub's registry.
2025-04-05 02:24:31 +00:00
Mayukha Vadari
4ba9288935 fix: disable channel_authorize when signing_support is disabled (#5385) 2025-04-05 01:08:34 +00:00
Valentin Balaschenko
e923ec6d36 Fix to correct memory ordering for compare_exchange_weak and wait in the intrusive reference counting logic (#5381)
This change addresses a memory ordering assertion failure observed on one of the Windows test machines during the IntrusiveShared_test suite.
2025-04-04 18:21:17 +00:00
Vlad
851d99d99e fix: uint128 ambiguousness breaking macos unity build (#5386) 2025-04-04 08:28:33 -04:00
Bart
f608e653ca Fix undefined uint128_t type on Windows non-unity builds (#5377)
As part of import optimization, a transitive include that defined `BOOST_COMP_MSVC` on Windows had been removed. In unity builds this definition was still pulled in, but in non-unity builds it was not, causing a compilation error. An inspection of the Boost code revealed that we can just gate the statements by `_MSC_VER` instead. A `#pragma message` is added to verify that the statement is only printed on Windows builds.
2025-04-01 11:21:59 -04:00
Vlad
72e076b694 test: enable compile time param to change reference fee value (#5159)
Adds an extra CI pipeline to perform unit tests using different values for fees.
2025-03-27 23:40:36 +00:00
Bart
6cf37c4abe refactor: Move integration tests from 'examples/' into 'tests/' (#5367)
This change moves `examples/example` into `tests/conan` to make it clear it is an integration test, and adjusts the `conan` CI job accordingly
2025-03-27 14:49:09 +00:00
Valentin Balaschenko
fc204773d6 Intrusive SHAMap smart pointers for efficient memory use and lock-free synchronization (#5152)
The main goal of this optimisation is memory reduction in SHAMapTreeNodes by introducing intrusive pointers instead of standard std::shared_ptr and std::weak_ptr.
2025-03-25 18:40:25 +00:00
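
A very small sketch of the intrusive reference-counting idea, assuming a single strong count; the actual implementation also tracks weak references and uses carefully chosen memory orderings (see the later `compare_exchange_weak` fix). The point is that the counter lives inside the node itself, so no separate control block is allocated as with `std::shared_ptr`.

```
#include <atomic>
#include <utility>

struct IntrusiveRefCounted
{
    std::atomic<int> refCount{0};  // count embedded in the object itself
};

template <class T>
class IntrusivePtr
{
    T* p_ = nullptr;

public:
    IntrusivePtr() = default;
    explicit IntrusivePtr(T* p) : p_(p)
    {
        if (p_)
            p_->refCount.fetch_add(1, std::memory_order_relaxed);
    }
    IntrusivePtr(IntrusivePtr const& other) : IntrusivePtr(other.p_) {}
    IntrusivePtr(IntrusivePtr&& other) noexcept : p_(std::exchange(other.p_, nullptr)) {}
    ~IntrusivePtr()
    {
        // Last owner releases the object.
        if (p_ && p_->refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p_;
    }
    T* operator->() const { return p_; }
};

struct Node : IntrusiveRefCounted
{
    int value = 42;
};

int main()
{
    IntrusivePtr<Node> a(new Node);
    IntrusivePtr<Node> b = a;  // copying just bumps the embedded counter
    return b->value == 42 ? 0 : 1;
}
```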
Mayukha Vadari
177cdaf550 Connect votable gas limit into VM (#5360)
* [WIP] add gas limit

* [WIP] host function escrow tests

* finish test

* uncomment out tests
2025-03-25 10:55:33 -04:00
Vlad
2bc5cb240f test: enable unit tests to work with variable reference fee (#5145)
Fix remaining unit tests to be able to process reference fee values other than 10.
2025-03-25 10:31:25 -04:00
pwang200
1573a443b7 smart escrow devnet 1 host functions (#5353)
* devnet 1 host functions

* clang-format

* fix build issues
2025-03-24 17:07:17 -04:00
Mayukha Vadari
911c0466c0 Merge develop into ripple/smart-escrow (#5357)
* Set version to 2.4.0

* refactor: Remove unused and add missing includes (#5293)

The codebase is filled with includes that are unused, and which thus can be removed. At the same time, the files often do not include all headers that contain the definitions used in those files. This change uses clang-format and clang-tidy to clean up the includes, with minor manual intervention to ensure the code compiles on all platforms.

* refactor: Calculate numFeatures automatically (#5324)

Requiring manual updates of numFeatures is an annoying manual process that is easily forgotten, and leads to frequent merge conflicts. This change takes advantage of the `XRPL_FEATURE` and `XRPL_FIX` macros, and adds a new `XRPL_RETIRE` macro to automatically set `numFeatures`.

* refactor: Improve ordering of headers with clang-format (#5343)

Removes all manual header groupings from source and header files by leveraging clang-format options.

* Rename "deadlock" to "stall" in `LoadManager` (#5341)

What the LoadManager class does is stall detection, which is not the same as deadlock detection. In the condition of severe CPU starvation, LoadManager will currently intentionally crash rippled reporting `LogicError: Deadlock detected`. This error message is misleading as the condition being detected is not a deadlock. This change fixes and refactors the code in response.

* Adds hub.xrpl-commons.org as a new Bootstrap Cluster (#5263)

* fix: Error message for ledger_entry rpc (#5344)

Changes the error to `malformedAddress` for `permissioned_domain` in the `ledger_entry` rpc, when the account is not a string. This change makes it more clear to a user what is wrong with their request.

* fix: Handle invalid marker parameter in grpc call (#5317)

The `end_marker` is used to limit the range of ledger entries to fetch. If `end_marker` is less than `marker`, a crash can occur. This change adds an additional check.

* fix: trust line RPC no ripple flag (#5345)

The Trustline RPC `no_ripple` flag gets set depending on `lsfDefaultRipple` flag, which is not a flag of a trustline but of the account root. The `lsfDefaultRipple` flag does not provide any insight if this particular trust line has `lsfLowNoRipple` or `lsfHighNoRipple` flag set, so it should not be used here at all. This change simplifies the logic.

* refactor: Updates Conan dependencies: RocksDB (#5335)

Updates RocksDB to version 9.7.3, the latest version supported in Conan 1.x. A patch for 9.7.4 that fixes a memory leak is included.

* fix: Remove null pointer deref, just do abort (#5338)

This change removes the existing undefined behavior from `LogicError`, so we can be certain that there will be always a stacktrace.

Dereferencing a null pointer is an old trick to generate `SIGSEGV`, which would typically also create a stacktrace. However, it is also undefined behaviour, and compilers can do something else. A more robust way to create a stacktrace while crashing the program is to use `std::abort`, which we have also used in this location for a long time. If we combine the two, we might not get the expected behaviour: the null pointer dereference followed by `std::abort`, as handled by certain compiler versions, may not immediately cause a crash. We have observed the stacktrace being wiped instead, the thread put in an indeterminate state, and then a stacktrace created without any useful information.

* chore: Add PR number to payload (#5310)

This PR adds one more payload field to the libXRPL compatibility check workflow - the PR number itself.

* chore: Update link to ripple-binary-codec (#5355)

The link to ripple-binary-codec's definitions.json appears to be outdated. The updated link is also documented here: https://xrpl.org/docs/references/protocol/binary-format#definitions-file

* Prevent consensus from getting stuck in the establish phase (#5277)

- Detects if the consensus process is "stalled". If it is, then we can declare a 
  consensus and end successfully even if we do not have 80% agreement on
  our proposal.
  - "Stalled" is defined as:
    - We have a close time consensus
    - Each disputed transaction is individually stalled:
      - It has been in the final "stuck" 95% requirement for at least 2
        (avMIN_ROUNDS) "inner rounds" of phaseEstablish,
      - and either all of the other trusted proposers or this validator, if proposing,
        have had the same vote(s) for at least 4 (avSTALLED_ROUNDS) "inner
        rounds", and at least 80% of the validators (including this one, if
        appropriate) agree about the vote (whether yes or no).
- If we have been in the establish phase for more than 10x the previous
  consensus establish phase's time, then consensus is considered "expired",
  and we will leave the round, which sends a partial validation (indicating
  that the node is moving on without validating). Two restrictions avoid
  prematurely exiting, or having an extended exit in extreme situations.
  - The 10x time is clamped to be within a range of 15s
    (ledgerMAX_CONSENSUS) to 120s (ledgerABANDON_CONSENSUS).
  - If consensus has not had an opportunity to walk through all avalanche
    states (defined as not going through 8 "inner rounds" of phaseEstablish),
    then ConsensusState::Expired is treated as ConsensusState::No.
- When enough nodes leave the round, any remaining nodes will see they've
  fallen behind, and move on, too, generally before hitting the timeout. Any
  validations or partial validations sent during this time will help the
  consensus process bring the nodes back together.

---------

Co-authored-by: Michael Legleux <mlegleux@ripple.com>
Co-authored-by: Bart <bthomee@users.noreply.github.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Darius Tumas <Tokeiito@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
Co-authored-by: cyan317 <120398799+cindyyan317@users.noreply.github.com>
Co-authored-by: Vlad <129996061+vvysokikh1@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
2025-03-20 16:47:14 -04:00
Mayukha Vadari
b6a95f9970 PoC Smart Escrows (#5340)
* wasmedge in unittest

* add WashVM.h and cpp

* accountID comparison (vector<u8>) working

* json decode tx and ledger object with two buffers working

* wasm return a buffer working

* add a failure test case to P2P3

* host function return ledger sqn

* instruction gas and host function gas

* basics

* add scaffold

* add amendment check

* working PoC

* get test working

* fix clang-format

* prototype #2

* p2p3

* [WIP] P4

* P5

* add calculateBaseFee

* add FinishFunction preflight checks (+ tests)

* additional reserve for sfFinishFunction

* higher fees for EscrowFinish

* rename amendment to SmartEscrow

* make fee voting changes, add basic tests

* clean up

* clean up

* clean up

* more cleanup

* add subscribe tests

* add more tests

* undo formatting

* undo formatting

* remove bad comment

* more debugging statements

* fix clang-format

* fix rebase issues

* fix more rebase issues

* more rebase fixes

* add source code for wasm

* respond to comments

* add const

---------

Co-authored-by: Peng Wang <pwang200@gmail.com>
2025-03-20 14:08:06 -04:00
554 changed files with 63284 additions and 14443 deletions


@@ -1,5 +1,5 @@
---
Language: Cpp
Language: Cpp
AccessModifierOffset: -4
AlignAfterOpenBracket: AlwaysBreak
AlignConsecutiveAssignments: false
@@ -19,52 +19,52 @@ AlwaysBreakTemplateDeclarations: true
BinPackArguments: false
BinPackParameters: false
BraceWrapping:
AfterClass: true
AfterClass: true
AfterControlStatement: true
AfterEnum: false
AfterFunction: true
AfterNamespace: false
AfterEnum: false
AfterFunction: true
AfterNamespace: false
AfterObjCDeclaration: true
AfterStruct: true
AfterUnion: true
BeforeCatch: true
BeforeElse: true
IndentBraces: false
AfterStruct: true
AfterUnion: true
BeforeCatch: true
BeforeElse: true
IndentBraces: false
BreakBeforeBinaryOperators: false
BreakBeforeBraces: Custom
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
ColumnLimit: 80
CommentPragmas: "^ IWYU pragma:"
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
IncludeBlocks: Regroup
ForEachMacros: [Q_FOREACH, BOOST_FOREACH]
IncludeBlocks: Regroup
IncludeCategories:
- Regex: '^<(test)/'
Priority: 0
- Regex: '^<(xrpld)/'
Priority: 1
- Regex: '^<(xrpl)/'
Priority: 2
- Regex: '^<(boost)/'
Priority: 3
- Regex: '^.*/'
Priority: 4
- Regex: '^.*\.h'
Priority: 5
- Regex: '.*'
Priority: 6
IncludeIsMainRegex: '$'
- Regex: "^<(test)/"
Priority: 0
- Regex: "^<(xrpld)/"
Priority: 1
- Regex: "^<(xrpl)/"
Priority: 2
- Regex: "^<(boost)/"
Priority: 3
- Regex: "^.*/"
Priority: 4
- Regex: '^.*\.h'
Priority: 5
- Regex: ".*"
Priority: 6
IncludeIsMainRegex: "$"
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
IndentRequiresClause: true
IndentWidth: 4
IndentWidth: 4
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: false
MaxEmptyLinesToKeep: 1
@@ -78,19 +78,20 @@ PenaltyBreakString: 1000
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
ReflowComments: true
ReflowComments: true
RequiresClausePosition: OwnLine
SortIncludes: true
SortIncludes: true
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true
SpaceBeforeParens: ControlStatements
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: false
SpacesInAngles: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp11
TabWidth: 8
UseTab: Never
Standard: Cpp11
TabWidth: 8
UseTab: Never
QualifierAlignment: Right


@@ -7,13 +7,13 @@ comment:
show_carryforward_flags: false
coverage:
range: "60..80"
range: "70..85"
precision: 1
round: nearest
status:
project:
default:
target: 60%
target: 75%
threshold: 2%
patch:
default:

.github/CODEOWNERS (new file)

@@ -0,0 +1,2 @@
# Allow anyone to review any change by default.
*


@@ -17,6 +17,7 @@ runs:
conan export external/rocksdb rocksdb/9.7.3@
conan export external/soci soci/4.0.3@
conan export external/nudb nudb/2.0.8@
conan export external/wamr wamr/2.3.1@
- name: add Ripple Conan remote
shell: bash
run: |


@@ -1,9 +1,13 @@
name: clang-format
on: [push, pull_request]
on:
push:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
jobs:
check:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
runs-on: ubuntu-24.04
env:
CLANG_VERSION: 18
@@ -20,7 +24,7 @@ jobs:
sudo apt-get update
sudo apt-get install clang-format-${CLANG_VERSION}
- name: Format first-party sources
run: find include src -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-${CLANG_VERSION} -i {} +
run: find include src tests -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-${CLANG_VERSION} -i {} +
- name: Check for differences
id: assert
run: |


@@ -10,11 +10,11 @@ concurrency:
cancel-in-progress: true
jobs:
job:
documentation:
runs-on: ubuntu-latest
permissions:
contents: write
container: rippleci/rippled-build-ubuntu:aaf5e3e
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
steps:
- name: checkout
uses: actions/checkout@v4


@@ -1,9 +1,13 @@
name: levelization
on: [push, pull_request]
on:
push:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
jobs:
check:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
runs-on: ubuntu-latest
env:
CLANG_VERSION: 10


@@ -1,6 +1,6 @@
name: Check libXRPL compatibility with Clio
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/dev
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
on:
@@ -8,19 +8,21 @@ on:
paths:
- 'src/libxrpl/protocol/BuildInfo.cpp'
- '.github/workflows/libxrpl.yml'
types: [opened, reopened, synchronize, ready_for_review]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
publish:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
name: Publish libXRPL
outputs:
outcome: ${{ steps.upload.outputs.outcome }}
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.channel.outputs.channel }}
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
steps:
- name: Wait for essential checks to succeed
uses: lewagon/wait-on-check-action@v1.3.4


@@ -1,6 +1,7 @@
name: macos
on:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows, instrumentation)
@@ -18,6 +19,7 @@ concurrency:
jobs:
test:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
strategy:
matrix:
platform:
@@ -69,6 +71,9 @@ jobs:
nproc --version
echo -n "nproc returns: "
nproc
system_profiler SPHardwareDataType
sysctl -n hw.logicalcpu
clang --version
- name: configure Conan
run : |
conan profile new default --detect || true
@@ -91,4 +96,7 @@ jobs:
run: |
n=$(nproc)
echo "Using $n test jobs"
${build_dir}/rippled --unittest --unittest-jobs $n
cd ${build_dir}
./rippled --unittest --unittest-jobs $n
ctest -j $n --output-on-failure


@@ -1,6 +1,7 @@
name: nix
on:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows)
@@ -39,6 +40,7 @@ concurrency:
jobs:
dependencies:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
strategy:
fail-fast: false
matrix:
@@ -62,7 +64,7 @@ jobs:
cc: /usr/bin/clang-14
cxx: /usr/bin/clang++-14
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
@@ -124,7 +126,7 @@ jobs:
- "-Dunity=ON"
needs: dependencies
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
@@ -161,8 +163,65 @@ jobs:
cmake-args: "-Dassert=TRUE -Dwerr=TRUE ${{ matrix.cmake-args }}"
- name: test
run: |
${build_dir}/rippled --unittest --unittest-jobs $(nproc)
cd ${build_dir}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
reference-fee-test:
strategy:
fail-fast: false
matrix:
platform:
- linux
compiler:
- gcc
configuration:
- Debug
cmake-args:
- "-DUNIT_TEST_REFERENCE_FEE=200"
- "-DUNIT_TEST_REFERENCE_FEE=1000"
needs: dependencies
runs-on: [self-hosted, heavy]
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
- name: upgrade conan
run: |
pip install --upgrade "conan<2"
- name: download cache
uses: actions/download-artifact@v4
with:
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
- name: extract cache
run: |
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: check environment
run: |
env | sort
echo ${PATH} | tr ':' '\n'
conan --version
cmake --version
- name: checkout
uses: actions/checkout@v4
- name: dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
with:
configuration: ${{ matrix.configuration }}
- name: build
uses: ./.github/actions/build
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
cmake-args: "-Dassert=TRUE -Dwerr=TRUE ${{ matrix.cmake-args }}"
- name: test
run: |
cd ${build_dir}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
coverage:
strategy:
fail-fast: false
@@ -175,7 +234,7 @@ jobs:
- Debug
needs: dependencies
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
@@ -191,7 +250,7 @@ jobs:
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: install gcovr
run: pip install "gcovr>=7,<8"
run: pip install "gcovr>=7,<9"
- name: check environment
run: |
echo ${PATH} | tr ':' '\n'
@@ -249,7 +308,7 @@ jobs:
conan:
needs: dependencies
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
configuration: Release
@@ -288,7 +347,7 @@ jobs:
echo "reference=${reference}" >> "${GITHUB_ENV}"
- name: build
run: |
cd examples/example
cd tests/conan
mkdir ${build_dir}
cd ${build_dir}
conan install .. --output-folder . \
@@ -304,6 +363,7 @@ jobs:
# later
instrumentation-build:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
env:
CLANG_RELEASE: 16
strategy:
@@ -351,6 +411,7 @@ jobs:
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
conan export external/snappy snappy/1.1.10@
conan export external/soci soci/4.0.3@
conan export external/wamr wamr/2.3.1@
- name: build dependencies
run: |
@@ -384,3 +445,4 @@ jobs:
run: |
cd ${BUILD_DIR}
./rippled -u --unittest-jobs $(( $(nproc)/4 ))
ctest -j $(nproc) --output-on-failure


@@ -2,6 +2,7 @@ name: windows
on:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows, instrumentation)
@@ -21,6 +22,7 @@ concurrency:
jobs:
test:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
strategy:
fail-fast: false
matrix:
@@ -93,5 +95,6 @@ jobs:
shell: bash
if: ${{ matrix.configuration.tests }}
run: |
${build_dir}/${{ matrix.configuration.type }}/rippled --unittest \
--unittest-jobs $(nproc)
cd ${build_dir}/${{ matrix.configuration.type }}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure


@@ -83,9 +83,17 @@ The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conve
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.5.0
As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.5.0
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled.
## XRP Ledger server version 2.4.0
As of 2025-01-28, version 2.4.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
[Version 2.4.0](https://github.com/XRPLF/rippled/releases/tag/2.4.0) was released on March 4, 2025.
### Additions and bugfixes in 2.4.0


@@ -204,6 +204,17 @@ It fixes some source files to add missing `#include`s.
conan export --version 2.0.8 external/nudb
```
Export our [Conan recipe for WAMR](./external/wamr).
It adds metering and exposes some internal structures.
```
# Conan 1.x
conan export external/wamr wamr/2.3.1@
# Conan 2.x
conan export --version 2.3.1 external/wamr
```
### Build and Test
1. Create a build directory and move into it.
@@ -288,7 +299,7 @@ It fixes some source files to add missing `#include`s.
Single-config generators:
```
cmake --build .
cmake --build . -j $(nproc)
```
Multi-config generators:


@@ -132,6 +132,7 @@ test.shamap > xrpl.protocol
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
tests.libxrpl > xrpl.basics
xrpl.json > xrpl.basics
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json


@@ -16,6 +16,18 @@ set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif(MSVC)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif()
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
@@ -78,6 +90,11 @@ set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
set(SECP256K1_BUILD_BENCHMARK FALSE)
set(SECP256K1_BUILD_TESTS FALSE)
set(SECP256K1_BUILD_EXHAUSTIVE_TESTS FALSE)
set(SECP256K1_BUILD_CTIME_TESTS FALSE)
set(SECP256K1_BUILD_EXAMPLES FALSE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
@@ -103,6 +120,7 @@ endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
find_package(xxHash REQUIRED)
find_package(wamr REQUIRED)
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
@@ -132,3 +150,8 @@ set(PROJECT_EXPORT_SET RippleExports)
include(RippledCore)
include(RippledInstall)
include(RippledValidatorKeys)
if(tests)
include(CTest)
add_subdirectory(src/tests/libxrpl)
endif()


@@ -1,3 +1,5 @@
[![codecov](https://codecov.io/gh/XRPLF/rippled/graph/badge.svg?token=WyFr5ajq3O)](https://codecov.io/gh/XRPLF/rippled)
# The XRP Ledger
The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.

File diff suppressed because it is too large.


@@ -83,7 +83,7 @@ To report a qualifying bug, please send a detailed report to:
|Long Key ID | `0xCD49A0AFC57929BE` |
|Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keys.gnupg.net](https://keys.gnupg.net)), is:
The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt


@@ -1249,6 +1249,39 @@
# Example:
# owner_reserve = 2000000 # 2 XRP
#
# extension_compute_limit = <gas>
#
# The extension compute limit is the maximum amount of gas that can be
# consumed by a single transaction. The gas limit is used to prevent
# transactions from consuming too many resources.
#
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# extension_compute_limit = 1000000 # 1 million gas
#
# extension_size_limit = <bytes>
#
# The extension size limit is the maximum size of a WASM extension in
# bytes. The size limit is used to prevent extensions from consuming
# too many resources.
#
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# extension_size_limit = 100000 # 100 kb
#
# gas_price = <drops>
#
# The gas price is the conversion between WASM gas and its price in drops.
#
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# gas_price = 1000000 # 1 drop per gas
#-------------------------------------------------------------------------------
#
# 9. Misc Settings


@@ -26,7 +26,7 @@
#
# Examples:
# https://vl.ripple.com
# https://vl.xrplf.org
# https://unl.xrplf.org
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
#


@@ -98,6 +98,9 @@
# 2024-04-03, Bronek Kozicki
# - add support for output formats: jacoco, clover, lcov
#
# 2025-05-12, Jingchen Wu
# - add -fprofile-update=atomic to ensure atomic profile generation
#
# USAGE:
#
# 1. Copy this file into your cmake modules path.
@@ -200,15 +203,27 @@ set(COVERAGE_COMPILER_FLAGS "-g --coverage"
CACHE INTERNAL "")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
check_cxx_compiler_flag(-fprofile-abs-path HAVE_cxx_fprofile_abs_path)
if(HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
include(CheckCCompilerFlag)
check_c_compiler_flag(-fprofile-abs-path HAVE_c_fprofile_abs_path)
if(HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
check_cxx_compiler_flag(-fprofile-update HAVE_cxx_fprofile_update)
if(HAVE_cxx_fprofile_update)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-update=atomic")
endif()
check_c_compiler_flag(-fprofile-update HAVE_c_fprofile_update)
if(HAVE_c_fprofile_update)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-update=atomic")
endif()
endif()
set(CMAKE_Fortran_FLAGS_COVERAGE


@@ -65,6 +65,7 @@ target_link_libraries(xrpl.imports.main
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
wamr::wamr
)
include(add_module)
@@ -136,6 +137,9 @@ if(xrpld)
add_executable(rippled)
if(tests)
target_compile_definitions(rippled PUBLIC ENABLE_TESTS)
target_compile_definitions(rippled PRIVATE
UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE}
)
endif()
target_include_directories(rippled
PRIVATE


@@ -53,9 +53,9 @@ set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(WRITE
"${download_script}"
"file(DOWNLOAD \
http://upload.cppreference.com/mwiki/images/b/b2/html_book_20190607.zip \
https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \
EXPECTED_HASH MD5=82b3a612d7d35a83e3cb1195a63689ab \
EXPECTED_HASH MD5=bda585f72fbca4b817b29a3d5746567b \
)\n \
execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \


@@ -2,16 +2,6 @@
convenience variables and sanity checks
#]===================================================================]
include(ProcessorCount)
if (NOT ep_procs)
ProcessorCount(ep_procs)
if (ep_procs GREATER 1)
# never use more than half of cores for EP builds
math (EXPR ep_procs "${ep_procs} / 2")
message (STATUS "Using ${ep_procs} cores for ExternalProject builds.")
endif ()
endif ()
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)

View File

@@ -11,6 +11,12 @@ option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
if(tests)
# This setting allows a separate workflow to test fees other than the default of 10
if(NOT UNIT_TEST_REFERENCE_FEE)
set(UNIT_TEST_REFERENCE_FEE "10" CACHE STRING "")
endif()
endif()
option(unity "Creates a build using UNITY support in cmake. This is the default" ON)
if(unity)

cmake/xrpl_add_test.cmake Normal file
View File

@@ -0,0 +1,41 @@
include(isolate_headers)
function(xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name})
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
add_executable(${target} EXCLUDE_FROM_ALL ${ARGN} ${sources})
isolate_headers(
${target}
"${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/tests/${name}"
PRIVATE
)
# Make sure the test isn't optimized away in unity builds
set_target_properties(${target} PROPERTIES
UNITY_BUILD_MODE GROUP
UNITY_BUILD_BATCH_SIZE 0) # Adjust as needed
add_test(NAME ${target} COMMAND ${target})
set_tests_properties(
${target} PROPERTIES
FIXTURES_REQUIRED ${target}_fixture
)
add_test(
NAME ${target}.build
COMMAND
${CMAKE_COMMAND}
--build ${CMAKE_BINARY_DIR}
--config $<CONFIG>
--target ${target}
)
set_tests_properties(${target}.build PROPERTIES
FIXTURES_SETUP ${target}_fixture
)
endfunction()

View File

@@ -1,7 +1,8 @@
from conan import ConanFile
from conan import ConanFile, __version__ as conan_version
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
class Xrpl(ConanFile):
name = 'xrpl'
@@ -24,14 +25,14 @@ class Xrpl(ConanFile):
}
requires = [
'date/3.0.3',
'doctest/2.4.11',
'grpc/1.50.1',
'libarchive/3.7.6',
'nudb/2.0.8',
'openssl/1.1.1v',
'soci/4.0.3',
'xxhash/0.8.2',
'zlib/1.3.1',
'wamr/2.3.1',
]
tool_requires = [
@@ -99,7 +100,10 @@ class Xrpl(ConanFile):
self.options['boost'].visibility = 'global'
def requirements(self):
self.requires('boost/1.83.0', force=True)
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = {'transitive_headers': True} if conan_version.split('.')[0] == '2' else {}
self.requires('boost/1.83.0', force=True, **transitive_headers_opt)
self.requires('date/3.0.3', **transitive_headers_opt)
self.requires('lz4/1.10.0', force=True)
self.requires('protobuf/3.21.9', force=True)
self.requires('sqlite3/3.47.0', force=True)
@@ -107,6 +111,7 @@ class Xrpl(ConanFile):
self.requires('jemalloc/5.3.0')
if self.options.rocksdb:
self.requires('rocksdb/9.7.3')
self.requires('xxhash/0.8.2', **transitive_headers_opt)
exports_sources = (
'CMakeLists.txt',
@@ -125,6 +130,7 @@ class Xrpl(ConanFile):
self.folders.generators = 'build/generators'
generators = 'CMakeDeps'
def generate(self):
tc = CMakeToolchain(self)
tc.variables['tests'] = self.options.tests

View File

@@ -23,7 +23,7 @@ direction.
```
apt update
apt install --yes curl git libssl-dev python3.10-dev python3-pip make g++-11 libprotobuf-dev protobuf-compiler
apt install --yes curl git libssl-dev pipx python3.10-dev python3-pip make g++-11 libprotobuf-dev protobuf-compiler
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
@@ -35,7 +35,8 @@ make --jobs $(nproc)
make install
cd ..
pip3 install 'conan<2'
pipx install 'conan<2'
pipx ensurepath
```
[1]: https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh

external/wamr/conandata.yml vendored Normal file
View File

@@ -0,0 +1,6 @@
patches:
2.3.1:
- patch_description: add metering to iwasm interpreter
patch_file: patches/ripp_metering.patch
patch_type: conan

external/wamr/conanfile.py vendored Normal file
View File

@@ -0,0 +1,92 @@
from conans import ConanFile, tools
from conan.tools.cmake import CMake, CMakeToolchain, CMakeDeps, cmake_layout
from conan.tools.files import (
apply_conandata_patches,
export_conandata_patches,
# get,
)
# import os
required_conan_version = ">=1.55.0"
class WamrConan(ConanFile):
name = "wamr"
version = "2.3.1"
license = "Apache License v2.0"
url = "https://github.com/bytecodealliance/wasm-micro-runtime.git"
description = "Webassembly micro runtime"
package_type = "library"
settings = "os", "compiler", "build_type", "arch"
options = {"shared": [True, False], "fPIC": [True, False]}
default_options = {"shared": False, "fPIC": True}
generators = "CMakeToolchain", "CMakeDeps"
# requires = [("llvm/20.1.1@")]
def export_sources(self):
export_conandata_patches(self)
pass
# def build_requirements(self):
# self.tool_requires("llvm/20.1.1")
def config_options(self):
if self.settings.os == "Windows":
del self.options.fPIC
def layout(self):
cmake_layout(self, src_folder="src")
def source(self):
git = tools.Git()
git.clone(
"https://github.com/bytecodealliance/wasm-micro-runtime.git",
"2a303861cc916dc182b7fecaa0aacc1b797e7ac6",
shallow=True,
)
# get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["WAMR_BUILD_INTERP"] = 1
tc.variables["WAMR_BUILD_FAST_INTERP"] = 1
tc.variables["WAMR_BUILD_INSTRUCTION_METERING"] = 1
tc.variables["WAMR_BUILD_AOT"] = 0
tc.variables["WAMR_BUILD_JIT"] = 0
tc.variables["WAMR_BUILD_FAST_JIT"] = 0
tc.variables["WAMR_DISABLE_HW_BOUND_CHECK"] = 1
tc.variables["WAMR_DISABLE_STACK_HW_BOUND_CHECK"] = 1
tc.variables["WAMR_BH_LOG"] = "wamr_log_to_rippled"
# tc.variables["WAMR_BUILD_FAST_JIT"] = 0 if self.settings.os == "Windows" else 1
# ll_dep = self.dependencies["llvm"]
# self.output.info(f"-----------package_folder: {type(ll_dep.__dict__)}")
# tc.variables["LLVM_DIR"] = os.path.join(ll_dep.package_folder, "lib", "cmake", "llvm")
tc.generate()
# This generates "foo-config.cmake" and "bar-config.cmake" in self.generators_folder
deps = CMakeDeps(self)
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.verbose = True
cmake.configure()
cmake.build()
# self.run(f'echo {self.source_folder}')
# Explicit way:
# self.run('cmake %s/hello %s' % (self.source_folder, cmake.command_line))
# self.run("cmake --build . %s" % cmake.build_config)
def package(self):
cmake = CMake(self)
cmake.verbose = True
cmake.install()
def package_info(self):
self.cpp_info.libs = ["iwasm"]
self.cpp_info.names["cmake_find_package"] = "wamr"
self.cpp_info.names["cmake_find_package_multi"] = "wamr"

View File

@@ -0,0 +1,724 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 88a1642b..aeb29912 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,7 +1,7 @@
# Copyright (C) 2019 Intel Corporation. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-cmake_minimum_required (VERSION 3.14)
+cmake_minimum_required (VERSION 3.20)
option(BUILD_SHARED_LIBS "Build using shared libraries" OFF)
diff --git a/core/iwasm/aot/aot_runtime.c b/core/iwasm/aot/aot_runtime.c
index b2c9ed62..87947a18 100644
--- a/core/iwasm/aot/aot_runtime.c
+++ b/core/iwasm/aot/aot_runtime.c
@@ -5484,7 +5484,7 @@ aot_resolve_import_func(AOTModule *module, AOTImportFunc *import_func)
import_func->func_ptr_linked = wasm_native_resolve_symbol(
import_func->module_name, import_func->func_name,
import_func->func_type, &import_func->signature,
- &import_func->attachment, &import_func->call_conv_raw);
+ &import_func->attachment, NULL, &import_func->call_conv_raw);
#if WASM_ENABLE_MULTI_MODULE != 0
if (!import_func->func_ptr_linked) {
if (!wasm_runtime_is_built_in_module(import_func->module_name)) {
diff --git a/core/iwasm/common/wasm_c_api.c b/core/iwasm/common/wasm_c_api.c
index 269ec577..34eb7c34 100644
--- a/core/iwasm/common/wasm_c_api.c
+++ b/core/iwasm/common/wasm_c_api.c
@@ -3242,10 +3242,20 @@ wasm_func_copy(const wasm_func_t *func)
cloned->func_idx_rt = func->func_idx_rt;
cloned->inst_comm_rt = func->inst_comm_rt;
+ cloned->gas = func->gas;
RETURN_OBJ(cloned, wasm_func_delete)
}
+uint32_t
+wasm_func_set_gas(wasm_func_t *func, uint32_t gas)
+{
+ if(!func) return 0;
+
+ func->gas = gas;
+ return gas;
+}
+
own wasm_functype_t *
wasm_func_type(const wasm_func_t *func)
{
@@ -4998,11 +5008,11 @@ wasm_instance_new_with_args_ex(wasm_store_t *store, const wasm_module_t *module,
goto failed;
}
+ WASMModuleInstance *wasm_module_inst = NULL;
/* create the c-api func import list */
#if WASM_ENABLE_INTERP != 0
if (instance->inst_comm_rt->module_type == Wasm_Module_Bytecode) {
- WASMModuleInstance *wasm_module_inst =
- (WASMModuleInstance *)instance->inst_comm_rt;
+ wasm_module_inst = (WASMModuleInstance *)instance->inst_comm_rt;
p_func_imports = &(wasm_module_inst->c_api_func_imports);
import_func_count = MODULE_INTERP(module)->import_function_count;
}
@@ -5052,6 +5062,13 @@ wasm_instance_new_with_args_ex(wasm_store_t *store, const wasm_module_t *module,
}
bh_assert(func_import->func_ptr_linked);
+ // fill gas
+ if(wasm_module_inst) {
+ WASMFunctionInstance *fi = wasm_module_inst->e->functions + func_host->func_idx_rt;
+ if(fi) fi->gas = func_host->gas;
+ }
+
+
func_import++;
}
@@ -5389,3 +5406,8 @@ wasm_instance_get_wasm_func_exec_time(const wasm_instance_t *instance,
return -1.0;
#endif
}
+
+wasm_exec_env_t wasm_instance_exec_env(const wasm_instance_t *instance)
+{
+ return wasm_runtime_get_exec_env_singleton(instance->inst_comm_rt);
+}
diff --git a/core/iwasm/common/wasm_c_api_internal.h b/core/iwasm/common/wasm_c_api_internal.h
index 49a17a96..19a85980 100644
--- a/core/iwasm/common/wasm_c_api_internal.h
+++ b/core/iwasm/common/wasm_c_api_internal.h
@@ -142,6 +142,10 @@ struct wasm_func_t {
void (*finalizer)(void *);
} cb_env;
} u;
+
+ // gas cost for import func
+ uint32 gas;
+
/*
* an index in both functions runtime instance lists
* of interpreter mode and aot mode
diff --git a/core/iwasm/common/wasm_exec_env.c b/core/iwasm/common/wasm_exec_env.c
index 47752950..d5821c4f 100644
--- a/core/iwasm/common/wasm_exec_env.c
+++ b/core/iwasm/common/wasm_exec_env.c
@@ -86,7 +86,9 @@ wasm_exec_env_create_internal(struct WASMModuleInstanceCommon *module_inst,
#endif
#if WASM_ENABLE_INSTRUCTION_METERING != 0
- exec_env->instructions_to_execute = -1;
+ exec_env->instructions_to_execute = INT64_MAX;
+ for(int i = 0; i < 256; ++i)
+ exec_env->instructions_schedule[i] = 1;
#endif
return exec_env;
diff --git a/core/iwasm/common/wasm_exec_env.h b/core/iwasm/common/wasm_exec_env.h
index 5d80312f..2713a092 100644
--- a/core/iwasm/common/wasm_exec_env.h
+++ b/core/iwasm/common/wasm_exec_env.h
@@ -89,7 +89,8 @@ typedef struct WASMExecEnv {
#if WASM_ENABLE_INSTRUCTION_METERING != 0
/* instructions to execute */
- int instructions_to_execute;
+ int64 instructions_to_execute;
+ int64 instructions_schedule[256];
#endif
#if WASM_ENABLE_FAST_JIT != 0
diff --git a/core/iwasm/common/wasm_native.c b/core/iwasm/common/wasm_native.c
index 060bb2c3..9221c36a 100644
--- a/core/iwasm/common/wasm_native.c
+++ b/core/iwasm/common/wasm_native.c
@@ -180,9 +180,9 @@ native_symbol_cmp(const void *native_symbol1, const void *native_symbol2)
((const NativeSymbol *)native_symbol2)->symbol);
}
-static void *
+static NativeSymbol *
lookup_symbol(NativeSymbol *native_symbols, uint32 n_native_symbols,
- const char *symbol, const char **p_signature, void **p_attachment)
+ const char *symbol)
{
NativeSymbol *native_symbol, key = { 0 };
@@ -190,9 +190,7 @@ lookup_symbol(NativeSymbol *native_symbols, uint32 n_native_symbols,
if ((native_symbol = bsearch(&key, native_symbols, n_native_symbols,
sizeof(NativeSymbol), native_symbol_cmp))) {
- *p_signature = native_symbol->signature;
- *p_attachment = native_symbol->attachment;
- return native_symbol->func_ptr;
+ return native_symbol;
}
return NULL;
@@ -205,25 +203,36 @@ lookup_symbol(NativeSymbol *native_symbols, uint32 n_native_symbols,
void *
wasm_native_resolve_symbol(const char *module_name, const char *field_name,
const WASMFuncType *func_type,
- const char **p_signature, void **p_attachment,
+ const char **p_signature, void **p_attachment, uint32_t *gas,
bool *p_call_conv_raw)
{
NativeSymbolsNode *node, *node_next;
const char *signature = NULL;
void *func_ptr = NULL, *attachment = NULL;
+ NativeSymbol *native_symbol = NULL;
node = g_native_symbols_list;
while (node) {
node_next = node->next;
if (!strcmp(node->module_name, module_name)) {
- if ((func_ptr =
+ if ((native_symbol =
lookup_symbol(node->native_symbols, node->n_native_symbols,
- field_name, &signature, &attachment))
+ field_name))
|| (field_name[0] == '_'
- && (func_ptr = lookup_symbol(
+ && (native_symbol = lookup_symbol(
node->native_symbols, node->n_native_symbols,
- field_name + 1, &signature, &attachment))))
- break;
+ field_name + 1))))
+ {
+ func_ptr = native_symbol->func_ptr;
+ if(func_ptr)
+ {
+ if(gas)
+ *gas = native_symbol->gas;
+ signature = native_symbol->signature;
+ attachment = native_symbol->attachment;
+ break;
+ }
+ }
}
node = node_next;
}
diff --git a/core/iwasm/common/wasm_native.h b/core/iwasm/common/wasm_native.h
index 9a6afee1..0fe4739f 100644
--- a/core/iwasm/common/wasm_native.h
+++ b/core/iwasm/common/wasm_native.h
@@ -52,7 +52,7 @@ wasm_native_lookup_libc_builtin_global(const char *module_name,
void *
wasm_native_resolve_symbol(const char *module_name, const char *field_name,
const WASMFuncType *func_type,
- const char **p_signature, void **p_attachment,
+ const char **p_signature, void **p_attachment, uint32_t *gas,
bool *p_call_conv_raw);
bool
diff --git a/core/iwasm/common/wasm_runtime_common.c b/core/iwasm/common/wasm_runtime_common.c
index dcee0aea..10c516d6 100644
--- a/core/iwasm/common/wasm_runtime_common.c
+++ b/core/iwasm/common/wasm_runtime_common.c
@@ -2288,10 +2288,26 @@ wasm_runtime_access_exce_check_guard_page()
#if WASM_ENABLE_INSTRUCTION_METERING != 0
void
wasm_runtime_set_instruction_count_limit(WASMExecEnv *exec_env,
- int instructions_to_execute)
+ int64 instructions_to_execute)
{
+ if(instructions_to_execute == -1)
+ instructions_to_execute = INT64_MAX;
exec_env->instructions_to_execute = instructions_to_execute;
}
+
+int64
+wasm_runtime_get_instruction_count_limit(WASMExecEnv *exec_env)
+{
+ return exec_env->instructions_to_execute;
+}
+
+void
+wasm_runtime_set_instruction_schedule(WASMExecEnv *exec_env,
+ int64 const *instructions_schedule)
+{
+ for(int i = 0; i < 256; ++i)
+ exec_env->instructions_schedule[i] = instructions_schedule[i];
+}
#endif
WASMFuncType *
@@ -7348,7 +7364,7 @@ wasm_runtime_is_import_func_linked(const char *module_name,
const char *func_name)
{
return wasm_native_resolve_symbol(module_name, func_name, NULL, NULL, NULL,
- NULL);
+ NULL, NULL);
}
bool
@@ -7805,13 +7821,14 @@ wasm_runtime_get_module_name(wasm_module_t module)
bool
wasm_runtime_detect_native_stack_overflow(WASMExecEnv *exec_env)
{
+#if WASM_DISABLE_STACK_HW_BOUND_CHECK == 0
uint8 *boundary = exec_env->native_stack_boundary;
RECORD_STACK_USAGE(exec_env, (uint8 *)&boundary);
if (boundary == NULL) {
/* the platform doesn't support os_thread_get_stack_boundary */
return true;
}
-#if defined(OS_ENABLE_HW_BOUND_CHECK) && WASM_DISABLE_STACK_HW_BOUND_CHECK == 0
+#if defined(OS_ENABLE_HW_BOUND_CHECK)
uint32 page_size = os_getpagesize();
uint32 guard_page_count = STACK_OVERFLOW_CHECK_GUARD_PAGE_COUNT;
boundary = boundary + page_size * guard_page_count;
@@ -7821,6 +7838,7 @@ wasm_runtime_detect_native_stack_overflow(WASMExecEnv *exec_env)
"native stack overflow");
return false;
}
+#endif
return true;
}
@@ -7843,7 +7861,7 @@ wasm_runtime_detect_native_stack_overflow_size(WASMExecEnv *exec_env,
boundary = boundary - WASM_STACK_GUARD_SIZE + requested_size;
if ((uint8 *)&boundary < boundary) {
wasm_runtime_set_exception(wasm_runtime_get_module_inst(exec_env),
- "native stack overflow");
+ "native s stack overflow");
return false;
}
return true;
diff --git a/core/iwasm/common/wasm_runtime_common.h b/core/iwasm/common/wasm_runtime_common.h
index 64a6cd79..f4c55e2f 100644
--- a/core/iwasm/common/wasm_runtime_common.h
+++ b/core/iwasm/common/wasm_runtime_common.h
@@ -795,7 +795,14 @@ wasm_runtime_set_native_stack_boundary(WASMExecEnv *exec_env,
/* See wasm_export.h for description */
WASM_RUNTIME_API_EXTERN void
wasm_runtime_set_instruction_count_limit(WASMExecEnv *exec_env,
- int instructions_to_execute);
+ int64 instructions_to_execute);
+WASM_RUNTIME_API_EXTERN int64
+wasm_runtime_get_instruction_count_limit(WASMExecEnv *exec_env);
+
+WASM_RUNTIME_API_EXTERN void
+wasm_runtime_set_instruction_schedule(WASMExecEnv *exec_env,
+ int64 const *instructions_schedule);
+
#endif
#if WASM_CONFIGURABLE_BOUNDS_CHECKS != 0
diff --git a/core/iwasm/include/lib_export.h b/core/iwasm/include/lib_export.h
index 0ca668f5..93bcf807 100644
--- a/core/iwasm/include/lib_export.h
+++ b/core/iwasm/include/lib_export.h
@@ -24,6 +24,8 @@ typedef struct NativeSymbol {
/* attachment which can be retrieved in native API by
calling wasm_runtime_get_function_attachment(exec_env) */
void *attachment;
+ // gas cost for import func
+ uint32_t gas;
} NativeSymbol;
/* clang-format off */
diff --git a/core/iwasm/include/wasm_c_api.h b/core/iwasm/include/wasm_c_api.h
index 241a0eec..1141744c 100644
--- a/core/iwasm/include/wasm_c_api.h
+++ b/core/iwasm/include/wasm_c_api.h
@@ -19,8 +19,10 @@
#if defined(_MSC_BUILD)
#if defined(COMPILING_WASM_RUNTIME_API)
#define WASM_API_EXTERN __declspec(dllexport)
-#else
+#elif defined(_DLL)
#define WASM_API_EXTERN __declspec(dllimport)
+#else
+#define WASM_API_EXTERN
#endif
#else
#define WASM_API_EXTERN
@@ -592,6 +594,8 @@ WASM_API_EXTERN size_t wasm_func_result_arity(const wasm_func_t*);
WASM_API_EXTERN own wasm_trap_t* wasm_func_call(
const wasm_func_t*, const wasm_val_vec_t* args, wasm_val_vec_t* results);
+WASM_API_EXTERN own uint32_t wasm_func_set_gas(wasm_func_t*, uint32_t);
+
// Global Instances
@@ -701,6 +705,11 @@ WASM_API_EXTERN double wasm_instance_sum_wasm_exec_time(const wasm_instance_t*);
// func_name. If the function is not found, return 0.
WASM_API_EXTERN double wasm_instance_get_wasm_func_exec_time(const wasm_instance_t*, const char *);
+struct WASMExecEnv;
+typedef struct WASMExecEnv *wasm_exec_env_t;
+
+WASM_API_EXTERN wasm_exec_env_t wasm_instance_exec_env(const wasm_instance_t*);
+
///////////////////////////////////////////////////////////////////////////////
// Convenience
diff --git a/core/iwasm/include/wasm_export.h b/core/iwasm/include/wasm_export.h
index b4ab34be..3fd0949f 100644
--- a/core/iwasm/include/wasm_export.h
+++ b/core/iwasm/include/wasm_export.h
@@ -20,8 +20,10 @@
#if defined(_MSC_BUILD)
#if defined(COMPILING_WASM_RUNTIME_API)
#define WASM_RUNTIME_API_EXTERN __declspec(dllexport)
-#else
+#elif defined(_DLL)
#define WASM_RUNTIME_API_EXTERN __declspec(dllimport)
+#else
+#define WASM_RUNTIME_API_EXTERN
#endif
#elif defined(__GNUC__) || defined(__clang__)
#define WASM_RUNTIME_API_EXTERN __attribute__((visibility("default")))
@@ -1833,7 +1835,14 @@ wasm_runtime_set_native_stack_boundary(wasm_exec_env_t exec_env,
*/
WASM_RUNTIME_API_EXTERN void
wasm_runtime_set_instruction_count_limit(wasm_exec_env_t exec_env,
- int instruction_count);
+ int64_t instruction_count);
+
+WASM_RUNTIME_API_EXTERN int64_t
+wasm_runtime_get_instruction_count_limit(wasm_exec_env_t exec_env);
+
+WASM_RUNTIME_API_EXTERN void
+wasm_runtime_set_instruction_schedule(wasm_exec_env_t exec_env,
+ int64_t const *instructions_schedule);
/**
* Dump runtime memory consumption, including:
diff --git a/core/iwasm/interpreter/wasm.h b/core/iwasm/interpreter/wasm.h
index ddc0b15b..3a707878 100644
--- a/core/iwasm/interpreter/wasm.h
+++ b/core/iwasm/interpreter/wasm.h
@@ -579,6 +579,9 @@ typedef struct WASMFunctionImport {
WASMModule *import_module;
WASMFunction *import_func_linked;
#endif
+ // gas cost for import func
+ uint32 gas;
+
} WASMFunctionImport;
#if WASM_ENABLE_TAGS != 0
diff --git a/core/iwasm/interpreter/wasm_interp_classic.c b/core/iwasm/interpreter/wasm_interp_classic.c
index 1e98b0fa..ae24ff8b 100644
--- a/core/iwasm/interpreter/wasm_interp_classic.c
+++ b/core/iwasm/interpreter/wasm_interp_classic.c
@@ -1569,13 +1569,14 @@ get_global_addr(uint8 *global_data, WASMGlobalInstance *global)
}
#if WASM_ENABLE_INSTRUCTION_METERING != 0
-#define CHECK_INSTRUCTION_LIMIT() \
- if (instructions_left == 0) { \
- wasm_set_exception(module, "instruction limit exceeded"); \
- goto got_exception; \
- } \
- else if (instructions_left > 0) \
- instructions_left--;
+#define CHECK_INSTRUCTION_LIMIT() \
+ do { \
+ instructions_left -= instructions_schedule[opcode]; \
+ if (instructions_left < 0) { \
+ wasm_set_exception(module, "instruction limit exceeded"); \
+ goto got_exception; \
+ } \
+ } while (0)
#else
#define CHECK_INSTRUCTION_LIMIT() (void)0
#endif
@@ -1625,9 +1626,11 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
uint32 cache_index, type_index, param_cell_num, cell_num;
#if WASM_ENABLE_INSTRUCTION_METERING != 0
- int instructions_left = -1;
+ int64 instructions_left = INT64_MAX;
+ int64 const *instructions_schedule = NULL;
if (exec_env) {
instructions_left = exec_env->instructions_to_execute;
+ instructions_schedule = exec_env->instructions_schedule;
}
#endif
@@ -6885,6 +6888,11 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
FREE_FRAME(exec_env, frame);
wasm_exec_env_set_cur_frame(exec_env, prev_frame);
+#if WASM_ENABLE_INSTRUCTION_METERING != 0
+ if(exec_env)
+ exec_env->instructions_to_execute = instructions_left;
+#endif
+
if (!prev_frame->ip) {
/* Called from native. */
return;
@@ -6925,6 +6933,12 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
}
#endif
SYNC_ALL_TO_FRAME();
+
+#if WASM_ENABLE_INSTRUCTION_METERING != 0
+ if(exec_env)
+ exec_env->instructions_to_execute = instructions_left;
+#endif
+
return;
#if WASM_ENABLE_LABELS_AS_VALUES == 0
diff --git a/core/iwasm/interpreter/wasm_interp_fast.c b/core/iwasm/interpreter/wasm_interp_fast.c
index 4e5edf41..04c39e1f 100644
--- a/core/iwasm/interpreter/wasm_interp_fast.c
+++ b/core/iwasm/interpreter/wasm_interp_fast.c
@@ -106,14 +106,14 @@ typedef float64 CellType_F64;
} while (0)
#if WASM_ENABLE_INSTRUCTION_METERING != 0
-#define CHECK_INSTRUCTION_LIMIT() \
- if (instructions_left == 0) { \
- wasm_set_exception(module, "instruction limit exceeded"); \
- goto got_exception; \
- } \
- else if (instructions_left > 0) \
- instructions_left--;
-
+#define CHECK_INSTRUCTION_LIMIT() \
+ do { \
+ instructions_left -= instructions_schedule[opcode]; \
+ if (instructions_left < 0) { \
+ wasm_set_exception(module, "instruction limit exceeded"); \
+ goto got_exception; \
+ } \
+ } while (0)
#else
#define CHECK_INSTRUCTION_LIMIT() (void)0
#endif
@@ -1454,7 +1454,6 @@ wasm_interp_dump_op_count()
do { \
const void *p_label_addr = *(void **)frame_ip; \
frame_ip += sizeof(void *); \
- CHECK_INSTRUCTION_LIMIT(); \
goto *p_label_addr; \
} while (0)
#else
@@ -1466,7 +1465,6 @@ wasm_interp_dump_op_count()
/* int32 relative offset was emitted in 64-bit target */ \
p_label_addr = label_base + (int32)LOAD_U32_WITH_2U16S(frame_ip); \
frame_ip += sizeof(int32); \
- CHECK_INSTRUCTION_LIMIT(); \
goto *p_label_addr; \
} while (0)
#else
@@ -1477,17 +1475,18 @@ wasm_interp_dump_op_count()
/* uint32 label address was emitted in 32-bit target */ \
p_label_addr = (void *)(uintptr_t)LOAD_U32_WITH_2U16S(frame_ip); \
frame_ip += sizeof(int32); \
- CHECK_INSTRUCTION_LIMIT(); \
goto *p_label_addr; \
} while (0)
#endif
#endif /* end of WASM_CPU_SUPPORTS_UNALIGNED_ADDR_ACCESS */
-#define HANDLE_OP_END() FETCH_OPCODE_AND_DISPATCH()
+#define HANDLE_OP_END() CHECK_INSTRUCTION_LIMIT(); FETCH_OPCODE_AND_DISPATCH()
#else /* else of WASM_ENABLE_LABELS_AS_VALUES */
#define HANDLE_OP(opcode) case opcode:
-#define HANDLE_OP_END() continue
+#define HANDLE_OP_END() \
+ CHECK_INSTRUCTION_LIMIT(); \
+ continue
#endif /* end of WASM_ENABLE_LABELS_AS_VALUES */
@@ -1508,6 +1507,25 @@ get_global_addr(uint8 *global_data, WASMGlobalInstance *global)
#endif
}
+static int64 const def_instructions_schedule[256] = {
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
+};
+
static void
wasm_interp_call_func_bytecode(WASMModuleInstance *module,
WASMExecEnv *exec_env,
@@ -1556,9 +1574,11 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
uint8 opcode = 0, local_type, *global_addr;
#if WASM_ENABLE_INSTRUCTION_METERING != 0
- int instructions_left = -1;
+ int64 instructions_left = INT64_MAX;
+ int64 const *instructions_schedule = def_instructions_schedule;
if (exec_env) {
instructions_left = exec_env->instructions_to_execute;
+ instructions_schedule = exec_env->instructions_schedule;
}
#endif
#if !defined(OS_ENABLE_HW_BOUND_CHECK) \
@@ -7694,6 +7714,7 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
{
wasm_interp_call_func_native(module, exec_env, cur_func,
prev_frame);
+ instructions_left -= cur_func->gas;
}
#if WASM_ENABLE_TAIL_CALL != 0 || WASM_ENABLE_GC != 0
@@ -7806,6 +7827,11 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
FREE_FRAME(exec_env, frame);
wasm_exec_env_set_cur_frame(exec_env, (WASMRuntimeFrame *)prev_frame);
+#if WASM_ENABLE_INSTRUCTION_METERING != 0
+ if (exec_env)
+ exec_env->instructions_to_execute = instructions_left;
+#endif
+
if (!prev_frame->ip)
/* Called from native. */
return;
@@ -7834,6 +7860,10 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
got_exception:
SYNC_ALL_TO_FRAME();
+#if WASM_ENABLE_INSTRUCTION_METERING != 0
+ if (exec_env)
+ exec_env->instructions_to_execute = instructions_left;
+#endif
return;
#if WASM_ENABLE_LABELS_AS_VALUES == 0
diff --git a/core/iwasm/interpreter/wasm_mini_loader.c b/core/iwasm/interpreter/wasm_mini_loader.c
index e66c08ba..d52e677e 100644
--- a/core/iwasm/interpreter/wasm_mini_loader.c
+++ b/core/iwasm/interpreter/wasm_mini_loader.c
@@ -636,6 +636,7 @@ load_function_import(const uint8 **p_buf, const uint8 *buf_end,
const char *linked_signature = NULL;
void *linked_attachment = NULL;
bool linked_call_conv_raw = false;
+ uint32_t gas = 0;
read_leb_uint32(p, p_end, declare_type_index);
*p_buf = p;
@@ -647,7 +648,7 @@ load_function_import(const uint8 **p_buf, const uint8 *buf_end,
/* check built-in modules */
linked_func = wasm_native_resolve_symbol(
sub_module_name, function_name, declare_func_type, &linked_signature,
- &linked_attachment, &linked_call_conv_raw);
+ &linked_attachment, &gas, &linked_call_conv_raw);
function->module_name = (char *)sub_module_name;
function->field_name = (char *)function_name;
@@ -656,6 +657,7 @@ load_function_import(const uint8 **p_buf, const uint8 *buf_end,
function->signature = linked_signature;
function->attachment = linked_attachment;
function->call_conv_raw = linked_call_conv_raw;
+ function->gas = gas;
return true;
}
diff --git a/core/iwasm/interpreter/wasm_runtime.c b/core/iwasm/interpreter/wasm_runtime.c
index 3cc2afe0..55859d35 100644
--- a/core/iwasm/interpreter/wasm_runtime.c
+++ b/core/iwasm/interpreter/wasm_runtime.c
@@ -168,7 +168,7 @@ wasm_resolve_import_func(const WASMModule *module, WASMFunctionImport *function)
#endif
function->func_ptr_linked = wasm_native_resolve_symbol(
function->module_name, function->field_name, function->func_type,
- &function->signature, &function->attachment, &function->call_conv_raw);
+ &function->signature, &function->attachment, &function->gas, &function->call_conv_raw);
if (function->func_ptr_linked) {
return true;
@@ -820,6 +820,7 @@ functions_instantiate(const WASMModule *module, WASMModuleInstance *module_inst,
function->param_count =
(uint16)function->u.func_import->func_type->param_count;
function->param_types = function->u.func_import->func_type->types;
+ function->gas = import->u.function.gas;
function->local_cell_num = 0;
function->local_count = 0;
function->local_types = NULL;
diff --git a/core/iwasm/interpreter/wasm_runtime.h b/core/iwasm/interpreter/wasm_runtime.h
index 8d38c883..a687ab89 100644
--- a/core/iwasm/interpreter/wasm_runtime.h
+++ b/core/iwasm/interpreter/wasm_runtime.h
@@ -228,6 +228,10 @@ struct WASMFunctionInstance {
WASMFunctionImport *func_import;
WASMFunction *func;
} u;
+
+ // gas cost for import func
+ uint32 gas;
+
#if WASM_ENABLE_MULTI_MODULE != 0
WASMModuleInstance *import_module_inst;
WASMFunctionInstance *import_func_inst;
diff --git a/core/iwasm/libraries/libc-builtin/libc_builtin_wrapper.c b/core/iwasm/libraries/libc-builtin/libc_builtin_wrapper.c
index a68c0749..cafb6915 100644
--- a/core/iwasm/libraries/libc-builtin/libc_builtin_wrapper.c
+++ b/core/iwasm/libraries/libc-builtin/libc_builtin_wrapper.c
@@ -1038,16 +1038,16 @@ print_f64_wrapper(wasm_exec_env_t exec_env, double f64)
/* clang-format off */
#define REG_NATIVE_FUNC(func_name, signature) \
- { #func_name, func_name##_wrapper, signature, NULL }
+ { #func_name, func_name##_wrapper, signature, NULL, 0 }
/* clang-format on */
static NativeSymbol native_symbols_libc_builtin[] = {
REG_NATIVE_FUNC(printf, "($*)i"),
REG_NATIVE_FUNC(sprintf, "($$*)i"),
REG_NATIVE_FUNC(snprintf, "(*~$*)i"),
- { "vprintf", printf_wrapper, "($*)i", NULL },
- { "vsprintf", sprintf_wrapper, "($$*)i", NULL },
- { "vsnprintf", snprintf_wrapper, "(*~$*)i", NULL },
+ { "vprintf", printf_wrapper, "($*)i", NULL, 0 },
+ { "vsprintf", sprintf_wrapper, "($$*)i", NULL, 0 },
+ { "vsnprintf", snprintf_wrapper, "(*~$*)i", NULL, 0 },
REG_NATIVE_FUNC(puts, "($)i"),
REG_NATIVE_FUNC(putchar, "(i)i"),
REG_NATIVE_FUNC(memcmp, "(**~)i"),
diff --git a/core/iwasm/libraries/libc-wasi/libc_wasi_wrapper.c b/core/iwasm/libraries/libc-wasi/libc_wasi_wrapper.c
index 6d057a6a..25879f33 100644
--- a/core/iwasm/libraries/libc-wasi/libc_wasi_wrapper.c
+++ b/core/iwasm/libraries/libc-wasi/libc_wasi_wrapper.c
@@ -2257,7 +2257,7 @@ wasi_sched_yield(wasm_exec_env_t exec_env)
/* clang-format off */
#define REG_NATIVE_FUNC(func_name, signature) \
- { #func_name, wasi_##func_name, signature, NULL }
+ { #func_name, wasi_##func_name, signature, NULL, 0 }
/* clang-format on */
static NativeSymbol native_symbols_libc_wasi[] = {
diff --git a/core/shared/platform/include/platform_wasi_types.h b/core/shared/platform/include/platform_wasi_types.h
index ac1a95ea..e23b500e 100644
--- a/core/shared/platform/include/platform_wasi_types.h
+++ b/core/shared/platform/include/platform_wasi_types.h
@@ -36,7 +36,11 @@ extern "C" {
#if WASM_ENABLE_UVWASI != 0 || WASM_ENABLE_LIBC_WASI == 0
#define assert_wasi_layout(expr, message) /* nothing */
#else
-#define assert_wasi_layout(expr, message) _Static_assert(expr, message)
+ #ifndef _MSC_VER
+ #define assert_wasi_layout(expr, message) _Static_assert(expr, message)
+ #else
+ #define assert_wasi_layout(expr, message) static_assert(expr, message)
+ #endif
#endif
assert_wasi_layout(_Alignof(int8_t) == 1, "non-wasi data layout");

View File

@@ -367,7 +367,7 @@ get(Section const& section,
}
inline std::string
get(Section const& section, std::string const& name, const char* defaultValue)
get(Section const& section, std::string const& name, char const* defaultValue)
{
try
{

View File

@@ -55,7 +55,7 @@ lz4Compress(void const* in, std::size_t inSize, BufferFactory&& bf)
auto compressed = bf(outCapacity);
auto compressedSize = LZ4_compress_default(
reinterpret_cast<const char*>(in),
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(compressed),
inSize,
outCapacity);
@@ -89,7 +89,7 @@ lz4Decompress(
Throw<std::runtime_error>("lz4Decompress: integer overflow (output)");
if (LZ4_decompress_safe(
reinterpret_cast<const char*>(in),
reinterpret_cast<char const*>(in),
reinterpret_cast<char*>(decompressed),
inSize,
decompressedSize) != decompressedSize)

View File

@@ -93,7 +93,7 @@ public:
{
}
constexpr const E&
constexpr E const&
value() const&
{
return val_;
@@ -111,7 +111,7 @@ public:
return std::move(val_);
}
constexpr const E&&
constexpr E const&&
value() const&&
{
return std::move(val_);

View File

@@ -0,0 +1,515 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_INTRUSIVEPOINTER_H_INCLUDED
#define RIPPLE_BASICS_INTRUSIVEPOINTER_H_INCLUDED
#include <concepts>
#include <cstdint>
#include <type_traits>
#include <utility>
namespace ripple {
//------------------------------------------------------------------------------
/** Tag to create an intrusive pointer from another intrusive pointer by using a
static cast. This is useful to create an intrusive pointer to a derived
class from an intrusive pointer to a base class.
*/
struct StaticCastTagSharedIntrusive
{
};
/** Tag to create an intrusive pointer from another intrusive pointer by using a
dynamic cast. This is useful to create an intrusive pointer to a derived
class from an intrusive pointer to a base class. If the cast fails an empty
(null) intrusive pointer is created.
*/
struct DynamicCastTagSharedIntrusive
{
};
/** When creating or adopting a raw pointer, controls whether the strong count
is incremented or not. Use this tag to increment the strong count.
*/
struct SharedIntrusiveAdoptIncrementStrongTag
{
};
/** When creating or adopting a raw pointer, controls whether the strong count
is incremented or not. Use this tag to leave the strong count unchanged.
*/
struct SharedIntrusiveAdoptNoIncrementTag
{
};
//------------------------------------------------------------------------------
//
template <class T>
concept CAdoptTag = std::is_same_v<T, SharedIntrusiveAdoptIncrementStrongTag> ||
std::is_same_v<T, SharedIntrusiveAdoptNoIncrementTag>;
//------------------------------------------------------------------------------
/** A shared intrusive pointer class that supports weak pointers.
This is meant to be used for SHAMapInnerNodes, but may be useful for other
cases. Since the reference counts are stored on the pointee, the pointee is
not destroyed until both the strong _and_ weak pointer counts go to zero.
When the strong pointer count goes to zero, the "partialDestructor" is
called. This can be used to destroy as much of the object as possible while
still retaining the reference counts. For example, for SHAMapInnerNodes the
children may be reset in that function. Note that std::shared_ptr WILL
run the destructor when the strong count reaches zero, but may not free the
memory used by the object until the weak count reaches zero. In rippled, we
typically allocate shared pointers with the `make_shared` function. When
that is used, the memory is not reclaimed until the weak count reaches zero.
*/
template <class T>
class SharedIntrusive
{
public:
SharedIntrusive() = default;
template <CAdoptTag TAdoptTag>
SharedIntrusive(T* p, TAdoptTag) noexcept;
SharedIntrusive(SharedIntrusive const& rhs);
template <class TT>
// TODO: convertible_to isn't quite right; it also admits types that are
// merely static-castable. Find the right concept.
requires std::convertible_to<TT*, T*>
SharedIntrusive(SharedIntrusive<TT> const& rhs);
SharedIntrusive(SharedIntrusive&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive(SharedIntrusive<TT>&& rhs);
SharedIntrusive&
operator=(SharedIntrusive const& rhs);
bool
operator!=(std::nullptr_t) const;
bool
operator==(std::nullptr_t) const;
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive&
operator=(SharedIntrusive<TT> const& rhs);
SharedIntrusive&
operator=(SharedIntrusive&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive&
operator=(SharedIntrusive<TT>&& rhs);
/** Adopt the raw pointer. The strong reference may or may not be
incremented, depending on the TAdoptTag
*/
template <CAdoptTag TAdoptTag = SharedIntrusiveAdoptIncrementStrongTag>
void
adopt(T* p);
~SharedIntrusive();
/** Create a new SharedIntrusive by statically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by statically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(StaticCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs);
/** Create a new SharedIntrusive by dynamically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs);
/** Create a new SharedIntrusive by dynamically casting the pointer
controlled by the rhs param.
*/
template <class TT>
SharedIntrusive(DynamicCastTagSharedIntrusive, SharedIntrusive<TT>&& rhs);
T&
operator*() const noexcept;
T*
operator->() const noexcept;
explicit
operator bool() const noexcept;
/** Set the pointer to null, decrement the strong count, and run the
appropriate release action.
*/
void
reset();
/** Get the raw pointer */
T*
get() const;
/** Return the strong count */
std::size_t
use_count() const;
template <class TT, class... Args>
friend SharedIntrusive<TT>
make_SharedIntrusive(Args&&... args);
template <class TT>
friend class SharedIntrusive;
template <class TT>
friend class SharedWeakUnion;
template <class TT>
friend class WeakIntrusive;
private:
/** Return the raw pointer held by this object. */
T*
unsafeGetRawPtr() const;
/** Exchange the current raw pointer held by this object with the given
pointer. Decrement the strong count of the raw pointer previously held
by this object and run the appropriate release action.
*/
void
unsafeReleaseAndStore(T* next);
/** Set the raw pointer directly. This is wrapped in a function so the class
can support both atomic and non-atomic pointers in a future patch.
*/
void
unsafeSetRawPtr(T* p);
/** Exchange the raw pointer directly.
This sets the raw pointer to the given value and returns the previous
value. This is wrapped in a function so the class can support both
atomic and non-atomic pointers in a future patch.
*/
T*
unsafeExchange(T* p);
/** pointer to the type with an intrusive count */
T* ptr_{nullptr};
};
//------------------------------------------------------------------------------
/** A weak intrusive pointer class for the SharedIntrusive pointer class.
Note that this weak pointer class acts differently from normal weak pointer
classes. When the strong pointer count goes to zero, the "partialDestructor"
is called. See the comment on SharedIntrusive for a fuller explanation.
*/
template <class T>
class WeakIntrusive
{
public:
WeakIntrusive() = default;
WeakIntrusive(WeakIntrusive const& rhs);
WeakIntrusive(WeakIntrusive&& rhs);
WeakIntrusive(SharedIntrusive<T> const& rhs);
// There is no move constructor from a strong intrusive ptr because
// moving would be more expensive than copying in this case (the strong
// ref would need to be decremented)
WeakIntrusive(SharedIntrusive<T> const&& rhs) = delete;
// Since there are no current use cases for copy assignment in
// WeakIntrusive, we delete this operator to simplify the implementation. If
// a need arises in the future, we can reintroduce it with proper
// consideration.
WeakIntrusive&
operator=(WeakIntrusive const&) = delete;
template <class TT>
requires std::convertible_to<TT*, T*>
WeakIntrusive&
operator=(SharedIntrusive<TT> const& rhs);
/** Adopt the raw pointer and increment the weak count. */
void
adopt(T* ptr);
~WeakIntrusive();
/** Get a strong pointer from the weak pointer, if possible. This will
only return a seated pointer if the strong count on the raw pointer
is non-zero before locking.
*/
SharedIntrusive<T>
lock() const;
/** Return true if the strong count is zero. */
bool
expired() const;
/** Set the pointer to null and decrement the weak count.
Note: This may run the destructor if the strong count is zero.
*/
void
reset();
private:
T* ptr_ = nullptr;
/** Decrement the weak count. This does _not_ set the raw pointer to
null.
Note: This may run the destructor if the strong count is zero.
*/
void
unsafeReleaseNoStore();
};
//------------------------------------------------------------------------------
/** A combination of a strong and a weak intrusive pointer stored in the
space of a single pointer.
This class is similar to a `std::variant<SharedIntrusive,WeakIntrusive>`
with some optimizations. In particular, it uses a low-order bit to
determine if the raw pointer represents a strong pointer or a weak
pointer. It can also be quickly switched between its strong pointer and
weak pointer representations. This class is useful for storing intrusive
pointers in tagged caches.
*/
template <class T>
class SharedWeakUnion
{
// Tagged pointer. Low bit determines if this is a strong or a weak
// pointer. The low bit must be masked to zero when converting back to a
// pointer. If the low bit is '1', this is a weak pointer.
static_assert(
alignof(T) >= 2,
"Bad alignment: Combo pointer requires low bit to be zero");
public:
SharedWeakUnion() = default;
SharedWeakUnion(SharedWeakUnion const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion(SharedIntrusive<TT> const& rhs);
SharedWeakUnion(SharedWeakUnion&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion(SharedIntrusive<TT>&& rhs);
SharedWeakUnion&
operator=(SharedWeakUnion const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion&
operator=(SharedIntrusive<TT> const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion&
operator=(SharedIntrusive<TT>&& rhs);
~SharedWeakUnion();
/** Return a strong pointer if this is already a strong pointer (i.e.
don't lock the weak pointer. Use the `lock` method if that's what's
needed)
*/
SharedIntrusive<T>
getStrong() const;
/** Return true if this is a strong pointer and the strong pointer is
seated.
*/
explicit
operator bool() const noexcept;
/** Set the pointer to null, decrement the appropriate ref count, and
run the appropriate release action.
*/
void
reset();
/** If this is a strong pointer, return the raw pointer. Otherwise
return null.
*/
T*
get() const;
/** If this is a strong pointer, return the strong count. Otherwise
* return 0
*/
std::size_t
use_count() const;
/** Return true if there is a non-zero strong count. */
bool
expired() const;
/** If this is a strong pointer, return the strong pointer. Otherwise
attempt to lock the weak pointer.
*/
SharedIntrusive<T>
lock() const;
/** Return true if this represents a strong pointer. */
bool
isStrong() const;
/** Return true if this represents a weak pointer. */
bool
isWeak() const;
/** If this is a weak pointer, attempt to convert it to a strong
pointer.
@return true if successfully converted to a strong pointer (or was
already a strong pointer). Otherwise false.
*/
bool
convertToStrong();
/** If this is a strong pointer, attempt to convert it to a weak
pointer.
@return false if the pointer is null. Otherwise return true.
*/
bool
convertToWeak();
private:
// Tagged pointer. Low bit determines if this is a strong or a weak
// pointer. The low bit must be masked to zero when converting back to a
// pointer. If the low bit is '1', this is a weak pointer.
std::uintptr_t tp_{0};
static constexpr std::uintptr_t tagMask = 1;
static constexpr std::uintptr_t ptrMask = ~tagMask;
private:
/** Return the raw pointer held by this object.
*/
T*
unsafeGetRawPtr() const;
enum class RefStrength { strong, weak };
/** Set the raw pointer and tag bit directly.
*/
void
unsafeSetRawPtr(T* p, RefStrength rs);
/** Set the raw pointer and tag bit to all zeros (strong null pointer).
*/
void unsafeSetRawPtr(std::nullptr_t);
/** Decrement the appropriate ref count, and run the appropriate release
action. Note: this does _not_ set the raw pointer to null.
*/
void
unsafeReleaseNoStore();
};
//------------------------------------------------------------------------------
/** Create a shared intrusive pointer.
Note: unlike std::shared_ptr, where there is an advantage of allocating
the pointer and control block together, there is no benefit for intrusive
pointers.
*/
template <class TT, class... Args>
SharedIntrusive<TT>
make_SharedIntrusive(Args&&... args)
{
auto p = new TT(std::forward<Args>(args)...);
static_assert(
noexcept(SharedIntrusive<TT>(
std::declval<TT*>(),
std::declval<SharedIntrusiveAdoptNoIncrementTag>())),
"SharedIntrusive constructor should not throw or this can leak "
"memory");
return SharedIntrusive<TT>(p, SharedIntrusiveAdoptNoIncrementTag{});
}
//------------------------------------------------------------------------------
namespace intr_ptr {
template <class T>
using SharedPtr = SharedIntrusive<T>;
template <class T>
using WeakPtr = WeakIntrusive<T>;
template <class T>
using SharedWeakUnionPtr = SharedWeakUnion<T>;
template <class T, class... A>
SharedPtr<T>
make_shared(A&&... args)
{
return make_SharedIntrusive<T>(std::forward<A>(args)...);
}
template <class T, class TT>
SharedPtr<T>
static_pointer_cast(TT const& v)
{
return SharedPtr<T>(StaticCastTagSharedIntrusive{}, v);
}
template <class T, class TT>
SharedPtr<T>
dynamic_pointer_cast(TT const& v)
{
return SharedPtr<T>(DynamicCastTagSharedIntrusive{}, v);
}
} // namespace intr_ptr
} // namespace ripple
#endif
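
A brief usage sketch of the pointer types declared above. It assumes the pointee derives from the companion IntrusiveRefCounts type (see IntrusiveRefCounts.h, included by the .ipp), which supplies the strong/weak counts and release actions; the Node type, its partialDestructor, and the counts noted in comments are illustrative only.

```
// Illustration only: Node and its counts are assumptions, not rippled code.
#include <xrpl/basics/IntrusivePointer.ipp>
#include <xrpl/basics/IntrusiveRefCounts.h>

#include <cassert>

struct Node : public ripple::IntrusiveRefCounts
{
    int value = 0;

    // Runs when the strong count reaches zero; children or other large
    // members would be released here while the counts stay alive.
    void partialDestructor() {}
};

void example()
{
    auto strong = ripple::make_SharedIntrusive<Node>();  // strong count: 1
    ripple::WeakIntrusive<Node> weak{strong};            // weak count: 1

    strong->value = 42;

    if (auto locked = weak.lock())  // succeeds while a strong ref exists
        locked->value += 1;

    strong.reset();          // strong count -> 0: partialDestructor() runs
    assert(weak.expired());  // memory is reclaimed once the weak count drops
}
```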

View File

@@ -0,0 +1,740 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_INTRUSIVEPOINTER_IPP_INCLUDED
#define RIPPLE_BASICS_INTRUSIVEPOINTER_IPP_INCLUDED
#include <xrpl/basics/IntrusivePointer.h>
#include <xrpl/basics/IntrusiveRefCounts.h>
#include <utility>
namespace ripple {
template <class T>
template <CAdoptTag TAdoptTag>
SharedIntrusive<T>::SharedIntrusive(T* p, TAdoptTag) noexcept : ptr_{p}
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
}
}
template <class T>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive const& rhs)
: ptr_{[&] {
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedIntrusive<T>::SharedIntrusive(SharedIntrusive<TT>&& rhs)
: ptr_{rhs.unsafeExchange(nullptr)}
{
}
template <class T>
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive const& rhs)
{
if (this == &rhs)
return *this;
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeReleaseAndStore(p);
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive<TT> const& rhs)
{
if constexpr (std::is_same_v<T, TT>)
{
// This case should never be hit. The operator above will run instead.
// (The normal operator= is needed or it will be marked `deleted`)
if (this == &rhs)
return *this;
}
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeReleaseAndStore(p);
return *this;
}
template <class T>
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive&& rhs)
{
if (this == &rhs)
return *this;
unsafeReleaseAndStore(rhs.unsafeExchange(nullptr));
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedIntrusive<T>&
SharedIntrusive<T>::operator=(SharedIntrusive<TT>&& rhs)
{
static_assert(
!std::is_same_v<T, TT>,
"This overload should not be instantiated for T == TT");
unsafeReleaseAndStore(rhs.unsafeExchange(nullptr));
return *this;
}
template <class T>
bool
SharedIntrusive<T>::operator!=(std::nullptr_t) const
{
return this->get() != nullptr;
}
template <class T>
bool
SharedIntrusive<T>::operator==(std::nullptr_t) const
{
return this->get() == nullptr;
}
template <class T>
template <CAdoptTag TAdoptTag>
void
SharedIntrusive<T>::adopt(T* p)
{
if constexpr (std::is_same_v<
TAdoptTag,
SharedIntrusiveAdoptIncrementStrongTag>)
{
if (p)
p->addStrongRef();
}
unsafeReleaseAndStore(p);
}
template <class T>
SharedIntrusive<T>::~SharedIntrusive()
{
unsafeReleaseAndStore(nullptr);
};
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = static_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
StaticCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
: ptr_{static_cast<T*>(rhs.unsafeExchange(nullptr))}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT> const& rhs)
: ptr_{[&] {
auto p = dynamic_cast<T*>(rhs.unsafeGetRawPtr());
if (p)
p->addStrongRef();
return p;
}()}
{
}
template <class T>
template <class TT>
SharedIntrusive<T>::SharedIntrusive(
DynamicCastTagSharedIntrusive,
SharedIntrusive<TT>&& rhs)
{
// This can be simplified without the `exchange`, but the `exchange` is kept
// in anticipation of supporting atomic operations.
auto toSet = rhs.unsafeExchange(nullptr);
if (toSet)
{
ptr_ = dynamic_cast<T*>(toSet);
if (!ptr_)
// need to set the pointer back or will leak
rhs.unsafeExchange(toSet);
}
}
template <class T>
T&
SharedIntrusive<T>::operator*() const noexcept
{
return *unsafeGetRawPtr();
}
template <class T>
T*
SharedIntrusive<T>::operator->() const noexcept
{
return unsafeGetRawPtr();
}
template <class T>
SharedIntrusive<T>::operator bool() const noexcept
{
return bool(unsafeGetRawPtr());
}
template <class T>
void
SharedIntrusive<T>::reset()
{
unsafeReleaseAndStore(nullptr);
}
template <class T>
T*
SharedIntrusive<T>::get() const
{
return unsafeGetRawPtr();
}
template <class T>
std::size_t
SharedIntrusive<T>::use_count() const
{
if (auto p = unsafeGetRawPtr())
return p->use_count();
return 0;
}
template <class T>
T*
SharedIntrusive<T>::unsafeGetRawPtr() const
{
return ptr_;
}
template <class T>
void
SharedIntrusive<T>::unsafeSetRawPtr(T* p)
{
ptr_ = p;
}
template <class T>
T*
SharedIntrusive<T>::unsafeExchange(T* p)
{
return std::exchange(ptr_, p);
}
template <class T>
void
SharedIntrusive<T>::unsafeReleaseAndStore(T* next)
{
auto prev = unsafeExchange(next);
if (!prev)
return;
using enum ReleaseStrongRefAction;
auto action = prev->releaseStrongRef();
switch (action)
{
case noop:
break;
case destroy:
delete prev;
break;
case partialDestroy:
prev->partialDestructor();
partialDestructorFinished(&prev);
// prev is null and may no longer be used
break;
}
}
//------------------------------------------------------------------------------
template <class T>
WeakIntrusive<T>::WeakIntrusive(WeakIntrusive const& rhs) : ptr_{rhs.ptr_}
{
if (ptr_)
ptr_->addWeakRef();
}
template <class T>
WeakIntrusive<T>::WeakIntrusive(WeakIntrusive&& rhs) : ptr_{rhs.ptr_}
{
rhs.ptr_ = nullptr;
}
template <class T>
WeakIntrusive<T>::WeakIntrusive(SharedIntrusive<T> const& rhs)
: ptr_{rhs.unsafeGetRawPtr()}
{
if (ptr_)
ptr_->addWeakRef();
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
WeakIntrusive<T>&
WeakIntrusive<T>::operator=(SharedIntrusive<TT> const& rhs)
{
unsafeReleaseNoStore();
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addWeakRef();
return *this;
}
template <class T>
void
WeakIntrusive<T>::adopt(T* ptr)
{
unsafeReleaseNoStore();
if (ptr)
ptr->addWeakRef();
ptr_ = ptr;
}
template <class T>
WeakIntrusive<T>::~WeakIntrusive()
{
unsafeReleaseNoStore();
}
template <class T>
SharedIntrusive<T>
WeakIntrusive<T>::lock() const
{
if (ptr_ && ptr_->checkoutStrongRefFromWeak())
{
return SharedIntrusive<T>{ptr_, SharedIntrusiveAdoptNoIncrementTag{}};
}
return {};
}
template <class T>
bool
WeakIntrusive<T>::expired() const
{
return (!ptr_ || ptr_->expired());
}
template <class T>
void
WeakIntrusive<T>::reset()
{
unsafeReleaseNoStore();
ptr_ = nullptr;
}
template <class T>
void
WeakIntrusive<T>::unsafeReleaseNoStore()
{
if (!ptr_)
return;
using enum ReleaseWeakRefAction;
auto action = ptr_->releaseWeakRef();
switch (action)
{
case noop:
break;
case destroy:
delete ptr_;
break;
}
}
//------------------------------------------------------------------------------
template <class T>
SharedWeakUnion<T>::SharedWeakUnion(SharedWeakUnion const& rhs) : tp_{rhs.tp_}
{
auto p = rhs.unsafeGetRawPtr();
if (!p)
return;
if (rhs.isStrong())
p->addStrongRef();
else
p->addWeakRef();
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion<T>::SharedWeakUnion(SharedIntrusive<TT> const& rhs)
{
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeSetRawPtr(p, RefStrength::strong);
}
template <class T>
SharedWeakUnion<T>::SharedWeakUnion(SharedWeakUnion&& rhs) : tp_{rhs.tp_}
{
rhs.unsafeSetRawPtr(nullptr);
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakUnion<T>::SharedWeakUnion(SharedIntrusive<TT>&& rhs)
{
auto p = rhs.unsafeGetRawPtr();
if (p)
unsafeSetRawPtr(p, RefStrength::strong);
rhs.unsafeSetRawPtr(nullptr);
}
template <class T>
SharedWeakUnion<T>&
SharedWeakUnion<T>::operator=(SharedWeakUnion const& rhs)
{
if (this == &rhs)
return *this;
unsafeReleaseNoStore();
if (auto p = rhs.unsafeGetRawPtr())
{
if (rhs.isStrong())
{
p->addStrongRef();
unsafeSetRawPtr(p, RefStrength::strong);
}
else
{
p->addWeakRef();
unsafeSetRawPtr(p, RefStrength::weak);
}
}
else
{
unsafeSetRawPtr(nullptr);
}
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedWeakUnion<T>&
SharedWeakUnion<T>::operator=(SharedIntrusive<TT> const& rhs)
{
unsafeReleaseNoStore();
auto p = rhs.unsafeGetRawPtr();
if (p)
p->addStrongRef();
unsafeSetRawPtr(p, RefStrength::strong);
return *this;
}
template <class T>
template <class TT>
// clang-format off
requires std::convertible_to<TT*, T*>
// clang-format on
SharedWeakUnion<T>&
SharedWeakUnion<T>::operator=(SharedIntrusive<TT>&& rhs)
{
unsafeReleaseNoStore();
unsafeSetRawPtr(rhs.unsafeGetRawPtr(), RefStrength::strong);
rhs.unsafeSetRawPtr(nullptr);
return *this;
}
template <class T>
SharedWeakUnion<T>::~SharedWeakUnion()
{
unsafeReleaseNoStore();
};
// Return a strong pointer if this is already a strong pointer (i.e. don't
// lock the weak pointer. Use the `lock` method if that's what's needed)
template <class T>
SharedIntrusive<T>
SharedWeakUnion<T>::getStrong() const
{
SharedIntrusive<T> result;
auto p = unsafeGetRawPtr();
if (p && isStrong())
{
result.template adopt<SharedIntrusiveAdoptIncrementStrongTag>(p);
}
return result;
}
template <class T>
SharedWeakUnion<T>::operator bool() const noexcept
{
return bool(get());
}
template <class T>
void
SharedWeakUnion<T>::reset()
{
unsafeReleaseNoStore();
unsafeSetRawPtr(nullptr);
}
template <class T>
T*
SharedWeakUnion<T>::get() const
{
return isStrong() ? unsafeGetRawPtr() : nullptr;
}
template <class T>
std::size_t
SharedWeakUnion<T>::use_count() const
{
if (auto p = get())
return p->use_count();
return 0;
}
template <class T>
bool
SharedWeakUnion<T>::expired() const
{
auto p = unsafeGetRawPtr();
return (!p || p->expired());
}
template <class T>
SharedIntrusive<T>
SharedWeakUnion<T>::lock() const
{
SharedIntrusive<T> result;
auto p = unsafeGetRawPtr();
if (!p)
return result;
if (isStrong())
{
result.template adopt<SharedIntrusiveAdoptIncrementStrongTag>(p);
return result;
}
if (p->checkoutStrongRefFromWeak())
{
result.template adopt<SharedIntrusiveAdoptNoIncrementTag>(p);
return result;
}
return result;
}
template <class T>
bool
SharedWeakUnion<T>::isStrong() const
{
return !(tp_ & tagMask);
}
template <class T>
bool
SharedWeakUnion<T>::isWeak() const
{
return tp_ & tagMask;
}
template <class T>
bool
SharedWeakUnion<T>::convertToStrong()
{
if (isStrong())
return true;
auto p = unsafeGetRawPtr();
if (p && p->checkoutStrongRefFromWeak())
{
[[maybe_unused]] auto action = p->releaseWeakRef();
XRPL_ASSERT(
(action == ReleaseWeakRefAction::noop),
"ripple::SharedWeakUnion::convertToStrong : "
"action is noop");
unsafeSetRawPtr(p, RefStrength::strong);
return true;
}
return false;
}
template <class T>
bool
SharedWeakUnion<T>::convertToWeak()
{
if (isWeak())
return true;
auto p = unsafeGetRawPtr();
if (!p)
return false;
using enum ReleaseStrongRefAction;
auto action = p->addWeakReleaseStrongRef();
switch (action)
{
case noop:
break;
case destroy:
// We just added a weak ref. How could we destroy?
UNREACHABLE(
"ripple::SharedWeakUnion::convertToWeak : destroying freshly "
"added ref");
delete p;
unsafeSetRawPtr(nullptr);
return true; // Should never happen
case partialDestroy:
// This is a weird case. We just converted the last strong
// pointer to a weak pointer.
p->partialDestructor();
partialDestructorFinished(&p);
// p is null and may no longer be used
break;
}
unsafeSetRawPtr(p, RefStrength::weak);
return true;
}
template <class T>
T*
SharedWeakUnion<T>::unsafeGetRawPtr() const
{
return reinterpret_cast<T*>(tp_ & ptrMask);
}
template <class T>
void
SharedWeakUnion<T>::unsafeSetRawPtr(T* p, RefStrength rs)
{
tp_ = reinterpret_cast<std::uintptr_t>(p);
if (tp_ && rs == RefStrength::weak)
tp_ |= tagMask;
}
template <class T>
void
SharedWeakUnion<T>::unsafeSetRawPtr(std::nullptr_t)
{
tp_ = 0;
}
template <class T>
void
SharedWeakUnion<T>::unsafeReleaseNoStore()
{
auto p = unsafeGetRawPtr();
if (!p)
return;
if (isStrong())
{
using enum ReleaseStrongRefAction;
auto strongAction = p->releaseStrongRef();
switch (strongAction)
{
case noop:
break;
case destroy:
delete p;
break;
case partialDestroy:
p->partialDestructor();
partialDestructorFinished(&p);
// p is null and may no longer be used
break;
}
}
else
{
using enum ReleaseWeakRefAction;
auto weakAction = p->releaseWeakRef();
switch (weakAction)
{
case noop:
break;
case destroy:
delete p;
break;
}
}
}
} // namespace ripple
#endif
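
A minimal usage sketch (illustrative only, not part of the change set) of how the pointer classes defined above are meant to be combined. It assumes this implementation header lives at xrpl/basics/IntrusivePointer.ipp; `Node` and `demote` are hypothetical names, and the first SharedIntrusive would normally be created through the make_SharedIntrusive factory referenced in the comments below, which is not shown here.

#include <xrpl/basics/IntrusivePointer.ipp>

struct Node : public ripple::IntrusiveRefCounts
{
    int value = 0;

    // Called by the pointer classes when the last strong reference is
    // released while weak references remain; the pointer class, not this
    // function, calls partialDestructorFinished afterwards.
    void
    partialDestructor()
    {
    }
};

void
demote(ripple::SharedWeakUnion<Node>& slot)
{
    // Demote the union to a weak reference; the object stays fully alive
    // only while some strong reference exists elsewhere.
    slot.convertToWeak();

    // Try to recover a strong reference later. lock() returns an empty
    // SharedIntrusive<Node> once no strong reference can be checked out.
    ripple::SharedIntrusive<Node> strong = slot.lock();
    (void)strong;
}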

View File

@@ -0,0 +1,502 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_INTRUSIVEREFCOUNTS_H_INCLUDED
#define RIPPLE_BASICS_INTRUSIVEREFCOUNTS_H_INCLUDED
#include <xrpl/beast/utility/instrumentation.h>
#include <atomic>
#include <cstdint>
namespace ripple {
/** Action to perform when releasing a strong pointer.
noop: Do nothing. For example, a `noop` action will occur when a count is
decremented to a non-zero value.
partialDestroy: Run the `partialDestructor`. This action will happen when a
strong count is decremented to zero and the weak count is non-zero.
destroy: Run the destructor. This action will occur when either the strong
count or the weak count is decremented to zero and the other count is already zero.
*/
enum class ReleaseStrongRefAction { noop, partialDestroy, destroy };
/** Action to perform when releasing a weak pointer.
noop: Do nothing. For example, a `noop` action will occur when a count is
decremented to a non-zero value.
destroy: Run the destructor. This action will occur when either the strong
count or the weak count is decremented to zero and the other count is already zero.
*/
enum class ReleaseWeakRefAction { noop, destroy };
/** Implement the strong count, weak count, and bit flags for an intrusive
pointer.
A class can satisfy the requirements of a ripple::IntrusivePointer by
inheriting from this class.
*/
struct IntrusiveRefCounts
{
virtual ~IntrusiveRefCounts() noexcept;
// This must be `noexcept` or the make_SharedIntrusive function could leak
// memory.
void
addStrongRef() const noexcept;
void
addWeakRef() const noexcept;
ReleaseStrongRefAction
releaseStrongRef() const;
// Same as:
// {
// addWeakRef();
// return releaseStrongRef();
// }
// done as one atomic operation
ReleaseStrongRefAction
addWeakReleaseStrongRef() const;
ReleaseWeakRefAction
releaseWeakRef() const;
// Returns true if able to check out a strong ref, false otherwise
bool
checkoutStrongRefFromWeak() const noexcept;
bool
expired() const noexcept;
std::size_t
use_count() const noexcept;
// This function MUST be called after a partial destructor finishes running.
// Calling this function may cause other threads to delete the object
// pointed to by `o`, so `o` should never be used after calling this
// function. The parameter will be set to a `nullptr` after calling this
// function to emphasize that it should not be used.
// Note: This is intentionally NOT called at the end of `partialDestructor`.
// The reason for this is that if new classes are written to support this smart
// pointer class, they need to write their own `partialDestructor` function
// and ensure `partialDestructorFinished` is called at the end. Putting this
// call inside the smart pointer class itself is expected to be less error
// prone.
// Note: The "two-star" programming is intentional. It emphasizes that `o`
// may be deleted and the unergonomic API is meant to signal the special
// nature of this function call to callers.
// Note: This is a template to support incompletely defined classes.
template <class T>
friend void
partialDestructorFinished(T** o);
private:
// TODO: We may need to use a uint64_t for both counts. This will reduce the
// memory savings. We need to audit the code to make sure 16 bit counts are
// enough for strong pointers and 14 bit counts are enough for weak
// pointers. Use type aliases to make it easy to switch types.
using CountType = std::uint16_t;
static constexpr size_t StrongCountNumBits = sizeof(CountType) * 8;
static constexpr size_t WeakCountNumBits = StrongCountNumBits - 2;
using FieldType = std::uint32_t;
static constexpr size_t FieldTypeBits = sizeof(FieldType) * 8;
static constexpr FieldType one = 1;
/** `refCounts` consists of four fields that are treated atomically:
1. Strong count. This is a count of the number of shared pointers that
hold a reference to this object. When the strong count goes to zero,
if the weak count is zero, the destructor is run. If the weak count is
non-zero when the strong count goes to zero then the partialDestructor
is run.
2. Weak count. This is a count of the number of weak pointers that hold
a reference to this object. When the weak count goes to zero and the
strong count is also zero, then the destructor is run.
3. Partial destroy started bit. This bit is set if the
`partialDestructor` function has been started (or is about to be
started). This is used to prevent the destructor from running
concurrently with the partial destructor. This can easily happen when
the last strong pointer releases its reference in one thread and starts
the partialDestructor, while in another thread the last weak pointer
goes out of scope and starts the destructor while the partialDestructor
is still running. Both a start and a finished bit are needed to handle a
corner case where the last strong pointer goes out of scope, then the
last `weakPointer` goes out of scope, but this happens before the
`partialDestructor` bit is set. It would be possible to use a single
bit if it could also be set atomically when the strong count goes to
zero and the weak count is non-zero, but that would add complexity (and
likely slow down common cases as well).
4. Partial destroy finished bit. This bit is set when the
`partialDestructor` has finished running. See (3) above for more
information.
*/
mutable std::atomic<FieldType> refCounts{strongDelta};
/** Amount to change the strong count when adding or releasing a reference
Note: The strong count is stored in the low `StrongCountNumBits` bits
of refCounts
*/
static constexpr FieldType strongDelta = 1;
/** Amount to change the weak count when adding or releasing a reference
Note: The weak count is stored in the high `WeakCountNumBits` bits of
refCounts
*/
static constexpr FieldType weakDelta = (one << StrongCountNumBits);
/** Flag that is set when the partialDestroy function has started running
(or is about to start running).
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyStartedMask =
(one << (FieldTypeBits - 1));
/** Flag that is set when the partialDestroy function has finished running
See description of the `refCounts` field for a fuller description of
this field.
*/
static constexpr FieldType partialDestroyFinishedMask =
(one << (FieldTypeBits - 2));
/** Mask that will zero out all the `count` bits and leave the tag bits
unchanged.
*/
static constexpr FieldType tagMask =
partialDestroyStartedMask | partialDestroyFinishedMask;
/** Mask that will zero out the `tag` bits and leave the count bits
unchanged.
*/
static constexpr FieldType valueMask = ~tagMask;
/** Mask that will zero out everything except the strong count.
*/
static constexpr FieldType strongMask =
((one << StrongCountNumBits) - 1) & valueMask;
/** Mask that will zero out everything except the weak count.
*/
static constexpr FieldType weakMask =
(((one << WeakCountNumBits) - 1) << StrongCountNumBits) & valueMask;
/** Unpack the count and tag fields from the packed atomic integer form. */
struct RefCountPair
{
CountType strong;
CountType weak;
/** The `partialDestroyStartedBit` is set when the partial
destroy function is started. It is not a boolean; it is a uint32
with all bits zero with the possible exception of the
`partialDestroyStartedMask` bit. This is done so it can be directly
masked into the `combinedValue`.
*/
FieldType partialDestroyStartedBit{0};
/** The `partialDestroyFinishedBit` is set when the partial
destroy function has finished.
*/
FieldType partialDestroyFinishedBit{0};
RefCountPair(FieldType v) noexcept;
RefCountPair(CountType s, CountType w) noexcept;
/** Convert back to the packed integer form. */
FieldType
combinedValue() const noexcept;
static constexpr CountType maxStrongValue =
static_cast<CountType>((one << StrongCountNumBits) - 1);
static constexpr CountType maxWeakValue =
static_cast<CountType>((one << WeakCountNumBits) - 1);
/** Put an extra margin to detect when running up against limits.
This is only used in debug code, and is useful if we reduce the
number of bits in the strong and weak counts (to 16 and 14 bits).
*/
static constexpr CountType checkStrongMaxValue = maxStrongValue - 32;
static constexpr CountType checkWeakMaxValue = maxWeakValue - 32;
};
};
inline void
IntrusiveRefCounts::addStrongRef() const noexcept
{
refCounts.fetch_add(strongDelta, std::memory_order_acq_rel);
}
inline void
IntrusiveRefCounts::addWeakRef() const noexcept
{
refCounts.fetch_add(weakDelta, std::memory_order_acq_rel);
}
inline ReleaseStrongRefAction
IntrusiveRefCounts::releaseStrongRef() const
{
// Subtract `strongDelta` from refCounts. If this releases the last strong
// ref, set the `partialDestroyStarted` bit. It is important that the ref
// count and the `partialDestroyStartedBit` are changed atomically (hence
// the loop and `compare_exchange` op). If this didn't need to be done
// atomically, the loop could be replaced with a `fetch_sub` and a
// conditional `fetch_or`. This loop will almost always run once.
using enum ReleaseStrongRefAction;
auto prevIntVal = refCounts.load(std::memory_order_acquire);
while (1)
{
RefCountPair const prevVal{prevIntVal};
XRPL_ASSERT(
(prevVal.strong >= strongDelta),
"ripple::IntrusiveRefCounts::releaseStrongRef : previous ref "
"higher than new");
auto nextIntVal = prevIntVal - strongDelta;
ReleaseStrongRefAction action = noop;
if (prevVal.strong == 1)
{
if (prevVal.weak == 0)
{
action = destroy;
}
else
{
nextIntVal |= partialDestroyStartedMask;
action = partialDestroy;
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
// Can't be in partial destroy because only decrementing the strong
// count to zero can start a partial destroy, and that can't happen
// twice.
XRPL_ASSERT(
(action == noop) || !(prevIntVal & partialDestroyStartedMask),
"ripple::IntrusiveRefCounts::releaseStrongRef : not in partial "
"destroy");
return action;
}
}
}
inline ReleaseStrongRefAction
IntrusiveRefCounts::addWeakReleaseStrongRef() const
{
using enum ReleaseStrongRefAction;
static_assert(weakDelta > strongDelta);
auto constexpr delta = weakDelta - strongDelta;
auto prevIntVal = refCounts.load(std::memory_order_acquire);
// This loop will almost always run once. The loop is needed to atomically
// change the counts and flags (the count could be atomically changed, but
// the flags depend on the current value of the counts).
//
// Note: If this becomes a perf bottleneck, the `partialDestroyStartedMask`
// may be able to be set non-atomically. But it is easier to reason about
// the code if the flag is set atomically.
while (1)
{
RefCountPair const prevVal{prevIntVal};
// Converted the last strong pointer to a weak pointer.
//
// Can't be in partial destroy because only decrementing the
// strong count to zero can start a partial destroy, and that
// can't happen twice.
XRPL_ASSERT(
(!prevVal.partialDestroyStartedBit),
"ripple::IntrusiveRefCounts::addWeakReleaseStrongRef : not in "
"partial destroy");
auto nextIntVal = prevIntVal + delta;
ReleaseStrongRefAction action = noop;
if (prevVal.strong == 1)
{
if (prevVal.weak == 0)
{
action = noop;
}
else
{
nextIntVal |= partialDestroyStartedMask;
action = partialDestroy;
}
}
if (refCounts.compare_exchange_weak(
prevIntVal, nextIntVal, std::memory_order_acq_rel))
{
XRPL_ASSERT(
(!(prevIntVal & partialDestroyStartedMask)),
"ripple::IntrusiveRefCounts::addWeakReleaseStrongRef : not "
"started partial destroy");
return action;
}
}
}
inline ReleaseWeakRefAction
IntrusiveRefCounts::releaseWeakRef() const
{
auto prevIntVal = refCounts.fetch_sub(weakDelta, std::memory_order_acq_rel);
RefCountPair prev = prevIntVal;
if (prev.weak == 1 && prev.strong == 0)
{
if (!prev.partialDestroyStartedBit)
{
// This case should only be hit if the partialDestroyStartedBit is
// set non-atomically (and even then very rarely). The code is kept
// in case we need to set the flag non-atomically for perf reasons.
refCounts.wait(prevIntVal, std::memory_order_acquire);
prevIntVal = refCounts.load(std::memory_order_acquire);
prev = RefCountPair{prevIntVal};
}
if (!prev.partialDestroyFinishedBit)
{
// partial destroy MUST finish before running a full destroy (when
// using weak pointers)
refCounts.wait(prevIntVal - weakDelta, std::memory_order_acquire);
}
return ReleaseWeakRefAction::destroy;
}
return ReleaseWeakRefAction::noop;
}
inline bool
IntrusiveRefCounts::checkoutStrongRefFromWeak() const noexcept
{
auto curValue = RefCountPair{1, 1}.combinedValue();
auto desiredValue = RefCountPair{2, 1}.combinedValue();
while (!refCounts.compare_exchange_weak(
curValue, desiredValue, std::memory_order_acq_rel))
{
RefCountPair const prev{curValue};
if (!prev.strong)
return false;
desiredValue = curValue + strongDelta;
}
return true;
}
inline bool
IntrusiveRefCounts::expired() const noexcept
{
RefCountPair const val = refCounts.load(std::memory_order_acquire);
return val.strong == 0;
}
inline std::size_t
IntrusiveRefCounts::use_count() const noexcept
{
RefCountPair const val = refCounts.load(std::memory_order_acquire);
return val.strong;
}
inline IntrusiveRefCounts::~IntrusiveRefCounts() noexcept
{
#ifndef NDEBUG
auto v = refCounts.load(std::memory_order_acquire);
XRPL_ASSERT(
(!(v & valueMask)),
"ripple::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
auto t = v & tagMask;
XRPL_ASSERT(
(!t || t == tagMask),
"ripple::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
#endif
}
//------------------------------------------------------------------------------
inline IntrusiveRefCounts::RefCountPair::RefCountPair(
IntrusiveRefCounts::FieldType v) noexcept
: strong{static_cast<CountType>(v & strongMask)}
, weak{static_cast<CountType>((v & weakMask) >> StrongCountNumBits)}
, partialDestroyStartedBit{v & partialDestroyStartedMask}
, partialDestroyFinishedBit{v & partialDestroyFinishedMask}
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair(FieldType) : inputs inside "
"range");
}
inline IntrusiveRefCounts::RefCountPair::RefCountPair(
IntrusiveRefCounts::CountType s,
IntrusiveRefCounts::CountType w) noexcept
: strong{s}, weak{w}
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair(CountType, CountType) : "
"inputs inside range");
}
inline IntrusiveRefCounts::FieldType
IntrusiveRefCounts::RefCountPair::combinedValue() const noexcept
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"inside range");
return (static_cast<IntrusiveRefCounts::FieldType>(weak)
<< IntrusiveRefCounts::StrongCountNumBits) |
static_cast<IntrusiveRefCounts::FieldType>(strong) |
partialDestroyStartedBit | partialDestroyFinishedBit;
}
template <class T>
inline void
partialDestructorFinished(T** o)
{
T& self = **o;
IntrusiveRefCounts::RefCountPair p =
self.refCounts.fetch_or(IntrusiveRefCounts::partialDestroyFinishedMask);
XRPL_ASSERT(
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit &&
!p.strong),
"ripple::partialDestructorFinished : not a weak ref");
if (!p.weak)
{
// There was a weak count before the partial destructor ran (or we would
// have run the full destructor) and now there isn't a weak count. Some
// thread is waiting to run the destructor.
self.refCounts.notify_one();
}
// Set the pointer to null to emphasize that the object shouldn't be used
// after calling this function as it may be destroyed in another thread.
*o = nullptr;
}
//------------------------------------------------------------------------------
} // namespace ripple
#endif
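
An illustrative check (not part of the change set) of the packed refCounts layout described above: a 16-bit strong count in the low bits, a 14-bit weak count above it, and the two partial-destroy flags in the top two bits of a 32-bit field. The constants mirror the ones defined in the header; the example itself is standalone arithmetic.

#include <cstddef>
#include <cstdint>

using Field = std::uint32_t;
constexpr Field one = 1;
constexpr std::size_t strongBits = 16;                             // StrongCountNumBits
constexpr std::size_t weakBits = 14;                               // WeakCountNumBits
constexpr Field strongMask = (one << strongBits) - 1;              // 0x0000FFFF
constexpr Field weakMask = ((one << weakBits) - 1) << strongBits;  // 0x3FFF0000
constexpr Field startedMask = one << 31;                           // partialDestroyStartedMask
constexpr Field finishedMask = one << 30;                          // partialDestroyFinishedMask

// Pack strong = 3, weak = 2 the same way RefCountPair::combinedValue does,
// then unpack and check the fields.
constexpr Field packed = (Field{2} << strongBits) | Field{3};
static_assert((packed & strongMask) == 3);
static_assert(((packed & weakMask) >> strongBits) == 2);
static_assert((packed & (startedMask | finishedMask)) == 0);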

View File

@@ -0,0 +1,135 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_H_INCLUDED
#define RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_H_INCLUDED
#include <memory>
#include <variant>
namespace ripple {
/** A combination of a std::shared_ptr and a std::weak_ptr.
This class is a wrapper around a `std::variant<std::shared_ptr, std::weak_ptr>`.
It is useful for storing pointers in tagged caches using less memory than
storing both pointers directly.
*/
template <class T>
class SharedWeakCachePointer
{
public:
SharedWeakCachePointer() = default;
SharedWeakCachePointer(SharedWeakCachePointer const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer(std::shared_ptr<TT> const& rhs);
SharedWeakCachePointer(SharedWeakCachePointer&& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer(std::shared_ptr<TT>&& rhs);
SharedWeakCachePointer&
operator=(SharedWeakCachePointer const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer&
operator=(std::shared_ptr<TT> const& rhs);
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer&
operator=(std::shared_ptr<TT>&& rhs);
~SharedWeakCachePointer();
/** Return a strong pointer if this is already a strong pointer (i.e. don't
lock the weak pointer. Use the `lock` method if that's what's needed)
*/
std::shared_ptr<T> const&
getStrong() const;
/** Return true if this is a strong pointer and the strong pointer is
seated.
*/
explicit
operator bool() const noexcept;
/** Set the pointer to null, decrement the appropriate ref count, and run
the appropriate release action.
*/
void
reset();
/** If this is a strong pointer, return the raw pointer. Otherwise return
null.
*/
T*
get() const;
/** If this is a strong pointer, return the strong count. Otherwise return 0
*/
std::size_t
use_count() const;
/** Return true if there is no longer a non-zero strong count. */
bool
expired() const;
/** If this is a strong pointer, return the strong pointer. Otherwise
attempt to lock the weak pointer.
*/
std::shared_ptr<T>
lock() const;
/** Return true if this represents a strong pointer. */
bool
isStrong() const;
/** Return true if this represents a weak pointer. */
bool
isWeak() const;
/** If this is a weak pointer, attempt to convert it to a strong pointer.
@return true if successfully converted to a strong pointer (or was
already a strong pointer). Otherwise false.
*/
bool
convertToStrong();
/** If this is a strong pointer, attempt to convert it to a weak pointer.
@return false if the pointer is null. Otherwise return true.
*/
bool
convertToWeak();
private:
std::variant<std::shared_ptr<T>, std::weak_ptr<T>> combo_;
};
} // namespace ripple
#endif

View File

@@ -0,0 +1,192 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2023 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_IPP_INCLUDED
#define RIPPLE_BASICS_SHAREDWEAKCACHEPOINTER_IPP_INCLUDED
#include <xrpl/basics/SharedWeakCachePointer.h>
namespace ripple {
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer const& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
std::shared_ptr<TT> const& rhs)
: combo_{rhs}
{
}
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer&& rhs) = default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>::SharedWeakCachePointer(std::shared_ptr<TT>&& rhs)
: combo_{std::move(rhs)}
{
}
template <class T>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(SharedWeakCachePointer const& rhs) =
default;
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(std::shared_ptr<TT> const& rhs)
{
combo_ = rhs;
return *this;
}
template <class T>
template <class TT>
requires std::convertible_to<TT*, T*>
SharedWeakCachePointer<T>&
SharedWeakCachePointer<T>::operator=(std::shared_ptr<TT>&& rhs)
{
combo_ = std::move(rhs);
return *this;
}
template <class T>
SharedWeakCachePointer<T>::~SharedWeakCachePointer() = default;
// Return a strong pointer if this is already a strong pointer (i.e. don't
// lock the weak pointer. Use the `lock` method if that's what's needed)
template <class T>
std::shared_ptr<T> const&
SharedWeakCachePointer<T>::getStrong() const
{
static std::shared_ptr<T> const empty;
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return *p;
return empty;
}
template <class T>
SharedWeakCachePointer<T>::operator bool() const noexcept
{
return !!std::get_if<std::shared_ptr<T>>(&combo_);
}
template <class T>
void
SharedWeakCachePointer<T>::reset()
{
combo_ = std::shared_ptr<T>{};
}
template <class T>
T*
SharedWeakCachePointer<T>::get() const
{
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return p->get();
return nullptr;
}
template <class T>
std::size_t
SharedWeakCachePointer<T>::use_count() const
{
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return p->use_count();
return 0;
}
template <class T>
bool
SharedWeakCachePointer<T>::expired() const
{
if (auto p = std::get_if<std::weak_ptr<T>>(&combo_))
return p->expired();
return !std::get_if<std::shared_ptr<T>>(&combo_);
}
template <class T>
std::shared_ptr<T>
SharedWeakCachePointer<T>::lock() const
{
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return *p;
if (auto p = std::get_if<std::weak_ptr<T>>(&combo_))
return p->lock();
return {};
}
template <class T>
bool
SharedWeakCachePointer<T>::isStrong() const
{
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
return !!p->get();
return false;
}
template <class T>
bool
SharedWeakCachePointer<T>::isWeak() const
{
return !isStrong();
}
template <class T>
bool
SharedWeakCachePointer<T>::convertToStrong()
{
if (isStrong())
return true;
if (auto p = std::get_if<std::weak_ptr<T>>(&combo_))
{
if (auto s = p->lock())
{
combo_ = std::move(s);
return true;
}
}
return false;
}
template <class T>
bool
SharedWeakCachePointer<T>::convertToWeak()
{
if (isWeak())
return true;
if (auto p = std::get_if<std::shared_ptr<T>>(&combo_))
{
combo_ = std::weak_ptr<T>(*p);
return true;
}
return false;
}
} // namespace ripple
#endif
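
A usage sketch (illustrative only, not from the change set) of the demote-on-sweep pattern this combo pointer enables, which the TaggedCache changes below rely on. The include path mirrors the one added to TaggedCache.h; the function and payload type are hypothetical.

#include <xrpl/basics/SharedWeakCachePointer.ipp>

#include <memory>
#include <string>

void
sweepSlot(ripple::SharedWeakCachePointer<std::string>& slot)
{
    // Demote a strong slot to weak; the string survives only if another
    // shared_ptr to it is held elsewhere.
    if (slot.isStrong())
        slot.convertToWeak();

    // A later reader tries to re-acquire the object and, on a hit, promotes
    // the slot back to strong.
    if (std::shared_ptr<std::string> s = slot.lock())
        slot.convertToStrong();
}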

View File

@@ -20,7 +20,9 @@
#ifndef RIPPLE_BASICS_TAGGEDCACHE_H_INCLUDED
#define RIPPLE_BASICS_TAGGEDCACHE_H_INCLUDED
#include <xrpl/basics/IntrusivePointer.h>
#include <xrpl/basics/Log.h>
#include <xrpl/basics/SharedWeakCachePointer.ipp>
#include <xrpl/basics/UnorderedContainers.h>
#include <xrpl/basics/hardened_hash.h>
#include <xrpl/beast/clock/abstract_clock.h>
@@ -51,6 +53,8 @@ template <
class Key,
class T,
bool IsKeyCache = false,
class SharedWeakUnionPointerType = SharedWeakCachePointer<T>,
class SharedPointerType = std::shared_ptr<T>,
class Hash = hardened_hash<>,
class KeyEqual = std::equal_to<Key>,
class Mutex = std::recursive_mutex>
@@ -61,6 +65,8 @@ public:
using key_type = Key;
using mapped_type = T;
using clock_type = beast::abstract_clock<std::chrono::steady_clock>;
using shared_weak_combo_pointer_type = SharedWeakUnionPointerType;
using shared_pointer_type = SharedPointerType;
public:
TaggedCache(
@@ -70,231 +76,48 @@ public:
clock_type& clock,
beast::Journal journal,
beast::insight::Collector::ptr const& collector =
beast::insight::NullCollector::New())
: m_journal(journal)
, m_clock(clock)
, m_stats(
name,
std::bind(&TaggedCache::collect_metrics, this),
collector)
, m_name(name)
, m_target_size(size)
, m_target_age(expiration)
, m_cache_count(0)
, m_hits(0)
, m_misses(0)
{
}
beast::insight::NullCollector::New());
public:
/** Return the clock associated with the cache. */
clock_type&
clock()
{
return m_clock;
}
clock();
/** Returns the number of items in the container. */
std::size_t
size() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
}
void
setTargetSize(int s)
{
std::lock_guard lock(m_mutex);
m_target_size = s;
if (s > 0)
{
for (auto& partition : m_cache.map())
{
partition.rehash(static_cast<std::size_t>(
(s + (s >> 2)) /
(partition.max_load_factor() * m_cache.partitions()) +
1));
}
}
JLOG(m_journal.debug()) << m_name << " target size set to " << s;
}
clock_type::duration
getTargetAge() const
{
std::lock_guard lock(m_mutex);
return m_target_age;
}
void
setTargetAge(clock_type::duration s)
{
std::lock_guard lock(m_mutex);
m_target_age = s;
JLOG(m_journal.debug())
<< m_name << " target age set to " << m_target_age.count();
}
size() const;
int
getCacheSize() const
{
std::lock_guard lock(m_mutex);
return m_cache_count;
}
getCacheSize() const;
int
getTrackSize() const
{
std::lock_guard lock(m_mutex);
return m_cache.size();
}
getTrackSize() const;
float
getHitRate()
{
std::lock_guard lock(m_mutex);
auto const total = static_cast<float>(m_hits + m_misses);
return m_hits * (100.0f / std::max(1.0f, total));
}
getHitRate();
void
clear()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
m_cache_count = 0;
}
clear();
void
reset()
{
std::lock_guard lock(m_mutex);
m_cache.clear();
m_cache_count = 0;
m_hits = 0;
m_misses = 0;
}
reset();
/** Refresh the last access time on a key if present.
@return `true` If the key was found.
*/
template <class KeyComparable>
bool
touch_if_exists(KeyComparable const& key)
{
std::lock_guard lock(m_mutex);
auto const iter(m_cache.find(key));
if (iter == m_cache.end())
{
++m_stats.misses;
return false;
}
iter->second.touch(m_clock.now());
++m_stats.hits;
return true;
}
touch_if_exists(KeyComparable const& key);
using SweptPointersVector = std::pair<
std::vector<std::shared_ptr<mapped_type>>,
std::vector<std::weak_ptr<mapped_type>>>;
using SweptPointersVector = std::vector<SharedWeakUnionPointerType>;
void
sweep()
{
// Keep references to all the stuff we sweep
// For performance, each worker thread should exit before the swept data
// is destroyed but still within the main cache lock.
std::vector<SweptPointersVector> allStuffToSweep(m_cache.partitions());
clock_type::time_point const now(m_clock.now());
clock_type::time_point when_expire;
auto const start = std::chrono::steady_clock::now();
{
std::lock_guard lock(m_mutex);
if (m_target_size == 0 ||
(static_cast<int>(m_cache.size()) <= m_target_size))
{
when_expire = now - m_target_age;
}
else
{
when_expire =
now - m_target_age * m_target_size / m_cache.size();
clock_type::duration const minimumAge(std::chrono::seconds(1));
if (when_expire > (now - minimumAge))
when_expire = now - minimumAge;
JLOG(m_journal.trace())
<< m_name << " is growing fast " << m_cache.size() << " of "
<< m_target_size << " aging at "
<< (now - when_expire).count() << " of "
<< m_target_age.count();
}
std::vector<std::thread> workers;
workers.reserve(m_cache.partitions());
std::atomic<int> allRemovals = 0;
for (std::size_t p = 0; p < m_cache.partitions(); ++p)
{
workers.push_back(sweepHelper(
when_expire,
now,
m_cache.map()[p],
allStuffToSweep[p],
allRemovals,
lock));
}
for (std::thread& worker : workers)
worker.join();
m_cache_count -= allRemovals;
}
// At this point allStuffToSweep will go out of scope outside the lock
// and decrement the reference count on each strong pointer.
JLOG(m_journal.debug())
<< m_name << " TaggedCache sweep lock duration "
<< std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start)
.count()
<< "ms";
}
sweep();
bool
del(const key_type& key, bool valid)
{
// Remove from cache, if !valid, remove from map too. Returns true if
// removed from cache
std::lock_guard lock(m_mutex);
auto cit = m_cache.find(key);
if (cit == m_cache.end())
return false;
Entry& entry = cit->second;
bool ret = false;
if (entry.isCached())
{
--m_cache_count;
entry.ptr.reset();
ret = true;
}
if (!valid || entry.isExpired())
m_cache.erase(cit);
return ret;
}
del(key_type const& key, bool valid);
public:
/** Replace aliased objects with originals.
Due to concurrency it is possible for two separate objects with
@@ -308,100 +131,23 @@ public:
@return `true` If the key already existed.
*/
public:
template <class R>
bool
canonicalize(
const key_type& key,
std::shared_ptr<T>& data,
std::function<bool(std::shared_ptr<T> const&)>&& replace)
{
// Return canonical value, store if needed, refresh in cache
// Return values: true=we had the data already
std::lock_guard lock(m_mutex);
auto cit = m_cache.find(key);
if (cit == m_cache.end())
{
m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(m_clock.now(), data));
++m_cache_count;
return false;
}
Entry& entry = cit->second;
entry.touch(m_clock.now());
if (entry.isCached())
{
if (replace(entry.ptr))
{
entry.ptr = data;
entry.weak_ptr = data;
}
else
{
data = entry.ptr;
}
return true;
}
auto cachedData = entry.lock();
if (cachedData)
{
if (replace(entry.ptr))
{
entry.ptr = data;
entry.weak_ptr = data;
}
else
{
entry.ptr = cachedData;
data = cachedData;
}
++m_cache_count;
return true;
}
entry.ptr = data;
entry.weak_ptr = data;
++m_cache_count;
return false;
}
key_type const& key,
SharedPointerType& data,
R&& replaceCallback);
bool
canonicalize_replace_cache(
const key_type& key,
std::shared_ptr<T> const& data)
{
return canonicalize(
key,
const_cast<std::shared_ptr<T>&>(data),
[](std::shared_ptr<T> const&) { return true; });
}
key_type const& key,
SharedPointerType const& data);
bool
canonicalize_replace_client(const key_type& key, std::shared_ptr<T>& data)
{
return canonicalize(
key, data, [](std::shared_ptr<T> const&) { return false; });
}
canonicalize_replace_client(key_type const& key, SharedPointerType& data);
std::shared_ptr<T>
fetch(const key_type& key)
{
std::lock_guard<mutex_type> l(m_mutex);
auto ret = initialFetch(key, l);
if (!ret)
++m_misses;
return ret;
}
SharedPointerType
fetch(key_type const& key);
/** Insert the element into the container.
If the key already exists, nothing happens.
@@ -410,26 +156,11 @@ public:
template <class ReturnType = bool>
auto
insert(key_type const& key, T const& value)
-> std::enable_if_t<!IsKeyCache, ReturnType>
{
auto p = std::make_shared<T>(std::cref(value));
return canonicalize_replace_client(key, p);
}
-> std::enable_if_t<!IsKeyCache, ReturnType>;
template <class ReturnType = bool>
auto
insert(key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>
{
std::lock_guard lock(m_mutex);
clock_type::time_point const now(m_clock.now());
auto [it, inserted] = m_cache.emplace(
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(now));
if (!inserted)
it->second.last_access = now;
return inserted;
}
insert(key_type const& key) -> std::enable_if_t<IsKeyCache, ReturnType>;
// VFALCO NOTE It looks like this returns a copy of the data in
// the output parameter 'data'. This could be expensive.
@@ -437,50 +168,18 @@ public:
// simply return an iterator.
//
bool
retrieve(const key_type& key, T& data)
{
// retrieve the value of the stored data
auto entry = fetch(key);
if (!entry)
return false;
data = *entry;
return true;
}
retrieve(key_type const& key, T& data);
mutex_type&
peekMutex()
{
return m_mutex;
}
peekMutex();
std::vector<key_type>
getKeys() const
{
std::vector<key_type> v;
{
std::lock_guard lock(m_mutex);
v.reserve(m_cache.size());
for (auto const& _ : m_cache)
v.push_back(_.first);
}
return v;
}
getKeys() const;
// CachedSLEs functions.
/** Returns the fraction of cache hits. */
double
rate() const
{
std::lock_guard lock(m_mutex);
auto const tot = m_hits + m_misses;
if (tot == 0)
return 0;
return double(m_hits) / tot;
}
rate() const;
/** Fetch an item from the cache.
If the digest was not found, Handler
@@ -488,73 +187,16 @@ public:
std::shared_ptr<SLE const>(void)
*/
template <class Handler>
std::shared_ptr<T>
fetch(key_type const& digest, Handler const& h)
{
{
std::lock_guard l(m_mutex);
if (auto ret = initialFetch(digest, l))
return ret;
}
auto sle = h();
if (!sle)
return {};
std::lock_guard l(m_mutex);
++m_misses;
auto const [it, inserted] =
m_cache.emplace(digest, Entry(m_clock.now(), std::move(sle)));
if (!inserted)
it->second.touch(m_clock.now());
return it->second.ptr;
}
SharedPointerType
fetch(key_type const& digest, Handler const& h);
// End CachedSLEs functions.
private:
std::shared_ptr<T>
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l)
{
auto cit = m_cache.find(key);
if (cit == m_cache.end())
return {};
Entry& entry = cit->second;
if (entry.isCached())
{
++m_hits;
entry.touch(m_clock.now());
return entry.ptr;
}
entry.ptr = entry.lock();
if (entry.isCached())
{
// independent of cache size, so not counted as a hit
++m_cache_count;
entry.touch(m_clock.now());
return entry.ptr;
}
m_cache.erase(cit);
return {};
}
SharedPointerType
initialFetch(key_type const& key, std::lock_guard<mutex_type> const& l);
void
collect_metrics()
{
m_stats.size.set(getCacheSize());
{
beast::insight::Gauge::value_type hit_rate(0);
{
std::lock_guard lock(m_mutex);
auto const total(m_hits + m_misses);
if (total != 0)
hit_rate = (m_hits * 100) / total;
}
m_stats.hit_rate.set(hit_rate);
}
}
collect_metrics();
private:
struct Stats
@@ -600,36 +242,37 @@ private:
class ValueEntry
{
public:
std::shared_ptr<mapped_type> ptr;
std::weak_ptr<mapped_type> weak_ptr;
shared_weak_combo_pointer_type ptr;
clock_type::time_point last_access;
ValueEntry(
clock_type::time_point const& last_access_,
std::shared_ptr<mapped_type> const& ptr_)
: ptr(ptr_), weak_ptr(ptr_), last_access(last_access_)
shared_pointer_type const& ptr_)
: ptr(ptr_), last_access(last_access_)
{
}
bool
isWeak() const
{
return ptr == nullptr;
if (!ptr)
return true;
return ptr.isWeak();
}
bool
isCached() const
{
return ptr != nullptr;
return ptr && ptr.isStrong();
}
bool
isExpired() const
{
return weak_ptr.expired();
return ptr.expired();
}
std::shared_ptr<mapped_type>
SharedPointerType
lock()
{
return weak_ptr.lock();
return ptr.lock();
}
void
touch(clock_type::time_point const& now)
@@ -658,72 +301,7 @@ private:
typename KeyValueCacheType::map_type& partition,
SweptPointersVector& stuffToSweep,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
int mapRemovals = 0;
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
stuffToSweep.first.reserve(partition.size());
stuffToSweep.second.reserve(partition.size());
{
auto cit = partition.begin();
while (cit != partition.end())
{
if (cit->second.isWeak())
{
// weak
if (cit->second.isExpired())
{
stuffToSweep.second.push_back(
std::move(cit->second.weak_ptr));
++mapRemovals;
cit = partition.erase(cit);
}
else
{
++cit;
}
}
else if (cit->second.last_access <= when_expire)
{
// strong, expired
++cacheRemovals;
if (cit->second.ptr.use_count() == 1)
{
stuffToSweep.first.push_back(
std::move(cit->second.ptr));
++mapRemovals;
cit = partition.erase(cit);
}
else
{
// remains weakly cached
cit->second.ptr.reset();
++cit;
}
}
else
{
// strong, not expired
++cit;
}
}
}
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
});
}
std::lock_guard<std::recursive_mutex> const&);
[[nodiscard]] std::thread
sweepHelper(
@@ -732,45 +310,7 @@ private:
typename KeyOnlyCacheType::map_type& partition,
SweptPointersVector&,
std::atomic<int>& allRemovals,
std::lock_guard<std::recursive_mutex> const&)
{
return std::thread([&, this]() {
int cacheRemovals = 0;
int mapRemovals = 0;
// Keep references to all the stuff we sweep
// so that we can destroy them outside the lock.
{
auto cit = partition.begin();
while (cit != partition.end())
{
if (cit->second.last_access > now)
{
cit->second.last_access = now;
++cit;
}
else if (cit->second.last_access <= when_expire)
{
cit = partition.erase(cit);
}
else
{
++cit;
}
}
}
if (mapRemovals || cacheRemovals)
{
JLOG(m_journal.debug())
<< "TaggedCache partition sweep " << m_name
<< ": cache = " << partition.size() << "-" << cacheRemovals
<< ", map-=" << mapRemovals;
}
allRemovals += cacheRemovals;
});
};
std::lock_guard<std::recursive_mutex> const&);
beast::Journal m_journal;
clock_type& m_clock;
@@ -782,10 +322,10 @@ private:
std::string m_name;
// Desired number of cache entries (0 = ignore)
int m_target_size;
int const m_target_size;
// Desired maximum cache age
clock_type::duration m_target_age;
clock_type::duration const m_target_age;
// Number of items cached
int m_cache_count;
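
A call-site sketch (illustrative only, not from the change set) showing that canonicalize_replace_client keeps its old call pattern with the default std::shared_ptr SharedPointerType. It assumes an already-constructed cache (the constructor still takes a name, target size, expiration, clock, and journal); the payload type and function are hypothetical.

#include <xrpl/basics/TaggedCache.h>
#include <xrpl/basics/base_uint.h>

#include <memory>

struct MyObject
{
    int data = 0;
};

using MyCache = ripple::TaggedCache<ripple::uint256, MyObject>;

std::shared_ptr<MyObject>
getCanonical(
    MyCache& cache,
    ripple::uint256 const& key,
    std::shared_ptr<MyObject> candidate)
{
    // If an equivalent object is already cached under `key`, `candidate` is
    // replaced with the canonical instance; otherwise `candidate` is stored.
    cache.canonicalize_replace_client(key, candidate);
    return candidate;
}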

File diff suppressed because it is too large

View File

@@ -374,7 +374,7 @@ public:
}
base_uint&
operator^=(const base_uint& b)
operator^=(base_uint const& b)
{
for (int i = 0; i < WIDTH; i++)
data_[i] ^= b.data_[i];
@@ -383,7 +383,7 @@ public:
}
base_uint&
operator&=(const base_uint& b)
operator&=(base_uint const& b)
{
for (int i = 0; i < WIDTH; i++)
data_[i] &= b.data_[i];
@@ -392,7 +392,7 @@ public:
}
base_uint&
operator|=(const base_uint& b)
operator|=(base_uint const& b)
{
for (int i = 0; i < WIDTH; i++)
data_[i] |= b.data_[i];
@@ -415,11 +415,11 @@ public:
return *this;
}
const base_uint
base_uint const
operator++(int)
{
// postfix operator
const base_uint ret = *this;
base_uint const ret = *this;
++(*this);
return ret;
@@ -441,11 +441,11 @@ public:
return *this;
}
const base_uint
base_uint const
operator--(int)
{
// postfix operator
const base_uint ret = *this;
base_uint const ret = *this;
--(*this);
return ret;
@@ -466,7 +466,7 @@ public:
}
base_uint&
operator+=(const base_uint& b)
operator+=(base_uint const& b)
{
std::uint64_t carry = 0;
@@ -511,7 +511,7 @@ public:
}
[[nodiscard]] constexpr bool
parseHex(const char* str)
parseHex(char const* str)
{
return parseHex(std::string_view{str});
}

View File

@@ -43,7 +43,7 @@ struct less
using result_type = bool;
constexpr bool
operator()(const T& left, const T& right) const
operator()(T const& left, T const& right) const
{
return std::less<T>()(left, right);
}
@@ -55,7 +55,7 @@ struct equal_to
using result_type = bool;
constexpr bool
operator()(const T& left, const T& right) const
operator()(T const& left, T const& right) const
{
return std::equal_to<T>()(left, right);
}

View File

@@ -52,7 +52,7 @@ template <
typename Value,
typename Hash,
typename Pred = std::equal_to<Key>,
typename Alloc = std::allocator<std::pair<const Key, Value>>>
typename Alloc = std::allocator<std::pair<Key const, Value>>>
class partitioned_unordered_map
{
std::size_t partitions_;

View File

@@ -76,13 +76,13 @@ public:
}
bool
operator<(const tagged_integer& rhs) const noexcept
operator<(tagged_integer const& rhs) const noexcept
{
return m_value < rhs.m_value;
}
bool
operator==(const tagged_integer& rhs) const noexcept
operator==(tagged_integer const& rhs) const noexcept
{
return m_value == rhs.m_value;
}
@@ -144,14 +144,14 @@ public:
}
tagged_integer&
operator<<=(const tagged_integer& rhs) noexcept
operator<<=(tagged_integer const& rhs) noexcept
{
m_value <<= rhs.m_value;
return *this;
}
tagged_integer&
operator>>=(const tagged_integer& rhs) noexcept
operator>>=(tagged_integer const& rhs) noexcept
{
m_value >>= rhs.m_value;
return *this;

View File

@@ -30,7 +30,7 @@ namespace beast {
template <class Hasher = xxhasher>
struct uhash
{
explicit uhash() = default;
uhash() = default;
using result_type = typename Hasher::result_type;

View File

@@ -215,7 +215,7 @@ namespace std {
template <>
struct hash<::beast::IP::Endpoint>
{
explicit hash() = default;
hash() = default;
std::size_t
operator()(::beast::IP::Endpoint const& endpoint) const
@@ -230,7 +230,7 @@ namespace boost {
template <>
struct hash<::beast::IP::Endpoint>
{
explicit hash() = default;
hash() = default;
std::size_t
operator()(::beast::IP::Endpoint const& endpoint) const

View File

@@ -37,9 +37,9 @@ class temp_dir
public:
#if !GENERATING_DOCS
temp_dir(const temp_dir&) = delete;
temp_dir(temp_dir const&) = delete;
temp_dir&
operator=(const temp_dir&) = delete;
operator=(temp_dir const&) = delete;
#endif
/// Construct a temporary directory.

View File

@@ -39,7 +39,7 @@ class Reader
{
public:
using Char = char;
using Location = const Char*;
using Location = Char const*;
/** \brief Constructs a Reader allowing all features
* for parsing.
@@ -64,7 +64,7 @@ public:
* error occurred.
*/
bool
parse(const char* beginDoc, const char* endDoc, Value& root);
parse(char const* beginDoc, char const* endDoc, Value& root);
/// \brief Parse from input stream.
/// \see Json::operator>>(std::istream&, Json::Value&).
@@ -133,7 +133,7 @@ private:
using Errors = std::deque<ErrorInfo>;
bool
expectToken(TokenType type, Token& token, const char* message);
expectToken(TokenType type, Token& token, char const* message);
bool
readToken(Token& token);
void

View File

@@ -20,11 +20,13 @@
#ifndef RIPPLE_JSON_JSON_VALUE_H_INCLUDED
#define RIPPLE_JSON_JSON_VALUE_H_INCLUDED
#include <xrpl/basics/Number.h>
#include <xrpl/json/json_forwards.h>
#include <cstring>
#include <map>
#include <string>
#include <utility>
#include <vector>
/** \brief JSON (JavaScript Object Notation).
@@ -61,24 +63,24 @@ enum ValueType {
class StaticString
{
public:
constexpr explicit StaticString(const char* czstring) : str_(czstring)
constexpr explicit StaticString(char const* czstring) : str_(czstring)
{
}
constexpr
operator const char*() const
operator char const*() const
{
return str_;
}
constexpr const char*
constexpr char const*
c_str() const
{
return str_;
}
private:
const char* str_;
char const* str_;
};
inline bool
@@ -156,10 +158,10 @@ public:
using Int = Json::Int;
using ArrayIndex = UInt;
static const Value null;
static const Int minInt;
static const Int maxInt;
static const UInt maxUInt;
static Value const null;
static Int const minInt;
static Int const maxInt;
static UInt const maxUInt;
private:
class CZString
@@ -171,24 +173,24 @@ private:
duplicateOnCopy
};
CZString(int index);
CZString(const char* cstr, DuplicationPolicy allocate);
CZString(const CZString& other);
CZString(char const* cstr, DuplicationPolicy allocate);
CZString(CZString const& other);
~CZString();
CZString&
operator=(const CZString& other) = delete;
operator=(CZString const& other) = delete;
bool
operator<(const CZString& other) const;
operator<(CZString const& other) const;
bool
operator==(const CZString& other) const;
operator==(CZString const& other) const;
int
index() const;
const char*
char const*
c_str() const;
bool
isStaticString() const;
private:
const char* cstr_;
char const* cstr_;
int index_;
};
@@ -215,7 +217,8 @@ public:
Value(Int value);
Value(UInt value);
Value(double value);
Value(const char* value);
Value(char const* value);
Value(ripple::Number const& value);
/** \brief Constructs a value from a static string.
* Like other value string constructor but do not duplicate the string for
@@ -227,10 +230,10 @@ public:
* Json::Value aValue( StaticString("some text") );
* \endcode
*/
Value(const StaticString& value);
Value(StaticString const& value);
Value(std::string const& value);
Value(bool value);
Value(const Value& other);
Value(Value const& other);
~Value();
Value&
@@ -247,7 +250,7 @@ public:
ValueType
type() const;
const char*
char const*
asCString() const;
/** Returns the unquoted string value. */
std::string
@@ -317,12 +320,12 @@ public:
/// Access an array element (zero based index )
/// (You may need to say 'value[0u]' to get your compiler to distinguish
/// this from the operator[] which takes a string.)
const Value&
Value const&
operator[](UInt index) const;
/// If the array contains at least index+1 elements, returns the element
/// value, otherwise returns defaultValue.
Value
get(UInt index, const Value& defaultValue) const;
get(UInt index, Value const& defaultValue) const;
/// Return true if index < size().
bool
isValidIndex(UInt index) const;
@@ -330,25 +333,25 @@ public:
///
/// Equivalent to jsonvalue[jsonvalue.size()] = value;
Value&
append(const Value& value);
append(Value const& value);
Value&
append(Value&& value);
/// Access an object value by name, create a null member if it does not
/// exist.
Value&
operator[](const char* key);
operator[](char const* key);
/// Access an object value by name, returns null if there is no member with
/// that name.
const Value&
operator[](const char* key) const;
Value const&
operator[](char const* key) const;
/// Access an object value by name, create a null member if it does not
/// exist.
Value&
operator[](std::string const& key);
/// Access an object value by name, returns null if there is no member with
/// that name.
const Value&
Value const&
operator[](std::string const& key) const;
/** \brief Access an object value by name, create a null member if it does
not exist.
@@ -364,14 +367,16 @@ public:
* \endcode
*/
Value&
operator[](const StaticString& key);
operator[](StaticString const& key);
Value const&
operator[](StaticString const& key) const;
/// Return the member named key if it exist, defaultValue otherwise.
Value
get(const char* key, const Value& defaultValue) const;
get(char const* key, Value const& defaultValue) const;
/// Return the member named key if it exist, defaultValue otherwise.
Value
get(std::string const& key, const Value& defaultValue) const;
get(std::string const& key, Value const& defaultValue) const;
/// \brief Remove and return the named member.
///
@@ -380,14 +385,14 @@ public:
/// \pre type() is objectValue or nullValue
/// \post type() is unchanged
Value
removeMember(const char* key);
removeMember(char const* key);
/// Same as removeMember(const char*)
Value
removeMember(std::string const& key);
/// Return true if the object has a member named key.
bool
isMember(const char* key) const;
isMember(char const* key) const;
/// Return true if the object has a member named key.
bool
isMember(std::string const& key) const;
@@ -414,13 +419,13 @@ public:
end();
friend bool
operator==(const Value&, const Value&);
operator==(Value const&, Value const&);
friend bool
operator<(const Value&, const Value&);
operator<(Value const&, Value const&);
private:
Value&
resolveReference(const char* key, bool isStatic);
resolveReference(char const* key, bool isStatic);
private:
union ValueHolder
@@ -436,32 +441,38 @@ private:
int allocated_ : 1; // Notes: if declared as bool, bitfield is useless.
};
inline Value
to_json(ripple::Number const& number)
{
return to_string(number);
}
bool
operator==(const Value&, const Value&);
operator==(Value const&, Value const&);
inline bool
operator!=(const Value& x, const Value& y)
operator!=(Value const& x, Value const& y)
{
return !(x == y);
}
bool
operator<(const Value&, const Value&);
operator<(Value const&, Value const&);
inline bool
operator<=(const Value& x, const Value& y)
operator<=(Value const& x, Value const& y)
{
return !(y < x);
}
inline bool
operator>(const Value& x, const Value& y)
operator>(Value const& x, Value const& y)
{
return y < x;
}
inline bool
operator>=(const Value& x, const Value& y)
operator>=(Value const& x, Value const& y)
{
return !(x < y);
}
@@ -482,11 +493,11 @@ public:
virtual ~ValueAllocator() = default;
virtual char*
makeMemberName(const char* memberName) = 0;
makeMemberName(char const* memberName) = 0;
virtual void
releaseMemberName(char* memberName) = 0;
virtual char*
duplicateStringValue(const char* value, unsigned int length = unknown) = 0;
duplicateStringValue(char const* value, unsigned int length = unknown) = 0;
virtual void
releaseStringValue(char* value) = 0;
};
@@ -503,16 +514,16 @@ public:
ValueIteratorBase();
explicit ValueIteratorBase(const Value::ObjectValues::iterator& current);
explicit ValueIteratorBase(Value::ObjectValues::iterator const& current);
bool
operator==(const SelfType& other) const
operator==(SelfType const& other) const
{
return isEqual(other);
}
bool
operator!=(const SelfType& other) const
operator!=(SelfType const& other) const
{
return !isEqual(other);
}
@@ -528,7 +539,7 @@ public:
/// Return the member name of the referenced Value. "" if it is not an
/// objectValue.
const char*
char const*
memberName() const;
protected:
@@ -542,13 +553,13 @@ protected:
decrement();
difference_type
computeDistance(const SelfType& other) const;
computeDistance(SelfType const& other) const;
bool
isEqual(const SelfType& other) const;
isEqual(SelfType const& other) const;
void
copy(const SelfType& other);
copy(SelfType const& other);
private:
Value::ObjectValues::iterator current_;
@@ -566,8 +577,8 @@ class ValueConstIterator : public ValueIteratorBase
public:
using size_t = unsigned int;
using difference_type = int;
using reference = const Value&;
using pointer = const Value*;
using reference = Value const&;
using pointer = Value const*;
using SelfType = ValueConstIterator;
ValueConstIterator() = default;
@@ -575,11 +586,11 @@ public:
private:
/*! \internal Use by Value to create an iterator.
*/
explicit ValueConstIterator(const Value::ObjectValues::iterator& current);
explicit ValueConstIterator(Value::ObjectValues::iterator const& current);
public:
SelfType&
operator=(const ValueIteratorBase& other);
operator=(ValueIteratorBase const& other);
SelfType
operator++(int)
@@ -632,17 +643,17 @@ public:
using SelfType = ValueIterator;
ValueIterator() = default;
ValueIterator(const ValueConstIterator& other);
ValueIterator(const ValueIterator& other);
ValueIterator(ValueConstIterator const& other);
ValueIterator(ValueIterator const& other);
private:
/*! \internal Use by Value to create an iterator.
*/
explicit ValueIterator(const Value::ObjectValues::iterator& current);
explicit ValueIterator(Value::ObjectValues::iterator const& current);
public:
SelfType&
operator=(const SelfType& other);
operator=(SelfType const& other);
SelfType
operator++(int)

View File

@@ -39,7 +39,7 @@ public:
{
}
virtual std::string
write(const Value& root) = 0;
write(Value const& root) = 0;
};
/** \brief Outputs a Value in <a HREF="http://www.json.org">JSON</a> format
@@ -60,11 +60,11 @@ public:
public: // overridden from Writer
std::string
write(const Value& root) override;
write(Value const& root) override;
private:
void
writeValue(const Value& value);
writeValue(Value const& value);
std::string document_;
};
@@ -101,15 +101,15 @@ public: // overridden from Writer
* JSON document that represents the root value.
*/
std::string
write(const Value& root) override;
write(Value const& root) override;
private:
void
writeValue(const Value& value);
writeValue(Value const& value);
void
writeArrayValue(const Value& value);
writeArrayValue(Value const& value);
bool
isMultineArray(const Value& value);
isMultineArray(Value const& value);
void
pushValue(std::string const& value);
void
@@ -168,15 +168,15 @@ public:
* return a value.
*/
void
write(std::ostream& out, const Value& root);
write(std::ostream& out, Value const& root);
private:
void
writeValue(const Value& value);
writeValue(Value const& value);
void
writeArrayValue(const Value& value);
writeArrayValue(Value const& value);
bool
isMultineArray(const Value& value);
isMultineArray(Value const& value);
void
pushValue(std::string const& value);
void
@@ -207,12 +207,12 @@ valueToString(double value);
std::string
valueToString(bool value);
std::string
valueToQuotedString(const char* value);
valueToQuotedString(char const* value);
/// \brief Output using the StyledStreamWriter.
/// \see Json::operator>>()
std::ostream&
operator<<(std::ostream&, const Value& root);
operator<<(std::ostream&, Value const& root);
//------------------------------------------------------------------------------

View File

@@ -37,7 +37,7 @@ pretty(Value const&);
/** Output using the StyledStreamWriter. @see Json::operator>>(). */
std::ostream&
operator<<(std::ostream&, const Value& root);
operator<<(std::ostream&, Value const& root);
} // namespace Json

View File

@@ -48,14 +48,6 @@ class STObject;
class STAmount;
class Rules;
/** Calculate AMM account ID.
*/
AccountID
ammAccountID(
std::uint16_t prefix,
uint256 const& parentHash,
uint256 const& ammID);
/** Calculate Liquidity Provider Token (LPT) Currency.
*/
Currency

View File

@@ -149,7 +149,7 @@ namespace std {
template <>
struct hash<ripple::AccountID> : ripple::AccountID::hasher
{
explicit hash() = default;
hash() = default;
};
} // namespace std

View File

@@ -20,6 +20,7 @@
#ifndef RIPPLE_PROTOCOL_ASSET_H_INCLUDED
#define RIPPLE_PROTOCOL_ASSET_H_INCLUDED
#include <xrpl/basics/Number.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/Issue.h>
#include <xrpl/protocol/MPTIssue.h>
@@ -27,6 +28,7 @@
namespace ripple {
class Asset;
class STAmount;
template <typename TIss>
concept ValidIssueType =
@@ -92,6 +94,9 @@ public:
void
setJson(Json::Value& jv) const;
STAmount
operator()(Number const&) const;
bool
native() const
{
@@ -114,6 +119,14 @@ public:
equalTokens(Asset const& lhs, Asset const& rhs);
};
inline Json::Value
to_json(Asset const& asset)
{
Json::Value jv;
asset.setJson(jv);
return jv;
}
template <ValidIssueType TIss>
constexpr bool
Asset::holds() const
@@ -219,9 +232,6 @@ validJSONAsset(Json::Value const& jv);
Asset
assetFromJson(Json::Value const& jv);
Json::Value
to_json(Asset const& asset);
} // namespace ripple
#endif // RIPPLE_PROTOCOL_ASSET_H_INCLUDED

View File

@@ -0,0 +1,37 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2024 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <xrpl/protocol/HashPrefix.h>
#include <xrpl/protocol/STVector256.h>
#include <xrpl/protocol/Serializer.h>
namespace ripple {
inline void
serializeBatch(
Serializer& msg,
std::uint32_t const& flags,
std::vector<uint256> const& txids)
{
msg.add32(HashPrefix::batch);
msg.add32(flags);
msg.add32(std::uint32_t(txids.size()));
for (auto const& txid : txids)
msg.addBitString(txid);
}
} // namespace ripple
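A minimal sketch of how this helper fits together with the batch-signing hooks added to STTx later in this diff. Serializer::getSHA512Half() and STObject::getFlags() are existing rippled APIs; the exact call site shown here is an assumption, not part of the change set.
Serializer msg;
serializeBatch(
    msg,
    outerBatchTx.getFlags(),                  // flags of the outer Batch transaction
    outerBatchTx.getBatchTransactionIDs());   // inner transaction IDs (declared in STTx further down)
uint256 const digest = msg.getSHA512Half();
// Each BatchSigner signature would then be verified against this digest
// (see checkBatchSign / checkBatchSingleSign in STTx below).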

View File

@@ -21,6 +21,7 @@
#define RIPPLE_PROTOCOL_BOOK_H_INCLUDED
#include <xrpl/basics/CountedObject.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/Issue.h>
#include <boost/utility/base_from_member.hpp>
@@ -36,12 +37,17 @@ class Book final : public CountedObject<Book>
public:
Issue in;
Issue out;
std::optional<uint256> domain;
Book()
{
}
Book(Issue const& in_, Issue const& out_) : in(in_), out(out_)
Book(
Issue const& in_,
Issue const& out_,
std::optional<uint256> const& domain_)
: in(in_), out(out_), domain(domain_)
{
}
};
@@ -61,6 +67,8 @@ hash_append(Hasher& h, Book const& b)
{
using beast::hash_append;
hash_append(h, b.in, b.out);
if (b.domain)
hash_append(h, *(b.domain));
}
Book
@@ -71,7 +79,8 @@ reversed(Book const& book);
[[nodiscard]] inline constexpr bool
operator==(Book const& lhs, Book const& rhs)
{
return (lhs.in == rhs.in) && (lhs.out == rhs.out);
return (lhs.in == rhs.in) && (lhs.out == rhs.out) &&
(lhs.domain == rhs.domain);
}
/** @} */
@@ -82,7 +91,18 @@ operator<=>(Book const& lhs, Book const& rhs)
{
if (auto const c{lhs.in <=> rhs.in}; c != 0)
return c;
return lhs.out <=> rhs.out;
if (auto const c{lhs.out <=> rhs.out}; c != 0)
return c;
// Manually compare optionals
if (lhs.domain && rhs.domain)
return *lhs.domain <=> *rhs.domain; // Compare values if both exist
if (!lhs.domain && rhs.domain)
return std::weak_ordering::less; // Empty is considered less
if (lhs.domain && !rhs.domain)
return std::weak_ordering::greater; // Non-empty is greater
return std::weak_ordering::equivalent; // Both are empty
}
/** @} */
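A brief illustration of the domain-aware equality and ordering added above; `eur`, `usd`, and `domainID` are placeholder Issue/uint256 values, not identifiers from this diff, and <cassert> is assumed.
Book plain{eur, usd, std::nullopt};    // book with no domain
Book domained{eur, usd, domainID};     // domain-restricted book
assert(plain != domained);             // the domain participates in equality
assert((plain <=> domained) == std::weak_ordering::less);  // an empty domain sorts first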
@@ -104,7 +124,7 @@ private:
boost::base_from_member<std::hash<ripple::AccountID>, 1>;
public:
explicit hash() = default;
hash() = default;
using value_type = std::size_t;
using argument_type = ripple::Issue;
@@ -126,12 +146,14 @@ template <>
struct hash<ripple::Book>
{
private:
using hasher = std::hash<ripple::Issue>;
using issue_hasher = std::hash<ripple::Issue>;
using uint256_hasher = ripple::uint256::hasher;
hasher m_hasher;
issue_hasher m_issue_hasher;
uint256_hasher m_uint256_hasher;
public:
explicit hash() = default;
hash() = default;
using value_type = std::size_t;
using argument_type = ripple::Book;
@@ -139,8 +161,12 @@ public:
value_type
operator()(argument_type const& value) const
{
value_type result(m_hasher(value.in));
boost::hash_combine(result, m_hasher(value.out));
value_type result(m_issue_hasher(value.in));
boost::hash_combine(result, m_issue_hasher(value.out));
if (value.domain)
boost::hash_combine(result, m_uint256_hasher(*value.domain));
return result;
}
};
@@ -154,7 +180,7 @@ namespace boost {
template <>
struct hash<ripple::Issue> : std::hash<ripple::Issue>
{
explicit hash() = default;
hash() = default;
using Base = std::hash<ripple::Issue>;
// VFALCO NOTE broken in vs2012
@@ -164,7 +190,7 @@ struct hash<ripple::Issue> : std::hash<ripple::Issue>
template <>
struct hash<ripple::Book> : std::hash<ripple::Book>
{
explicit hash() = default;
hash() = default;
using Base = std::hash<ripple::Book>;
// VFALCO NOTE broken in vs2012

View File

@@ -120,7 +120,7 @@ enum error_code_i {
rpcSRC_ACT_MALFORMED = 65,
rpcSRC_ACT_MISSING = 66,
rpcSRC_ACT_NOT_FOUND = 67,
// unused 68,
rpcDELEGATE_ACT_NOT_FOUND = 68,
rpcSRC_CUR_MALFORMED = 69,
rpcSRC_ISR_MALFORMED = 70,
rpcSTREAM_MALFORMED = 71,
@@ -154,7 +154,10 @@ enum error_code_i {
// Simulate
rpcTX_SIGNED = 96,
rpcLAST = rpcTX_SIGNED // rpcLAST should always equal the last code.
// Pathfinding
rpcDOMAIN_MALFORMED = 97,
rpcLAST = rpcDOMAIN_MALFORMED // rpcLAST should always equal the last code.
};
/** Codes returned in the `warnings` array of certain RPC commands.

View File

@@ -55,6 +55,18 @@
* `VoteBehavior::DefaultYes`. The communication process is beyond
* the scope of these instructions.
*
* 5) A feature marked as Obsolete can mean either:
* 1) It is in the ledger (marked as Supported::yes) and it is on its way to
* become Retired
* 2) The feature is not in the ledger (has always been marked as
* Supported::no) and the code to support it has been removed
*
* If we want to discontinue a feature that we've never fully supported and
* the feature has never been enabled, we should remove all the related
* code, and mark the feature as "abandoned". To do this:
*
* 1) Open features.macro, move the feature to the abandoned section and
* change the macro to XRPL_ABANDON
*
* When a feature has been enabled for several years, the conditional code
* may be removed, and the feature "retired". To retire a feature:
@@ -88,10 +100,13 @@ namespace detail {
#undef XRPL_FIX
#pragma push_macro("XRPL_RETIRE")
#undef XRPL_RETIRE
#pragma push_macro("XRPL_ABANDON")
#undef XRPL_ABANDON
#define XRPL_FEATURE(name, supported, vote) +1
#define XRPL_FIX(name, supported, vote) +1
#define XRPL_RETIRE(name) +1
#define XRPL_ABANDON(name) +1
// This value SHOULD be equal to the number of amendments registered in
// Feature.cpp. Because it's only used to reserve storage, and determine how
@@ -108,6 +123,8 @@ static constexpr std::size_t numFeatures =
#pragma pop_macro("XRPL_FIX")
#undef XRPL_FEATURE
#pragma pop_macro("XRPL_FEATURE")
#undef XRPL_ABANDON
#pragma pop_macro("XRPL_ABANDON")
/** Amendments that this server supports and the default voting behavior.
Whether they are enabled depends on the Rules defined in the validated
@@ -349,10 +366,13 @@ foreachFeature(FeatureBitset bs, F&& f)
#undef XRPL_FIX
#pragma push_macro("XRPL_RETIRE")
#undef XRPL_RETIRE
#pragma push_macro("XRPL_ABANDON")
#undef XRPL_ABANDON
#define XRPL_FEATURE(name, supported, vote) extern uint256 const feature##name;
#define XRPL_FIX(name, supported, vote) extern uint256 const fix##name;
#define XRPL_RETIRE(name)
#define XRPL_ABANDON(name)
#include <xrpl/protocol/detail/features.macro>
@@ -362,6 +382,8 @@ foreachFeature(FeatureBitset bs, F&& f)
#pragma pop_macro("XRPL_FIX")
#undef XRPL_FEATURE
#pragma pop_macro("XRPL_FEATURE")
#undef XRPL_ABANDON
#pragma pop_macro("XRPL_ABANDON")
} // namespace ripple
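To summarize the two expansion contexts shown in the hunks above, both taken directly from this diff:
// Behaviour of the new macro:
#define XRPL_ABANDON(name) +1   // 1) while computing numFeatures: the amendment is still counted
#undef  XRPL_ABANDON
#define XRPL_ABANDON(name)      // 2) while declaring feature constants: expands to nothing
// Net effect for the entry added later in features.macro:
//     XRPL_ABANDON(OwnerPaysFee)
// reserves a slot in numFeatures but emits no featureOwnerPaysFee uint256.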

View File

@@ -336,7 +336,7 @@ public:
// Output Fees as just their numeric value.
template <class Char, class Traits, class UnitTag, class T>
std::basic_ostream<Char, Traits>&
operator<<(std::basic_ostream<Char, Traits>& os, const TaggedFee<UnitTag, T>& q)
operator<<(std::basic_ostream<Char, Traits>& os, TaggedFee<UnitTag, T> const& q)
{
return os << q.value();
}
@@ -451,8 +451,8 @@ mulDivU(Source1 value, Dest mul, Source2 div)
}
using namespace boost::multiprecision;
uint128_t product;
using uint128 = boost::multiprecision::uint128_t;
uint128 product;
product = multiply(
product,
static_cast<std::uint64_t>(value.value()),

View File

@@ -24,6 +24,8 @@
namespace ripple {
constexpr std::uint32_t MICRO_DROPS_PER_DROP{1'000'000};
/** Reflects the fee settings for a particular ledger.
The fees are always the same for any transactions applied
@@ -34,6 +36,10 @@ struct Fees
XRPAmount base{0}; // Reference tx cost (drops)
XRPAmount reserve{0}; // Reserve base (drops)
XRPAmount increment{0}; // Reserve increment (drops)
std::uint32_t extensionComputeLimit{
0}; // Extension compute limit (instructions)
std::uint32_t extensionSizeLimit{0}; // Extension size limit (bytes)
std::uint32_t gasPrice{0}; // price of WASM gas (micro-drops)
explicit Fees() = default;
Fees(Fees const&) = default;

View File

@@ -88,6 +88,9 @@ enum class HashPrefix : std::uint32_t {
/** Credentials signature */
credential = detail::make_hash_prefix('C', 'R', 'D'),
/** Batch */
batch = detail::make_hash_prefix('B', 'C', 'H'),
};
template <class Hasher>

View File

@@ -28,7 +28,6 @@
#include <cstdint>
#include <string>
#include <utility>
namespace ripple {
@@ -99,6 +98,12 @@ public:
static IOUAmount
minPositiveAmount();
friend std::ostream&
operator<<(std::ostream& os, IOUAmount const& x)
{
return os << to_string(x);
}
};
inline IOUAmount::IOUAmount(beast::Zero)

View File

@@ -230,6 +230,12 @@ page(Keylet const& root, std::uint64_t index = 0) noexcept
Keylet
escrow(AccountID const& src, std::uint32_t seq) noexcept;
inline Keylet
escrow(uint256 const& key) noexcept
{
return {ltESCROW, key};
}
/** A PaymentChannel */
Keylet
payChan(AccountID const& src, AccountID const& dst, std::uint32_t seq) noexcept;
@@ -279,6 +285,10 @@ amm(Asset const& issue1, Asset const& issue2) noexcept;
Keylet
amm(uint256 const& amm) noexcept;
/** A keylet for Delegate object */
Keylet
delegate(AccountID const& account, AccountID const& authorizedAccount) noexcept;
Keylet
bridge(STXChainBridge const& bridge, STXChainBridge::ChainType chainType);
@@ -330,6 +340,15 @@ mptoken(uint256 const& mptokenKey)
Keylet
mptoken(uint256 const& issuanceKey, AccountID const& holder) noexcept;
Keylet
vault(AccountID const& owner, std::uint32_t seq) noexcept;
inline Keylet
vault(uint256 const& vaultKey)
{
return {ltVAULT, vaultKey};
}
Keylet
permissionedDomain(AccountID const& account, std::uint32_t seq) noexcept;

View File

@@ -132,7 +132,7 @@ enum LedgerSpecificFlags {
lsfNoFreeze = 0x00200000, // True, cannot freeze ripple states
lsfGlobalFreeze = 0x00400000, // True, all assets frozen
lsfDefaultRipple =
0x00800000, // True, trust lines allow rippling by default
0x00800000, // True, incoming trust lines allow rippling by default
lsfDepositAuth = 0x01000000, // True, all deposits require authorization
/* // reserved for Hooks amendment
lsfTshCollect = 0x02000000, // True, allow TSH collect-calls to acc hooks
@@ -145,13 +145,15 @@ enum LedgerSpecificFlags {
0x10000000, // True, reject new paychans
lsfDisallowIncomingTrustline =
0x20000000, // True, reject new trustlines (only if no issued assets)
// 0x40000000 is available
lsfAllowTrustLineLocking =
0x40000000, // True, enable trustline locking
lsfAllowTrustLineClawback =
0x80000000, // True, enable clawback
// ltOFFER
lsfPassive = 0x00010000,
lsfSell = 0x00020000, // True, offer was placed as a sell.
lsfHybrid = 0x00040000, // True, offer is hybrid.
// ltRIPPLE_STATE
lsfLowReserve = 0x00010000, // True, if entry counts toward reserve.
@@ -191,6 +193,9 @@ enum LedgerSpecificFlags {
// ltCREDENTIAL
lsfAccepted = 0x00010000,
// ltVAULT
lsfVaultPrivate = 0x00010000,
};
//------------------------------------------------------------------------------

View File

@@ -24,15 +24,12 @@
#include <xrpl/basics/contract.h>
#include <xrpl/basics/safe_cast.h>
#include <xrpl/beast/utility/Zero.h>
#include <xrpl/json/json_value.h>
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/operators.hpp>
#include <cstdint>
#include <optional>
#include <string>
#include <type_traits>
namespace ripple {
@@ -152,11 +149,12 @@ mulRatio(
bool roundUp)
{
using namespace boost::multiprecision;
using int128 = boost::multiprecision::int128_t;
if (!den)
Throw<std::runtime_error>("division by zero");
int128_t const amt128(amt.value());
int128 const amt128(amt.value());
auto const neg = amt.value() < 0;
auto const m = amt128 * num;
auto r = m / den;

View File

@@ -42,8 +42,11 @@ public:
AccountID const&
getIssuer() const;
MPTID const&
getMptID() const;
constexpr MPTID const&
getMptID() const
{
return mptID_;
}
std::string
getText() const;

View File

@@ -80,7 +80,7 @@ struct MultiApiJson
}
void
set(const char* key, auto const& v)
set(char const* key, auto const& v)
requires std::constructible_from<Json::Value, decltype(v)>
{
for (auto& a : this->val)
@@ -91,7 +91,7 @@ struct MultiApiJson
enum IsMemberResult : int { none = 0, some, all };
[[nodiscard]] IsMemberResult
isMember(const char* key) const
isMember(char const* key) const
{
int count = 0;
for (auto& a : this->val)

View File

@@ -28,6 +28,8 @@
namespace ripple {
namespace RPC {
/**
Adds common synthetic fields to transaction-related JSON responses
@@ -40,6 +42,7 @@ insertNFTSyntheticInJson(
TxMeta const&);
/** @} */
} // namespace RPC
} // namespace ripple
#endif

View File

@@ -0,0 +1,97 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_PROTOCOL_PERMISSION_H_INCLUDED
#define RIPPLE_PROTOCOL_PERMISSION_H_INCLUDED
#include <xrpl/protocol/TxFormats.h>
#include <optional>
#include <string>
#include <unordered_map>
#include <unordered_set>
namespace ripple {
/**
* We have both transaction type permissions and granular type permissions.
* Since we will reuse the TransactionFormats to parse the Transaction
* Permissions, only the GranularPermissionType is defined here. To prevent
* conflicts with TxType, the GranularPermissionType is always set to a value
* greater than the maximum value of uint16.
*/
enum GranularPermissionType : std::uint32_t {
#pragma push_macro("PERMISSION")
#undef PERMISSION
#define PERMISSION(type, txType, value) type = value,
#include <xrpl/protocol/detail/permissions.macro>
#undef PERMISSION
#pragma pop_macro("PERMISSION")
};
enum Delegation { delegatable, notDelegatable };
class Permission
{
private:
Permission();
std::unordered_map<std::uint16_t, Delegation> delegatableTx_;
std::unordered_map<std::string, GranularPermissionType>
granularPermissionMap_;
std::unordered_map<GranularPermissionType, std::string> granularNameMap_;
std::unordered_map<GranularPermissionType, TxType> granularTxTypeMap_;
public:
static Permission const&
getInstance();
Permission(Permission const&) = delete;
Permission&
operator=(Permission const&) = delete;
std::optional<std::uint32_t>
getGranularValue(std::string const& name) const;
std::optional<std::string>
getGranularName(GranularPermissionType const& value) const;
std::optional<TxType>
getGranularTxType(GranularPermissionType const& gpType) const;
bool
isDelegatable(std::uint32_t const& permissionValue) const;
// for tx level permission, permission value is equal to tx type plus one
uint32_t
txToPermissionType(TxType const& type) const;
// tx type value is permission value minus one
TxType
permissionToTxType(uint32_t const& value) const;
};
} // namespace ripple
#endif
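A small worked example of the numbering scheme described in the comment above. It assumes ttPAYMENT has the value 0 (true in the current TxFormats, but an assumption here) and takes the PaymentMint value from the permissions.macro shown later in this diff.
auto const& perm = Permission::getInstance();
// Transaction-level permission: value == TxType + 1, so Payment -> 1.
std::uint32_t const p = perm.txToPermissionType(ttPAYMENT);   // 1
TxType const t = perm.permissionToTxType(p);                  // ttPAYMENT
// Granular permissions sit above the uint16 range to avoid clashing with TxType.
static_assert(PaymentMint == 65545 && PaymentMint > 0xFFFF);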

View File

@@ -116,6 +116,20 @@ std::size_t constexpr maxMPTokenMetadataLength = 1024;
/** The maximum amount of MPTokenIssuance */
std::uint64_t constexpr maxMPTokenAmount = 0x7FFF'FFFF'FFFF'FFFFull;
/** The maximum length of Data payload */
std::size_t constexpr maxDataPayloadLength = 256;
/** Vault withdrawal policies */
std::uint8_t constexpr vaultStrategyFirstComeFirstServe = 1;
/** Maximum recursion depth for vault shares being put as an asset inside
* another vault; counted from 0 */
std::uint8_t constexpr maxAssetCheckDepth = 5;
/** The maximum length of a Data field in Escrow object that can be updated by
* Wasm code */
std::size_t constexpr maxWasmDataLength = 4 * 1024;
/** A ledger index. */
using LedgerIndex = std::uint32_t;
@@ -155,6 +169,13 @@ std::size_t constexpr maxPriceScale = 20;
*/
std::size_t constexpr maxTrim = 25;
/** The maximum number of delegate permissions an account can grant
*/
std::size_t constexpr permissionMaxSize = 10;
/** The maximum number of transactions that can be in a batch. */
std::size_t constexpr maxBatchTxCount = 8;
} // namespace ripple
#endif

View File

@@ -113,8 +113,8 @@ public:
// have lower unsigned integer representations.
using value_type = std::uint64_t;
static const int minTickSize = 3;
static const int maxTickSize = 16;
static int const minTickSize = 3;
static int const maxTickSize = 16;
private:
// This has the same representation as STAmount, see the comment on the

View File

@@ -28,6 +28,9 @@
namespace ripple {
bool
isFeatureEnabled(uint256 const& feature);
class DigestAwareReadView;
/** Rules controlling protocol behavior. */

View File

@@ -25,7 +25,6 @@
#include <cstdint>
#include <map>
#include <utility>
namespace ripple {
@@ -182,22 +181,22 @@ public:
private_access_tag_t,
SerializedTypeID tid,
int fv,
const char* fn,
char const* fn,
int meta = sMD_Default,
IsSigning signing = IsSigning::yes);
explicit SField(private_access_tag_t, int fc);
static const SField&
static SField const&
getField(int fieldCode);
static const SField&
static SField const&
getField(std::string const& fieldName);
static const SField&
static SField const&
getField(int type, int value)
{
return getField(field_code(type, value));
}
static const SField&
static SField const&
getField(SerializedTypeID type, int value)
{
return getField(field_code(type, value));
@@ -284,19 +283,19 @@ public:
}
bool
operator==(const SField& f) const
operator==(SField const& f) const
{
return fieldCode == f.fieldCode;
}
bool
operator!=(const SField& f) const
operator!=(SField const& f) const
{
return fieldCode != f.fieldCode;
}
static int
compare(const SField& f1, const SField& f2);
compare(SField const& f1, SField const& f2);
static std::map<int, SField const*> const&
getKnownCodeToField()

View File

@@ -58,7 +58,7 @@ public:
add(Serializer& s) const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;

View File

@@ -62,20 +62,20 @@ private:
public:
using value_type = STAmount;
static const int cMinOffset = -96;
static const int cMaxOffset = 80;
static int const cMinOffset = -96;
static int const cMaxOffset = 80;
// Maximum native value supported by the code
static const std::uint64_t cMinValue = 1000000000000000ull;
static const std::uint64_t cMaxValue = 9999999999999999ull;
static const std::uint64_t cMaxNative = 9000000000000000000ull;
static std::uint64_t const cMinValue = 1000000000000000ull;
static std::uint64_t const cMaxValue = 9999999999999999ull;
static std::uint64_t const cMaxNative = 9000000000000000000ull;
// Max native value on network.
static const std::uint64_t cMaxNativeN = 100000000000000000ull;
static const std::uint64_t cIssuedCurrency = 0x8000000000000000ull;
static const std::uint64_t cPositive = 0x4000000000000000ull;
static const std::uint64_t cMPToken = 0x2000000000000000ull;
static const std::uint64_t cValueMask = ~(cPositive | cMPToken);
static std::uint64_t const cMaxNativeN = 100000000000000000ull;
static std::uint64_t const cIssuedCurrency = 0x8000000000000000ull;
static std::uint64_t const cPositive = 0x4000000000000000ull;
static std::uint64_t const cMPToken = 0x2000000000000000ull;
static std::uint64_t const cValueMask = ~(cPositive | cMPToken);
static std::uint64_t const uRateOne;
@@ -153,6 +153,12 @@ public:
template <AssetType A>
STAmount(A const& asset, int mantissa, int exponent = 0);
template <AssetType A>
STAmount(A const& asset, Number const& number)
: STAmount(asset, number.mantissa(), number.exponent())
{
}
// Legacy support for new-style amounts
STAmount(IOUAmount const& amount, Issue const& issue);
STAmount(XRPAmount const& amount);
@@ -230,6 +236,9 @@ public:
STAmount&
operator=(XRPAmount const& amount);
STAmount&
operator=(Number const&);
//--------------------------------------------------------------------------
//
// Modification
@@ -268,13 +277,13 @@ public:
std::string
getText() const override;
Json::Value getJson(JsonOptions) const override;
Json::Value getJson(JsonOptions = JsonOptions::none) const override;
void
add(Serializer& s) const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;
@@ -417,7 +426,7 @@ STAmount
amountFromQuality(std::uint64_t rate);
STAmount
amountFromString(Asset const& issue, std::string const& amount);
amountFromString(Asset const& asset, std::string const& amount);
STAmount
amountFromJson(SField const& name, Json::Value const& v);
@@ -541,6 +550,16 @@ STAmount::operator=(XRPAmount const& amount)
return *this;
}
inline STAmount&
STAmount::operator=(Number const& number)
{
mIsNegative = number.mantissa() < 0;
mValue = mIsNegative ? -number.mantissa() : number.mantissa();
mOffset = number.exponent();
canonicalize();
return *this;
}
inline void
STAmount::negate()
{
@@ -684,6 +703,12 @@ isXRP(STAmount const& amount)
return amount.native();
}
bool
canAdd(STAmount const& amt1, STAmount const& amt2);
bool
canSubtract(STAmount const& amt1, STAmount const& amt2);
// Since `canonicalize` does not have access to a ledger, this is needed to put
// the low-level routine stAmountCanonicalize on an amendment switch. Only
// transactions need to use this switchover. Outside of a transaction it's safe

View File

@@ -128,13 +128,13 @@ public:
add(Serializer& s) const override;
void
sort(bool (*compare)(const STObject& o1, const STObject& o2));
sort(bool (*compare)(STObject const& o1, STObject const& o2));
bool
operator==(const STArray& s) const;
operator==(STArray const& s) const;
bool
operator!=(const STArray& s) const;
operator!=(STArray const& s) const;
iterator
erase(iterator pos);
@@ -152,7 +152,7 @@ public:
getSType() const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;
@@ -275,13 +275,13 @@ STArray::swap(STArray& a) noexcept
}
inline bool
STArray::operator==(const STArray& s) const
STArray::operator==(STArray const& s) const
{
return v_ == s.v_;
}
inline bool
STArray::operator!=(const STArray& s) const
STArray::operator!=(STArray const& s) const
{
return v_ != s.v_;
}

View File

@@ -92,6 +92,16 @@ struct JsonOptions
}
};
template <typename T>
requires requires(T const& t) {
{ t.getJson(JsonOptions::none) } -> std::convertible_to<Json::Value>;
}
Json::Value
to_json(T const& t)
{
return t.getJson(JsonOptions::none);
}
namespace detail {
class STVar;
}
@@ -129,16 +139,16 @@ class STBase
public:
virtual ~STBase() = default;
STBase();
STBase(const STBase&) = default;
STBase(STBase const&) = default;
STBase&
operator=(const STBase& t);
operator=(STBase const& t);
explicit STBase(SField const& n);
bool
operator==(const STBase& t) const;
operator==(STBase const& t) const;
bool
operator!=(const STBase& t) const;
operator!=(STBase const& t) const;
template <class D>
D&
@@ -157,7 +167,7 @@ public:
virtual std::string
getText() const;
virtual Json::Value getJson(JsonOptions /*options*/) const;
virtual Json::Value getJson(JsonOptions = JsonOptions::none) const;
virtual void
add(Serializer& s) const;
@@ -197,7 +207,7 @@ private:
//------------------------------------------------------------------------------
std::ostream&
operator<<(std::ostream& out, const STBase& t);
operator<<(std::ostream& out, STBase const& t);
template <class D>
D&

View File

@@ -45,8 +45,8 @@ public:
STBitString() = default;
STBitString(SField const& n);
STBitString(const value_type& v);
STBitString(SField const& n, const value_type& v);
STBitString(value_type const& v);
STBitString(SField const& n, value_type const& v);
STBitString(SerialIter& sit, SField const& name);
SerializedTypeID
@@ -56,7 +56,7 @@ public:
getText() const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
void
add(Serializer& s) const override;
@@ -93,12 +93,12 @@ inline STBitString<Bits>::STBitString(SField const& n) : STBase(n)
}
template <int Bits>
inline STBitString<Bits>::STBitString(const value_type& v) : value_(v)
inline STBitString<Bits>::STBitString(value_type const& v) : value_(v)
{
}
template <int Bits>
inline STBitString<Bits>::STBitString(SField const& n, const value_type& v)
inline STBitString<Bits>::STBitString(SField const& n, value_type const& v)
: STBase(n), value_(v)
{
}
@@ -160,9 +160,9 @@ STBitString<Bits>::getText() const
template <int Bits>
bool
STBitString<Bits>::isEquivalent(const STBase& t) const
STBitString<Bits>::isEquivalent(STBase const& t) const
{
const STBitString* v = dynamic_cast<const STBitString*>(&t);
STBitString const* v = dynamic_cast<STBitString const*>(&t);
return v && (value_ == v->value_);
}

View File

@@ -63,7 +63,7 @@ public:
add(Serializer& s) const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;

View File

@@ -65,7 +65,7 @@ public:
add(Serializer& s) const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;

View File

@@ -54,7 +54,7 @@ public:
isDefault() const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
STInteger&
operator=(value_type const& v);
@@ -127,9 +127,9 @@ STInteger<Integer>::isDefault() const
template <typename Integer>
inline bool
STInteger<Integer>::isEquivalent(const STBase& t) const
STInteger<Integer>::isEquivalent(STBase const& t) const
{
const STInteger* v = dynamic_cast<const STInteger*>(&t);
STInteger const* v = dynamic_cast<STInteger const*>(&t);
return v && (value_ == v->value_);
}

View File

@@ -37,6 +37,7 @@ public:
using value_type = Asset;
STIssue() = default;
STIssue(STIssue const& rhs) = default;
explicit STIssue(SerialIter& sit, SField const& name);
@@ -45,6 +46,9 @@ public:
explicit STIssue(SField const& name);
STIssue&
operator=(STIssue const& rhs) = default;
template <ValidIssueType TIss>
TIss const&
get() const;
@@ -71,7 +75,7 @@ public:
add(Serializer& s) const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;

View File

@@ -35,7 +35,7 @@ class STLedgerEntry final : public STObject, public CountedObject<STLedgerEntry>
public:
using pointer = std::shared_ptr<STLedgerEntry>;
using ref = const std::shared_ptr<STLedgerEntry>&;
using ref = std::shared_ptr<STLedgerEntry> const&;
/** Create an empty object with the given key and type. */
explicit STLedgerEntry(Keylet const& k);

View File

@@ -63,6 +63,13 @@ public:
void
setValue(Number const& v);
STNumber&
operator=(Number const& rhs)
{
setValue(rhs);
return *this;
}
bool
isEquivalent(STBase const& t) const override;
bool
@@ -83,6 +90,19 @@ private:
std::ostream&
operator<<(std::ostream& out, STNumber const& rhs);
struct NumberParts
{
std::uint64_t mantissa = 0;
int exponent = 0;
bool negative = false;
};
NumberParts
partsFromString(std::string const& number);
STNumber
numberFromJson(SField const& field, Json::Value const& value);
} // namespace ripple
#endif
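Since only the interface of the new parsing helpers appears here, the following is one plausible reading of NumberParts/partsFromString; the concrete values are an assumption about the semantics, not taken from the implementation.
NumberParts const parts = partsFromString("-12.34");
// plausibly: parts.mantissa == 1234, parts.exponent == -2, parts.negative == true
// numberFromJson(field, value) would wrap the same parsing into an STNumber for a given SField.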

View File

@@ -99,8 +99,8 @@ public:
STObject&
operator=(STObject&& other);
STObject(const SOTemplate& type, SField const& name);
STObject(const SOTemplate& type, SerialIter& sit, SField const& name);
STObject(SOTemplate const& type, SField const& name);
STObject(SOTemplate const& type, SerialIter& sit, SField const& name);
STObject(SerialIter& sit, SField const& name, int depth = 0);
STObject(SerialIter&& sit, SField const& name);
explicit STObject(SField const& name);
@@ -121,7 +121,7 @@ public:
reserve(std::size_t n);
void
applyTemplate(const SOTemplate& type);
applyTemplate(SOTemplate const& type);
void
applyTemplateFromSField(SField const&);
@@ -130,7 +130,7 @@ public:
isFree() const;
void
set(const SOTemplate&);
set(SOTemplate const&);
bool
set(SerialIter& u, int depth = 0);
@@ -139,7 +139,7 @@ public:
getSType() const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;
@@ -154,8 +154,7 @@ public:
getText() const override;
// TODO(tom): options should be an enum.
Json::Value
getJson(JsonOptions options) const override;
Json::Value getJson(JsonOptions = JsonOptions::none) const override;
void
addWithoutSigningFields(Serializer& s) const;
@@ -183,13 +182,13 @@ public:
uint256
getSigningHash(HashPrefix prefix) const;
const STBase&
STBase const&
peekAtIndex(int offset) const;
STBase&
getIndex(int offset);
const STBase*
STBase const*
peekAtPIndex(int offset) const;
STBase*
@@ -201,13 +200,13 @@ public:
SField const&
getFieldSType(int index) const;
const STBase&
STBase const&
peekAtField(SField const& field) const;
STBase&
getField(SField const& field);
const STBase*
STBase const*
peekAtPField(SField const& field) const;
STBase*
@@ -241,11 +240,11 @@ public:
getFieldAmount(SField const& field) const;
STPathSet const&
getFieldPathSet(SField const& field) const;
const STVector256&
STVector256 const&
getFieldV256(SField const& field) const;
const STArray&
STArray const&
getFieldArray(SField const& field) const;
const STCurrency&
STCurrency const&
getFieldCurrency(SField const& field) const;
STNumber const&
getFieldNumber(SField const& field) const;
@@ -409,12 +408,12 @@ public:
delField(int index);
bool
hasMatchingEntry(const STBase&);
hasMatchingEntry(STBase const&);
bool
operator==(const STObject& o) const;
operator==(STObject const& o) const;
bool
operator!=(const STObject& o) const;
operator!=(STObject const& o) const;
class FieldErr;
@@ -484,9 +483,19 @@ private:
template <class T>
class STObject::Proxy
{
protected:
public:
using value_type = typename T::value_type;
value_type
value() const;
value_type
operator*() const;
T const*
operator->() const;
protected:
STObject* st_;
SOEStyle style_;
TypedField<T> const* f_;
@@ -495,9 +504,6 @@ protected:
Proxy(STObject* st, TypedField<T> const* f);
value_type
value() const;
T const*
find() const;
@@ -512,7 +518,7 @@ template <typename U>
concept IsArithmetic = std::is_arithmetic_v<U> || std::is_same_v<U, STAmount>;
template <class T>
class STObject::ValueProxy : private Proxy<T>
class STObject::ValueProxy : public Proxy<T>
{
private:
using value_type = typename T::value_type;
@@ -538,6 +544,13 @@ public:
operator value_type() const;
template <typename U>
friend bool
operator==(U const& lhs, STObject::ValueProxy<T> const& rhs)
{
return rhs.value() == lhs;
}
private:
friend class STObject;
@@ -545,7 +558,7 @@ private:
};
template <class T>
class STObject::OptionalProxy : private Proxy<T>
class STObject::OptionalProxy : public Proxy<T>
{
private:
using value_type = typename T::value_type;
@@ -565,15 +578,6 @@ public:
explicit
operator bool() const noexcept;
/** Return the contained value
Throws:
STObject::FieldErr if !engaged()
*/
value_type
operator*() const;
operator optional_type() const;
/** Explicit conversion to std::optional */
@@ -717,6 +721,20 @@ STObject::Proxy<T>::value() const -> value_type
return value_type{};
}
template <class T>
auto
STObject::Proxy<T>::operator*() const -> value_type
{
return this->value();
}
template <class T>
T const*
STObject::Proxy<T>::operator->() const
{
return this->find();
}
template <class T>
inline T const*
STObject::Proxy<T>::find() const
@@ -792,13 +810,6 @@ STObject::OptionalProxy<T>::operator bool() const noexcept
return engaged();
}
template <class T>
auto
STObject::OptionalProxy<T>::operator*() const -> value_type
{
return this->value();
}
template <class T>
STObject::OptionalProxy<T>::operator typename STObject::OptionalProxy<
T>::optional_type() const
@@ -970,7 +981,7 @@ STObject::getCount() const
return v_.size();
}
inline const STBase&
inline STBase const&
STObject::peekAtIndex(int offset) const
{
return v_[offset].get();
@@ -982,7 +993,7 @@ STObject::getIndex(int offset)
return v_[offset].get();
}
inline const STBase*
inline STBase const*
STObject::peekAtPIndex(int offset) const
{
return &v_[offset].get();
@@ -1117,7 +1128,7 @@ STObject::setFieldH160(SField const& field, base_uint<160, Tag> const& v)
}
inline bool
STObject::operator!=(const STObject& o) const
STObject::operator!=(STObject const& o) const
{
return !(*this == o);
}
@@ -1126,7 +1137,7 @@ template <typename T, typename V>
V
STObject::getFieldByValue(SField const& field) const
{
const STBase* rf = peekAtPField(field);
STBase const* rf = peekAtPField(field);
if (!rf)
throwFieldNotFound(field);
@@ -1136,7 +1147,7 @@ STObject::getFieldByValue(SField const& field) const
if (id == STI_NOTPRESENT)
return V(); // optional field not present
const T* cf = dynamic_cast<const T*>(rf);
T const* cf = dynamic_cast<T const*>(rf);
if (!cf)
Throw<std::runtime_error>("Wrong field type");
@@ -1153,7 +1164,7 @@ template <typename T, typename V>
V const&
STObject::getFieldByConstRef(SField const& field, V const& empty) const
{
const STBase* rf = peekAtPField(field);
STBase const* rf = peekAtPField(field);
if (!rf)
throwFieldNotFound(field);
@@ -1163,7 +1174,7 @@ STObject::getFieldByConstRef(SField const& field, V const& empty) const
if (id == STI_NOTPRESENT)
return empty; // optional field not present
const T* cf = dynamic_cast<const T*>(rf);
T const* cf = dynamic_cast<T const*>(rf);
if (!cf)
Throw<std::runtime_error>("Wrong field type");
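For orientation, a short sketch of how the Proxy/ValueProxy/OptionalProxy accessors touched above are used by callers; `tx` is assumed to be a mutable STObject, the fields are examples, and useDomain is a placeholder.
// ValueProxy: operator[] on a required field reads (or assigns) through the proxy.
std::uint32_t const seq = tx[sfSequence];
// OptionalProxy: ~sfField yields an optional-like proxy; operator bool, operator*
// and operator-> now come from the shared Proxy base made public above.
if (auto const domain = tx[~sfDomainID])
    useDomain(*domain);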

View File

@@ -106,10 +106,10 @@ public:
getIssuerID() const;
bool
operator==(const STPathElement& t) const;
operator==(STPathElement const& t) const;
bool
operator!=(const STPathElement& t) const;
operator!=(STPathElement const& t) const;
private:
static std::size_t
@@ -164,7 +164,7 @@ public:
STPathElement&
operator[](int i);
const STPathElement&
STPathElement const&
operator[](int i) const;
void
@@ -196,7 +196,7 @@ public:
assembleAdd(STPath const& base, STPathElement const& tail);
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;
@@ -375,7 +375,7 @@ STPathElement::getIssuerID() const
}
inline bool
STPathElement::operator==(const STPathElement& t) const
STPathElement::operator==(STPathElement const& t) const
{
return (mType & typeAccount) == (t.mType & typeAccount) &&
hash_value_ == t.hash_value_ && mAccountID == t.mAccountID &&
@@ -383,7 +383,7 @@ STPathElement::operator==(const STPathElement& t) const
}
inline bool
STPathElement::operator!=(const STPathElement& t) const
STPathElement::operator!=(STPathElement const& t) const
{
return !operator==(t);
}
@@ -455,7 +455,7 @@ STPath::operator[](int i)
return mPath[i];
}
inline const STPathElement&
inline STPathElement const&
STPath::operator[](int i) const
{
return mPath[i];

View File

@@ -102,6 +102,10 @@ public:
SeqProxy
getSeqProxy() const;
/** Returns the first non-zero value of (Sequence, TicketSequence). */
std::uint32_t
getSeqValue() const;
boost::container::flat_set<AccountID>
getMentionedAccounts() const;
@@ -121,10 +125,16 @@ public:
@return `true` if valid signature. If invalid, the error message string.
*/
enum class RequireFullyCanonicalSig : bool { no, yes };
Expected<void, std::string>
checkSign(RequireFullyCanonicalSig requireCanonicalSig, Rules const& rules)
const;
Expected<void, std::string>
checkBatchSign(
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules) const;
// SQL Functions with metadata.
static std::string const&
getMetaSQLInsertReplaceHeader();
@@ -140,6 +150,9 @@ public:
char status,
std::string const& escapedMetaData) const;
std::vector<uint256>
getBatchTransactionIDs() const;
private:
Expected<void, std::string>
checkSingleSign(RequireFullyCanonicalSig requireCanonicalSig) const;
@@ -149,12 +162,24 @@ private:
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules) const;
Expected<void, std::string>
checkBatchSingleSign(
STObject const& batchSigner,
RequireFullyCanonicalSig requireCanonicalSig) const;
Expected<void, std::string>
checkBatchMultiSign(
STObject const& batchSigner,
RequireFullyCanonicalSig requireCanonicalSig,
Rules const& rules) const;
STBase*
copy(std::size_t n, void* buf) const override;
STBase*
move(std::size_t n, void* buf) override;
friend class detail::STVar;
mutable std::vector<uint256> batch_txn_ids_;
};
bool

View File

@@ -50,7 +50,7 @@ public:
Json::Value getJson(JsonOptions) const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;
@@ -62,7 +62,7 @@ public:
operator=(std::vector<uint256>&& v);
void
setValue(const STVector256& v);
setValue(STVector256 const& v);
/** Retrieve a copy of the vector we contain */
explicit
@@ -153,7 +153,7 @@ STVector256::operator=(std::vector<uint256>&& v)
}
inline void
STVector256::setValue(const STVector256& v)
STVector256::setValue(STVector256 const& v)
{
mValue = v.mValue;
}

View File

@@ -107,7 +107,7 @@ public:
add(Serializer& s) const override;
bool
isEquivalent(const STBase& t) const override;
isEquivalent(STBase const& t) const override;
bool
isDefault() const override;

View File

@@ -139,9 +139,9 @@ public:
int
addRaw(Slice slice);
int
addRaw(const void* ptr, int len);
addRaw(void const* ptr, int len);
int
addRaw(const Serializer& s);
addRaw(Serializer const& s);
int
addVL(Blob const& vector);
@@ -151,7 +151,7 @@ public:
int
addVL(Iter begin, Iter end, int len);
int
addVL(const void* ptr, int len);
addVL(void const* ptr, int len);
// disassemble functions
bool
@@ -161,7 +161,7 @@ public:
bool
getInteger(Integer& number, int offset)
{
static const auto bytes = sizeof(Integer);
static auto const bytes = sizeof(Integer);
if ((offset + bytes) > mData.size())
return false;
number = 0;
@@ -220,7 +220,7 @@ public:
{
return mData.size();
}
const void*
void const*
getDataPtr() const
{
return mData.data();
@@ -238,7 +238,7 @@ public:
std::string
getString() const
{
return std::string(static_cast<const char*>(getDataPtr()), size());
return std::string(static_cast<char const*>(getDataPtr()), size());
}
void
erase()
@@ -296,12 +296,12 @@ public:
return v != mData;
}
bool
operator==(const Serializer& v) const
operator==(Serializer const& v) const
{
return v.mData == mData;
}
bool
operator!=(const Serializer& v) const
operator!=(Serializer const& v) const
{
return v.mData != mData;
}

View File

@@ -139,8 +139,10 @@ enum TEMcodes : TERUnderlyingType {
temARRAY_EMPTY,
temARRAY_TOO_LARGE,
temBAD_TRANSFER_FEE,
temINVALID_INNER_BATCH,
temBAD_WASM,
};
//------------------------------------------------------------------------------
@@ -185,6 +187,8 @@ enum TEFcodes : TERUnderlyingType {
tefNO_TICKET,
tefNFTOKEN_IS_NOT_TRANSFERABLE,
tefINVALID_LEDGER_FIX_TYPE,
tefNO_WASM,
tefWASM_FIELD_NOT_INCLUDED,
};
//------------------------------------------------------------------------------
@@ -225,6 +229,8 @@ enum TERcodes : TERUnderlyingType {
terQUEUED, // Transaction is being held in TxQ until fee drops
terPRE_TICKET, // Ticket is not yet in ledger but might be on its way
terNO_AMM, // AMM doesn't exist for the asset pair
terADDRESS_COLLISION, // Failed to allocate AccountID when trying to
// create a pseudo-account
};
//------------------------------------------------------------------------------
@@ -265,6 +271,17 @@ enum TECcodes : TERUnderlyingType {
// Otherwise, treated as terRETRY.
//
// DO NOT CHANGE THESE NUMBERS: They appear in ledger meta data.
//
// Note:
// tecNO_ENTRY is often used interchangeably with tecOBJECT_NOT_FOUND.
// While there does not seem to be a clear rule which to use when, the
// following guidance will help to keep errors consistent with the
// majority of (but not all) transaction types:
// - tecNO_ENTRY : cannot find the primary ledger object on which the
// transaction is being attempted
// - tecOBJECT_NOT_FOUND : cannot find the additional object(s) needed to
// complete the transaction
tecCLAIM = 100,
tecPATH_PARTIAL = 101,
tecUNFUNDED_ADD = 102, // Unused legacy code
@@ -344,6 +361,12 @@ enum TECcodes : TERUnderlyingType {
tecARRAY_TOO_LARGE = 191,
tecLOCKED = 192,
tecBAD_CREDENTIALS = 193,
tecWRONG_ASSET = 194,
tecLIMIT_EXCEEDED = 195,
tecPSEUDO_ACCOUNT = 196,
tecPRECISION_LOSS = 197,
tecNO_DELEGATE_PERMISSION = 198,
tecWASM_REJECTED = 199,
};
//------------------------------------------------------------------------------
@@ -629,37 +652,37 @@ using TER = TERSubset<CanCvtToTER>;
//------------------------------------------------------------------------------
inline bool
isTelLocal(TER x)
isTelLocal(TER x) noexcept
{
return ((x) >= telLOCAL_ERROR && (x) < temMALFORMED);
return (x >= telLOCAL_ERROR && x < temMALFORMED);
}
inline bool
isTemMalformed(TER x)
isTemMalformed(TER x) noexcept
{
return ((x) >= temMALFORMED && (x) < tefFAILURE);
return (x >= temMALFORMED && x < tefFAILURE);
}
inline bool
isTefFailure(TER x)
isTefFailure(TER x) noexcept
{
return ((x) >= tefFAILURE && (x) < terRETRY);
return (x >= tefFAILURE && x < terRETRY);
}
inline bool
isTerRetry(TER x)
isTerRetry(TER x) noexcept
{
return ((x) >= terRETRY && (x) < tesSUCCESS);
return (x >= terRETRY && x < tesSUCCESS);
}
inline bool
isTesSuccess(TER x)
isTesSuccess(TER x) noexcept
{
return ((x) == tesSUCCESS);
return (x == tesSUCCESS);
}
inline bool
isTecClaim(TER x)
isTecClaim(TER x) noexcept
{
return ((x) >= tecCLAIM);
}

View File

@@ -58,7 +58,8 @@ namespace ripple {
// clang-format off
// Universal Transaction flags:
constexpr std::uint32_t tfFullyCanonicalSig = 0x80000000;
constexpr std::uint32_t tfUniversal = tfFullyCanonicalSig;
constexpr std::uint32_t tfInnerBatchTxn = 0x40000000;
constexpr std::uint32_t tfUniversal = tfFullyCanonicalSig | tfInnerBatchTxn;
constexpr std::uint32_t tfUniversalMask = ~tfUniversal;
// AccountSet flags:
@@ -91,14 +92,16 @@ constexpr std::uint32_t asfDisallowIncomingCheck = 13;
constexpr std::uint32_t asfDisallowIncomingPayChan = 14;
constexpr std::uint32_t asfDisallowIncomingTrustline = 15;
constexpr std::uint32_t asfAllowTrustLineClawback = 16;
constexpr std::uint32_t asfAllowTrustLineLocking = 17;
// OfferCreate flags:
constexpr std::uint32_t tfPassive = 0x00010000;
constexpr std::uint32_t tfImmediateOrCancel = 0x00020000;
constexpr std::uint32_t tfFillOrKill = 0x00040000;
constexpr std::uint32_t tfSell = 0x00080000;
constexpr std::uint32_t tfHybrid = 0x00100000;
constexpr std::uint32_t tfOfferCreateMask =
~(tfUniversal | tfPassive | tfImmediateOrCancel | tfFillOrKill | tfSell);
~(tfUniversal | tfPassive | tfImmediateOrCancel | tfFillOrKill | tfSell | tfHybrid);
// Payment flags:
constexpr std::uint32_t tfNoRippleDirect = 0x00010000;
@@ -119,6 +122,7 @@ constexpr std::uint32_t tfClearDeepFreeze = 0x00800000;
constexpr std::uint32_t tfTrustSetMask =
~(tfUniversal | tfSetfAuth | tfSetNoRipple | tfClearNoRipple | tfSetFreeze |
tfClearFreeze | tfSetDeepFreeze | tfClearDeepFreeze);
constexpr std::uint32_t tfTrustSetPermissionMask = ~(tfUniversal | tfSetfAuth | tfSetFreeze | tfClearFreeze);
// EnableAmendment flags:
constexpr std::uint32_t tfGotMajority = 0x00010000;
@@ -155,6 +159,7 @@ constexpr std::uint32_t const tfMPTokenAuthorizeMask = ~(tfUniversal | tfMPTUna
constexpr std::uint32_t const tfMPTLock = 0x00000001;
constexpr std::uint32_t const tfMPTUnlock = 0x00000002;
constexpr std::uint32_t const tfMPTokenIssuanceSetMask = ~(tfUniversal | tfMPTLock | tfMPTUnlock);
constexpr std::uint32_t const tfMPTokenIssuanceSetPermissionMask = ~(tfUniversal | tfMPTLock | tfMPTUnlock);
// MPTokenIssuanceDestroy flags:
constexpr std::uint32_t const tfMPTokenIssuanceDestroyMask = ~tfUniversal;
@@ -224,6 +229,26 @@ constexpr std::uint32_t tfAMMClawbackMask = ~(tfUniversal | tfClawTwoAssets);
// BridgeModify flags:
constexpr std::uint32_t tfClearAccountCreateAmount = 0x00010000;
constexpr std::uint32_t tfBridgeModifyMask = ~(tfUniversal | tfClearAccountCreateAmount);
// VaultCreate flags:
constexpr std::uint32_t const tfVaultPrivate = 0x00010000;
static_assert(tfVaultPrivate == lsfVaultPrivate);
constexpr std::uint32_t const tfVaultShareNonTransferable = 0x00020000;
constexpr std::uint32_t const tfVaultCreateMask = ~(tfUniversal | tfVaultPrivate | tfVaultShareNonTransferable);
// Batch Flags:
constexpr std::uint32_t tfAllOrNothing = 0x00010000;
constexpr std::uint32_t tfOnlyOne = 0x00020000;
constexpr std::uint32_t tfUntilFailure = 0x00040000;
constexpr std::uint32_t tfIndependent = 0x00080000;
/**
* @note If nested Batch transactions are supported in the future, the tfInnerBatchTxn flag
* will need to be removed from this mask to allow a Batch transaction to be inside
* the sfRawTransactions array.
*/
constexpr std::uint32_t const tfBatchMask =
~(tfUniversal | tfAllOrNothing | tfOnlyOne | tfUntilFailure | tfIndependent) | tfInnerBatchTxn;
// clang-format on
} // namespace ripple
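A hedged sketch of how a mask like tfBatchMask is typically applied in a preflight check; only the constants come from this diff, and the surrounding check (including the use of temINVALID_FLAG and the `ctx` context object) is illustrative.
// Reject an outer Batch transaction that sets any disallowed flag.
std::uint32_t const txFlags = ctx.tx.getFlags();
if (txFlags & tfBatchMask)
    return temINVALID_FLAG;
// Because tfInnerBatchTxn is folded into tfBatchMask, an outer Batch can never
// itself carry the inner-batch marker.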

View File

@@ -59,7 +59,7 @@ enum TxType : std::uint16_t
#pragma push_macro("TRANSACTION")
#undef TRANSACTION
#define TRANSACTION(tag, value, name, fields) tag = value,
#define TRANSACTION(tag, value, name, delegatable, fields) tag = value,
#include <xrpl/protocol/detail/transactions.macro>

View File

@@ -130,6 +130,47 @@ public:
return static_cast<bool>(mDelivered);
}
void
setParentBatchId(uint256 const& parentBatchId)
{
parentBatchId_ = parentBatchId;
}
uint256
getParentBatchId() const
{
XRPL_ASSERT(
hasParentBatchId(),
"ripple::TxMeta::getParentBatchId : non-null batch id");
return *parentBatchId_;
}
bool
hasParentBatchId() const
{
return static_cast<bool>(parentBatchId_);
}
void
setGasUsed(std::uint32_t const& gasUsed)
{
gasUsed_ = gasUsed;
}
std::uint32_t
getGasUsed() const
{
XRPL_ASSERT(
hasGasUsed(), "ripple::TxMeta::getGasUsed : non-null gas used");
return *gasUsed_;
}
bool
hasGasUsed() const
{
return static_cast<bool>(gasUsed_);
}
private:
uint256 mTransactionID;
std::uint32_t mLedger;
@@ -137,6 +178,8 @@ private:
int mResult;
std::optional<STAmount> mDelivered;
std::optional<std::uint32_t> gasUsed_;
std::optional<uint256> parentBatchId_;
STArray mNodes;
};

View File

@@ -63,6 +63,9 @@ using NodeID = base_uint<160, detail::NodeIDTag>;
* and a 160-bit account */
using MPTID = base_uint<192>;
/** Domain is a 256-bit hash representing a specific domain. */
using Domain = base_uint<256>;
/** XRP currency. */
Currency const&
xrpCurrency();
@@ -119,25 +122,25 @@ namespace std {
template <>
struct hash<ripple::Currency> : ripple::Currency::hasher
{
explicit hash() = default;
hash() = default;
};
template <>
struct hash<ripple::NodeID> : ripple::NodeID::hasher
{
explicit hash() = default;
hash() = default;
};
template <>
struct hash<ripple::Directory> : ripple::Directory::hasher
{
explicit hash() = default;
hash() = default;
};
template <>
struct hash<ripple::uint256> : ripple::uint256::hasher
{
explicit hash() = default;
hash() = default;
};
} // namespace std

View File

@@ -267,7 +267,7 @@ XRPAmount::decimalXRP() const
// Output XRPAmount as just the drops value.
template <class Char, class Traits>
std::basic_ostream<Char, Traits>&
operator<<(std::basic_ostream<Char, Traits>& os, const XRPAmount& q)
operator<<(std::basic_ostream<Char, Traits>& os, XRPAmount const& q)
{
return os << q.drops();
}
@@ -286,11 +286,12 @@ mulRatio(
bool roundUp)
{
using namespace boost::multiprecision;
using int128 = boost::multiprecision::int128_t;
if (!den)
Throw<std::runtime_error>("division by zero");
int128_t const amt128(amt.drops());
int128 const amt128(amt.drops());
auto const neg = amt.drops() < 0;
auto const m = amt128 * num;
auto r = m / den;

View File

@@ -26,13 +26,24 @@
#if !defined(XRPL_RETIRE)
#error "undefined macro: XRPL_RETIRE"
#endif
#if !defined(XRPL_ABANDON)
#error "undefined macro: XRPL_ABANDON"
#endif
// Add new amendments to the top of this list.
// Keep it sorted in reverse chronological order.
// If you add an amendment here, then do not forget to increment `numFeatures`
// in include/xrpl/protocol/Feature.h.
// Check flags in Credential transactions
XRPL_FEATURE(SmartEscrow, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(TokenEscrow, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (EnforceNFTokenTrustlineV2, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (AMMv1_3, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionedDEX, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(Batch, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(SingleAssetVault, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionDelegation, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (PayChanCancelAfter, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (InvalidTxFlags, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FIX (FrozenLPTokenTransfer, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(DeepFreeze, Supported::yes, VoteBehavior::DefaultNo)
@@ -106,7 +117,6 @@ XRPL_FEATURE(DepositAuth, Supported::yes, VoteBehavior::DefaultYe
XRPL_FIX (1513, Supported::yes, VoteBehavior::DefaultYes)
XRPL_FEATURE(FlowCross, Supported::yes, VoteBehavior::DefaultYes)
XRPL_FEATURE(Flow, Supported::yes, VoteBehavior::DefaultYes)
XRPL_FEATURE(OwnerPaysFee, Supported::no, VoteBehavior::DefaultNo)
// The following amendments are obsolete, but must remain supported
// because they could potentially get enabled.
@@ -124,6 +134,11 @@ XRPL_FIX (NFTokenDirV1, Supported::yes, VoteBehavior::Obsolete)
XRPL_FEATURE(NonFungibleTokensV1, Supported::yes, VoteBehavior::Obsolete)
XRPL_FEATURE(CryptoConditionsSuite, Supported::yes, VoteBehavior::Obsolete)
// The following amendments were never supported, never enabled, and
// we've abandoned them. These features should never be in the ledger,
// and we've removed all the related code.
XRPL_ABANDON(OwnerPaysFee)
// The following amendments have been active for at least two years. Their
// pre-amendment code has been removed and the identifiers are deprecated.
// All known amendments and amendments that may appear in a validated

View File

@@ -165,7 +165,8 @@ LEDGER_ENTRY(ltACCOUNT_ROOT, 0x0061, AccountRoot, account, ({
{sfMintedNFTokens, soeDEFAULT},
{sfBurnedNFTokens, soeDEFAULT},
{sfFirstNFTokenSequence, soeOPTIONAL},
{sfAMMID, soeOPTIONAL},
{sfAMMID, soeOPTIONAL}, // pseudo-account designator
{sfVaultID, soeOPTIONAL}, // pseudo-account designator
}))
/** A ledger object which contains a list of object identifiers.
@@ -187,6 +188,7 @@ LEDGER_ENTRY(ltDIR_NODE, 0x0064, DirectoryNode, directory, ({
{sfNFTokenID, soeOPTIONAL},
{sfPreviousTxnID, soeOPTIONAL},
{sfPreviousTxnLgrSeq, soeOPTIONAL},
{sfDomainID, soeOPTIONAL}
}))
/** The ledger object which lists details about amendments on the network.
@@ -248,6 +250,8 @@ LEDGER_ENTRY(ltOFFER, 0x006f, Offer, offer, ({
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
{sfExpiration, soeOPTIONAL},
{sfDomainID, soeOPTIONAL},
{sfAdditionalBooks, soeOPTIONAL},
}))
/** A ledger object which describes a deposit preauthorization.
@@ -315,6 +319,12 @@ LEDGER_ENTRY(ltFEE_SETTINGS, 0x0073, FeeSettings, fee, ({
{sfBaseFeeDrops, soeOPTIONAL},
{sfReserveBaseDrops, soeOPTIONAL},
{sfReserveIncrementDrops, soeOPTIONAL},
// New fields
{sfExtensionComputeLimit, soeOPTIONAL},
{sfExtensionSizeLimit, soeOPTIONAL},
{sfGasPrice, soeOPTIONAL},
{sfPreviousTxnID, soeOPTIONAL},
{sfPreviousTxnLgrSeq, soeOPTIONAL},
}))
@@ -344,12 +354,16 @@ LEDGER_ENTRY(ltESCROW, 0x0075, Escrow, escrow, ({
{sfCondition, soeOPTIONAL},
{sfCancelAfter, soeOPTIONAL},
{sfFinishAfter, soeOPTIONAL},
{sfFinishFunction, soeOPTIONAL},
{sfData, soeOPTIONAL},
{sfSourceTag, soeOPTIONAL},
{sfDestinationTag, soeOPTIONAL},
{sfOwnerNode, soeREQUIRED},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
{sfDestinationNode, soeOPTIONAL},
{sfTransferRate, soeOPTIONAL},
{sfIssuerNode, soeOPTIONAL},
}))
/** A ledger object describing a single unidirectional XRP payment channel.
@@ -390,6 +404,37 @@ LEDGER_ENTRY(ltAMM, 0x0079, AMM, amm, ({
{sfPreviousTxnLgrSeq, soeOPTIONAL},
}))
/** A ledger object which tracks MPTokenIssuance
\sa keylet::mptIssuance
*/
LEDGER_ENTRY(ltMPTOKEN_ISSUANCE, 0x007e, MPTokenIssuance, mpt_issuance, ({
{sfIssuer, soeREQUIRED},
{sfSequence, soeREQUIRED},
{sfTransferFee, soeDEFAULT},
{sfOwnerNode, soeREQUIRED},
{sfAssetScale, soeDEFAULT},
{sfMaximumAmount, soeOPTIONAL},
{sfOutstandingAmount, soeREQUIRED},
{sfLockedAmount, soeOPTIONAL},
{sfMPTokenMetadata, soeOPTIONAL},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
{sfDomainID, soeOPTIONAL},
}))
/** A ledger object which tracks MPToken
\sa keylet::mptoken
*/
LEDGER_ENTRY(ltMPTOKEN, 0x007f, MPToken, mptoken, ({
{sfAccount, soeREQUIRED},
{sfMPTokenIssuanceID, soeREQUIRED},
{sfMPTAmount, soeDEFAULT},
{sfLockedAmount, soeOPTIONAL},
{sfOwnerNode, soeREQUIRED},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** A ledger object which tracks Oracle
\sa keylet::oracle
*/
@@ -405,34 +450,6 @@ LEDGER_ENTRY(ltORACLE, 0x0080, Oracle, oracle, ({
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** A ledger object which tracks MPTokenIssuance
\sa keylet::mptIssuance
*/
LEDGER_ENTRY(ltMPTOKEN_ISSUANCE, 0x007e, MPTokenIssuance, mpt_issuance, ({
{sfIssuer, soeREQUIRED},
{sfSequence, soeREQUIRED},
{sfTransferFee, soeDEFAULT},
{sfOwnerNode, soeREQUIRED},
{sfAssetScale, soeDEFAULT},
{sfMaximumAmount, soeOPTIONAL},
{sfOutstandingAmount, soeREQUIRED},
{sfMPTokenMetadata, soeOPTIONAL},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** A ledger object which tracks MPToken
\sa keylet::mptoken
*/
LEDGER_ENTRY(ltMPTOKEN, 0x007f, MPToken, mptoken, ({
{sfAccount, soeREQUIRED},
{sfMPTokenIssuanceID, soeREQUIRED},
{sfMPTAmount, soeDEFAULT},
{sfOwnerNode, soeREQUIRED},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** A ledger object which tracks Credential
\sa keylet::credential
*/
@@ -460,6 +477,41 @@ LEDGER_ENTRY(ltPERMISSIONED_DOMAIN, 0x0082, PermissionedDomain, permissioned_dom
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** A ledger object representing permissions an account has delegated to another account.
\sa keylet::delegate
*/
LEDGER_ENTRY(ltDELEGATE, 0x0083, Delegate, delegate, ({
{sfAccount, soeREQUIRED},
{sfAuthorize, soeREQUIRED},
{sfPermissions, soeREQUIRED},
{sfOwnerNode, soeREQUIRED},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
}))
/** A ledger object representing a single asset vault.
\sa keylet::mptoken
*/
LEDGER_ENTRY(ltVAULT, 0x0084, Vault, vault, ({
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
{sfSequence, soeREQUIRED},
{sfOwnerNode, soeREQUIRED},
{sfOwner, soeREQUIRED},
{sfAccount, soeREQUIRED},
{sfData, soeOPTIONAL},
{sfAsset, soeREQUIRED},
{sfAssetsTotal, soeREQUIRED},
{sfAssetsAvailable, soeREQUIRED},
{sfAssetsMaximum, soeDEFAULT},
{sfLossUnrealized, soeREQUIRED},
{sfShareMPTID, soeREQUIRED},
{sfWithdrawalPolicy, soeREQUIRED},
// no SharesTotal ever (use MPTIssuance.sfOutstandingAmount)
// no PermissionedDomainID ever (use MPTIssuance.sfDomainID)
}))
#undef EXPAND
#undef LEDGER_ENTRY_DUPLICATE

View File

@@ -0,0 +1,68 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2025 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#if !defined(PERMISSION)
#error "undefined macro: PERMISSION"
#endif
/**
* PERMISSION(type, txType, value)
*
* This macro defines a permission:
* type: the GranularPermissionType enum.
* txType: the corresponding TxType for this permission.
* value: the uint32 numeric value for the enum type.
*/
/** This permission grants the delegated account the ability to authorize a trustline. */
PERMISSION(TrustlineAuthorize, ttTRUST_SET, 65537)
/** This permission grants the delegated account the ability to freeze a trustline. */
PERMISSION(TrustlineFreeze, ttTRUST_SET, 65538)
/** This permission grants the delegated account the ability to unfreeze a trustline. */
PERMISSION(TrustlineUnfreeze, ttTRUST_SET, 65539)
/** This permission grants the delegated account the ability to set Domain. */
PERMISSION(AccountDomainSet, ttACCOUNT_SET, 65540)
/** This permission grants the delegated account the ability to set EmailHashSet. */
PERMISSION(AccountEmailHashSet, ttACCOUNT_SET, 65541)
/** This permission grants the delegated account the ability to set MessageKey. */
PERMISSION(AccountMessageKeySet, ttACCOUNT_SET, 65542)
/** This permission grants the delegated account the ability to set TransferRate. */
PERMISSION(AccountTransferRateSet, ttACCOUNT_SET, 65543)
/** This permission grants the delegated account the ability to set TickSize. */
PERMISSION(AccountTickSizeSet, ttACCOUNT_SET, 65544)
/** This permission grants the delegated account the ability to mint payment, which means sending a payment for a currency where the sending account is the issuer. */
PERMISSION(PaymentMint, ttPAYMENT, 65545)
/** This permission grants the delegated account the ability to burn payment, which means sending a payment for a currency where the destination account is the issuer */
PERMISSION(PaymentBurn, ttPAYMENT, 65546)
/** This permission grants the delegated account the ability to lock MPToken. */
PERMISSION(MPTokenIssuanceLock, ttMPTOKEN_ISSUANCE_SET, 65547)
/** This permission grants the delegated account the ability to unlock MPToken. */
PERMISSION(MPTokenIssuanceUnlock, ttMPTOKEN_ISSUANCE_SET, 65548)

Some files were not shown because too many files have changed in this diff.