Compare commits


93 Commits

Author SHA1 Message Date
Nicholas Dudfield
0f6aad948b feat: add scribbles 2025-09-12 15:46:42 +07:00
Nicholas Dudfield
e061823561 feat: disable 2025-09-12 12:02:49 +07:00
Nicholas Dudfield
7763434646 feat: upgrade openssl 2025-09-12 10:58:33 +07:00
Nicholas Dudfield
afad05b526 feat: add blake3 benchmarking and hash performance analysis
- Add blake3_bench and sha512_bench parameters to map_stats RPC
- Track keylet hash input sizes in digest.h for performance analysis
- Implement comprehensive BLAKE3 test suite with real-world benchmarks
- Add performance comparison documentation for BLAKE3 vs SHA-512
- Include Gemini research on hash functions for small inputs

Benchmarks show BLAKE3 provides:
- 1.78x speedup for keylet operations (22-102 bytes)
- 1.35x speedup for leaf nodes (167 bytes avg)
- 1.20x speedup for inner nodes (516 bytes)
- Overall 10-13% reduction in validation time

The analysis reveals that while BLAKE3 offers measurable improvements,
the gains are modest rather than revolutionary due to:
- SHAMap traversal consuming ~47% of total time
- Diminishing returns as input size increases
- Architectural requirement for high-entropy keys
2025-09-12 10:29:20 +07:00
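As an illustration of the comparison described above, here is a minimal micro-benchmark sketch for keylet-sized inputs. It assumes the official BLAKE3 C API (`blake3.h`) and OpenSSL's one-shot `SHA512()`; the input size and iteration count are arbitrary and are not the `map_stats` RPC parameters mentioned in the commit.

```cpp
#include <blake3.h>
#include <openssl/sha.h>

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<unsigned char> input(64, 0xAB);  // roughly keylet-sized
    constexpr int iterations = 1'000'000;

    // Time BLAKE3 over many small hashes.
    unsigned char out32[BLAKE3_OUT_LEN];
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
    {
        blake3_hasher h;
        blake3_hasher_init(&h);
        blake3_hasher_update(&h, input.data(), input.size());
        blake3_hasher_finalize(&h, out32, sizeof(out32));
    }
    auto t1 = std::chrono::steady_clock::now();

    // Time SHA-512 over the same inputs.
    unsigned char out64[SHA512_DIGEST_LENGTH];
    for (int i = 0; i < iterations; ++i)
        SHA512(input.data(), input.size(), out64);
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("blake3: %8.1f ms\n", ms(t1 - t0).count());
    std::printf("sha512: %8.1f ms\n", ms(t2 - t1).count());
}
```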
Nicholas Dudfield
9a3723b1dc feat: commit more scribbles 2025-09-11 08:07:37 +07:00
Nicholas Dudfield
22b6ea961e docs: add notes 2025-09-10 16:57:05 +07:00
Nicholas Dudfield
974249380a feat: add ttHASH_MIGRATION 2025-09-10 16:33:00 +07:00
Nicholas Dudfield
8362318d25 feat: add blake3 to conan/cmake 2025-09-10 15:12:31 +07:00
Nicholas Dudfield
508bdd5b33 chore: temporarily disable ledger replay tests that are flaky at times in CI 2025-09-10 13:52:15 +07:00
Nicholas Dudfield
1269803aa6 feat: claudes first pass at the tests 2025-09-10 13:16:58 +07:00
Nicholas Dudfield
ae46394788 feat: remove unclassified hash_options constructor 2025-09-10 10:54:01 +07:00
Nicholas Dudfield
d7bfff2bef feat: classify hashes further 2025-09-10 10:49:54 +07:00
Nicholas Dudfield
717464a2d8 feat: classify hashes further 2025-09-10 10:26:58 +07:00
Nicholas Dudfield
f9d6346e6d feat: ledger_index all the things 2025-09-09 21:39:04 +07:00
Nicholas Dudfield
c968fdfb1a feat: snap 2025-09-09 15:19:51 +07:00
Nicholas Dudfield
7a9d48d53c fix(tests): allow multi threaded writes to suite log 2025-08-15 07:23:08 +07:00
Niq Dudfield
849d447a20 docs(freeze): canceling escrows with deep frozen assets is allowed (#540) 2025-07-09 13:48:59 +10:00
tequ
ee27049687 IOUIssuerWeakTSH (#388) 2025-07-09 13:48:26 +10:00
tequ
60dec74baf Add DeepFreeze test for URIToken (#539) 2025-07-09 12:49:47 +10:00
Denis Angell
9abea13649 Feature Clawback (#534) 2025-07-09 12:48:46 +10:00
Denis Angell
810e15319c Feature DeepFreeze (#536)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-09 10:33:08 +10:00
Niq Dudfield
d593f3bef5 fix: provisional PreviousTxn{Id,LedgerSeq} double threading (#515)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-08 18:04:39 +10:00
Niq Dudfield
1233694b6c chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter (#525)
* chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter

* rm: kill annoying checkpatterns job

* chore: cleanup

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-07-01 20:58:06 +10:00
tequ
a1d42b7380 Improve unittests (#494)
* Match unit tests on start of test name (#4634)

* For example, without this change, to run the TxQ tests you must specify
  `--unittest=TxQ1,TxQ2` on the command line. With this change, you can use
  `--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
  name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
  might be useful. If not, the shorter-named test(s) can be renamed. For
  example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
  up parallel tests by a factor of 5.

* SetHook_test, SetHookTSH_test, XahauGenesis_test

---------

Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-30 10:03:02 +10:00
tequ
f6d2bf819d Fix governance vote purge (#221)
governance hook should be independently and deterministically recompiled before being voted in
2025-06-16 17:12:06 +10:00
Denis Angell
a5ea86fdfc Add Conan Building For Development (#432) 2025-05-14 14:00:20 +10:00
RichardAH
615f56570a Sus pat (#507) 2025-05-01 17:23:56 +10:00
RichardAH
5e005cd6ee remove false positives from sus pat finder (#506) 2025-05-01 09:54:41 +10:00
Denis Angell
80a7197590 fix warnings (#505) 2025-04-30 11:51:58 +02:00
tequ
7b581443d1 Suppress build warning introduced in Catalogue (#499) 2025-04-29 08:25:55 +10:00
tequ
5400f43359 Suppress logs for Catalogue_test, Import_test (#495) 2025-04-24 17:46:09 +10:00
Denis Angell
8cf7d485ab fix: ledger_index (#498) 2025-04-24 16:45:01 +10:00
tequ
372f25d09b Remove #ifndef DEBUG guards and exception handling wrappers (#496) 2025-04-24 16:38:14 +10:00
Denis Angell
401395a204 patch remarks (#497) 2025-04-24 16:36:57 +10:00
tequ
4221dcf568 Add tests for SetRemarks (#491) 2025-04-18 09:34:44 +10:00
tequ
989532702d Update clang-format workflow (#490) 2025-04-17 16:16:59 +10:00
RichardAH
f9cd2e0d21 Remarks amendment (#301)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-04-16 08:42:04 +10:00
tequ
59e334c099 fixRewardClaimFlags (#487) 2025-04-15 20:08:19 +10:00
tequ
9018596532 HookCanEmit (#392) 2025-04-15 13:32:35 +10:00
Niq Dudfield
b827f0170d feat(catalogue): add cli commands and fix file_size (#486)
* feat(catalogue): add cli commands and fix file_size

* feat(catalogue): add cli commands and fix file_size

* feat(catalogue): fix tests

* feat(catalogue): fix tests

* feat(catalogue): use formatBytesIEC

* feat: add file_size_estimated

* feat: add file_size_estimated

* feat: add file_size_estimated
2025-04-15 08:50:15 +10:00
tequ
e4b7e8f0f2 Update sfcodes script (#479) 2025-04-10 09:44:31 +10:00
tequ
1485078d91 Update CHooks build script (#465) 2025-04-09 20:22:34 +10:00
tequ
6625d2be92 Add xpop_slot test (#470) 2025-04-09 20:20:23 +10:00
tequ
2fb5c92140 feat: Run unittests in parallel with Github Actions (#483)
Implement parallel execution for unit tests using Github Actions to improve CI pipeline efficiency and reduce build times.
2025-04-04 19:32:47 +02:00
Niq Dudfield
c4b5ae3787 Fix missing includes in Catalogue.cpp for non-unity builds (#485) 2025-04-04 12:53:45 +10:00
Niq Dudfield
d546d761ce Fix using using Status with rpcError (#484) 2025-04-01 21:00:13 +10:00
RichardAH
e84a36867b Catalogue (#443) 2025-04-01 16:47:48 +10:00
Niq Dudfield
0b675465b4 Fix ServerDefinitions_test regression intro in #475 (#477) 2025-03-19 12:32:27 +10:00
Niq Dudfield
d088ad61a9 Prevent dangling reference in getHash() (#475)
Replace temporary uint256 with static variable when returning fallback hash
to avoid returning a const reference to a local temporary object.
2025-03-18 18:37:18 +10:00
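A sketch of the pattern the commit describes, with illustrative types rather than the actual SHAMap code: the fallback value must outlive the returned reference, so a function-local static replaces the temporary.

```cpp
#include <array>
#include <cstdint>

using uint256 = std::array<std::uint8_t, 32>;

struct Node
{
    uint256 const* hash = nullptr;

    uint256 const&
    getHash() const
    {
        if (hash)
            return *hash;
        // Before the fix: `return uint256{};` bound the returned const
        // reference to a temporary that is destroyed immediately.
        static uint256 const zero{};  // lives for the whole program
        return zero;
    }
};
```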
Niq Dudfield
ef77b02d7f CI Release Builder (#455) 2025-03-11 13:19:28 +01:00
RichardAH
7385828983 Touch Amendment (#294) 2025-03-06 08:25:42 +01:00
Niq Dudfield
88b01514c1 fix: remove negative rate test failing on MacOS (#452) 2025-03-03 13:12:13 +01:00
Denis Angell
aeece15096 [fix] github runner (#451)
Co-authored-by: Niq Dudfield <ndudfield@gmail.com>
2025-03-03 09:55:51 +01:00
tequ
89cacb1258 Enhance shell script error handling and debugging on GHA (#447) 2025-02-24 10:33:21 +01:00
tequ
8ccff44e8c Fix Error handling on build action (#412) 2025-02-24 18:16:21 +10:00
tequ
420240a2ab Fixed not to use a large fixed range in the magic_enum. (#436) 2025-02-24 17:46:42 +10:00
Richard Holland
230873f196 debug gh builds 2025-02-06 15:21:37 +11:00
Wietse Wind
1fb1a99ea2 Update build-in-docker.yml 2025-02-05 08:23:49 +01:00
Richard Holland
e0b63ac70e Revert "debug account tx tests under release builder"
This reverts commit da8df63be3.

Revert "add strict filtering to account_tx api (#429)"

This reverts commit 317bd4bc6e.
2025-02-05 14:59:33 +11:00
Richard Holland
da8df63be3 debug account tx tests under release builder 2025-02-04 17:02:17 +11:00
RichardAH
317bd4bc6e add strict filtering to account_tx api (#429) 2025-02-03 17:56:08 +10:00
RichardAH
2fd465bb3f fix20250131 (#428)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-02-03 10:33:19 +10:00
Wietse Wind
fa71bda29c Artifact v4 continue on error 2025-02-01 08:58:13 +01:00
Wietse Wind
412593d7bc Update artifact 2025-02-01 08:57:48 +01:00
Wietse Wind
12d8342c34 Update artifact 2025-02-01 08:57:25 +01:00
tequ
d17f7151ab Fix HookResult(ExitType) when accept() is not called (#415) 2025-01-22 13:33:59 +10:00
tequ
4466175231 Update boost link for build-full.sh (#421) 2025-01-22 08:38:12 +10:00
tequ
621ca9c865 Add space to trace_float log (#424) 2025-01-22 08:34:33 +10:00
tequ
85a752235a add URITokenIssuer to account_flags for account_info (#404) 2024-12-16 16:10:01 +10:00
RichardAH
d878fd4a6e allow multiple datagram monitor endpoints (#408) 2024-12-14 08:44:40 +10:00
Richard Holland
532a471a35 fixReduceImport (#398)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-12-11 13:29:37 +11:00
RichardAH
e9468d8b4a Datagram monitor (#400)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-12-11 13:29:30 +11:00
Denis Angell
9d54da3880 Fix: failing assert (#397) 2024-12-11 13:08:50 +11:00
Ekiserrepé
542172f0a1 Update README.md (#396)
Updated Xaman link.
2024-12-11 13:08:50 +11:00
Richard Holland
e086724772 UDP RPC (admin) support (#390) 2024-12-11 13:08:44 +11:00
RichardAH
21863b05f3 Limit xahau genesis to networks starting with 2133X (#395) 2024-11-23 21:19:09 +10:00
Denis Angell
61ac04aacc Sync: Ripple(d) 1.11.0 (#299)
* Add jss fields used by Clio `nft_info`: (#4320)

Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.

Clio can be found here: https://github.com/XRPLF/clio

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>

* Introduce support for a slabbed allocator: (#4218)

When instantiating a large amount of fixed-sized objects on the heap
the overhead that dynamic memory allocation APIs impose will quickly
become significant.

In some cases, allocating a large amount of memory at once and using
a slabbing allocator to carve the large block into fixed-sized units
that are used to service requests for memory out will help to reduce
memory fragmentation significantly and, potentially, improve overall
performance.

This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator` but it is
not meant to be a general-purpose allocator.

It should not be used unless profiling and analysis of specific memory
allocation patterns indicates that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.

A helper class, `SlabAllocatorSet<>` simplifies handling of variably
sized objects that benefit from slab allocations.

This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).

Commit 1 of 3 in #4218.
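As a rough illustration of the idea described above (a concept sketch only, not the `SlabAllocator<>` interface added by the commit): one large allocation is carved into fixed-size chunks that are handed out from a free list.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

class FixedSlab
{
    std::size_t const chunk_;
    void* block_;
    void* free_ = nullptr;  // free list threaded through the chunks

public:
    FixedSlab(std::size_t chunkSize, std::size_t count)
        : chunk_(chunkSize < sizeof(void*) ? sizeof(void*) : chunkSize)
        , block_(std::malloc(chunk_ * count))
    {
        if (!block_)
            throw std::bad_alloc();
        auto p = static_cast<char*>(block_);
        for (std::size_t i = 0; i < count; ++i)
        {
            void* chunk = p + i * chunk_;
            *static_cast<void**>(chunk) = free_;  // push onto free list
            free_ = chunk;
        }
    }

    ~FixedSlab() { std::free(block_); }

    void* allocate()
    {
        if (!free_)
            return nullptr;  // slab exhausted; caller falls back to the heap
        void* chunk = free_;
        free_ = *static_cast<void**>(chunk);
        return chunk;
    }

    void deallocate(void* chunk)
    {
        *static_cast<void**>(chunk) = free_;
        free_ = chunk;
    }
};
```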

* Optimize `SHAMapItem` and leverage new slab allocator: (#4218)

The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.

Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped around a `std::shared_ptr` meant
that an instantiation might result in up to three separate
memory allocations.

This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.

In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.

Commit 2 of 3 in #4218.
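A sketch of the intrusive reference-count pattern the commit describes, using illustrative names rather than the real `SHAMapItem` layout: the count lives inside the object, so `boost::intrusive_ptr` needs no separate control block.

```cpp
#include <boost/intrusive_ptr.hpp>
#include <atomic>

struct Item
{
    std::atomic<int> refs{0};
    // ... the serialized payload would live in the same allocation ...
};

inline void intrusive_ptr_add_ref(Item* p)
{
    p->refs.fetch_add(1, std::memory_order_relaxed);
}

inline void intrusive_ptr_release(Item* p)
{
    if (p->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
        delete p;
}

// One allocation holds both the object and its reference count, versus
// up to three allocations with std::shared_ptr plus a separate buffer.
boost::intrusive_ptr<Item> make_item()
{
    return boost::intrusive_ptr<Item>(new Item());
}
```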

* Avoid using std::shared_ptr when not necessary: (#4218)

The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.

The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocation was avoided by using stack-based
alternatives.

Commit 3 of 3 in #4218.

* Prevent replay attacks with NetworkID field: (#4370)

Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.

The new field must be used when the server is using a network id > 1024.

To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.

Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.

The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.

This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:

- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`

To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html

Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html

In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.

A `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.
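A condensed sketch of the rule described above, using an illustrative function and enum rather than the actual transactor code:

```cpp
#include <cstdint>
#include <optional>

enum class TER {
    tesSUCCESS,
    telNETWORK_ID_MAKES_TX_NON_CANONICAL,
    telREQUIRES_NETWORK_ID,
    telWRONG_NETWORK
};

TER
checkNetworkID(std::uint32_t nodeNetworkID,
               std::optional<std::uint32_t> txNetworkID)
{
    if (nodeNetworkID <= 1024)
    {
        // Legacy chains (Mainnet, Testnet, Devnet, ...): field must be absent.
        if (txNetworkID)
            return TER::telNETWORK_ID_MAKES_TX_NON_CANONICAL;
    }
    else
    {
        // Newer chains: field must be present and must match this node.
        if (!txNetworkID)
            return TER::telREQUIRES_NETWORK_ID;
        if (*txNetworkID != nodeNetworkID)
            return TER::telWRONG_NETWORK;
    }
    return TER::tesSUCCESS;
}
```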

* Fix the fix for std::result_of (#4496)

Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.

This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
  forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
  * Define `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
    1.77 with a newer compiler.

* Use quorum specified via command line: (#4489)

If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.

Fix #4488.

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>

* Fix errors for Clang 16: (#4501)

Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).

- `std::{u,bi}nary_function` were removed in C++17. They were empty
  classes with a few associated types. We already have conditional code
  to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type
  inherits from a binary function class in the standard library, e.g.
  `std::equal_to`, and this causes an error when the type privately
  inherits from such a class. Change these instances to public
  inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
  inherit directly from an empty class than from
  `beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
  compilation of assertions. Add a `[[maybe_unused]]` annotation to
  suppress it.
- As a drive-by clean-up, remove commented code.

See related work in #4486.

* Fix typo (#4508)

* fix!: Prevent API from accepting seed or public key for account (#4404)

The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.

Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.

Resolves: #3329, #3330, #4337

BREAKING CHANGE: Remove non-strict account parsing (#3330)

* Add nftoken_id, nftoken_ids, offer_id fields for NFTokens (#4447)

Three new fields are added to the `Tx` responses for NFTs:

1. `nftoken_id`: This field is included in the `Tx` responses for
   `NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
   `NFTokenID` for the `NFToken` that was modified on the ledger by the
   transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
   `NFTokenCancelOffer`. This field provides a list of all the
   `NFTokenID`s for the `NFToken`s that were modified on the ledger by
   the transaction.
3. `offer_id`: This field is included in the `Tx` response for
   `NFTokenCreateOffer` transactions and shows the OfferID of the
   `NFTokenOffer` created.

The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.

* Ensure that switchover vars are initialized before use: (#4527)

Global variables in different TUs are initialized in an undefined order.
At least one global variable was accessing a global switchover variable.
This caused the switchover variable to be accessed in an uninitialized
state.

Since the switchover is always explicitly set before transaction
processing, this bug cannot affect transaction processing, but could
affect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.
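The commit message does not spell out the fix itself, but for background, a common remedy for cross-translation-unit initialization-order problems of this kind is the construct-on-first-use idiom; a sketch with a hypothetical accessor name:

```cpp
#include <cstdint>

// A function-local static is initialized the first time control reaches
// it (thread-safely since C++11), so code in other translation units
// never observes it uninitialized, regardless of TU link order.
std::uint32_t&
switchoverValue()  // hypothetical name, not the variable in the commit
{
    static std::uint32_t value = 0;
    return value;
}
```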

* Move faulty assert (#4533)

This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.

The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.

Note that asserts are normally checked only in debug builds, so release
packages should not be affected.

Introduced in: #4319 (66627b26cf)

* Fix unaligned load and stores: (#4528) (#4531)

Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.

For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).
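A sketch of the substitution the commit describes, shown on standalone helpers (the real changes are inside the slab allocator):

```cpp
#include <cstdint>
#include <cstring>

// Undefined behavior when p is not suitably aligned:
//   return *reinterpret_cast<std::uint64_t const*>(p);
std::uint64_t
load64(unsigned char const* p)
{
    std::uint64_t v;
    std::memcpy(&v, p, sizeof(v));  // typically compiles to a single load
    return v;
}

void
store64(unsigned char* p, std::uint64_t v)
{
    std::memcpy(p, &v, sizeof(v));  // typically compiles to a single store
}
```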

* Add missing includes for gcc 13.1: (#4555)

gcc 13.1 failed to compile due to missing headers. This patch adds the
needed headers.

* Trivial: add comments for NFToken-related invariants (#4558)

* fix node size estimation (#4536)

Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation for the total amount
of RAM available, which was previously returned in bytes, but is now
being returned in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.

* fix: remove redundant moves (#4565)

- Resolve gcc compiler warning:
      AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
  - The std::move() operation on trivially copyable types may generate a
    compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.
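For illustration, the kind of code that triggers the warning quoted above (not the actual `AccountObjects.cpp` line):

```cpp
#include <cstdint>
#include <utility>

struct Pair
{
    std::uint32_t a;
    std::uint32_t b;
};

Pair
make()
{
    Pair p{1, 2};
    // gcc emits -Wredundant-move for `return std::move(p);` because the
    // type is trivially copyable and the move adds nothing; a plain
    // return is preferred.
    return p;
}
```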

* Revert "Fix the fix for std::result_of (#4496)"

This reverts commit cee8409d60.

* Revert "Fix typo (#4508)"

This reverts commit 2956f14de8.

* clang

* [fold] bad merge

* [fold] fix bad merge

- add back filter for ripple state on account_channels
- add back network id test (env auto adds network id in xahau)

* [fold] fix build error

---------

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: Nik Bougalis <nikb@bougalis.net>
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Co-authored-by: John Freeman <jfreeman08@gmail.com>
Co-authored-by: Mark Travis <mtrippled@users.noreply.github.com>
Co-authored-by: solmsted <steven.olm@gmail.com>
Co-authored-by: drlongle <drlongle@gmail.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
Co-authored-by: Scott Determan <scott.determan@yahoo.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Scott Schurr <scott@ripple.com>
Co-authored-by: Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
2024-11-20 10:54:03 +10:00
tequ
57a1329bff Fix lexicographical_compare_three_way build error at macos (#391)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-11-15 08:33:55 +10:00
Denis Angell
daf22b3b85 Fix: RWDB (#389) 2024-11-15 07:31:55 +10:00
RichardAH
2b225977e2 Feature: RWDB (#378)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-11-12 08:55:56 +10:00
Denis Angell
58b22901cb Fix: float_divide rounding error (#351)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2024-11-09 15:17:00 +10:00
Denis Angell
8ba37a3138 Add Script for SfCode generation (#358)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Co-authored-by: tequ <git@tequ.dev>
2024-11-09 14:17:49 +10:00
tequ
8cffd3054d add trace message to exception on etxn_fee_base (#387) 2024-11-09 14:00:59 +10:00
Denis Angell
6b26045cbc Update settings.json (#342) 2024-10-25 11:56:16 +10:00
Wietse Wind
08f13b7cfe Fix account_tx sluggishness as per https://github.com/XRPLF/rippled/commit/2e9261cb (#308) 2024-10-25 11:13:42 +10:00
tequ
766f5d7ee1 Update macro.h (#366) 2024-10-25 10:10:43 +10:00
Wietse Wind
287c01ad04 Improve Admin command RPC Post (#384)
* Improve ADMIN HTTP POST RPC notifications: no queue limit, shorter HTTP call TTL
2024-10-25 10:10:14 +10:00
tequ
4239124750 Update amendments for rippled-standalone.cfg (#385) 2024-10-25 09:10:45 +10:00
RichardAH
1e45d4120c Update to boost186 (#377)
Co-Authored-By: Denis Angell <dangell@transia.co>
2024-10-17 01:29:17 +02:00
Denis Angell
9e446bcc85 Fix: Missing Headers - Linker Errors (#300) 2024-10-16 18:19:21 +10:00
RichardAH
376727d20c use std::lexicographical_compare_three_way for uint spaceship operator (#374)
* use std::lexicographical_compare_three_way for uint spaceship operator

* clang
2024-10-16 11:37:26 +10:00
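A sketch of the approach named in the commit title, shown on an illustrative fixed-width type rather than the project's actual `base_uint`:

```cpp
#include <algorithm>
#include <array>
#include <compare>
#include <cstdint>

struct uint256
{
    std::array<std::uint8_t, 32> data{};

    friend std::strong_ordering
    operator<=>(uint256 const& a, uint256 const& b)
    {
        // Byte-wise three-way comparison, most significant byte first.
        return std::lexicographical_compare_three_way(
            a.data.begin(), a.data.end(), b.data.begin(), b.data.end());
    }

    friend bool
    operator==(uint256 const& a, uint256 const& b) = default;
};
```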
Richard Holland
d921c87c88 also update max transactions for tx queue 2024-10-11 09:59:04 +11:00
RichardAH
7b94d3d99d increase txn in ledger target to 1000 (#372) 2024-10-11 08:21:23 +10:00
438 changed files with 38337 additions and 11993 deletions

.githooks/pre-commit Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
# Pre-commit hook that runs the suspicious patterns check on staged files
# Get the repository's root directory
repo_root=$(git rev-parse --show-toplevel)
# Run the suspicious patterns script in pre-commit mode
"$repo_root/suspicious_patterns.sh" --pre-commit
# Exit with the same code as the script
exit $?

.githooks/setup.sh Normal file

@@ -0,0 +1,4 @@
#!/bin/bash
echo "Configuring git to use .githooks directory..."
git config core.hooksPath .githooks


@@ -0,0 +1,31 @@
name: 'Configure ccache'
description: 'Sets up ccache with consistent configuration'
inputs:
max_size:
description: 'Maximum cache size'
required: false
default: '2G'
hash_dir:
description: 'Whether to include directory paths in hash'
required: false
default: 'true'
compiler_check:
description: 'How to check compiler for changes'
required: false
default: 'content'
runs:
using: 'composite'
steps:
- name: Configure ccache
shell: bash
run: |
mkdir -p ~/.ccache
export CONF_PATH="${CCACHE_CONFIGPATH:-${CCACHE_DIR:-$HOME/.ccache}/ccache.conf}"
mkdir -p $(dirname "$CONF_PATH")
echo "max_size = ${{ inputs.max_size }}" > "$CONF_PATH"
echo "hash_dir = ${{ inputs.hash_dir }}" >> "$CONF_PATH"
echo "compiler_check = ${{ inputs.compiler_check }}" >> "$CONF_PATH"
ccache -p # Print config for verification
ccache -z # Zero statistics before the build


@@ -0,0 +1,108 @@
name: build
description: 'Builds the project with ccache integration'
inputs:
generator:
description: 'CMake generator to use'
required: true
configuration:
description: 'Build configuration (Debug, Release, etc.)'
required: true
build_dir:
description: 'Directory to build in'
required: false
default: '.build'
cc:
description: 'C compiler to use'
required: false
default: ''
cxx:
description: 'C++ compiler to use'
required: false
default: ''
compiler-id:
description: 'Unique identifier for compiler/version combination used for cache keys'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
ccache_enabled:
description: 'Whether to use ccache'
required: false
default: 'true'
main_branch:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
runs:
using: 'composite'
steps:
- name: Generate safe branch name
if: inputs.ccache_enabled == 'true'
id: safe-branch
shell: bash
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | tr -c 'a-zA-Z0-9_.-' '-')
echo "name=${SAFE_BRANCH}" >> $GITHUB_OUTPUT
- name: Restore ccache directory
if: inputs.ccache_enabled == 'true'
id: ccache-restore
uses: actions/cache/restore@v4
with:
path: ~/.ccache
key: ${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ steps.safe-branch.outputs.name }}
restore-keys: |
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ inputs.main_branch }}
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-
- name: Configure project
shell: bash
run: |
mkdir -p ${{ inputs.build_dir }}
cd ${{ inputs.build_dir }}
# Set compiler environment variables if provided
if [ -n "${{ inputs.cc }}" ]; then
export CC="${{ inputs.cc }}"
fi
if [ -n "${{ inputs.cxx }}" ]; then
export CXX="${{ inputs.cxx }}"
fi
# Configure ccache launcher args
CCACHE_ARGS=""
if [ "${{ inputs.ccache_enabled }}" = "true" ]; then
CCACHE_ARGS="-DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
# Run CMake configure
cmake .. \
-G "${{ inputs.generator }}" \
$CCACHE_ARGS \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${{ inputs.configuration }}
- name: Build project
shell: bash
run: |
cd ${{ inputs.build_dir }}
cmake --build . --config ${{ inputs.configuration }} --parallel $(nproc)
- name: Show ccache statistics
if: inputs.ccache_enabled == 'true'
shell: bash
run: ccache -s
- name: Save ccache directory
if: inputs.ccache_enabled == 'true'
uses: actions/cache/save@v4
with:
path: ~/.ccache
key: ${{ steps.ccache-restore.outputs.cache-primary-key }}


@@ -0,0 +1,86 @@
name: dependencies
description: 'Installs build dependencies with caching'
inputs:
configuration:
description: 'Build configuration (Debug, Release, etc.)'
required: true
build_dir:
description: 'Directory to build dependencies in'
required: false
default: '.build'
compiler-id:
description: 'Unique identifier for compiler/version combination used for cache keys'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
cache_enabled:
description: 'Whether to use caching'
required: false
default: 'true'
main_branch:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
outputs:
cache-hit:
description: 'Whether there was a cache hit'
value: ${{ steps.cache-restore-conan.outputs.cache-hit }}
runs:
using: 'composite'
steps:
- name: Generate safe branch name
if: inputs.cache_enabled == 'true'
id: safe-branch
shell: bash
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | tr -c 'a-zA-Z0-9_.-' '-')
echo "name=${SAFE_BRANCH}" >> $GITHUB_OUTPUT
- name: Restore Conan cache
if: inputs.cache_enabled == 'true'
id: cache-restore-conan
uses: actions/cache/restore@v4
with:
path: |
~/.conan
~/.conan2
key: ${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-${{ inputs.configuration }}
restore-keys: |
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
${{ runner.os }}-conan-v${{ inputs.cache_version }}-
- name: Export custom recipes
shell: bash
run: |
conan export external/snappy snappy/1.1.9@
conan export external/soci soci/4.0.3@
- name: Install dependencies
shell: bash
run: |
# Create build directory
mkdir -p ${{ inputs.build_dir }}
cd ${{ inputs.build_dir }}
# Install dependencies using conan
conan install \
--output-folder . \
--build missing \
--settings build_type=${{ inputs.configuration }} \
..
- name: Save Conan cache
if: inputs.cache_enabled == 'true' && steps.cache-restore-conan.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.conan
~/.conan2
key: ${{ steps.cache-restore-conan.outputs.cache-primary-key }}


@@ -2,37 +2,104 @@ name: Build using Docker
on:
push:
branches: [ "dev", "candidate", "release", "jshooks" ]
branches: ["dev", "candidate", "release", "jshooks"]
pull_request:
branches: [ "dev", "candidate", "release", "jshooks" ]
branches: ["dev", "candidate", "release", "jshooks"]
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: false
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
DEBUG_BUILD_CONTAINERS_AFTER_CLEANUP: 1
jobs:
checkout:
runs-on: [self-hosted, vanity]
outputs:
checkout_path: ${{ steps.vars.outputs.checkout_path }}
steps:
- uses: actions/checkout@v3
with:
clean: false
- name: Prepare checkout path
id: vars
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | sed -e 's/[^a-zA-Z0-9._-]/-/g')
CHECKOUT_PATH="${SAFE_BRANCH}-${{ github.sha }}"
echo "checkout_path=${CHECKOUT_PATH}" >> "$GITHUB_OUTPUT"
- uses: actions/checkout@v4
with:
path: ${{ steps.vars.outputs.checkout_path }}
clean: true
fetch-depth: 2 # Only get the last 2 commits, to avoid fetching all history
checkpatterns:
runs-on: [self-hosted, vanity]
needs: checkout
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Check for suspicious patterns
run: /bin/bash suspicious_patterns.sh
- name: Check for suspicious patterns
run: /bin/bash suspicious_patterns.sh
build:
runs-on: [self-hosted, vanity]
needs: checkpatterns
needs: [checkpatterns, checkout]
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Build using Docker
run: /bin/bash release-builder.sh
- name: Set Cleanup Script Path
run: |
echo "JOB_CLEANUP_SCRIPT=$(mktemp)" >> $GITHUB_ENV
- name: Build using Docker
run: /bin/bash release-builder.sh
- name: Stop Container (Cleanup)
if: always()
run: |
echo "Running cleanup script: $JOB_CLEANUP_SCRIPT"
/bin/bash -e -x "$JOB_CLEANUP_SCRIPT"
CLEANUP_EXIT_CODE=$?
if [[ "$CLEANUP_EXIT_CODE" -eq 0 ]]; then
echo "Cleanup script succeeded."
rm -f "$JOB_CLEANUP_SCRIPT"
echo "Cleanup script removed."
else
echo "⚠️ Cleanup script failed! Keeping for debugging: $JOB_CLEANUP_SCRIPT"
fi
if [[ "${DEBUG_BUILD_CONTAINERS_AFTER_CLEANUP}" == "1" ]]; then
echo "🔍 Checking for leftover containers..."
BUILD_CONTAINERS=$(docker ps --format '{{.Names}}' | grep '^xahaud_cached_builder' || echo "")
if [[ -n "$BUILD_CONTAINERS" ]]; then
echo "⚠️ WARNING: Some build containers are still running"
echo "$BUILD_CONTAINERS"
else
echo "✅ No build containers found"
fi
fi
tests:
runs-on: [self-hosted, vanity]
needs: build
needs: [build, checkout]
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Unit tests
run: /bin/bash docker-unit-tests.sh
- name: Unit tests
run: /bin/bash docker-unit-tests.sh
cleanup:
runs-on: [self-hosted, vanity]
needs: [tests, checkout]
if: always()
steps:
- name: Cleanup workspace
run: |
CHECKOUT_PATH="${{ needs.checkout.outputs.checkout_path }}"
echo "Cleaning workspace for ${CHECKOUT_PATH}"
rm -rf "${{ github.workspace }}/${CHECKOUT_PATH}"


@@ -4,21 +4,32 @@ on: [push, pull_request]
jobs:
check:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
env:
CLANG_VERSION: 10
steps:
- uses: actions/checkout@v3
- name: Install clang-format
# - name: Install clang-format
# run: |
# codename=$( lsb_release --codename --short )
# sudo tee /etc/apt/sources.list.d/llvm.list >/dev/null <<EOF
# deb http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
# deb-src http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
# EOF
# wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
# sudo apt-get update -y
# sudo apt-get install -y clang-format-${CLANG_VERSION}
# Temporary fix until this commit is merged
# https://github.com/XRPLF/rippled/commit/552377c76f55b403a1c876df873a23d780fcc81c
- name: Download and install clang-format
run: |
codename=$( lsb_release --codename --short )
sudo tee /etc/apt/sources.list.d/llvm.list >/dev/null <<EOF
deb http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
deb-src http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
EOF
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
sudo apt-get update
sudo apt-get install clang-format-${CLANG_VERSION}
sudo apt-get update -y
sudo apt-get install -y libtinfo5
curl -LO https://github.com/llvm/llvm-project/releases/download/llvmorg-10.0.1/clang+llvm-10.0.1-x86_64-linux-gnu-ubuntu-16.04.tar.xz
tar -xf clang+llvm-10.0.1-x86_64-linux-gnu-ubuntu-16.04.tar.xz
sudo mv clang+llvm-10.0.1-x86_64-linux-gnu-ubuntu-16.04 /opt/clang-10
sudo ln -s /opt/clang-10/bin/clang-format /usr/local/bin/clang-format-10
- name: Format src/ripple
run: find src/ripple -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
- name: Format src/test
@@ -30,7 +41,7 @@ jobs:
git diff --exit-code | tee "clang-format.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: clang-format.patch


@@ -1,25 +0,0 @@
name: Build and publish Doxygen documentation
on:
push:
branches:
- dev
jobs:
job:
runs-on: ubuntu-latest
container:
image: docker://rippleci/rippled-ci-builder:2944b78d22db
steps:
- name: checkout
uses: actions/checkout@v2
- name: build
run: |
mkdir build
cd build
cmake -DBoost_NO_BOOST_CMAKE=ON ..
cmake --build . --target docs --parallel $(nproc)
- name: publish
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: build/docs/html


@@ -18,7 +18,7 @@ jobs:
git diff --exit-code | tee "levelization.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: levelization.patch

.github/workflows/xahau-ga-macos.yml vendored Normal file

@@ -0,0 +1,116 @@
name: MacOS - GA Runner
on:
push:
branches: ["dev", "candidate", "release"]
pull_request:
branches: ["dev", "candidate", "release"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
strategy:
matrix:
generator:
- Ninja
configuration:
- Debug
runs-on: macos-15
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 1
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Conan
run: |
brew install conan@1
# Add Conan 1 to the PATH for this job
echo "$(brew --prefix conan@1)/bin" >> $GITHUB_PATH
- name: Install Coreutils
run: |
brew install coreutils
echo "Num proc: $(nproc)"
- name: Install Ninja
if: matrix.generator == 'Ninja'
run: brew install ninja
- name: Install Python
run: |
if which python3 > /dev/null 2>&1; then
echo "Python 3 executable exists"
python3 --version
else
brew install python@3.12
fi
# Create 'python' symlink if it doesn't exist (for tools expecting 'python')
if ! which python > /dev/null 2>&1; then
sudo ln -sf $(which python3) /usr/local/bin/python
fi
- name: Install CMake
run: |
if which cmake > /dev/null 2>&1; then
echo "cmake executable exists"
cmake --version
else
brew install cmake
fi
- name: Install ccache
run: brew install ccache
- name: Configure ccache
uses: ./.github/actions/xahau-configure-ccache
with:
max_size: 2G
hash_dir: true
compiler_check: content
- name: Check environment
run: |
echo "PATH:"
echo "${PATH}" | tr ':' '\n'
which python && python --version || echo "Python not found"
which conan && conan --version || echo "Conan not found"
which cmake && cmake --version || echo "CMake not found"
clang --version
ccache --version
echo "---- Full Environment ----"
env
- name: Configure Conan
run: |
conan profile new default --detect || true # Ignore error if profile exists
conan profile update settings.compiler.cppstd=20 default
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
with:
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Build
uses: ./.github/actions/xahau-ga-build
with:
generator: ${{ matrix.generator }}
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Test
run: |
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)

.github/workflows/xahau-ga-nix.yml vendored Normal file

@@ -0,0 +1,123 @@
name: Nix - GA Runner
on:
push:
branches: ["dev", "candidate", "release"]
pull_request:
branches: ["dev", "candidate", "release"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build-job:
runs-on: ubuntu-latest
outputs:
artifact_name: ${{ steps.set-artifact-name.outputs.artifact_name }}
strategy:
fail-fast: false
matrix:
compiler: [gcc]
configuration: [Debug]
include:
- compiler: gcc
cc: gcc-11
cxx: g++-11
compiler_id: gcc-11
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 1
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install build dependencies
run: |
sudo apt-get update
sudo apt-get install -y ninja-build ${{ matrix.cc }} ${{ matrix.cxx }} ccache
# Install specific Conan version needed
pip install --upgrade "conan<2"
- name: Configure ccache
uses: ./.github/actions/xahau-configure-ccache
with:
max_size: 2G
hash_dir: true
compiler_check: content
- name: Configure Conan
run: |
conan profile new default --detect || true # Ignore error if profile exists
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler=${{ matrix.compiler }} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update env.CC=/usr/bin/${{ matrix.cc }} default
conan profile update env.CXX=/usr/bin/${{ matrix.cxx }} default
conan profile update conf.tools.build:compiler_executables='{"c": "/usr/bin/${{ matrix.cc }}", "cpp": "/usr/bin/${{ matrix.cxx }}"}' default
# Set correct compiler version based on matrix.compiler
if [ "${{ matrix.compiler }}" = "gcc" ]; then
conan profile update settings.compiler.version=11 default
elif [ "${{ matrix.compiler }}" = "clang" ]; then
conan profile update settings.compiler.version=14 default
fi
# Display profile for verification
conan profile show default
- name: Check environment
run: |
echo "PATH:"
echo "${PATH}" | tr ':' '\n'
which conan && conan --version || echo "Conan not found"
which cmake && cmake --version || echo "CMake not found"
which ${{ matrix.cc }} && ${{ matrix.cc }} --version || echo "${{ matrix.cc }} not found"
which ${{ matrix.cxx }} && ${{ matrix.cxx }} --version || echo "${{ matrix.cxx }} not found"
which ccache && ccache --version || echo "ccache not found"
echo "---- Full Environment ----"
env
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
with:
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Build
uses: ./.github/actions/xahau-ga-build
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
cc: ${{ matrix.cc }}
cxx: ${{ matrix.cxx }}
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Set artifact name
id: set-artifact-name
run: |
ARTIFACT_NAME="build-output-nix-${{ github.run_id }}-${{ matrix.compiler }}-${{ matrix.configuration }}"
echo "artifact_name=${ARTIFACT_NAME}" >> "$GITHUB_OUTPUT"
echo "Using artifact name: ${ARTIFACT_NAME}"
- name: Debug build directory
run: |
echo "Checking build directory contents: ${{ env.build_dir }}"
ls -la ${{ env.build_dir }} || echo "Build directory not found or empty"
- name: Run tests
run: |
# Ensure the binary exists before trying to run
if [ -f "${{ env.build_dir }}/rippled" ]; then
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)
else
echo "Error: rippled executable not found in ${{ env.build_dir }}"
exit 1
fi

.gitignore vendored

@@ -114,3 +114,5 @@ pkg_out
pkg
CMakeUserPresets.json
bld.rippled/
generated


@@ -3,7 +3,7 @@
"C_Cpp.clang_format_path": ".clang-format",
"C_Cpp.clang_format_fallbackStyle": "{ ColumnLimit: 0 }",
"[cpp]":{
"editor.wordBasedSuggestions": false,
"editor.wordBasedSuggestions": "off",
"editor.suggest.insertMode": "replace",
"editor.semanticHighlighting.enabled": true,
"editor.tabSize": 4,

BUILD.md Normal file

@@ -0,0 +1,442 @@
> These instructions assume you have a C++ development environment ready
> with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up
> on Linux, macOS, or Windows, see [our guide](./docs/build/environment.md).
>
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/ripple/rippled/releases).
```
git checkout master
```
For the latest release candidate, choose the `release` branch.
```
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
git checkout develop
```
## Minimum Requirements
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.55](https://conan.io/downloads.html)
- [CMake 3.16](https://cmake.org/download/)
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 10 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
We don't recommend Windows for `rippled` production at this time. As of
January 2023, Ubuntu has the highest level of quality assurance, testing,
and support.
Windows developers should use Visual Studio 2019. `rippled` isn't
compatible with [Boost](https://www.boost.org/) 1.78 or 1.79, and Conan
can't build earlier Boost versions.
**Note:** 32-bit Windows development isn't supported.
## Steps
### Set Up Conan
1. (Optional) If you've never used Conan, use autodetect to set up a default profile.
```
conan profile new default --detect
```
2. Update the compiler settings.
```
conan profile update settings.compiler.cppstd=20 default
```
Linux developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI.
```
conan profile update settings.compiler.libcxx=libstdc++11 default
```
On Windows, you should use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture.
```
conan profile update settings.arch=x86_64 default
```
3. (Optional) If you have multiple compilers installed on your platform,
make sure that Conan and CMake select the one you want to use.
This setting will set the correct variables (`CMAKE_<LANG>_COMPILER`)
in the generated CMake toolchain file.
```
conan profile update 'conf.tools.build:compiler_executables={"c": "<path>", "cpp": "<path>"}' default
```
It should choose the compiler for dependencies as well,
but not all of them have a Conan recipe that respects this setting (yet).
For the rest, you can set these environment variables:
```
conan profile update env.CC=<path> default
conan profile update env.CXX=<path> default
```
4. Export our [Conan recipe for Snappy](./external/snappy).
It doesn't explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
```
conan export external/snappy snappy/1.1.9@
```
5. Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
```
conan export external/soci soci/4.0.3@
```
### Build and Test
1. Create a build directory and move into it.
```
mkdir .build
cd .build
```
You can use any directory name. Conan treats your working directory as an
install folder and generates files with implementation details.
You don't need to worry about these files, but make sure to change
your working directory to your build directory before calling Conan.
**Note:** You can specify a directory for the installation files by adding
the `install-folder` or `-if` option to every `conan install` command
in the next step.
2. Generate CMake files for every configuration you want to build.
```
conan install .. --output-folder . --build missing --settings build_type=Release
conan install .. --output-folder . --build missing --settings build_type=Debug
```
For a single-configuration generator, e.g. `Unix Makefiles` or `Ninja`,
you only need to run this command once.
For a multi-configuration generator, e.g. `Visual Studio`, you may want to
run it more than once.
Each of these commands should also have a different `build_type` setting.
A second command with the same `build_type` setting will overwrite the files
generated by the first. You can pass the build type on the command line with
`--settings build_type=$BUILD_TYPE` or in the profile itself,
under the section `[settings]` with the key `build_type`.
If you are using a Microsoft Visual C++ compiler,
then you will need to ensure consistency between the `build_type` setting
and the `compiler.runtime` setting.
When `build_type` is `Release`, `compiler.runtime` should be `MT`.
When `build_type` is `Debug`, `compiler.runtime` should be `MTd`.
```
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
```
3. Configure CMake and pass the toolchain file generated by Conan, located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
Single-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
```
Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches the `build_type` setting you chose in the previous
step.
Multi-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
```
**Note:** You can pass build options for `rippled` in this step.
4. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
you must pass the option `--config` to select the build configuration.
Single-config generators:
```
cmake --build .
```
Multi-config generators:
```
cmake --build . --config Release
cmake --build . --config Debug
```
5. Test rippled.
Single-config generators:
```
./rippled --unittest
```
Multi-config generators:
```
./Release/rippled --unittest
./Debug/rippled --unittest
```
The location of `rippled` in your build directory depends on your CMake
generator. Pass `--help` to see the rest of the command line options.
## Options
| Option | Default Value | Description |
| --- | ---| ---|
| `assert` | OFF | Enable assertions.
| `reporting` | OFF | Build the reporting mode feature. |
| `tests` | ON | Build tests. |
| `unity` | ON | Configure a unity build. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
## Troubleshooting
### Conan
If you have trouble building dependencies after changing Conan settings,
try removing the Conan cache.
```
rm -rf ~/.conan/data
```
### no std::result_of
If your compiler version is recent enough to have removed `std::result_of` as
part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor
definition to your build.
```
conan profile update 'options.boost:extra_b2_flags="define=BOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'conf.tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
```
### recompile with -fPIC
If you get a linker error suggesting that you recompile Boost with
position-independent code, such as:
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):
requires unsupported dynamic reloc 11; recompile with -fPIC
```
Conan most likely downloaded a bad binary distribution of the dependency.
This seems to be a [bug][1] in Conan just for Boost 1.77.0 compiled with GCC
for Linux. The solution is to build the dependency locally by passing
`--build boost` when calling `conan install`.
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/dc8aedd23a0f0a773a5fcdcfe1ae3e89c4205978/lib/libboost_container.a(alloc_lib.o): requires unsupported dynamic reloc 11; recompile with -fPIC
```
## Add a Dependency
If you want to experiment with a new package, follow these steps:
1. Search for the package on [Conan Center](https://conan.io/center/).
2. Modify [`conanfile.py`](./conanfile.py):
- Add a version of the package to the `requires` property.
- Change any default options for the package by adding them to the
`default_options` property (with syntax `'$package:$option': $value`).
3. Modify [`CMakeLists.txt`](./CMakeLists.txt):
- Add a call to `find_package($package REQUIRED)`.
- Link a library from the package to the target `ripple_libs`
(search for the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
4. Start coding! Don't forget to include whatever headers you need from the package.
## A crash course in CMake and Conan
To better understand how to use Conan,
we should first understand _why_ we use Conan,
and to understand that,
we need to understand how we use CMake.
### CMake
Technically, you don't need CMake to build this project.
You could manually compile every translation unit into an object file,
using the right compiler options,
and then manually link all those objects together,
using the right linker options.
However, that is very tedious and error-prone,
which is why we lean on tools like CMake.
We have written CMake configuration files
([`CMakeLists.txt`](./CMakeLists.txt) and friends)
for this project so that CMake can be used to correctly compile and link
all of the translation units in it.
Or rather, CMake will generate files for a separate build system
(e.g. Make, Ninja, Visual Studio, Xcode, etc.)
that compile and link all of the translation units.
Even then, CMake has parameters, some of which are platform-specific.
In CMake's parlance, parameters are specially-named **variables** like
[`CMAKE_BUILD_TYPE`][build_type] or
[`CMAKE_MSVC_RUNTIME_LIBRARY`][runtime].
Parameters include:
- what build system to generate files for
- where to find the compiler and linker
- where to find dependencies, e.g. libraries and headers
- how to link dependencies, e.g. any special compiler or linker flags that
need to be used with them, including preprocessor definitions
- how to compile translation units, e.g. with optimizations, debug symbols,
position-independent code, etc.
- on Windows, which runtime library to link with
For some of these parameters, like the build system and compiler,
CMake goes through a complicated search process to choose default values.
For others, like the dependencies,
_we_ had written in the CMake configuration files of this project
our own complicated process to choose defaults.
For most developers, things "just worked"... until they didn't, and then
you were left trying to debug one of these complicated processes, instead of
choosing and manually passing the parameter values yourself.
You can pass every parameter to CMake on the command line,
but writing out these parameters every time we want to configure CMake is
a pain.
Most humans prefer to put them into a configuration file, once, that
CMake can read every time it is configured.
For CMake, that file is a [toolchain file][toolchain].
### Conan
These next few paragraphs on Conan are going to read much like the ones above
for CMake.
Technically, you don't need Conan to build this project.
You could manually download, configure, build, and install all of the
dependencies yourself, and then pass all of the parameters necessary for
CMake to link to those dependencies.
To guarantee ABI compatibility, you must be sure to use the same set of
compiler and linker options for all dependencies _and_ this project.
However, that is very tedious and error-prone, which is why we lean on tools
like Conan.
We have written a Conan configuration file ([`conanfile.py`](./conanfile.py))
so that Conan can be used to correctly download, configure, build, and install
all of the dependencies for this project,
using a single set of compiler and linker options for all of them.
It generates files that contain almost all of the parameters that CMake
expects.
Those files include:
- A single toolchain file.
- For every dependency, a CMake [package configuration file][pcf],
[package version file][pvf], and for every build type, a package
targets file.
Together, these files implement version checking and define `IMPORTED`
targets for the dependencies.
The toolchain file itself amends the search path
([`CMAKE_PREFIX_PATH`][prefix_path]) so that [`find_package()`][find_package]
will [discover][search] the generated package configuration files.
**Nearly all we must do to properly configure CMake is pass the toolchain
file.**
What CMake parameters are left out?
You'll still need to pick a build system generator,
and if you choose a single-configuration generator,
you'll need to pass the `CMAKE_BUILD_TYPE`,
which should match the `build_type` setting you gave to Conan.
Even then, Conan has parameters, some of which are platform-specific.
In Conan's parlance, parameters are either settings or options.
**Settings** are shared by all packages, e.g. the build type.
**Options** are specific to a given package, e.g. whether to build and link
OpenSSL as a shared library.
For settings, Conan goes through a complicated search process to choose
defaults.
For options, each package recipe defines its own defaults.
You can pass every parameter to Conan on the command line,
but it is more convenient to put them in a [profile][profile].
**All we must do to properly configure Conan is edit and pass the profile.**
[1]: https://github.com/conan-io/conan-center-index/issues/13168
[5]: https://en.wikipedia.org/wiki/Unity_build
[build_type]: https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
[runtime]: https://cmake.org/cmake/help/latest/variable/CMAKE_MSVC_RUNTIME_LIBRARY.html
[toolchain]: https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html
[pcf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-configuration-file
[pvf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-version-file
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[search]: https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure
[prefix_path]: https://cmake.org/cmake/help/latest/variable/CMAKE_PREFIX_PATH.html
[profile]: https://docs.conan.io/en/latest/reference/profiles.html

BUILD_LEDGER.md Normal file

@@ -0,0 +1,465 @@
# Hash Migration Implementation via BuildLedger
## Overview
This document outlines an approach to migrating from SHA-512 Half to BLAKE3 by performing the state map rekeying in the ledger building process, which bypasses the metadata generation problem inherent in the transaction processing pipeline.
## The Core Problem
When switching from SHA-512 Half to BLAKE3, every object in the state map needs to be rekeyed because the hash (which IS the key in the SHAMap) changes. This would generate metadata showing:
- Every object deleted at its old SHA-512 key
- Every object created at its new BLAKE3 key
- Total metadata size: 2× the entire state size (potentially gigabytes)
## The Solution: Bypass Transaction Processing
Instead of trying to rekey within the transaction processor (which tracks all changes for metadata), perform the rekeying AFTER transaction processing but BEFORE ledger finalization.
## Implementation Location
The key intervention point is in `buildLedgerImpl()` at line 63 of `BuildLedger.cpp`:
```cpp
// BuildLedger.cpp, lines 58-65
{
OpenView accum(&*built);
assert(!accum.open());
applyTxs(accum, built); // Apply transactions (including pseudo-txns)
accum.apply(*built); // Apply accumulated changes to the ledger
}
// <-- INTERVENTION POINT HERE
built->updateSkipList();
```
## Detailed Implementation
### 1. Pseudo-Transaction Role (Simple Flag Setting)
```cpp
// In Change.cpp
TER Change::applyHashMigration()
{
// The pseudo-transaction just sets a flag
// The actual migration happens in BuildLedger
JLOG(j_.warn()) << "Hash migration pseudo transaction triggered at ledger "
<< view().seq();
// Create a migration flag object
auto migrationFlag = std::make_shared<SLE>(
keylet::hashMigrationFlag(
hash_options{view().seq(), KEYLET_MIGRATION_FLAG}));
migrationFlag->setFieldU32(sfLedgerSequence, view().seq());
migrationFlag->setFieldU8(sfMigrationStatus, 1); // 1 = pending
view().insert(migrationFlag);
return tesSUCCESS;
}
```
### 2. BuildLedger Modification
```cpp
// In BuildLedger.cpp, after line 63
template <class ApplyTxs>
std::shared_ptr<Ledger>
buildLedgerImpl(
std::shared_ptr<Ledger const> const& parent,
NetClock::time_point closeTime,
const bool closeTimeCorrect,
NetClock::duration closeResolution,
Application& app,
beast::Journal j,
ApplyTxs&& applyTxs)
{
auto built = std::make_shared<Ledger>(*parent, closeTime);
if (built->isFlagLedger() && built->rules().enabled(featureNegativeUNL))
{
built->updateNegativeUNL();
}
{
OpenView accum(&*built);
assert(!accum.open());
applyTxs(accum, built);
accum.apply(*built);
}
// NEW: Check for hash migration flag
if (shouldPerformHashMigration(built, app, j))
{
performHashMigration(built, app, j);
}
built->updateSkipList();
// ... rest of function
}
// New helper functions
bool shouldPerformHashMigration(
std::shared_ptr<Ledger> const& ledger,
Application& app,
beast::Journal j)
{
// Check if we're in the migration window
constexpr LedgerIndex MIGRATION_START = 20'000'000;
constexpr LedgerIndex MIGRATION_END = 20'000'010;
if (ledger->seq() < MIGRATION_START || ledger->seq() >= MIGRATION_END)
return false;
// Check for migration flag
auto const flag = ledger->read(keylet::hashMigrationFlag(
hash_options{ledger->seq(), KEYLET_MIGRATION_FLAG}));
if (!flag)
return false;
return flag->getFieldU8(sfMigrationStatus) == 1; // 1 = pending
}
void performHashMigration(
std::shared_ptr<Ledger> const& ledger,
Application& app,
beast::Journal j)
{
JLOG(j.warn()) << "PERFORMING HASH MIGRATION at ledger " << ledger->seq();
auto& oldStateMap = ledger->stateMap();
// Create new state map with BLAKE3 hashing
SHAMap newStateMap(SHAMapType::STATE, ledger->family());
newStateMap.setLedgerSeq(ledger->seq());
// Track statistics
std::size_t objectCount = 0;
auto startTime = std::chrono::steady_clock::now();
// Walk the entire state map and rekey everything
oldStateMap.visitLeaves([&](SHAMapItem const& item) {
try {
// Deserialize the ledger entry
SerialIter sit(item.slice());
auto sle = std::make_shared<SLE>(sit, item.key());
// The new key would be calculated with BLAKE3
// For now, we'd need the actual BLAKE3 implementation
// uint256 newKey = calculateBlake3Key(sle);
// For this example, let's assume we have a function that
// computes the new key based on the SLE type and contents
uint256 newKey = computeNewHashKey(sle, ledger->seq());
// Re-serialize the SLE
Serializer s;
sle->add(s);
// Add to new map with new key
newStateMap.addGiveItem(
SHAMapNodeType::tnACCOUNT_STATE,
make_shamapitem(newKey, s.slice()));
objectCount++;
if (objectCount % 10000 == 0) {
JLOG(j.info()) << "Migration progress: " << objectCount
<< " objects rekeyed";
}
}
catch (std::exception const& e) {
JLOG(j.error()) << "Failed to migrate object " << item.key()
<< ": " << e.what();
throw;
}
});
auto endTime = std::chrono::steady_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
endTime - startTime);
JLOG(j.warn()) << "Hash migration completed: " << objectCount
<< " objects rekeyed in " << duration.count() << "ms";
// Swap the state maps
oldStateMap = std::move(newStateMap);
// Update the migration flag to completed
auto flag = ledger->peek(keylet::hashMigrationFlag(
hash_options{ledger->seq(), KEYLET_MIGRATION_FLAG}));
if (flag) {
flag->setFieldU8(sfMigrationStatus, 2); // 2 = completed
ledger->rawReplace(flag);
}
}
uint256 computeNewHashKey(
std::shared_ptr<SLE const> const& sle,
LedgerIndex ledgerSeq)
{
// This would use BLAKE3 instead of SHA-512 Half
// Implementation depends on the BLAKE3 integration
// For now, this is a placeholder
// The actual implementation would:
// 1. Determine the object type
// 2. Extract the identifying fields
// 3. Hash them with BLAKE3
// 4. Return the new key
return uint256(); // Placeholder
}
```
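For illustration only, here is one way the `computeNewHashKey()` placeholder could eventually produce a 256-bit digest using the official BLAKE3 C API (`blake3_hasher_init` / `blake3_hasher_update` / `blake3_hasher_finalize`). The `blake3Half` helper name and the truncate-to-256-bits convention (mirroring SHA-512 Half) are assumptions; the real rekeying would re-derive each keylet from the object's identifying fields rather than hashing the serialized SLE.
```cpp
#include <blake3.h>
// Sketch only: a BLAKE3 analogue of sha512Half that keeps the first 256 bits.
// blake3Half is a hypothetical helper, not an existing function in this codebase.
static uint256
blake3Half(Slice const& data)
{
    blake3_hasher hasher;
    blake3_hasher_init(&hasher);
    blake3_hasher_update(&hasher, data.data(), data.size());
    uint256 result;
    blake3_hasher_finalize(&hasher, result.data(), result.size());
    return result;
}
```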
## Why This Approach Works
### 1. No Metadata Explosion
- The rekeying happens AFTER the `OpenView` is destroyed
- No change tracking occurs during the rekeying
- Only the migration flag generates metadata (minimal)
### 2. Direct SHAMap Access
- We have direct access to `built->stateMap()`
- Can manipulate the raw data structure without going through ApplyView
- Can create a new SHAMap and swap it in
### 3. Clean Separation of Concerns
- Pseudo-transaction: "Signal that migration should happen"
- BuildLedger: "Actually perform the migration"
- Transaction processor: Unchanged, doesn't need to handle massive rekeying
### 4. Timing is Perfect
- After all transactions are applied
- Before the ledger is finalized
- Before the skip list is updated
- Before the SHAMap is flushed to disk
## Files Referenced in This Analysis
### Core Implementation Files
- `src/ripple/app/ledger/impl/BuildLedger.cpp` - Main implementation location
- `src/ripple/app/ledger/BuildLedger.h` - Header for build functions
- `src/ripple/app/tx/impl/Change.cpp` - Pseudo-transaction handler
- `src/ripple/app/tx/impl/Change.h` - Change transactor header
### Transaction Processing Pipeline (analyzed but bypassed)
- `src/ripple/app/tx/impl/Transactor.cpp` - Base transaction processor
- `src/ripple/app/tx/impl/Transactor.h` - Transactor header
- `src/ripple/app/tx/impl/apply.cpp` - Transaction application
- `src/ripple/app/tx/impl/applySteps.cpp` - Transaction routing
- `src/ripple/app/tx/impl/ApplyContext.h` - Application context
### Ledger and View Classes
- `src/ripple/app/ledger/Ledger.h` - Ledger class definition
- `src/ripple/app/ledger/Ledger.cpp` - Ledger implementation
- `src/ripple/ledger/ApplyView.h` - View interface
- `src/ripple/ledger/ApplyViewImpl.h` - View implementation header
- `src/ripple/ledger/impl/ApplyViewImpl.cpp` - View implementation
- `src/ripple/ledger/impl/ApplyViewBase.cpp` - Base view implementation
- `src/ripple/ledger/detail/ApplyViewBase.h` - Base view header
- `src/ripple/ledger/OpenView.h` - Open ledger view
- `src/ripple/ledger/RawView.h` - Raw view interface
### SHAMap and Data Structures
- `src/ripple/shamap/SHAMap.h` - SHAMap class definition
### Metadata Generation
- `src/ripple/protocol/TxMeta.h` - Transaction metadata header
- `src/ripple/protocol/impl/TxMeta.cpp` - Metadata implementation
### Consensus and Pseudo-Transaction Injection
- `src/ripple/app/consensus/RCLConsensus.cpp` - Consensus implementation
### Supporting Documents
- `PSEUDO_TRANSACTIONS.md` - Documentation on pseudo-transactions
- `HASH_MIGRATION_CONTEXT.md` - Context for hash migration work
## Key Advantages
1. **Architecturally Clean**: Works within existing ledger building framework
2. **No Metadata Issues**: Completely bypasses the metadata generation problem
3. **Atomic Operation**: Either the entire state is rekeyed or none of it is
4. **Fail-Safe**: Can be wrapped in try-catch for error handling
5. **Observable**: Can log progress for large state maps
6. **Testable**: Can be tested independently of transaction processing
## Challenges and Considerations
1. **Performance**: Rekeying millions of objects will take time
- Solution: This happens during consensus; all nodes do it simultaneously
2. **Memory Usage**: Need to hold both old and new SHAMaps temporarily
- Solution: Could potentially do in-place updates with careful ordering
3. **Verification**: Need to ensure all nodes get the same result
- Solution: Deterministic rekeying based on ledger sequence
4. **Rollback**: If migration fails, need to handle gracefully
- Solution: Keep the old map until the new map is fully built and verified (see the sketch after this list)
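A minimal sketch of that guarded swap, assuming hypothetical `countLeaves()` and `verifyMigratedRoot()` verification helpers:
```cpp
// Sketch only: keep the old state map unless the rebuilt BLAKE3 map passes
// a basic verification step; only then perform the irreversible swap.
bool
commitMigratedMap(SHAMap& oldStateMap, SHAMap& newStateMap, beast::Journal j)
{
    if (countLeaves(oldStateMap) != countLeaves(newStateMap) ||
        !verifyMigratedRoot(newStateMap))
    {
        JLOG(j.error()) << "BLAKE3 migration aborted: verification failed";
        return false;  // the old map remains the ledger's state map
    }
    oldStateMap = std::move(newStateMap);
    return true;
}
```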
## Conclusion
By performing the hash migration at the ledger building level rather than within the transaction processing pipeline, we can successfully rekey the entire state map without generating massive metadata. This approach leverages the existing architecture's separation between transaction processing and ledger construction, providing a clean and efficient solution to what initially appeared to be an intractable problem.
---
## APPENDIX: Revised Implementation Following Ledger Pattern
After reviewing the existing pattern in `BuildLedger.cpp`, it's clear that special ledger operations are implemented as methods on the `Ledger` class itself (e.g., `built->updateNegativeUNL()`). Following this pattern, the hash migration should be implemented as `Ledger::migrateToBlake3()`.
### Updated BuildLedger.cpp Implementation
```cpp
// In BuildLedger.cpp, following the existing pattern
template <class ApplyTxs>
std::shared_ptr<Ledger>
buildLedgerImpl(
std::shared_ptr<Ledger const> const& parent,
NetClock::time_point closeTime,
const bool closeTimeCorrect,
NetClock::duration closeResolution,
Application& app,
beast::Journal j,
ApplyTxs&& applyTxs)
{
auto built = std::make_shared<Ledger>(*parent, closeTime);
if (built->isFlagLedger() && built->rules().enabled(featureNegativeUNL))
{
built->updateNegativeUNL();
}
{
OpenView accum(&*built);
assert(!accum.open());
applyTxs(accum, built);
accum.apply(*built);
}
// NEW: Check and perform hash migration following the pattern
if (built->rules().enabled(featureBLAKE3Migration) &&
built->shouldMigrateToBlake3())
{
built->migrateToBlake3();
}
built->updateSkipList();
// ... rest of function
}
```
### Ledger.h Addition
```cpp
// In src/ripple/app/ledger/Ledger.h
class Ledger final : public std::enable_shared_from_this<Ledger>,
public DigestAwareReadView,
public TxsRawView,
public CountedObject<Ledger>
{
public:
// ... existing methods ...
/** Update the Negative UNL ledger component. */
void
updateNegativeUNL();
/** Check if hash migration to BLAKE3 should be performed */
bool
shouldMigrateToBlake3() const;
/** Perform hash migration from SHA-512 Half to BLAKE3
* This rekeys all objects in the state map with new BLAKE3 hashes.
* Must be called after transactions are applied but before the
* ledger is finalized.
*/
void
migrateToBlake3();
// ... rest of class ...
};
```
### Ledger.cpp Implementation
```cpp
// In src/ripple/app/ledger/Ledger.cpp
bool
Ledger::shouldMigrateToBlake3() const
{
// Check if we're in the migration window
constexpr LedgerIndex MIGRATION_START = 20'000'000;
constexpr LedgerIndex MIGRATION_END = 20'000'010;
if (seq() < MIGRATION_START || seq() >= MIGRATION_END)
return false;
// Check for migration flag set by pseudo-transaction
auto const flag = read(keylet::hashMigrationFlag(
hash_options{seq(), KEYLET_MIGRATION_FLAG}));
if (!flag)
return false;
return flag->getFieldU8(sfMigrationStatus) == 1; // 1 = pending
}
void
Ledger::migrateToBlake3()
{
JLOG(j_.warn()) << "Performing BLAKE3 hash migration at ledger " << seq();
// Create new state map with BLAKE3 hashing
SHAMap newStateMap(SHAMapType::STATE, stateMap_.family());
newStateMap.setLedgerSeq(seq());
std::size_t objectCount = 0;
auto startTime = std::chrono::steady_clock::now();
// Walk the entire state map and rekey everything
stateMap_.visitLeaves([&](SHAMapItem const& item) {
// Deserialize the ledger entry
SerialIter sit(item.slice());
auto sle = std::make_shared<SLE>(sit, item.key());
// Calculate new BLAKE3-based key
// This would use the actual BLAKE3 implementation
uint256 newKey = computeBlake3Key(sle);
// Re-serialize and add to new map
Serializer s;
sle->add(s);
newStateMap.addGiveItem(
SHAMapNodeType::tnACCOUNT_STATE,
make_shamapitem(newKey, s.slice()));
if (++objectCount % 10000 == 0) {
JLOG(j_.info()) << "Migration progress: " << objectCount
<< " objects rekeyed";
}
});
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - startTime);
JLOG(j_.warn()) << "BLAKE3 migration completed: " << objectCount
<< " objects rekeyed in " << duration.count() << "ms";
// Swap the state maps
stateMap_ = std::move(newStateMap);
// Update the migration flag to completed
auto flag = peek(keylet::hashMigrationFlag(
hash_options{seq(), KEYLET_MIGRATION_FLAG}));
if (flag) {
flag->setFieldU8(sfMigrationStatus, 2); // 2 = completed
rawReplace(flag);
}
}
```
This approach follows the established pattern in the codebase where special ledger operations are encapsulated as methods on the `Ledger` class itself, making the code more maintainable and consistent with the existing architecture.


@@ -23,6 +23,11 @@ else()
message(STATUS "ACL not found, continuing without ACL support")
endif()
add_library(libxrpl INTERFACE)
target_link_libraries(libxrpl INTERFACE xrpl_core)
add_library(xrpl::libxrpl ALIAS libxrpl)
#[===============================[
beast/legacy FILES:
TODO: review these sources for removal or replacement
@@ -144,11 +149,11 @@ target_link_libraries (xrpl_core
PUBLIC
OpenSSL::Crypto
Ripple::boost
NIH::WasmEdge
NIH::MongoCxx
wasmedge::wasmedge
Ripple::syslibs
NIH::secp256k1
NIH::ed25519-donna
secp256k1::secp256k1
ed25519::ed25519
BLAKE3::blake3
date::date
Ripple::opts)
#[=================================[
@@ -393,6 +398,7 @@ target_sources (rippled PRIVATE
src/ripple/app/misc/NegativeUNLVote.cpp
src/ripple/app/misc/NetworkOPs.cpp
src/ripple/app/misc/SHAMapStoreImp.cpp
src/ripple/app/misc/StateAccounting.cpp
src/ripple/app/misc/detail/impl/WorkSSL.cpp
src/ripple/app/misc/impl/AccountTxPaging.cpp
src/ripple/app/misc/impl/AmendmentTable.cpp
@@ -434,13 +440,18 @@ target_sources (rippled PRIVATE
src/ripple/app/tx/impl/CancelOffer.cpp
src/ripple/app/tx/impl/CashCheck.cpp
src/ripple/app/tx/impl/Change.cpp
src/ripple/app/tx/impl/ClaimReward.cpp
src/ripple/app/tx/impl/Clawback.cpp
src/ripple/app/tx/impl/CreateCheck.cpp
src/ripple/app/tx/impl/CreateOffer.cpp
src/ripple/app/tx/impl/CreateTicket.cpp
src/ripple/app/tx/impl/DeleteAccount.cpp
src/ripple/app/tx/impl/DepositPreauth.cpp
src/ripple/app/tx/impl/Escrow.cpp
src/ripple/app/tx/impl/GenesisMint.cpp
src/ripple/app/tx/impl/Import.cpp
src/ripple/app/tx/impl/InvariantCheck.cpp
src/ripple/app/tx/impl/Invoke.cpp
src/ripple/app/tx/impl/NFTokenAcceptOffer.cpp
src/ripple/app/tx/impl/NFTokenBurn.cpp
src/ripple/app/tx/impl/NFTokenCancelOffer.cpp
@@ -449,14 +460,11 @@ target_sources (rippled PRIVATE
src/ripple/app/tx/impl/OfferStream.cpp
src/ripple/app/tx/impl/PayChan.cpp
src/ripple/app/tx/impl/Payment.cpp
src/ripple/app/tx/impl/SetAccount.cpp
src/ripple/app/tx/impl/SetRegularKey.cpp
src/ripple/app/tx/impl/SetHook.cpp
src/ripple/app/tx/impl/ClaimReward.cpp
src/ripple/app/tx/impl/GenesisMint.cpp
src/ripple/app/tx/impl/Import.cpp
src/ripple/app/tx/impl/Invoke.cpp
src/ripple/app/tx/impl/Remit.cpp
src/ripple/app/tx/impl/SetAccount.cpp
src/ripple/app/tx/impl/SetHook.cpp
src/ripple/app/tx/impl/SetRemarks.cpp
src/ripple/app/tx/impl/SetRegularKey.cpp
src/ripple/app/tx/impl/SetSignerList.cpp
src/ripple/app/tx/impl/SetTrust.cpp
src/ripple/app/tx/impl/SignerEntries.cpp
@@ -539,7 +547,9 @@ target_sources (rippled PRIVATE
subdir: nodestore
#]===============================]
src/ripple/nodestore/backend/CassandraFactory.cpp
src/ripple/nodestore/backend/RWDBFactory.cpp
src/ripple/nodestore/backend/MemoryFactory.cpp
src/ripple/nodestore/backend/FlatmapFactory.cpp
src/ripple/nodestore/backend/NuDBFactory.cpp
src/ripple/nodestore/backend/NullFactory.cpp
src/ripple/nodestore/backend/RocksDBFactory.cpp
@@ -604,6 +614,7 @@ target_sources (rippled PRIVATE
src/ripple/rpc/handlers/BlackList.cpp
src/ripple/rpc/handlers/BookOffers.cpp
src/ripple/rpc/handlers/CanDelete.cpp
src/ripple/rpc/handlers/Catalogue.cpp
src/ripple/rpc/handlers/Connect.cpp
src/ripple/rpc/handlers/ConsensusInfo.cpp
src/ripple/rpc/handlers/CrawlShards.cpp
@@ -627,6 +638,7 @@ target_sources (rippled PRIVATE
src/ripple/rpc/handlers/LogLevel.cpp
src/ripple/rpc/handlers/LogRotate.cpp
src/ripple/rpc/handlers/Manifest.cpp
src/ripple/rpc/handlers/MapStats.cpp
src/ripple/rpc/handlers/NFTOffers.cpp
src/ripple/rpc/handlers/NodeToShard.cpp
src/ripple/rpc/handlers/NoRippleCheck.cpp
@@ -659,6 +671,7 @@ target_sources (rippled PRIVATE
src/ripple/rpc/handlers/ValidatorListSites.cpp
src/ripple/rpc/handlers/Validators.cpp
src/ripple/rpc/handlers/WalletPropose.cpp
src/ripple/rpc/handlers/Catalogue.cpp
src/ripple/rpc/impl/DeliveredAmount.cpp
src/ripple/rpc/impl/Handler.cpp
src/ripple/rpc/impl/LegacyPathFind.cpp
@@ -670,6 +683,9 @@ target_sources (rippled PRIVATE
src/ripple/rpc/impl/ShardVerificationScheduler.cpp
src/ripple/rpc/impl/Status.cpp
src/ripple/rpc/impl/TransactionSign.cpp
src/ripple/rpc/impl/NFTokenID.cpp
src/ripple/rpc/impl/NFTokenOfferID.cpp
src/ripple/rpc/impl/NFTSyntheticSerializer.cpp
#[===============================[
main sources:
subdir: perflog
@@ -708,6 +724,7 @@ if (tests)
src/test/app/BaseFee_test.cpp
src/test/app/Check_test.cpp
src/test/app/ClaimReward_test.cpp
src/test/app/Clawback_test.cpp
src/test/app/CrossingLimits_test.cpp
src/test/app/DeliverMin_test.cpp
src/test/app/DepositAuth_test.cpp
@@ -738,6 +755,7 @@ if (tests)
src/test/app/Path_test.cpp
src/test/app/PayChan_test.cpp
src/test/app/PayStrand_test.cpp
src/test/app/PreviousTxn_test.cpp
src/test/app/PseudoTx_test.cpp
src/test/app/RCLCensorshipDetector_test.cpp
src/test/app/RCLValidations_test.cpp
@@ -745,11 +763,15 @@ if (tests)
src/test/app/Remit_test.cpp
src/test/app/SHAMapStore_test.cpp
src/test/app/SetAuth_test.cpp
src/test/app/SetHook_test.cpp
src/test/app/SetHookTSH_test.cpp
src/test/app/SetRegularKey_test.cpp
src/test/app/SetRemarks_test.cpp
src/test/app/SetTrust_test.cpp
src/test/app/Taker_test.cpp
src/test/app/TheoreticalQuality_test.cpp
src/test/app/Ticket_test.cpp
src/test/app/Touch_test.cpp
src/test/app/Transaction_ordering_test.cpp
src/test/app/TrustAndBalance_test.cpp
src/test/app/TxQ_test.cpp
@@ -757,8 +779,6 @@ if (tests)
src/test/app/ValidatorKeys_test.cpp
src/test/app/ValidatorList_test.cpp
src/test/app/ValidatorSite_test.cpp
src/test/app/SetHook_test.cpp
src/test/app/SetHookTSH_test.cpp
src/test/app/Wildcard_test.cpp
src/test/app/XahauGenesis_test.cpp
src/test/app/tx/apply_test.cpp
@@ -892,11 +912,13 @@ if (tests)
src/test/jtx/impl/rate.cpp
src/test/jtx/impl/regkey.cpp
src/test/jtx/impl/reward.cpp
src/test/jtx/impl/remarks.cpp
src/test/jtx/impl/remit.cpp
src/test/jtx/impl/sendmax.cpp
src/test/jtx/impl/seq.cpp
src/test/jtx/impl/sig.cpp
src/test/jtx/impl/tag.cpp
src/test/jtx/impl/TestHelpers.cpp
src/test/jtx/impl/ticket.cpp
src/test/jtx/impl/token.cpp
src/test/jtx/impl/trust.cpp
@@ -953,6 +975,7 @@ if (tests)
test sources:
subdir: protocol
#]===============================]
src/test/protocol/blake3_test.cpp
src/test/protocol/BuildInfo_test.cpp
src/test/protocol/InnerObjectFormats_test.cpp
src/test/protocol/Issue_test.cpp
@@ -989,6 +1012,7 @@ if (tests)
src/test/rpc/AccountTx_test.cpp
src/test/rpc/AmendmentBlocked_test.cpp
src/test/rpc/Book_test.cpp
src/test/rpc/Catalogue_test.cpp
src/test/rpc/DepositAuthorized_test.cpp
src/test/rpc/DeliveredAmount_test.cpp
src/test/rpc/Feature_test.cpp
@@ -1047,6 +1071,9 @@ target_link_libraries (rippled
Ripple::opts
Ripple::libs
Ripple::xrpl_core
BLAKE3::blake3
# Workaround for a Conan 1.x bug...
m
)
exclude_if_included (rippled)
# define a macro for tests that might need to

View File

@@ -1,6 +1,13 @@
#[===================================================================[
docs target (optional)
#]===================================================================]
# Early return if the `docs` directory is missing,
# e.g. when we are building a Conan package.
if(NOT EXISTS docs)
return()
endif()
if (tests)
find_package (Doxygen)
if (NOT TARGET Doxygen::doxygen)


@@ -4,7 +4,6 @@
install (
TARGETS
ed25519-donna
common
opts
ripple_syslibs
@@ -16,17 +15,6 @@ install (
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
if(${INSTALL_SECP256K1})
install (
TARGETS
secp256k1
EXPORT RippleExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
endif()
install (EXPORT RippleExports
FILE RippleTargets.cmake
NAMESPACE Ripple::


@@ -14,7 +14,7 @@ if (is_multiconfig)
file(GLOB md_files RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} CONFIGURE_DEPENDS
*.md)
LIST(APPEND all_sources ${md_files})
foreach (_target secp256k1 ed25519-donna pbufs xrpl_core rippled)
foreach (_target secp256k1::secp256k1 ed25519::ed25519 pbufs xrpl_core rippled)
get_target_property (_type ${_target} TYPE)
if(_type STREQUAL "INTERFACE_LIBRARY")
continue()


@@ -0,0 +1,52 @@
find_package(Boost 1.83 REQUIRED
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
program_options
regex
system
thread
)
add_library(ripple_boost INTERFACE)
add_library(Ripple::boost ALIAS ripple_boost)
if(XCODE)
target_include_directories(ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
target_compile_options(ripple_boost INTERFACE --system-header-prefix="boost/")
else()
target_include_directories(ripple_boost SYSTEM BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
endif()
target_link_libraries(ripple_boost
INTERFACE
Boost::boost
Boost::chrono
Boost::container
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::program_options
Boost::regex
Boost::system
Boost::iostreams
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(ripple_boost INTERFACE Boost::disable_autolinking)
endif()
if(san AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else
# for gcc ?
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts
INTERFACE
# ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif()


@@ -0,0 +1,22 @@
find_package(Protobuf 3.8)
file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/proto_gen)
set(ccbd ${CMAKE_CURRENT_BINARY_DIR})
set(CMAKE_CURRENT_BINARY_DIR ${CMAKE_BINARY_DIR}/proto_gen)
protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS src/ripple/proto/ripple.proto)
set(CMAKE_CURRENT_BINARY_DIR ${ccbd})
add_library(pbufs STATIC ${PROTO_SRCS} ${PROTO_HDRS})
target_include_directories(pbufs SYSTEM PUBLIC
${CMAKE_BINARY_DIR}/proto_gen
${CMAKE_BINARY_DIR}/proto_gen/src/ripple/proto
)
target_link_libraries(pbufs protobuf::libprotobuf)
target_compile_options(pbufs
PUBLIC
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
)
add_library(Ripple::pbufs ALIAS pbufs)


@@ -0,0 +1,62 @@
find_package(gRPC 1.23)
#[=================================[
generate protobuf sources for
grpc defs and bundle into a
static lib
#]=================================]
set(GRPC_GEN_DIR "${CMAKE_BINARY_DIR}/proto_gen_grpc")
file(MAKE_DIRECTORY ${GRPC_GEN_DIR})
set(GRPC_PROTO_SRCS)
set(GRPC_PROTO_HDRS)
set(GRPC_PROTO_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/proto/org")
file(GLOB_RECURSE GRPC_DEFINITION_FILES LIST_DIRECTORIES false "${GRPC_PROTO_ROOT}/*.proto")
foreach(file ${GRPC_DEFINITION_FILES})
get_filename_component(_abs_file ${file} ABSOLUTE)
get_filename_component(_abs_dir ${_abs_file} DIRECTORY)
get_filename_component(_basename ${file} NAME_WE)
get_filename_component(_proto_inc ${GRPC_PROTO_ROOT} DIRECTORY) # updir one level
file(RELATIVE_PATH _rel_root_file ${_proto_inc} ${_abs_file})
get_filename_component(_rel_root_dir ${_rel_root_file} DIRECTORY)
file(RELATIVE_PATH _rel_dir ${CMAKE_CURRENT_SOURCE_DIR} ${_abs_dir})
set(src_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.cc")
set(src_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.cc")
set(hdr_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.h")
set(hdr_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.h")
add_custom_command(
OUTPUT ${src_1} ${src_2} ${hdr_1} ${hdr_2}
COMMAND protobuf::protoc
ARGS --grpc_out=${GRPC_GEN_DIR}
--cpp_out=${GRPC_GEN_DIR}
--plugin=protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
-I ${_proto_inc} -I ${_rel_dir}
${_abs_file}
DEPENDS ${_abs_file} protobuf::protoc gRPC::grpc_cpp_plugin
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
COMMENT "Running gRPC C++ protocol buffer compiler on ${file}"
VERBATIM)
set_source_files_properties(${src_1} ${src_2} ${hdr_1} ${hdr_2} PROPERTIES GENERATED TRUE)
list(APPEND GRPC_PROTO_SRCS ${src_1} ${src_2})
list(APPEND GRPC_PROTO_HDRS ${hdr_1} ${hdr_2})
endforeach()
add_library(grpc_pbufs STATIC ${GRPC_PROTO_SRCS} ${GRPC_PROTO_HDRS})
#target_include_directories(grpc_pbufs PRIVATE src)
target_include_directories(grpc_pbufs SYSTEM PUBLIC ${GRPC_GEN_DIR})
target_link_libraries(grpc_pbufs
"gRPC::grpc++"
# libgrpc is missing references.
absl::random_random
)
target_compile_options(grpc_pbufs
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>)
add_library(Ripple::grpc_pbufs ALIAS grpc_pbufs)


@@ -1,14 +1,16 @@
#[===================================================================[
NIH dep: boost
#]===================================================================]
if((NOT DEFINED BOOST_ROOT) AND(DEFINED ENV{BOOST_ROOT}))
set(BOOST_ROOT $ENV{BOOST_ROOT})
endif()
if((NOT DEFINED BOOST_LIBRARYDIR) AND(DEFINED ENV{BOOST_LIBRARYDIR}))
set(BOOST_LIBRARYDIR $ENV{BOOST_LIBRARYDIR})
endif()
file(TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
if(WIN32 OR CYGWIN)
# Workaround for MSVC having two boost versions - x86 and x64 on same PC in stage folders
if(DEFINED BOOST_ROOT)
if((NOT DEFINED BOOST_LIBRARYDIR) AND (DEFINED BOOST_ROOT))
if(IS_DIRECTORY ${BOOST_ROOT}/stage64/lib)
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/stage64/lib)
elseif(IS_DIRECTORY ${BOOST_ROOT}/stage/lib)
@@ -44,7 +46,7 @@ else()
endif()
# TBD:
# Boost_USE_DEBUG_RUNTIME: When ON, uses Boost libraries linked against the
find_package(Boost 1.70 REQUIRED
find_package(Boost 1.86 REQUIRED
COMPONENTS
chrono
container
@@ -55,6 +57,7 @@ find_package(Boost 1.70 REQUIRED
program_options
regex
system
iostreams
thread)
add_library(ripple_boost INTERFACE)
@@ -74,6 +77,7 @@ target_link_libraries(ripple_boost
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::iostreams
Boost::program_options
Boost::regex
Boost::system


@@ -248,6 +248,7 @@ include(FindPackageHandleStandardArgs)
# Save project's policies
cmake_policy(PUSH)
cmake_policy(SET CMP0057 NEW) # if IN_LIST
#cmake_policy(SET CMP0144 NEW)
#-------------------------------------------------------------------------------
# Before we go searching, check whether a boost cmake package is available, unless
@@ -969,7 +970,24 @@ function(_Boost_COMPONENT_DEPENDENCIES component _ret)
set(_Boost_WAVE_DEPENDENCIES filesystem serialization thread chrono date_time atomic)
set(_Boost_WSERIALIZATION_DEPENDENCIES serialization)
endif()
if(NOT Boost_VERSION_STRING VERSION_LESS 1.77.0)
# Special handling for Boost 1.86.0 and higher
if(NOT Boost_VERSION_STRING VERSION_LESS 1.86.0)
# Explicitly set these for Boost 1.86
set(_Boost_IOSTREAMS_DEPENDENCIES "") # No dependencies for iostreams in 1.86
# Debug output to help diagnose the issue
if(Boost_DEBUG)
message(STATUS "Using special dependency settings for Boost 1.86.0+")
message(STATUS "Component: ${component}, uppercomponent: ${uppercomponent}")
message(STATUS "Boost_VERSION_STRING: ${Boost_VERSION_STRING}")
message(STATUS "BOOST_ROOT: $ENV{BOOST_ROOT}")
message(STATUS "BOOST_LIBRARYDIR: $ENV{BOOST_LIBRARYDIR}")
endif()
endif()
# Only show warning for versions beyond what we've defined
if(NOT Boost_VERSION_STRING VERSION_LESS 1.87.0)
message(WARNING "New Boost version may have incorrect or missing dependencies and imported targets")
endif()
endif()
@@ -1879,6 +1897,18 @@ foreach(COMPONENT ${Boost_FIND_COMPONENTS})
list(INSERT _boost_LIBRARY_SEARCH_DIRS_RELEASE 0 ${Boost_LIBRARY_DIR_DEBUG})
endif()
if(NOT Boost_VERSION_STRING VERSION_LESS 1.86.0)
if(BOOST_LIBRARYDIR AND EXISTS "${BOOST_LIBRARYDIR}")
# Clear existing search paths and use only BOOST_LIBRARYDIR
set(_boost_LIBRARY_SEARCH_DIRS_RELEASE "${BOOST_LIBRARYDIR}" NO_DEFAULT_PATH)
set(_boost_LIBRARY_SEARCH_DIRS_DEBUG "${BOOST_LIBRARYDIR}" NO_DEFAULT_PATH)
if(Boost_DEBUG)
message(STATUS "Boost 1.86: Setting library search dirs to BOOST_LIBRARYDIR: ${BOOST_LIBRARYDIR}")
endif()
endif()
endif()
# Avoid passing backslashes to _Boost_FIND_LIBRARY due to macro re-parsing.
string(REPLACE "\\" "/" _boost_LIBRARY_SEARCH_DIRS_tmp "${_boost_LIBRARY_SEARCH_DIRS_RELEASE}")


@@ -1,100 +0,0 @@
#[===================================================================[
NIH dep: mongo: MongoDB C++ driver (bsoncxx and mongocxx).
#]===================================================================]
include(FetchContent)
FetchContent_Declare(
mongo_c_driver_src
GIT_REPOSITORY https://github.com/mongodb/mongo-c-driver.git
GIT_TAG 1.17.4
)
FetchContent_GetProperties(mongo_c_driver_src)
if(NOT mongo_c_driver_src_POPULATED)
message(STATUS "Pausing to download MongoDB C driver...")
FetchContent_Populate(mongo_c_driver_src)
endif()
set(MONGO_C_DRIVER_BUILD_DIR "${mongo_c_driver_src_BINARY_DIR}")
set(MONGO_C_DRIVER_INCLUDE_DIR "${mongo_c_driver_src_SOURCE_DIR}/src/libbson/src")
set(MONGO_C_DRIVER_INSTALL_PREFIX "${CMAKE_CURRENT_BINARY_DIR}/mongo_c_install")
set(MONGO_C_DRIVER_CMAKE_ARGS
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF
-DENABLE_STATIC=ON
-DENABLE_SHARED=OFF
-DCMAKE_INSTALL_PREFIX=${MONGO_C_DRIVER_INSTALL_PREFIX}
)
ExternalProject_Add(mongo_c_driver
PREFIX ${CMAKE_CURRENT_BINARY_DIR}/mongo_c
SOURCE_DIR ${mongo_c_driver_src_SOURCE_DIR}
CMAKE_ARGS ${MONGO_C_DRIVER_CMAKE_ARGS}
BUILD_COMMAND ${CMAKE_COMMAND} --build . --config $<CONFIG>
INSTALL_COMMAND ${CMAKE_COMMAND} --install .
)
FetchContent_Declare(
mongo_cxx_driver_src
GIT_REPOSITORY https://github.com/mongodb/mongo-cxx-driver.git
GIT_TAG r3.10.2
)
FetchContent_GetProperties(mongo_cxx_driver_src)
if(NOT mongo_cxx_driver_src_POPULATED)
message(STATUS "Pausing to download MongoDB C++ driver...")
FetchContent_Populate(mongo_cxx_driver_src)
endif()
set(MONGO_CXX_DRIVER_BUILD_DIR "${mongo_cxx_driver_src_BINARY_DIR}")
set(MONGO_CXX_DRIVER_INCLUDE_DIR "${mongo_cxx_driver_src_SOURCE_DIR}/include")
set(MONGO_CXX_DRIVER_INSTALL_PREFIX "${CMAKE_CURRENT_BINARY_DIR}/mongo_cxx_install")
set(MONGO_CXX_DRIVER_CMAKE_ARGS
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DBUILD_SHARED_AND_STATIC_LIBS=ON
-DBSONCXX_ENABLE_MONGOC=ON
-DCMAKE_INSTALL_PREFIX=${MONGO_CXX_DRIVER_INSTALL_PREFIX}
-DCMAKE_PREFIX_PATH=${MONGO_C_DRIVER_INSTALL_PREFIX}
)
ExternalProject_Add(mongo_cxx_driver
PREFIX ${CMAKE_CURRENT_BINARY_DIR}/mongo_cxx
SOURCE_DIR ${mongo_cxx_driver_src_SOURCE_DIR}
CMAKE_ARGS ${MONGO_CXX_DRIVER_CMAKE_ARGS}
BUILD_COMMAND ${CMAKE_COMMAND} --build . --config $<CONFIG>
INSTALL_COMMAND ${CMAKE_COMMAND} --install .
DEPENDS mongo_c_driver
)
add_library(bsoncxx STATIC IMPORTED GLOBAL)
add_library(mongocxx STATIC IMPORTED GLOBAL)
add_dependencies(bsoncxx mongo_cxx_driver)
add_dependencies(mongocxx mongo_cxx_driver)
ExternalProject_Get_Property(mongo_cxx_driver BINARY_DIR)
execute_process(
COMMAND
mkdir -p "${BINARY_DIR}/include/bsoncxx/v_noabi"
mkdir -p "${BINARY_DIR}/include/mongocxx/v_noabi"
)
set_target_properties(bsoncxx PROPERTIES
IMPORTED_LOCATION "${MONGO_CXX_DRIVER_INSTALL_PREFIX}/lib/libbsoncxx-static.a"
INTERFACE_INCLUDE_DIRECTORIES "${MONGO_CXX_DRIVER_INSTALL_PREFIX}/include/bsoncxx/v_noabi"
)
set_target_properties(mongocxx PROPERTIES
IMPORTED_LOCATION "${MONGO_CXX_DRIVER_INSTALL_PREFIX}/lib/libmongocxx-static.a"
INTERFACE_INCLUDE_DIRECTORIES "${MONGO_CXX_DRIVER_INSTALL_PREFIX}/include/mongocxx/v_noabi"
)
# Link the C driver libraries
find_library(BSON_LIB bson-1.0 PATHS ${MONGO_C_DRIVER_INSTALL_PREFIX}/lib)
find_library(MONGOC_LIB mongoc-1.0 PATHS ${MONGO_C_DRIVER_INSTALL_PREFIX}/lib)
target_link_libraries(ripple_libs INTERFACE bsoncxx mongocxx ${BSON_LIB} ${MONGOC_LIB})
add_library(NIH::MongoCxx ALIAS mongocxx)


@@ -81,4 +81,4 @@ if(XAR_LIBRARY)
else()
message(WARNING "xar library not found... (only important for mac builds)")
endif()
add_library (NIH::WasmEdge ALIAS wasmedge)
add_library (wasmedge::wasmedge ALIAS wasmedge)


@@ -74,7 +74,11 @@ else ()
if (NOT _location)
message (FATAL_ERROR "using pkg-config for grpc, can't find c-ares")
endif ()
add_library (c-ares::cares ${_static} IMPORTED GLOBAL)
if(${_location} MATCHES "\\.a$")
add_library(c-ares::cares STATIC IMPORTED GLOBAL)
else()
add_library(c-ares::cares SHARED IMPORTED GLOBAL)
endif()
set_target_properties (c-ares::cares PROPERTIES
IMPORTED_LOCATION ${_location}
INTERFACE_INCLUDE_DIRECTORIES "${${_prefix}_INCLUDE_DIRS}"
@@ -204,6 +208,7 @@ else ()
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_CXX_STANDARD=17
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${CMAKE_TOOLCHAIN_FILE}>:-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}>
$<$<BOOL:${VCPKG_TARGET_TRIPLET}>:-DVCPKG_TARGET_TRIPLET=${VCPKG_TARGET_TRIPLET}>


@@ -53,7 +53,7 @@ Loop: test.app test.jtx
test.app > test.jtx
Loop: test.app test.rpc
test.rpc == test.app
test.rpc ~= test.app
Loop: test.jtx test.toplevel
test.toplevel > test.jtx


@@ -1,14 +1,24 @@
cmake_minimum_required (VERSION 3.16)
if (POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif ()
project (rippled)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif()
if(POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
if(POLICY CMP0144)
cmake_policy(SET CMP0144 NEW)
endif()
project (rippled)
set(Boost_NO_BOOST_CMAKE ON)
# make GIT_COMMIT_HASH define available to all sources
@@ -23,14 +33,27 @@ if(Git_FOUND)
endif()
endif() #git
if (thread_safety_analysis)
if(thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/deps")
option(USE_CONAN "Use Conan package manager for dependencies" OFF)
# Then, auto-detect if conan_toolchain.cmake is being used
if(CMAKE_TOOLCHAIN_FILE)
# Check if the toolchain file path contains "conan_toolchain"
if(CMAKE_TOOLCHAIN_FILE MATCHES "conan_toolchain")
set(USE_CONAN ON CACHE BOOL "Using Conan detected from toolchain file" FORCE)
message(STATUS "Conan toolchain detected: ${CMAKE_TOOLCHAIN_FILE}")
message(STATUS "Building with Conan dependencies")
endif()
endif()
if (NOT USE_CONAN)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/deps")
endif()
include (CheckCXXCompilerFlag)
include (FetchContent)
@@ -44,7 +67,9 @@ endif ()
include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
include(RippledNIH)
if (NOT USE_CONAN)
include(RippledNIH)
endif()
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)
@@ -57,25 +82,86 @@ include(RippledCompiler)
include(RippledInterface)
###
if (NOT USE_CONAN)
add_subdirectory(src/secp256k1)
add_subdirectory(src/ed25519-donna)
include(deps/Boost)
include(deps/OpenSSL)
# include(deps/Secp256k1)
# include(deps/Ed25519-donna)
include(deps/Lz4)
include(deps/Libarchive)
include(deps/Sqlite)
include(deps/Soci)
include(deps/Snappy)
include(deps/Rocksdb)
include(deps/Nudb)
include(deps/date)
include(deps/Protobuf)
include(deps/gRPC)
include(deps/cassandra)
include(deps/Postgres)
include(deps/WasmEdge)
else()
include(conan/Boost)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
add_subdirectory(src/secp256k1)
add_subdirectory(src/ed25519-donna)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(Snappy REQUIRED)
find_package(wasmedge REQUIRED)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
find_package(BLAKE3 REQUIRED)
include(conan/Protobuf)
include(conan/gRPC)
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
include(deps/Boost)
include(deps/OpenSSL)
include(deps/Secp256k1)
include(deps/Ed25519-donna)
include(deps/Lz4)
include(deps/Libarchive)
include(deps/Sqlite)
include(deps/Soci)
include(deps/Snappy)
include(deps/Rocksdb)
include(deps/Nudb)
include(deps/date)
include(deps/Protobuf)
include(deps/gRPC)
include(deps/cassandra)
include(deps/Postgres)
include(deps/WasmEdge)
include(deps/Mongo)
if(reporting)
find_package(cassandra-cpp-driver REQUIRED)
find_package(PostgreSQL REQUIRED)
target_link_libraries(ripple_libs INTERFACE
cassandra-cpp-driver::cassandra-cpp-driver
PostgreSQL::PostgreSQL
)
endif()
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
LibArchive::LibArchive
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
Ripple::grpc_pbufs
Ripple::pbufs
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
endif()
###

DISABLE_OLD_ENTRIES.md Normal file

@@ -0,0 +1,217 @@
# Auto-Disable Strategy for Hash Migration
## Core Concept
Instead of trying to fix entries with hardcoded old keys, **automatically disable them** during migration. If an entry contains old keys, it's broken anyway - make that explicit.
## The Algorithm
### Phase 1: Build Complete Old Key Set
```cpp
std::unordered_set<uint256> all_old_keys;
// Collect ALL SHA-512 keys from current state
stateMap_.visitLeaves([&](SHAMapItem const& item) {
all_old_keys.insert(item.key());
// Also collect keys from reference fields
SerialIter sit(item.slice());
auto sle = std::make_shared<SLE>(sit, item.key());
// Collect from vector fields
if (sle->isFieldPresent(sfIndexes)) {
for (auto& key : sle->getFieldV256(sfIndexes)) {
all_old_keys.insert(key);
}
}
// ... check all other reference fields
});
```
### Phase 2: Scan and Disable
#### Hook Definitions (WASM Code)
```cpp
bool scanWASMForKeys(Blob const& wasm_code, std::unordered_set<uint256> const& keys) {
// Scan for 32-byte sequences matching known keys
if (wasm_code.size() < 32) return false; // too small to contain a key
for (size_t i = 0; i + 32 <= wasm_code.size(); i++) {
uint256 potential_key = extract32Bytes(wasm_code, i);
if (keys.count(potential_key)) {
return true; // Found hardcoded key!
}
}
return false;
}
// During migration
for (auto& hookDef : allHookDefinitions) {
if (scanWASMForKeys(hookDef->getFieldBlob(sfCreateCode), all_old_keys)) {
hookDef->setFieldU32(sfFlags, hookDef->getFlags() | HOOK_DISABLED_OLD_KEYS);
disabled_hooks.push_back(hookDef->key());
}
}
```
#### Hook State (Arbitrary Data)
```cpp
for (auto& hookState : allHookStates) {
auto data = hookState->getFieldBlob(sfHookStateData);
if (containsAnyKey(data, all_old_keys)) {
hookState->setFieldU32(sfFlags, STATE_INVALID_OLD_KEYS);
disabled_states.push_back(hookState->key());
}
}
```
#### Other Vulnerable Entry Types
```cpp
void disableEntriesWithOldKeys(SLE& sle) {
switch(sle.getType()) {
case ltHOOK:
if (hasOldKeys(sle)) {
sle.setFlag(HOOK_DISABLED_MIGRATION);
}
break;
case ltESCROW:
// Check if destination/condition contains old keys
if (containsOldKeyReferences(sle)) {
sle.setFlag(ESCROW_FROZEN_MIGRATION);
}
break;
case ltPAYCHAN:
// Payment channels with old key references
if (hasOldKeyInFields(sle)) {
sle.setFlag(PAYCHAN_SUSPENDED_MIGRATION);
}
break;
case ltHOOK_STATE:
// Already handled above
break;
}
}
```
## Flag Definitions
```cpp
// New flags for migration-disabled entries
constexpr uint32_t HOOK_DISABLED_OLD_KEYS = 0x00100000;
constexpr uint32_t STATE_INVALID_OLD_KEYS = 0x00200000;
constexpr uint32_t ESCROW_FROZEN_MIGRATION = 0x00400000;
constexpr uint32_t PAYCHAN_SUSPENDED_MIGRATION = 0x00800000;
constexpr uint32_t ENTRY_BROKEN_MIGRATION = 0x01000000;
```
## Execution Prevention
```cpp
// In transaction processing
TER HookExecutor::executeHook(Hook const& hook) {
if (hook.isFieldPresent(sfFlags)) {
if (hook.getFlags() & HOOK_DISABLED_OLD_KEYS) {
return tecHOOK_DISABLED_MIGRATION;
}
}
// Normal execution
}
TER processEscrow(Escrow const& escrow) {
if (escrow.getFlags() & ESCROW_FROZEN_MIGRATION) {
return tecESCROW_FROZEN_MIGRATION;
}
// Normal processing
}
```
## Re-enabling Process
### For Hooks
Developer must submit a new SetHook transaction with updated WASM:
```cpp
TER SetHook::doApply() {
// If hook was disabled for migration
if (oldHook->getFlags() & HOOK_DISABLED_OLD_KEYS) {
// Verify new WASM doesn't contain old keys
if (scanWASMForKeys(newWasm, all_old_keys)) {
return tecSTILL_CONTAINS_OLD_KEYS;
}
// Clear the disabled flag
newHook->clearFlag(HOOK_DISABLED_OLD_KEYS);
}
}
```
### For Hook State
Must be cleared and rebuilt:
```cpp
TER HookStateModify::doApply() {
if (state->getFlags() & STATE_INVALID_OLD_KEYS) {
// Can only delete, not modify
if (operation != DELETE) {
return tecSTATE_REQUIRES_REBUILD;
}
}
}
```
## Migration Report
```json
{
"migration_ledger": 20000000,
"entries_scanned": 620000,
"entries_disabled": {
"hooks": 12,
"hook_definitions": 3,
"hook_states": 1847,
"escrows": 5,
"payment_channels": 2
},
"disabled_by_reason": {
"wasm_contains_keys": 3,
"state_contains_keys": 1847,
"reference_old_keys": 19
},
"action_required": [
{
"type": "HOOK_DEFINITION",
"key": "0xABCD...",
"owner": "rXXX...",
"reason": "WASM contains 3 hardcoded SHA-512 keys",
"action": "Recompile hook with new keys or remove hardcoding"
}
]
}
```
## Benefits
1. **Safety**: Broken things explicitly disabled, not silently failing
2. **Transparency**: Clear record of what was disabled and why
3. **Natural Cleanup**: Abandoned entries stay disabled forever
4. **Developer Responsibility**: Owners must actively fix and re-enable
5. **No Silent Corruption**: Better to disable than corrupt
6. **Audit Trail**: Complete record of migration casualties
## Implementation Complexity
- **Scanning**: O(n×m) where n=entries, m=data size
- **Memory**: Need all old keys in memory (~620k keys × 32 bytes ≈ 20MB raw, roughly ~40MB with `unordered_set` overhead)
- **False Positives**: Extremely unlikely (each 32-byte window matches a given key with probability 2^-256)
- **Recovery**: Clear path to re-enable fixed entries
## The Nuclear Option
If too many critical entries would be disabled:
```cpp
if (disabled_count > ACCEPTABLE_THRESHOLD) {
// Abort migration
return temMIGRATION_TOO_DESTRUCTIVE;
}
```
## Summary
Instead of attempting impossible fixes for hardcoded keys, acknowledge reality: **if it contains old keys, it's broken**. Make that brokenness explicit through disabling, forcing conscious action to repair and re-enable. This turns an impossible problem (fixing hardcoded keys in WASM) into a manageable one (identifying and disabling broken entries).

HASH_MIGRATION_CONTEXT.md Normal file

@@ -0,0 +1,663 @@
# Hash Migration to Blake3 - Work Context
## Build Commands
- **To build**: `ninja -C build`
- **To count errors**: `ninja -C build 2>&1 | grep "error:" | wc -l`
- **To see failed files**: `ninja -C build 2>&1 | grep "^FAILED:" | head -20`
- **DO NOT USE**: `cmake --build` or `make`
## Test Compilation of Single Files
### Quick Method (basic errors only)
```bash
clang++ -std=c++20 -I/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src \
-c src/test/app/NFToken_test.cpp -o /tmp/test.o 2>&1 | head -50
```
### Full Compilation Command (from compile_commands.json)
Extract the exact compilation command for any file:
```bash
# For any specific file:
jq -r '.[] | select(.file | endswith("NFToken_test.cpp")) | .command' build/compile_commands.json
# Or simplified with just the file path:
FILE="src/test/app/NFToken_test.cpp"
jq -r --arg file "$FILE" '.[] | select(.file | endswith($file)) | .command' build/compile_commands.json
```
### compile_commands.json location
- **Location**: `build/compile_commands.json`
- Contains exact compilation commands with all include paths and flags for each source file
- Generated by CMake during configuration
## Objective
Modify Xahaud to pass ledger_index through all hash functions to enable switching from SHA512-Half to Blake3 at a specific ledger index.
## Current Approach
Using a `hash_options` struct containing `ledger_index` that must be passed to all hash functions.
## Structure Added
```cpp
struct hash_options {
std::uint32_t ledger_index;
explicit hash_options(std::uint32_t li) : ledger_index(li) {}
};
```
## CRITICAL: Hash Function Classification Required
### The Historical Ledger Problem
**EVERY** hash operation needs proper classification because even "content hashing" (like transaction IDs, validator manifests, signatures) depends on WHEN it was created:
- Transaction from ledger 10M (pre-transition) → Must use SHA-512 Half
- Transaction from ledger 20M (post-transition) → Must use BLAKE3
- You cannot mix hash algorithms for the same ledger - it's all or nothing
### Classification Constants (to be added to digest.h)
As an interim step, introduce classification constants to make intent explicit:
```cpp
// Special ledger_index values for hash operations
constexpr uint32_t LEDGER_INDEX_UNKNOWN = 0; // DANGEROUS - avoid!
constexpr uint32_t LEDGER_INDEX_TEST_ONLY = std::numeric_limits<uint32_t>::max();
constexpr uint32_t LEDGER_INDEX_NETWORK_PROTOCOL = std::numeric_limits<uint32_t>::max() - 1;
constexpr uint32_t LEDGER_INDEX_CURRENT = std::numeric_limits<uint32_t>::max() - 2; // Use current ledger
```
### Classification Categories
1. **Ledger Object Indexing** (MUST use actual ledger_index)
- All `indexHash()` calls determining WHERE objects live
- All keylet functions creating/finding ledger objects
- SHAMap node hashing (builds the Merkle tree)
2. **Historical Content Hashing** (MUST use ledger_index from when it was created)
- Transaction IDs (use ledger where tx was included)
- Validator manifests (use ledger when signed)
- Hook code hashing (use ledger when deployed)
- Signatures referencing ledger data
3. **Test Code** (use LEDGER_INDEX_TEST_ONLY)
- Unit tests
- Mock objects
- Test fixtures
4. **Network Protocol** (special handling needed)
- Peer handshakes
- Protocol messages
- May need to support both algorithms during transition
### Why hash_options{0} is Dangerous
Using `hash_options{0}` assumes everything is pre-transition (SHA-512 Half forever), which breaks after the transition point. Every usage must be classified and given the appropriate context.
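A minimal sketch of what classified call sites could look like once these constants exist (the argument order and exact `keylet` / `sha512Half` signatures here are illustrative, not final):
```cpp
// 1. Ledger object indexing: use the sequence of the ledger being built.
auto const acct = keylet::account(accountID, hash_options{view.seq()});
// 2. Historical content hashing: use the ledger the content belongs to,
//    e.g. the ledger in which the transaction was included.
auto const txID = sha512Half(
    hash_options{txLedgerSeq}, HashPrefix::transactionID, txBlob);
// 3. Test code: make the intent explicit instead of passing 0.
auto const testKeylet =
    keylet::account(accountID, hash_options{LEDGER_INDEX_TEST_ONLY});
```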
## SHAMap Hash Architecture Analysis
### Current State
- SHAMap stores `ledgerSeq_` and knows which ledger it represents
- Nodes compute hashes WITHOUT ledger context (just use sha512Half directly)
- Nodes can be SHARED between SHAMaps via canonicalization/caching
- Node's `hash_` member stores only ONE hash value
### The Fundamental Problem: Node Sharing vs Hash Migration
When nodes are shared between ledgers through canonicalization:
- A node from ledger 19,999,999 (SHA-512) has hash H1
- Same node referenced by ledger 20,000,000 (BLAKE3) needs hash H2
- **Cannot store both hashes without major memory overhead**
### Merkle Tree Constraint
The Merkle tree structure requires homogeneous hashing:
```
Root (BLAKE3)
├── Child1 (BLAKE3) ✓
└── Child2 (SHA-512) ✗ IMPOSSIBLE - breaks tree integrity
```
You cannot mix hash algorithms within a single tree - all nodes must use the same algorithm.
### Migration Strategies Considered
#### Option 1: Lazy/Gradual Migration ✗
- Store both SHA-512 and BLAKE3 hashes in each node
- Problems:
- Double memory usage per node
- Complex cache invalidation logic
- Still can't mix algorithms in same tree
- Node sharing between ledgers becomes impossible
#### Option 2: Big Bang Migration ✓ (Recommended)
- At transition ledger:
- Invalidate ALL cached/stored nodes
- Rebuild entire state with new hash algorithm
- Maintain separate caches for pre/post transition
- Benefits:
- Clean separation of hash epochs
- Easier to reason about
- No memory overhead
- Maintains tree integrity
### Implementation Requirements for Big Bang
1. **Pass ledger_index through all hash operations:**
```cpp
void updateHash(std::uint32_t ledgerSeq);
SHAMapHash getHash(std::uint32_t ledgerSeq) const;
```
2. **Separate node caches by hash epoch:**
- Pre-transition: SHA-512 node cache
- Post-transition: BLAKE3 node cache
- Never share nodes between epochs (see the cache-selection sketch after this list)
3. **Critical functions needing updates:**
- `SHAMapInnerNode::updateHash()`
- `SHAMapInnerNode::updateHashDeep()`
- All `SHAMapLeafNode` subclass hash computations
- `SHAMap::flushDirty()` and `walkSubTree()`
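A minimal sketch of the epoch-separated caches from item 2, assuming a hypothetical `nodeCacheFor()` accessor, transition constant, and per-epoch cache members (none of these exist yet):
```cpp
// Sketch only: lookups never cross the epoch boundary, so a node cached
// under a SHA-512 hash can never be returned for a BLAKE3 ledger.
constexpr std::uint32_t TRANSITION_LEDGER = 20'000'000;
TreeNodeCache&
Family::nodeCacheFor(std::uint32_t ledgerSeq)
{
    return ledgerSeq >= TRANSITION_LEDGER ? blake3NodeCache_   // BLAKE3 epoch
                                          : sha512NodeCache_;  // SHA-512 epoch
}
```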
### Why Big Bang is Preferred
- **Correctness**: Guarantees Merkle tree integrity
- **Simplicity**: No complex dual-hash logic
- **Performance**: No overhead of maintaining multiple hashes
- **Clear boundaries**: Pre-transition and post-transition are completely separate
The transition point becomes a hard boundary where the entire ledger state is rehashed with the new algorithm.
### Alternative: Heterogeneous Tree - Deep Dive
After deeper analysis, heterogeneous trees are more complex but potentially viable. Here's a comprehensive examination:
#### The Core Insight: Hash Values Are Algorithm-Agnostic
When `SHAMapInnerNode::updateHash()` computes a hash:
```cpp
void updateHash(hash_options const& opts) {
sha512_half_hasher h(opts);
hash_append(h, HashPrefix::innerNode);
iterChildren([&](SHAMapHash const& hh) { hash_append(h, hh); });
// Parent hashes the HASH VALUES of children, not the raw data
}
```
**Key realization**: Parent nodes hash their children's hash values (256-bit numbers), NOT the children's data. This means a BLAKE3 parent can hash SHA-512 child hashes without issue.
#### How Heterogeneous Trees Would Work
Post-transition rule: **Any NEW hash uses BLAKE3**
```
Ledger 19,999,999 (SHA-512):
Root_SHA512
├── Child1_SHA512
└── Child2_SHA512
Ledger 20,000,000 (BLAKE3) - Child1 modified:
Root_BLAKE3 = BLAKE3(Child1_BLAKE3_hash || Child2_SHA512_hash)
├── Child1_BLAKE3 (NEW hash due to modification)
└── Child2_SHA512 (unchanged, keeps old hash)
```
#### The Canonical Structure Ensures Determinism
**Critical insight**: SHAMap trees are **canonical** - the structure is deterministic based on keys:
- Alice's account always goes in the same tree position
- Bob's account always goes in the same position
- Tree shape is fully determined by the set of keys
Therefore:
- Same modifications → Same tree structure
- Same tree structure → Same nodes get rehashed
- Same rehashing → Same final root hash
- **Consensus is maintained!**
#### The "NEW vs OLD" Detection Problem
The killer issue: How do you know if you're computing a NEW hash vs verifying an OLD one?
```cpp
void updateHash(hash_options const& opts) {
// Am I computing a NEW hash (use BLAKE3)?
// Or verifying an OLD hash (could be SHA-512)?
// This function doesn't know WHY it was called!
}
```
Without explicit context about NEW vs OLD:
- Loading from DB: Don't know if it's SHA-512 or BLAKE3
- Modifying a node: Should use BLAKE3
- Verifying from network: Could be either
Potential solutions:
1. **Try-both approach**: Verify with BLAKE3, fall back to SHA-512 (sketched after this list)
2. **Version tracking**: Store algorithm version with each node
3. **Context passing**: Thread NEW/OLD context through all calls
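A minimal sketch of the try-both approach, assuming a hypothetical `verifyWith()` helper that recomputes the node's hash with the given algorithm and compares it to the stored hash:
```cpp
enum class HashAlgo { sha512Half, blake3 };
// Returns the algorithm that verified the node, or nothing if neither did.
// Note the ambiguity: a failure could mean corruption OR an untried algorithm.
std::optional<HashAlgo>
verifyNodeHash(Slice nodeData, SHAMapHash const& expected)
{
    if (verifyWith(HashAlgo::blake3, nodeData, expected))
        return HashAlgo::blake3;
    if (verifyWith(HashAlgo::sha512Half, nodeData, expected))
        return HashAlgo::sha512Half;
    return std::nullopt;
}
```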
#### Canonical Nodes (cowid=0) - A Complication
Canonical nodes are immutable and shared, BUT:
- They ARE verified when loaded from DB or network
- The verification needs to compute the hash to check integrity
- This means we need to know WHICH algorithm was used
- Can't just trust the hash - must verify data matches
This actually makes heterogeneous trees HARDER because:
```cpp
// When loading a canonical node from DB:
auto node = SHAMapTreeNode::makeFromPrefix(data, hash);
// Need to verify: does hash(data) == provided_hash?
// But which hash function? SHA-512 or BLAKE3?
// Must try both, adding complexity and ambiguity
```
#### System-Wide Implications
##### Database Layer
- Heterogeneous: Same data might have 2 entries (SHA-512 and BLAKE3 versions)
- Big Bang: Clean cutover, old entries become invalid
##### Network Sync
- Heterogeneous: Ambiguous - "I need node 0xABC..." (which algorithm?)
- Big Bang: Clear - algorithm determined by ledger context
##### Consensus
- Heterogeneous: Works IF all validators make same NEW/OLD decisions
- Big Bang: Simple - everyone uses same algorithm
##### External Proof Verification
- Heterogeneous: Complex - mixed algorithms in Merkle paths
- Big Bang: Simple - "before ledger X use SHA-512, after use BLAKE3"
##### Performance
- Heterogeneous: Double verification attempts for old nodes
- Big Bang: One-time rehash cost
#### Gradual Migration Pattern
With heterogeneous trees, migration happens naturally:
```
Ledger 20,000,000: 5% BLAKE3, 95% SHA512
Ledger 20,001,000: 30% BLAKE3, 70% SHA512
Ledger 20,010,000: 60% BLAKE3, 40% SHA512
Ledger 20,100,000: 90% BLAKE3, 10% SHA512
Eventually: ~100% BLAKE3 (dormant nodes may remain SHA-512)
```
#### Remaining Challenges
1. **Context Plumbing**: Need to distinguish NEW vs OLD operations everywhere
2. **Verification Ambiguity**: Failures could be corruption OR wrong algorithm
3. **Testing Complexity**: Many more edge cases to test
4. **Protocol Complexity**: Merkle proofs need algorithm information
5. **Developer Cognitive Load**: Harder to reason about
### Conclusion: Trade-offs
**Heterogeneous Trees**:
- ✅ No "big bang" transition
- ✅ Natural, incremental migration
- ✅ Maintains consensus (with careful implementation)
- ❌ Permanent complexity throughout codebase
- ❌ Ambiguous verification
- ❌ Complex testing
**Big Bang Migration**:
- ✅ Clean, simple mental model
- ✅ Clear algorithm boundaries
- ✅ Easier testing and debugging
- ❌ One-time massive performance hit
- ❌ Requires careful coordination
- ❌ Can't easily roll back
The heterogeneous approach is **theoretically viable** but adds significant permanent complexity. Big Bang is simpler but has a painful transition. The choice depends on whether you prefer one-time pain (Big Bang) or permanent complexity (heterogeneous).
## Files Modified So Far
1. `src/ripple/protocol/digest.h` - Added hash_options struct and modified sha512Half signatures
2. `src/ripple/protocol/impl/Indexes.cpp` - Updated indexHash and all keylet functions to accept hash_options
3. `src/ripple/protocol/Indexes.h` - Updated all function declarations to include hash_options
## Current Status
- Core hash functions modified with backward-compatible overloads
- All keylet functions updated to require hash_options
- Propagating hash_options through codebase - MASSIVE undertaking
- 91+ compilation errors remaining after fixing ~20 files
- Every fix exposes more errors as headers propagate changes
## SHAMap Node Factory Methods - Missing Ledger Context
### The Problem with makeFromWire and makeFromPrefix
These factory methods create SHAMap nodes from serialized data but **don't have ledger context**:
```cpp
// Called when receiving nodes from peers over network
SHAMapTreeNode::makeFromWire(Slice rawNode)
// Called when loading nodes from database
SHAMapTreeNode::makeFromPrefix(Slice rawNode, SHAMapHash const& hash)
```
These methods:
- Parse serialized node data (from network or database)
- Create `SHAMapInnerNode` or leaf nodes
- Need to call `updateHash()` if hash isn't provided
- **BUT** don't know which ledger they're building for!
### Why This Is Critical
When a node is loaded from database or received from network:
1. The serialized data doesn't include ledger_index
2. The node might be shared across multiple ledgers (pre-transition)
3. After transition, we need to know if this node uses SHA-512 or BLAKE3
4. Currently using `LEDGER_INDEX_UNKNOWN` as placeholder
### Implications for P2P Protocol
The network protocol would need changes:
- `TMGetLedger` messages would need to specify ledger_index
- Node responses would need hash algorithm version
- Database storage would need to track which hash was used
This reinforces why **Big Bang migration** is simpler - no protocol changes needed!
## Next Steps
1. Add hash_options{0} to all keylet call sites (using 0 as placeholder ledger_index)
2. Get the code compiling first
3. Later: Thread actual ledger_index values through from Views/transactions
4. Eventually: Add Blake3 switching logic based on ledger_index threshold
## Key Insight
Every place that creates a ledger key needs to know what ledger it's operating on. This is a massive change touching:
- All View classes
- All transaction processors
- All RPC handlers
- All tests
- Consensus code
## Compilation Strategy
NEVER use hash_options{0} - always use proper classification from the HashContext enum in digest.h!
## Quick Reference for Fixing Test Files
### Essential Files to Reference
1. **@src/ripple/protocol/digest.h** - Lines 37-107 contain the HashContext enum with ALL valid classifiers
2. **@src/ripple/protocol/Indexes.h** - Shows all keylet function signatures that need hash_options
### Keylet Function Mapping to HashContext
When you see a keylet function, use the corresponding KEYLET_* enum value:
```cpp
keylet::account() → KEYLET_ACCOUNT
keylet::amendments() → KEYLET_AMENDMENTS
keylet::book() → KEYLET_BOOK
keylet::check() → KEYLET_CHECK
keylet::child() → KEYLET_CHILD
keylet::depositPreauth() → KEYLET_DEPOSIT_PREAUTH
keylet::dirPage() → KEYLET_DIR_PAGE
keylet::emittedDir() → KEYLET_EMITTED_DIR
keylet::emittedTxn() → KEYLET_EMITTED_TXN
keylet::escrow() → KEYLET_ESCROW
keylet::fees() → KEYLET_FEES
keylet::hook() → KEYLET_HOOK
keylet::hookDefinition() → KEYLET_HOOK_DEFINITION
keylet::hookState() → KEYLET_HOOK_STATE
keylet::hookStateDir() → KEYLET_HOOK_STATE_DIR
keylet::importVLSeq() → KEYLET_IMPORT_VLSEQ
keylet::negativeUNL() → KEYLET_NEGATIVE_UNL
keylet::nftBuys() → KEYLET_NFT_BUYS
keylet::nftOffer() → KEYLET_NFT_OFFER
keylet::nftPage() → KEYLET_NFT_PAGE
keylet::nftSells() → KEYLET_NFT_SELLS
keylet::offer() → KEYLET_OFFER
keylet::ownerDir() → KEYLET_OWNER_DIR
keylet::payChan() → KEYLET_PAYCHAN
keylet::signers() → KEYLET_SIGNERS
keylet::skip() → KEYLET_SKIP_LIST
keylet::ticket() → KEYLET_TICKET
keylet::trustline() → KEYLET_TRUSTLINE
keylet::unchecked() → KEYLET_UNCHECKED
keylet::UNLReport() → KEYLET_UNL_REPORT
keylet::uriToken() → KEYLET_URI_TOKEN
```
### Non-Keylet Hash Classifications
```cpp
sha512Half() for validator data → VALIDATOR_LIST_HASH
sha512Half() for hook code → HOOK_DEFINITION or LEDGER_INDEX_UNNEEDED
sha512Half() for signatures → CRYPTO_SIGNATURE_HASH
sha512Half() for network protocol → NETWORK_HANDSHAKE_HASH
sha512Half_s() for secure hashing → Same rules apply
```
## Key Insights
### The Scale Problem
Every place that creates a ledger key needs to know what ledger it's operating on:
- All View classes
- All transaction processors (50+ files)
- All RPC handlers (30+ files)
- All tests (hundreds of files)
- Consensus code
- Path finding code
- Payment processing pipelines
### Why This Is Hard
1. **Cascading changes**: Fixing one header file exposes dozens of new errors
2. **Deep call chains**: Ledger index must be threaded through multiple layers
3. **Protocol implications**: Network messages would need to include ledger sequences
4. **No gradual migration**: Can't mix hash algorithms - it's all or nothing
5. **Testing nightmare**: Every test that creates mock ledger objects needs updating
### The Fundamental Challenge
The hash function is so deeply embedded in the architecture that changing it is like replacing the foundation of a building while people are living in it. This is why most blockchains never change their hash functions after launch.
## Key Learnings - Where to Get ledger_index
### 1. ReadView/ApplyView Classes
- `view.seq()` returns the LedgerIndex (found in ReadView.h:193)
- ReadView has `info()` method that returns LedgerInfo struct
- LedgerInfo contains `seq` field which is the ledger sequence number
- Cast to uint32_t: `static_cast<std::uint32_t>(view.seq())`
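For example, a hedged sketch of what a fixed call site inside a transactor-style helper ends up looking like (the function and its body are illustrative, not code from the repo):
```cpp
// Illustrative only: any function that already receives a view can derive the
// ledger sequence needed by hash_options directly from view.seq().
TER
removeOffer(ApplyView& view, AccountID const& owner, std::uint32_t offerSeq)
{
    auto const k = keylet::offer(
        hash_options{view.seq(), KEYLET_OFFER}, owner, offerSeq);
    auto sle = view.peek(k);
    if (!sle)
        return tecNO_ENTRY;
    // ... normal processing using sle ...
    return tesSUCCESS;
}
```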
### 2. Common Patterns Found
- Functions that take `ReadView const& view` can access `view.seq()`
- Functions that take `ApplyView& view` can also access `view.seq()` (ApplyView inherits from ReadView)
### 3. Files That Need Updates
- View.h - DONE - all 6 keylet calls updated to use view.seq()
- Any file with keylet:: namespace calls
- Transaction processors that create/lookup ledger objects
### 4. Progress Log
#### Stats (Updated: 2025-09-09)
- Total files with keylet:: calls: 126
- Files fixed so far: **100+** (including all core files, transaction processors, RPC handlers, and most tests)
- Started with 7 errors, peaked at 105+ as fixes propagated through headers
- **Currently down to 113 errors across 11 test files** (from hundreds of files!)
- ALL keylet function signatures updated to require hash_options
- Pattern emerging: Every transaction processor, every RPC handler, every test needs updating
#### Major Milestone Achieved
- Successfully fixed ALL non-test source files
- Fixed majority of test files using parallel agents
- Demonstrated that the hash migration IS possible despite being a massive undertaking
#### Major Files Fixed
- All core ledger files (View.cpp, ApplyView.cpp, ReadView.cpp, etc.)
- Most transaction processors (Payment, Escrow, CreateOffer, NFToken*, etc.)
- Hook implementation files (applyHook.cpp, SetHook.cpp)
- Infrastructure files (Transactor.cpp, BookDirs.cpp, Directory.cpp)
#### Key Milestone
- Updated ALL keylet function signatures in Indexes.h/cpp to require hash_options
- Even functions that take pre-existing uint256 keys now require hash_options for consistency
- This causes massive cascading compilation errors but ensures consistency
#### Files Fixed So Far
1. **View.h** - Fixed using `view.seq()`
2. **Ledger.cpp** - Fixed using `info_.seq` in constructor, `seq()` in methods
3. **LocalTxs.cpp** - Fixed using `view.seq()`
4. **NegativeUNLVote.cpp** - Fixed using `prevLedger->seq()`
5. **TxQ.cpp** - Fixed 5 calls using `view.seq()`
6. **SkipListAcquire.cpp** - Fixed (with protocol issue on line 90)
7. **LedgerReplayMsgHandler.cpp** - Fixed using `info.seq`
8. **RCLValidations.cpp** - Fixed using `ledger->seq()`
9. **Credit.cpp** - Fixed using `view.seq()`
10. **StepChecks.h** - Fixed using `view.seq()`
11. **BookTip.cpp** - Fixed using `view.seq()`
12. **XRPEndpointStep.cpp** - Fixed using `ctx.view.seq()`
13. **DirectStep.cpp** - Fixed using `sb.seq()` and `ctx.view.seq()`
14. **BookStep.cpp** - Fixed using `afView.seq()` and `view.seq()`
15. **CancelCheck.cpp** - Fixed using `ctx.view.seq()` and `view().seq()`
16. **Pathfinder.cpp** - Fixed using `mLedger->seq()`
17. More coming...
#### Common Patterns Found
- **ReadView/ApplyView**: Use `.seq()`
- **Ledger pointer**: Use `->seq()`
- **Transactor classes**: Use `view().seq()`
- **PaymentSandbox**: Use `sb.seq()`
- **StrandContext**: Use `ctx.view.seq()`
- **LedgerInfo struct**: Use `.seq` field directly
### 5. Architectural Questions
#### Keylets with pre-existing keys
- Functions like `keylet::check(uint256 const& key)` just wrap an existing key
- They don't compute a new hash, just interpret the key as a specific ledger type
- **Question**: Do these really need hash_options?
- **Current approach**: Include hash_options for consistency, might store ledger_index in Keylet for future use
- **Note**: This could be revisited - might be unnecessary overhead for simple key wrapping
### 6. Known Issues
- SkipListAcquire.cpp line 90: Requesting skip list by hash without knowing ledger seq yet
- Can't know the sequence until we GET the ledger
- Using hash_options{0} as placeholder
- Would need to refactor to fetch ledger first, THEN request skip list with proper seq
- Or protocol change to handle "skip list for whatever ledger this hash is"
### 7. Network Protocol Has Ledger Sequence!
**Critical Discovery**: The protobuf definitions show that network messages DO carry ledger sequence in many places:
#### TMLedgerData (what InboundLedger::processData receives):
```protobuf
message TMLedgerData {
    required bytes ledgerHash = 1;
    required uint32 ledgerSeq = 2;       // <-- HAS THE SEQUENCE!
    required TMLedgerInfoType type = 3;
    repeated TMLedgerNode nodes = 4;
}
```
#### TMGetLedger (the request):
```protobuf
message TMGetLedger {
    optional uint32 ledgerSeq = 4;       // Can request by sequence
}
```
#### TMIndexedObject (per-object context):
```protobuf
message TMIndexedObject {
    optional uint32 ledgerSeq = 5;       // Per-object sequence!
}
```
**Implications**:
- The protocol already has infrastructure for ledger context
- `InboundLedger` can use `packet.ledgerseq()` from TMLedgerData
- Network sync might be solvable WITHOUT major protocol changes
- The ledger sequence just needs to be threaded through to hash functions
**Key Flow**:
```cpp
InboundLedger::processData(protocol::TMLedgerData& packet)
packet.ledgerseq() // <-- Extract sequence from protocol message
<- SHAMap::addKnownNode(..., ledgerSeq)
<- SHAMapTreeNode::makeFromWire(data, ledgerSeq)
<- updateHash(hash_options{ledgerSeq})
```
This solves a major piece of the puzzle - the network layer CAN provide context for hash verification!
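A hedged sketch of that flow (simplified signature; the real `processData` also takes the peer, and `makeFromWire` would need the extra parameter proposed above):
```cpp
// Illustrative only: thread the sequence carried by TMLedgerData down to the
// node factory so updateHash() knows which algorithm applies.
void
processLedgerData(protocol::TMLedgerData& packet)
{
    std::uint32_t const ledgerSeq = packet.ledgerseq();   // provided by the protocol
    for (auto const& node : packet.nodes())
    {
        auto treeNode = SHAMapTreeNode::makeFromWire(
            makeSlice(node.nodedata()), ledgerSeq);       // proposed extra argument
        // ... hand treeNode to the acquiring SHAMap with the same context ...
    }
}
```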
### 8. Test File Patterns
**Key Discovery**: Tests should use actual ledger sequences, NOT placeholders!
#### Getting Ledger Sequence in Tests
Tests typically use `test::jtx::Env` which provides access to ledger context:
- `env.current()` - Returns a ReadView pointer
- `env.current()->seq()` - Gets the current ledger sequence (already uint32_t)
#### Common Test Patterns
##### Pattern 1: Direct keylet calls
```cpp
// OLD
env.le(keylet::line(alice, bob, currency));
// NEW
env.le(keylet::line(hash_options{env.current()->seq(), KEYLET_TRUSTLINE}, alice, bob, currency));
```
##### Pattern 2: Helper functions need env parameter
```cpp
// OLD
static uint256 getCheckIndex(AccountID const& account, uint32_t seq) {
    return keylet::check(account, seq).key;
}
// Called as: getCheckIndex(alice, seq)

// NEW
static uint256 getCheckIndex(test::jtx::Env& env, AccountID const& account, uint32_t seq) {
    return keylet::check(hash_options{env.current()->seq(), KEYLET_CHECK}, account, seq).key;
}
// Called as: getCheckIndex(env, alice, seq)
```
##### Pattern 3: Fee calculations
```cpp
// Uses env.current() to get fee information
XRPAmount const baseFeeDrops{env.current()->fees().base};
```
#### Test Files Status
- **Total test files needing fixes**: ~12-15
- **Pattern**: All test files that create or look up ledger objects need updating
- **Common test files**:
- Check_test.cpp - PARTIALLY FIXED
- AccountDelete_test.cpp
- Escrow_test.cpp
- NFToken_test.cpp
- Flow_test.cpp
- Import_test.cpp
- etc.
#### Why NOT to use placeholders in tests
- Tests verify actual ledger behavior
- Using `hash_options{0}` would test wrong behavior after transition
- Tests need to work both pre and post hash migration
- `env.current()->seq()` gives the actual test ledger sequence
### 9. Ways to Get Ledger Sequence
#### From Application (app_):
- `app_.getLedgerMaster()` gives you LedgerMaster
#### From LedgerMaster:
- `getLedgerByHash(hash)` - Get ledger by hash, then call `->seq()` on it
- `getLedgerBySeq(index)` - Get ledger by sequence directly
- `getCurrentLedger()` - Current open ledger, call `->seq()`
- `getClosedLedger()` - Last closed ledger, call `->seq()`
- `getValidatedLedger()` - Last validated ledger, call `->seq()`
- `getPublishedLedger()` - Last published ledger, call `->seq()`
- `getCurrentLedgerIndex()` - Direct sequence number
- `getValidLedgerIndex()` - Direct sequence number
- `walkHashBySeq()` - Walk ledger chain to find hash by sequence
#### From Ledger object:
- `ledger->seq()` - Direct method
- `ledger->info().seq` - Through LedgerInfo
#### From ReadView/OpenView/ApplyView:
- `view.seq()` - All views have this method
- `view.info().seq` - Through LedgerInfo
#### Special Case - SkipListAcquire:
- Line 67: `app_.getLedgerMaster().getLedgerByHash(hash_)`
- If ledger exists locally, we get it and can use its seq()
- If not, we're requesting it from peers - don't know seq yet!

LAST_TESTAMENT.md Normal file

@@ -0,0 +1,108 @@
# Last Testament: SHA-512 to BLAKE3 Migration Deep Dive
## The Journey
Started with "just change the hash function" - ended up discovering why hash functions are permanent blockchain decisions.
## Key Discoveries
### 1. Ledger Implementation Added
- `Ledger::shouldMigrateToBlake3()` - checks migration window/flags
- `Ledger::migrateToBlake3()` - placeholder for actual migration
- Hook in `BuildLedger.cpp` after transaction processing
- Migration happens OUTSIDE transaction pipeline to avoid metadata explosion
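A rough sketch of how that hook could sit in the build path (only the two placeholder method names come from the notes above; the surrounding structure is illustrative):
```cpp
// Hedged sketch, not the real BuildLedger.cpp: the migration runs after
// transactions are applied and outside the apply pipeline, so the rekeying
// produces no per-transaction metadata.
void finishBuild(std::shared_ptr<Ledger> const& built)
{
    // ... transactions have already been applied to `built` at this point ...
    if (built->shouldMigrateToBlake3())   // migration window / amendment check
        built->migrateToBlake3();         // placeholder for the actual rekey/rehash
    // ... then close and accept the ledger as usual ...
}
```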
### 2. The Rekeying Nightmare (REKEYING_ISSUES.md)
Every object key changes SHA512→BLAKE3, but keys are EVERYWHERE:
- **DirectoryNode.sfIndexes** - vectors of keys
- **sfPreviousTxnID, sfIndexNext, sfIndexPrevious** - direct key refs
- **Order books** - sorted by key value (order changes!)
- **Hook state** - arbitrary blobs with embedded keys
- **620k objects** with millions of interconnected references
### 3. The LUT Approach
```cpp
// Build lookup table: old_key → new_key (40MB for 620k entries)
// O(n) to build, O(n×m) to update all fields
// Must check EVERY uint256 field in EVERY object
```
**Problems:**
- LUT check on EVERY lookup forever (performance tax)
- Can't know if key is old/new without checking
- Might need bidirectional LUT (80MB)
- Can NEVER remove it (WASM has hardcoded keys!)
### 4. The WASM Hook Bomb
Hook code can hardcode keys in compiled WASM:
- Can't modify without changing hook hash
- Changing hash breaks all references to hook
- Literally unfixable without breaking hooks
### 5. MapStats Enhancement
Added ledger entry type tracking:
- Count, total bytes, avg size per type
- Uses `LedgerFormats::getInstance().findByType()` for names
- Shows 124k HookState entries (potential key references!)
### 6. Current Ledger Stats
- 620k total objects, 98MB total
- 117k DirectoryNodes (full of references)
- 124k HookState entries (arbitrary data)
- 80 HookDefinitions (WASM code)
- Millions of internal key references
## Why It's Impossible
### The Lookup Problem
```cpp
auto key = keylet::account(Alice).key;
// Which hash function? SHA512 or BLAKE3?
// NO WAY TO KNOW without timestamp/LUT/double-lookup
```
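A hedged sketch of what the double-lookup tax looks like in practice (`oldToNew` stands for the ~40MB LUT described above; the helper itself is illustrative):
```cpp
// Illustrative only: every read needs a fallback through the old->new key table
// whenever the canonical lookup misses, forever.
std::shared_ptr<SLE const>
readWithLut(ReadView const& view,
            Keylet const& k,
            std::unordered_map<uint256, uint256> const& oldToNew)
{
    if (auto sle = view.read(k))
        return sle;                                       // already a post-migration key
    if (auto it = oldToNew.find(k.key); it != oldToNew.end())
        return view.read(Keylet{k.type, it->second});     // retry under the rekeyed value
    return nullptr;
}
```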
### The Fundamental Issues
1. **No key timestamp** - Can't tell if key is pre/post migration
2. **Embedded references everywhere** - sfIndexes, hook state, WASM code
3. **Permanent LUT required** - Check on every operation forever
4. **Performance death** - 2x lookups or LUT check on everything
5. **Dual-key SHAMap impossible** - Breaks ordering/structure
### The "Solutions" That Don't Work
- **Lazy migration**: LUT forever, complexity forever
- **Big bang**: Still need permanent LUT for old references
- **Heterogeneous tree**: Can't mix hash functions in Merkle tree
- **Binary search-replace**: Could corrupt data, no validation
- **Import to v2 chain**: Same reference update problems
## The Verdict
After threading ledger_index through 1000+ functions for hash context, the migration faces insurmountable challenges:
1. **WASM hooks** contain unfixable hardcoded keys
2. **Every lookup** needs LUT check forever (performance tax)
3. **Can't determine key age** from identifier alone
4. **Millions of references** need perfect updating
5. **One mistake** = corrupted ledger, lost funds
**Conclusion**: This is a "v2 blockchain" problem, not a migration problem. SHA-512 is forever.
## Lessons Learned
- Hash functions are THE fundamental addressing system
- Deeper than consensus, deeper than data structures
- Once chosen, essentially permanent
- The attempt revealed the true complexity of blockchain internals
- Even 98MB of data has millions of interconnected references
- WASM hooks make any migration effectively impossible
**Final Status**: Technically possible with permanent LUT and massive complexity. Practically impossible due to hooks, performance, and risk. SHA-512 until heat death of universe.
## Code Artifacts
- Hash classification system in `digest.h`
- `hash_options` threaded through Indexes/keylets
- Ledger migration methods (placeholder)
- MapStats with entry type breakdown
- REKEYING_ISSUES.md with full analysis
- 100+ files modified to thread ledger context
The migration died not from lack of effort, but from the discovery that some architectural decisions are truly permanent.

LEDGER_INDEX_PREVIOUS.md Normal file

@@ -0,0 +1,71 @@
Yes! You're onto something quite elegant there. Let me break it down:
---
### 🔑 LedgerEntry “PriorKey” field
If each SLE carried an **optional `LedgerIndexPriorSha512`** (or whatever name), then:
* When you rekey at cutover, you don't lose continuity.
* Anyone walking the ledger can trivially build the old→new LUT on the fly.
* For “day-to-day” operations, the node only needs the canonical keys.
This basically lets you say:
* **Canonical view** = new Blake3 keys.
* **Continuity view** = traverse new keys, but if you need to resolve old references, follow the `LedgerIndexPriorSha512` link.
---
### 📊 For analysis and data science
You can now reconstruct historical continuity without keeping a giant external LUT file. Just walk the ledger and collect `(old, new)` pairs from the entries.
This means explorers, forensic tools, compliance, etc. can:
* Map old keys to new at any point.
* Handle partially migrated networks (some ledgers pre-cutover, some post).
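A minimal sketch of that walk, assuming the proposed field existed as a 256-bit SField named `sfLedgerIndexPriorSha512` (it does not today):
```cpp
// Hedged sketch: rebuild the old->new key map by walking the state and reading
// the proposed prior-key field from each entry.
std::unordered_map<uint256, uint256>
buildPriorKeyLut(ReadView const& view)
{
    std::unordered_map<uint256, uint256> oldToNew;
    for (auto const& sle : view.sles)                         // every state entry
    {
        if (sle->isFieldPresent(sfLedgerIndexPriorSha512))    // hypothetical field
            oldToNew[sle->getFieldH256(sfLedgerIndexPriorSha512)] = sle->key();
    }
    return oldToNew;
}
```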
---
### 🟢 Operational benefits
* Validators don't need to bloat memory with a global LUT: it's **distributed inside the ledger itself**.
* New ledgers “just work” for CRUD.
* Old references (hooks, weird integrations) can be handled until people migrate.
---
### ⚠️ Caveats
1. **Storage overhead**: each SLE gets ~32 bytes extra. But given only ~600k objects in the state tree, that's trivial compared to the win.
2. **Hardcoded keys in hooks/wasm**: still a landmine. If someone has *literally baked in* the old canonical key, they'll need the LUT or breakage handling at ApplyView.
3. **Consensus rules**: adding this field changes serialization, so it's an amendment, not just an operator convenience.
---
### 🔄 Proposal workflow
1. Cutover ledger → during bookkeeping, every object is rekeyed into the new canonical tree.
2. At the same time, each new object gets `LedgerIndexPriorSha512 = old_key`.
3. From then on, **every ledger post-cutover contains the LUT implicitly**.
4. After some deprecation period, once nobody references `LedgerIndexPriorSha512`, you can prune support.
---
So in practice:
* **Validators** only care about the canonical map.
* **Explorers, auditors, backward-compat systems** can lazily reconstruct the LUT.
* You don't poison `succ()` or iteration with mixed domains.
---
This actually solves both your goals:
* Day-to-day simplicity (Blake-only canonical).
* Historical continuity (walk the field if you need).
---
Want me to sketch how `ReadView::read(Keylet)` could transparently try `LUT` lookups only if the canonical miss happens, using that new field? That would make it zero-effort for old call sites.

PSEUDO_TRANSACTIONS.md Normal file

File diff suppressed because it is too large

@@ -67,5 +67,5 @@ git-subtree. See those directories' README files for more details.
- [explorer.xahau.network](https://explorer.xahau.network)
- **Testnet & Faucet**: Test applications and obtain test XAH at [xahau-test.net](https://xahau-test.net) and use the testnet explorer at [explorer.xahau.network](https://explorer.xahau.network).
- **Supporting Wallets**: A list of wallets that support XAH and Xahau-based assets.
- [Xumm](https://xumm.app)
- [Xaman](https://xaman.app)
- [Crossmark](https://crossmark.io)

REKEYING_ISSUES.md Normal file

@@ -0,0 +1,332 @@
# Hash Migration Rekeying Issues
## The Fundamental Problem
When migrating from SHA-512 Half to BLAKE3, we're not just changing a hash function - we're changing the **keys** that identify every object in the ledger's state map. Since the hash IS the key in the SHAMap, every object needs a new address.
## What Needs to be Rekeyed
### 1. Primary Ledger Objects
Every SLE (STLedgerEntry) in the state map has its key computed from:
- Its type (Account, Offer, RippleState, etc.)
- Its identifying data (AccountID, offer sequence, etc.)
When we change the hash function, EVERY object gets a new key.
### 2. Directory Structures
Directories are ledger objects that contain **lists of other objects' keys**:
#### Owner Directories (`keylet::ownerDir`)
- Contains keys of all objects owned by an account
- Every offer, escrow, check, payment channel, etc. key stored here
- When those objects are rekeyed, ALL these references must be updated
#### Order Book Directories (`keylet::book`, `keylet::quality`)
- Contains keys of offers at specific quality levels
- All offer keys must be updated to their new BLAKE3 values
#### NFT Directories (`keylet::nft_buys`, `keylet::nft_sells`)
- Contains keys of NFT offers
- All NFT offer keys must be updated
#### Hook State Directories (`keylet::hookStateDir`)
- Contains keys of hook state entries
- All hook state keys must be updated
### 3. Cross-References Between Objects
Many objects contain direct references to other objects:
#### Account Objects
- `sfNFTokenPage` - References to NFT page keys
- Previous/Next links in directory pages
#### Directory Pages
- `sfIndexes` - Vector of object keys
- `sfPreviousTxnID` - Transaction hash references
- `sfIndexPrevious`/`sfIndexNext` - Links to other directory pages
#### NFT Pages
- References to previous/next pages in the chain
## The Cascade Effect
Re-keying isn't a simple one-pass operation:
```
1. Rekey Account A (SHA512 → BLAKE3)
2. Update Account A's owner directory
3. Rekey the owner directory itself
4. Update all objects IN the directory with new keys
5. Update any directories THOSE objects appear in
6. Continue cascading...
```
## Implementation Challenges
### Challenge 1: Directory Entry Updates
```cpp
// Current directory structure
STVector256 indexes = directory->getFieldV256(sfIndexes);
// Contains: [sha512_key1, sha512_key2, sha512_key3, ...]
// After migration needs to be:
// [blake3_key1, blake3_key2, blake3_key3, ...]
```
### Challenge 2: Finding All References
There's no reverse index - given an object's key, you can't easily find all directories that reference it. You'd need to:
1. Walk the entire state map
2. Check every directory's `sfIndexes` field
3. Update any matching keys
### Challenge 3: Maintaining Consistency
During migration, you need to ensure:
- No orphaned references (keys pointing to non-existent objects)
- No duplicate entries
- Proper ordering in sorted structures (offer books)
### Challenge 4: Page Links
Directory pages link to each other:
```cpp
uint256 prevPage = dir->getFieldU256(sfIndexPrevious);
uint256 nextPage = dir->getFieldU256(sfIndexNext);
```
These links are also keys that need updating!
## Why This Makes Migration Complex
### Option A: Big Bang Migration
- Must update EVERYTHING atomically
- Need to track old→new key mappings for entire ledger
- Memory requirements: ~2x the state size for mapping table
- Risk: Any missed reference breaks the ledger
### Option B: Heterogeneous Tree
- Old nodes keep SHA-512 keys
- New/modified nodes use BLAKE3
- Problem: How do you know which hash to use for lookups?
- Problem: Directory contains mix of old and new keys?
### Option C: Double Storage
- Store objects under BOTH keys temporarily
- Gradually migrate references
- Problem: Massive storage overhead
- Problem: Synchronization between copies
## Example: Rekeying an Offer
Consider rekeying a single offer:
1. **The Offer Itself**
- Old key: `sha512Half(OFFER, account, sequence)`
- New key: `blake3(OFFER, account, sequence)`
2. **Owner Directory**
- Must update `sfIndexes` to replace old offer key with new
3. **Order Book Directory**
- Must update `sfIndexes` in the quality directory
- May need to update multiple quality levels if offer moved
4. **Account Object**
- Update offer count/reserve tracking if needed
5. **The Directories Themselves**
- Owner directory key: `sha512Half(OWNER_DIR, account, page)`
- New key: `blake3(OWNER_DIR, account, page)`
- Order book key: `sha512Half(BOOK_DIR, ...)`
- New key: `blake3(BOOK_DIR, ...)`
## Potential Solutions
### 1. Migration Ledger Object
Create a temporary "migration map" ledger object:
```cpp
sfOldKey sfNewKey mappings
```
But this could be gigabytes for millions of objects.
### 2. Deterministic Rekeying
Since we can determine an object's type from its `LedgerEntryType`, we could:
1. Load each SLE
2. Determine its type
3. Recompute its key with BLAKE3
4. Track the mapping
But we still need to update all references.
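The key recomputation referenced here (and as `computeBlake3Key` in the lookup-table code later in this document) is never defined; a hedged sketch for two entry types might look like this, where `blake3Keylet::*` stand in for BLAKE3 versions of the `keylet::` helpers and do not exist:
```cpp
// Illustrative only: re-derive an object's key from its own fields, per type.
uint256
computeBlake3Key(std::shared_ptr<SLE const> const& sle, LedgerEntryType type)
{
    switch (type)
    {
        case ltACCOUNT_ROOT:
            return blake3Keylet::account(sle->getAccountID(sfAccount)).key;
        case ltOFFER:
            return blake3Keylet::offer(
                       sle->getAccountID(sfAccount), sle->getFieldU32(sfSequence))
                .key;
        // ... one case per LedgerEntryType, mirroring Indexes.cpp ...
        default:
            Throw<std::runtime_error>("unhandled entry type in rekeying");
    }
}
```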
### 3. Lazy Migration
Only rekey objects when they're modified:
- Pro: Spreads migration over time
- Con: Permanent complexity in codebase
- Con: Must support both hash types forever
### 4. New State Structure
Instead of migrating in-place, build an entirely new state map:
1. Create new empty BLAKE3 SHAMap
2. Walk old map, inserting with new keys
3. Update all references during copy
4. Atomically swap maps
This is essentially what BUILD_LEDGER.md suggests, but the reference updating remains complex.
## The Lookup Table Approach
After further analysis, a lookup table (LUT) based approach might actually be feasible:
### Algorithm Overview
#### Phase 1: Build the LUT (O(n))
```cpp
std::unordered_map<uint256, uint256> old_to_new;
stateMap_.visitLeaves([&](SHAMapItem const& item) {
    SerialIter sit(item.slice());
    auto sle = std::make_shared<SLE>(sit, item.key());
    // Determine type from the SLE
    LedgerEntryType type = sle->getType();
    // Recompute key with BLAKE3 based on type
    uint256 newKey = computeBlake3Key(sle, type);
    old_to_new[item.key()] = newKey;
});
// Results in ~620k entries in the LUT
```
#### Phase 2: Update ALL uint256 Fields (O(n × m))
Walk every object and check every uint256 field against the LUT:
```cpp
stateMap_.visitLeaves([&](SHAMapItem& item) {
    SerialIter sit(item.slice());
    auto sle = std::make_shared<SLE>(sit, item.key());
    bool modified = false;

    // Check every possible uint256 field
    modified |= updateField(sle, sfPreviousTxnID, old_to_new);
    modified |= updateField(sle, sfIndexPrevious, old_to_new);
    modified |= updateField(sle, sfIndexNext, old_to_new);
    modified |= updateField(sle, sfBookNode, old_to_new);

    // Vector fields
    modified |= updateVector(sle, sfIndexes, old_to_new);
    modified |= updateVector(sle, sfHashes, old_to_new);
    modified |= updateVector(sle, sfAmendments, old_to_new);
    modified |= updateVector(sle, sfNFTokenOffers, old_to_new);

    if (modified) {
        // Re-serialize with updated references
        Serializer s;
        sle->add(s);
        // Create new item with new key
        item = make_shamapitem(old_to_new[item.key()], s.slice());
    }
});
```
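The `updateField` / `updateVector` helpers used above are not defined anywhere; a minimal sketch, assuming they operate on the deserialized SLE, might be:
```cpp
// Hedged sketch of the helpers referenced in Phase 2 (illustrative only).
static bool
updateField(std::shared_ptr<SLE> const& sle,
            SField const& field,
            std::unordered_map<uint256, uint256> const& lut)
{
    if (!sle->isFieldPresent(field))
        return false;
    if (auto it = lut.find(sle->getFieldH256(field)); it != lut.end())
    {
        sle->setFieldH256(field, it->second);
        return true;
    }
    return false;
}

static bool
updateVector(std::shared_ptr<SLE> const& sle,
             SField const& field,
             std::unordered_map<uint256, uint256> const& lut)
{
    if (!sle->isFieldPresent(field))
        return false;
    STVector256 v = sle->getFieldV256(field);
    bool changed = false;
    for (std::size_t i = 0; i < v.size(); ++i)
    {
        if (auto it = lut.find(v[i]); it != lut.end())
        {
            v[i] = it->second;
            changed = true;
        }
    }
    if (changed)
        sle->setFieldV256(field, v);
    return changed;
}
```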
### Complexity Analysis
- **Phase 1**: O(n) where n = number of objects (~620k)
- **Phase 2**: O(n × m) where m = average fields per object
- **Hash lookups**: O(1) average case
- **Total**: Linear in the number of objects!
### Memory Requirements
- LUT size: 620k entries × (32 bytes + 32 bytes) = ~40 MB
- Reasonable to keep in memory during migration
### Implementation Challenges
#### 1. Comprehensive Field Coverage
Must check EVERY field that could contain a key:
```cpp
// Singleton uint256 fields
sfPreviousTxnID, sfIndexPrevious, sfIndexNext, sfBookNode,
sfNFTokenID, sfEmitParentTxnID, sfHookOn, sfHookStateKey...
// Vector256 fields
sfIndexes, sfHashes, sfAmendments, sfNFTokenOffers,
sfHookNamespaces, sfURITokenIDs...
// Nested structures
STArray fields containing STObjects with uint256 fields
```
#### 2. False Positive Risk
Any uint256 that happens to match a key would be updated:
- Could corrupt data if a non-key field matches
- Mitigation: Only update known reference fields
- Risk: Missing custom fields added by hooks
#### 3. Order Book Sorting
Order books are sorted by key value. After rekeying:
- Sort order changes completely
- Need to rebuild book directories
- Quality levels might shift
### Alternative: Persistent Migration Map
Instead of one-time migration, store the mapping permanently:
```cpp
// Special ledger entries (one per ~1000 mappings)
MigrationMap_0000: {
    sfOldKeys: [old_hash_0, old_hash_1, ...],
    sfNewKeys: [new_hash_0, new_hash_1, ...]
}
MigrationMap_0001: { ... }
// ~620 of these objects
```
Pros:
- Can verify historical references
- Debugging is easier
- Can be pruned later if needed
Cons:
- Permanent state bloat (~40MB)
- Must be loaded on every node forever
- Lookup overhead for historical operations
### The Nuclear Option: Binary Search-Replace
For maximum chaos (don't actually do this):
```cpp
// Build LUT
std::map<std::array<uint8_t, 32>, std::array<uint8_t, 32>> binary_lut;
// Scan serialized blobs and replace
for (auto& node : shamap) {
    auto data = node.getData();
    for (size_t i = 0; i + 32 <= data.size(); i++) {
        if (binary_lut.count(data[i..i+31])) {              // pseudocode slice
            memcpy(&data[i], binary_lut[data[i..i+31]], 32);
        }
    }
}
```
Why this is insane:
- False positives would corrupt data
- No validation of what you're replacing
- Breaks all checksums and signatures
- Impossible to debug when it goes wrong
## Conclusion
The rekeying problem is not just about changing hash functions - it's about maintaining referential integrity across millions of interlinked objects. Every key change cascades through the reference graph, making this one of the most complex migrations possible in a blockchain system.
The lookup table approach makes it algorithmically feasible (linear time rather than quadratic), but the implementation complexity and risk remain enormous. You need to:
1. Find every single field that could contain a key
2. Update them all correctly
3. Handle sorting changes in order books
4. Avoid false positives
5. Deal with custom fields from hooks
6. Maintain consistency across the entire state
This is likely why most blockchains never change their hash functions after genesis - even with an efficient algorithm, the complexity and risk are enormous.


@@ -8,6 +8,130 @@ This document contains the release notes for `rippled`, the reference server imp
Have new ideas? Need help with setting up your node? [Please open an issue here](https://github.com/xrplf/rippled/issues/new/choose).
# Introducing XRP Ledger version 1.11.0
Version 1.11.0 of `rippled`, the reference server implementation of the XRP Ledger protocol, is now available.
This release reduces memory usage, introduces the `fixNFTokenRemint` amendment, and adds new features and bug fixes. For example, the new NetworkID field in transactions helps to prevent replay attacks with side-chains.
[Sign Up for Future Release Announcements](https://groups.google.com/g/ripple-server)
<!-- BREAK -->
## Action Required
The `fixNFTokenRemint` amendment is now open for voting according to the XRP Ledger's [amendment process](https://xrpl.org/amendments.html), which enables protocol changes following two weeks of >80% support from trusted validators.
If you operate an XRP Ledger server, upgrade to version 1.11.0 by July 5 to ensure service continuity. The exact time that protocol changes take effect depends on the voting decisions of the decentralized network.
## Install / Upgrade
On supported platforms, see the [instructions on installing or updating `rippled`](https://xrpl.org/install-rippled.html).
## What's Changed
### New Features and Improvements
* Allow port numbers be be specified using a either a colon or a space by @RichardAH in https://github.com/XRPLF/rippled/pull/4328
* Eliminate memory allocation from critical path: by @nbougalis in https://github.com/XRPLF/rippled/pull/4353
* Make it easy for projects to depend on libxrpl by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4449
* Add the ability to mark amendments as obsolete by @ximinez in https://github.com/XRPLF/rippled/pull/4291
* Always create the FeeSettings object in genesis ledger by @ximinez in https://github.com/XRPLF/rippled/pull/4319
* Log exception messages in several locations by @drlongle in https://github.com/XRPLF/rippled/pull/4400
* Parse flags in account_info method by @drlongle in https://github.com/XRPLF/rippled/pull/4459
* Add NFTokenPages to account_objects RPC by @RichardAH in https://github.com/XRPLF/rippled/pull/4352
* add jss fields used by clio `nft_info` by @ledhed2222 in https://github.com/XRPLF/rippled/pull/4320
* Introduce a slab-based memory allocator and optimize SHAMapItem by @nbougalis in https://github.com/XRPLF/rippled/pull/4218
* Add NetworkID field to transactions to help prevent replay attacks on and from side-chains by @RichardAH in https://github.com/XRPLF/rippled/pull/4370
* If present, set quorum based on command line. by @mtrippled in https://github.com/XRPLF/rippled/pull/4489
* API does not accept seed or public key for account by @drlongle in https://github.com/XRPLF/rippled/pull/4404
* Add `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `Tx` responses by @shawnxie999 in https://github.com/XRPLF/rippled/pull/4447
### Bug Fixes
* fix(gateway_balances): handle overflow exception by @RichardAH in https://github.com/XRPLF/rippled/pull/4355
* fix(ValidatorSite): handle rare null pointer dereference in timeout by @ximinez in https://github.com/XRPLF/rippled/pull/4420
* RPC commands understand markers derived from all ledger object types by @ximinez in https://github.com/XRPLF/rippled/pull/4361
* `fixNFTokenRemint`: prevent NFT re-mint: by @shawnxie999 in https://github.com/XRPLF/rippled/pull/4406
* Fix a case where ripple::Expected returned a json array, not a value by @scottschurr in https://github.com/XRPLF/rippled/pull/4401
* fix: Ledger data returns an empty list (instead of null) when all entries are filtered out by @drlongle in https://github.com/XRPLF/rippled/pull/4398
* Fix unit test ripple.app.LedgerData by @drlongle in https://github.com/XRPLF/rippled/pull/4484
* Fix the fix for std::result_of by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4496
* Fix errors for Clang 16 by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4501
* Ensure that switchover vars are initialized before use: by @seelabs in https://github.com/XRPLF/rippled/pull/4527
* Move faulty assert by @ximinez in https://github.com/XRPLF/rippled/pull/4533
* Fix unaligned load and stores: (#4528) by @seelabs in https://github.com/XRPLF/rippled/pull/4531
* fix node size estimation by @dangell7 in https://github.com/XRPLF/rippled/pull/4536
* fix: remove redundant moves by @ckeshava in https://github.com/XRPLF/rippled/pull/4565
### Code Cleanup and Testing
* Replace compare() with the three-way comparison operator in base_uint, Issue and Book by @drlongle in https://github.com/XRPLF/rippled/pull/4411
* Rectify the import paths of boost::function_output_iterator by @ckeshava in https://github.com/XRPLF/rippled/pull/4293
* Expand Linux test matrix by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4454
* Add patched recipe for SOCI by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4510
* Switch to self-hosted runners for macOS by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4511
* [TRIVIAL] Add missing includes by @seelabs in https://github.com/XRPLF/rippled/pull/4555
### Docs
* Refactor build instructions by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4381
* Add install instructions for package managers by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4472
* Fix typo by @solmsted in https://github.com/XRPLF/rippled/pull/4508
* Update environment.md by @sappenin in https://github.com/XRPLF/rippled/pull/4498
* Update BUILD.md by @oeggert in https://github.com/XRPLF/rippled/pull/4514
* Trivial: add comments for NFToken-related invariants by @scottschurr in https://github.com/XRPLF/rippled/pull/4558
## New Contributors
* @drlongle made their first contribution in https://github.com/XRPLF/rippled/pull/4411
* @ckeshava made their first contribution in https://github.com/XRPLF/rippled/pull/4293
* @solmsted made their first contribution in https://github.com/XRPLF/rippled/pull/4508
* @sappenin made their first contribution in https://github.com/XRPLF/rippled/pull/4498
* @oeggert made their first contribution in https://github.com/XRPLF/rippled/pull/4514
**Full Changelog**: https://github.com/XRPLF/rippled/compare/1.10.1...1.11.0
### GitHub
The public source code repository for `rippled` is hosted on GitHub at <https://github.com/XRPLF/rippled>.
We welcome all contributions and invite everyone to join the community of XRP Ledger developers to help build the Internet of Value.
### Credits
The following people contributed directly to this release:
- Alloy Networks <45832257+alloynetworks@users.noreply.github.com>
- Brandon Wilson <brandon@coil.com>
- Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
- David Fuelling <sappenin@gmail.com>
- Denis Angell <dangell@transia.co>
- Ed Hennis <ed@ripple.com>
- Elliot Lee <github.public@intelliot.com>
- John Freeman <jfreeman08@gmail.com>
- Mark Travis <mtrippled@users.noreply.github.com>
- Nik Bougalis <nikb@bougalis.net>
- RichardAH <richard.holland@starstone.co.nz>
- Scott Determan <scott.determan@yahoo.com>
- Scott Schurr <scott@ripple.com>
- Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
- drlongle <drlongle@gmail.com>
- ledhed2222 <ledhed2222@users.noreply.github.com>
- oeggert <117319296+oeggert@users.noreply.github.com>
- solmsted <steven.olm@gmail.com>
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xrpl.org
# Introducing XRP Ledger version 1.10.1
Version 1.10.1 of `rippled`, the reference server implementation of the XRP Ledger protocol, is now available. This release restores packages for Ubuntu 18.04.


@@ -61,13 +61,12 @@ For these complaints or reports, please [contact our support team](mailto:bugs@x
### The following type of security problems are excluded
- (D)DOS attacks
- Error messages or error pages without sensitive data
- Tests & sample data as publicly available in our repositories at Github
- Common issues like browser header warnings or DNS configuration, identified by vulnerability scans
- Vulnerability scan reports for software we publicly use
- Security issues related to outdated OS's, browsers or plugins
- Reports for security problems that we have been notified of before
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `xahaud` and `xahau-lib`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy or the operation of the Xahau Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other peoples software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the Xahau Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
Please note: Reports that are lacking any proof (such as screenshots or other data), detailed information or details on how to reproduce any unexpected result will be investigated but will not be eligible for any reward.

TEST_FILES_TO_FIX.md Normal file

@@ -0,0 +1,179 @@
# Test Files That Need hash_options Fixes
## How to Check Compilation Errors
Use the `compile_single_v2.py` script to check individual files:
```bash
# Check compilation errors for a specific file
./compile_single_v2.py src/test/app/SomeFile_test.cpp -e 3 --errors-only
# Get just the last few lines to see if it compiled successfully
./compile_single_v2.py src/test/app/SomeFile_test.cpp 2>&1 | tail -5
```
## Originally Fixed Files (11 files)
1. **src/test/app/Import_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
2. **src/test/app/LedgerReplay_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
3. **src/test/app/Offer_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
4. **src/test/app/SetHook_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
5. **src/test/app/SetHookTSH_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
6. **src/test/app/ValidatorList_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
7. **src/test/app/XahauGenesis_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options, keylet::fees() needs hash_options
8. **src/test/consensus/NegativeUNL_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
9. **src/test/consensus/UNLReport_test.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
10. **src/test/jtx/impl/balance.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
11. **src/test/jtx/impl/Env.cpp**
- Status: Needs fixing
- Errors: keylet functions missing hash_options
## Fix Strategy
Each file needs:
1. keylet function calls updated to include hash_options{ledger_seq, classifier}
2. The ledger_seq typically comes from env.current()->seq() or view.seq()
3. The classifier matches the keylet type (e.g., KEYLET_ACCOUNT, KEYLET_FEES, etc.)
## Progress Tracking
- [x] Import_test.cpp - FIXED
- [x] LedgerReplay_test.cpp - FIXED
- [x] Offer_test.cpp - FIXED
- [x] SetHook_test.cpp - FIXED
- [x] SetHookTSH_test.cpp - FIXED
- [x] ValidatorList_test.cpp - FIXED (sha512Half calls updated with VALIDATOR_LIST_HASH classifier)
- [x] XahauGenesis_test.cpp - FIXED (removed duplicate hash_options parameters)
- [x] NegativeUNL_test.cpp - FIXED
- [x] UNLReport_test.cpp - FIXED
- [x] balance.cpp - FIXED
- [x] Env.cpp - FIXED
## All original 11 files have been successfully fixed!
## Remaining Files Still Needing Fixes (9 files)
### Status: NOT STARTED
These files still have compilation errors and need hash_options fixes:
1. **src/test/jtx/impl/uritoken.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/jtx/impl/uritoken.cpp -e 3 --errors-only`
2. **src/test/jtx/impl/utility.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/jtx/impl/utility.cpp -e 3 --errors-only`
3. **src/test/ledger/Directory_test.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/ledger/Directory_test.cpp -e 3 --errors-only`
4. **src/test/ledger/Invariants_test.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/ledger/Invariants_test.cpp -e 3 --errors-only`
5. **src/test/overlay/compression_test.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/overlay/compression_test.cpp -e 3 --errors-only`
6. **src/test/rpc/AccountSet_test.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/rpc/AccountSet_test.cpp -e 3 --errors-only`
7. **src/test/rpc/AccountTx_test.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/rpc/AccountTx_test.cpp -e 3 --errors-only`
8. **src/test/rpc/Book_test.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/rpc/Book_test.cpp -e 3 --errors-only`
9. **src/test/rpc/Catalogue_test.cpp**
- Status: Needs fixing
- Check errors: `./compile_single_v2.py src/test/rpc/Catalogue_test.cpp -e 3 --errors-only`
## CRITICAL INSTRUCTIONS FOR FIXING
### 1. Read the HashContext enum from digest.h
**ALWAYS** check @src/ripple/protocol/digest.h lines 37-107 for the complete HashContext enum.
This enum defines ALL the valid classifiers you can use in hash_options.
### 2. Understanding hash_options constructor
The hash_options struct (lines 110-126 in digest.h) has TWO constructors:
- `hash_options(HashContext ctx)` - classifier only, no ledger index
- `hash_options(std::uint32_t li, HashContext ctx)` - ledger index AND classifier
### 3. How to classify each hash operation
#### For keylet functions:
- Match the keylet function name to the KEYLET_* enum value
- Examples:
- `keylet::account()` → use `KEYLET_ACCOUNT`
- `keylet::fees()` → use `KEYLET_FEES`
- `keylet::trustline()` → use `KEYLET_TRUSTLINE`
- `keylet::negativeUNL()` → use `KEYLET_NEGATIVE_UNL`
- `keylet::UNLReport()` → use `KEYLET_UNL_REPORT`
- `keylet::hook()` → use `KEYLET_HOOK`
- `keylet::uriToken()` → use `KEYLET_URI_TOKEN`
#### For sha512Half calls:
- Validator manifests/lists → use `VALIDATOR_LIST_HASH`
- Hook code hashing → use `HOOK_DEFINITION` or `LEDGER_INDEX_UNNEEDED`
- Network protocol → use appropriate context from enum
#### For test environments:
- Use `env.current()->seq()` to get ledger sequence (it's already uint32_t, NO CAST NEEDED)
- Use `ledger->seq()` for Ledger pointers
- Use `view.seq()` for ReadView/ApplyView references
### 4. IMPORTANT: Read the entire file first!
When fixing a file, ALWAYS:
1. Read the ENTIRE file first (or at least 500+ lines) to understand the context
2. Look for patterns of how the test is structured
3. Check what types of ledger objects are being tested
4. Then fix ALL occurrences systematically
### 5. Common patterns to fix:
```cpp
// OLD - missing hash_options
env.le(keylet::account(alice));
// NEW - with proper classification
env.le(keylet::account(hash_options{env.current()->seq(), KEYLET_ACCOUNT}, alice));
// OLD - sha512Half without context
auto hash = sha512Half(data);
// NEW - with proper classification
auto hash = sha512Half(hash_options{VALIDATOR_LIST_HASH}, data);
```

analyze_keylet_calls.py Normal file

@@ -0,0 +1,204 @@
#!/usr/bin/env python3
import re
import os
from pathlib import Path
from collections import defaultdict
from typing import Set, Dict, List, Tuple


def find_keylet_calls(root_dir: str) -> Tuple[Dict[str, List[Tuple[str, int, str]]], Set[str]]:
    """
    Find all keylet:: function calls with hash_options as first parameter.
    Returns a dict mapping keylet function names to a list of (file, line, full_match)
    tuples, plus the set of unique hash_options arguments seen.
    """
    # Pattern to match keylet::<function>(hash_options{...}, ...) calls
    # This captures:
    # 1. The keylet function name
    # 2. The entire first argument (hash_options{...})
    # 3. The content inside hash_options{...}
    pattern = re.compile(
        r'keylet::(\w+)\s*\(\s*(hash_options\s*\{([^}]*)\})',
        re.MULTILINE | re.DOTALL
    )
    results = defaultdict(list)
    unique_first_args = set()
    # Walk through all C++ source files
    for root, dirs, files in os.walk(Path(root_dir) / "src" / "ripple"):
        # Skip certain directories
        dirs[:] = [d for d in dirs if d not in ['.git', 'build', '__pycache__']]
        for file in files:
            if file.endswith(('.cpp', '.h', '.hpp')):
                filepath = os.path.join(root, file)
                try:
                    with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
                        content = f.read()
                    # Find all matches in this file
                    for match in pattern.finditer(content):
                        func_name = match.group(1)
                        full_first_arg = match.group(2)
                        inner_content = match.group(3).strip()
                        # Get line number
                        line_num = content[:match.start()].count('\n') + 1
                        # Store the result
                        rel_path = os.path.relpath(filepath, root_dir)
                        results[func_name].append((rel_path, line_num, full_first_arg))
                        unique_first_args.add(inner_content)
                except Exception as e:
                    print(f"Error reading {filepath}: {e}")
    return results, unique_first_args


def analyze_hash_options_content(unique_args: Set[str]) -> Tuple[Dict[str, int], List[str]]:
    """Analyze the content of hash_options{...} arguments."""
    categories = {
        'literal_0': 0,
        'literal_number': 0,
        'view_seq': 0,
        'ledger_seq': 0,
        'info_seq': 0,
        'ctx_view_seq': 0,
        'sb_seq': 0,
        'env_current_seq': 0,
        'other': 0
    }
    other_patterns = []
    for arg in unique_args:
        arg_clean = arg.strip()
        if arg_clean == '0':
            categories['literal_0'] += 1
        elif arg_clean.isdigit():
            categories['literal_number'] += 1
        elif 'view.seq()' in arg_clean or 'view().seq()' in arg_clean:
            categories['view_seq'] += 1
        elif 'ledger->seq()' in arg_clean or 'ledger.seq()' in arg_clean:
            categories['ledger_seq'] += 1
        elif 'info.seq' in arg_clean or 'info_.seq' in arg_clean:
            categories['info_seq'] += 1
        elif 'ctx.view.seq()' in arg_clean:
            categories['ctx_view_seq'] += 1
        elif 'sb.seq()' in arg_clean:
            categories['sb_seq'] += 1
        elif 'env.current()->seq()' in arg_clean:
            categories['env_current_seq'] += 1
        else:
            categories['other'] += 1
            other_patterns.append(arg_clean)
    return categories, other_patterns


def print_report(results: Dict[str, List], unique_args: Set[str]):
    """Print a detailed report of findings."""
    print("=" * 80)
    print("KEYLET FUNCTION CALL ANALYSIS")
    print("=" * 80)
    # Summary
    total_calls = sum(len(calls) for calls in results.values())
    print(f"\nTotal keylet calls found: {total_calls}")
    print(f"Unique keylet functions: {len(results)}")
    print(f"Unique hash_options arguments: {len(unique_args)}")
    # Function frequency
    print("\n" + "=" * 80)
    print("KEYLET FUNCTIONS BY FREQUENCY:")
    print("=" * 80)
    sorted_funcs = sorted(results.items(), key=lambda x: len(x[1]), reverse=True)
    for func_name, calls in sorted_funcs[:20]:  # Top 20
        print(f" {func_name:30} {len(calls):4} calls")
    if len(sorted_funcs) > 20:
        print(f" ... and {len(sorted_funcs) - 20} more functions")
    # Analyze hash_options content
    print("\n" + "=" * 80)
    print("HASH_OPTIONS ARGUMENT PATTERNS:")
    print("=" * 80)
    categories, other_patterns = analyze_hash_options_content(unique_args)
    for category, count in sorted(categories.items(), key=lambda x: x[1], reverse=True):
        if count > 0:
            print(f" {category:25} {count:4} occurrences")
    if other_patterns:
        print("\n" + "=" * 80)
        print("OTHER PATTERNS (need review):")
        print("=" * 80)
        for i, pattern in enumerate(sorted(set(other_patterns))[:10], 1):
            # Truncate long patterns
            display = pattern if len(pattern) <= 60 else pattern[:57] + "..."
            print(f" {i:2}. {display}")
    # Sample calls for most common functions
    print("\n" + "=" * 80)
    print("SAMPLE CALLS (top 5 functions):")
    print("=" * 80)
    for func_name, calls in sorted_funcs[:5]:
        print(f"\n{func_name}:")
        for filepath, line_num, arg in calls[:3]:  # Show first 3 examples
            print(f" {filepath}:{line_num}")
            print(f" {arg}")
        if len(calls) > 3:
            print(f" ... and {len(calls) - 3} more")


def generate_replacement_script(results: Dict[str, List], unique_args: Set[str]):
    """Generate a script to help with replacements."""
    print("\n" + "=" * 80)
    print("SUGGESTED MIGRATION STRATEGY:")
    print("=" * 80)
    print("""
The goal is to migrate from:
keylet::func(hash_options{ledger_seq})
To either:
keylet::func(hash_options{ledger_seq, KEYLET_CLASSIFIER})
Where KEYLET_CLASSIFIER would be a specific HashContext enum value
based on the keylet function type.
Suggested mappings:
- keylet::account() -> LEDGER_HEADER_HASH (or new KEYLET_ACCOUNT)
- keylet::line() -> LEDGER_HEADER_HASH (or new KEYLET_TRUSTLINE)
- keylet::offer() -> LEDGER_HEADER_HASH (or new KEYLET_OFFER)
- keylet::ownerDir() -> LEDGER_HEADER_HASH (or new KEYLET_OWNER_DIR)
- keylet::page() -> LEDGER_HEADER_HASH (or new KEYLET_DIR_PAGE)
- keylet::fees() -> LEDGER_HEADER_HASH (or new KEYLET_FEES)
- keylet::amendments() -> LEDGER_HEADER_HASH (or new KEYLET_AMENDMENTS)
- keylet::check() -> LEDGER_HEADER_HASH (or new KEYLET_CHECK)
- keylet::escrow() -> LEDGER_HEADER_HASH (or new KEYLET_ESCROW)
- keylet::payChan() -> LEDGER_HEADER_HASH (or new KEYLET_PAYCHAN)
- keylet::signers() -> LEDGER_HEADER_HASH (or new KEYLET_SIGNERS)
- keylet::ticket() -> LEDGER_HEADER_HASH (or new KEYLET_TICKET)
- keylet::nftpage_*() -> LEDGER_HEADER_HASH (or new KEYLET_NFT_PAGE)
- keylet::nftoffer() -> LEDGER_HEADER_HASH (or new KEYLET_NFT_OFFER)
- keylet::depositPreauth() -> LEDGER_HEADER_HASH (or new KEYLET_DEPOSIT_PREAUTH)
""")


if __name__ == "__main__":
    # Get the project root directory
    project_root = "/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc"
    print(f"Analyzing keylet calls in: {project_root}")
    print("This may take a moment...\n")
    # Find all keylet calls
    results, unique_args = find_keylet_calls(project_root)
    # Print the report
    print_report(results, unique_args)
    # Generate replacement suggestions
    generate_replacement_script(results, unique_args)


@@ -1,4 +1,11 @@
#!/bin/bash
#!/bin/bash -u
# We use set -e and bash with -u to bail on first non zero exit code of any
# processes launched or upon any unbound variable.
# We use set -x to print commands before running them to help with
# debugging.
set -ex
set -e
echo "START INSIDE CONTAINER - CORE"
@@ -23,12 +30,12 @@ fi
perl -i -pe "s/^(\\s*)-DBUILD_SHARED_LIBS=OFF/\\1-DBUILD_SHARED_LIBS=OFF\\n\\1-DROCKSDB_BUILD_SHARED=OFF/g" Builds/CMake/deps/Rocksdb.cmake &&
mv Builds/CMake/deps/WasmEdge.cmake Builds/CMake/deps/WasmEdge.old &&
echo "find_package(LLVM REQUIRED CONFIG)
message(STATUS \"Found LLVM ${LLVM_PACKAGE_VERSION}\")
message(STATUS \"Found LLVM \${LLVM_PACKAGE_VERSION}\")
message(STATUS \"Using LLVMConfig.cmake in: \${LLVM_DIR}\")
add_library (wasmedge STATIC IMPORTED GLOBAL)
set_target_properties(wasmedge PROPERTIES IMPORTED_LOCATION \${WasmEdge_LIB})
target_link_libraries (ripple_libs INTERFACE wasmedge)
add_library (NIH::WasmEdge ALIAS wasmedge)
add_library (wasmedge::wasmedge ALIAS wasmedge)
message(\"WasmEdge DONE\")
" > Builds/CMake/deps/WasmEdge.cmake &&
git checkout src/ripple/protocol/impl/BuildInfo.cpp &&

31
build-full.sh Executable file → Normal file
View File

@@ -1,4 +1,11 @@
#!/bin/bash
#!/bin/bash -u
# We use set -e and bash with -u to bail on the first non-zero exit code of
# any process launched or upon any unbound variable.
# We use set -x to print commands before running them to help with
# debugging.
set -ex
set -e
echo "START INSIDE CONTAINER - FULL"
@@ -19,7 +26,7 @@ yum-config-manager --disable centos-sclo-sclo
####
cd /io;
mkdir src/certs;
mkdir -p src/certs;
curl --silent -k https://raw.githubusercontent.com/RichardAH/rippled-release-builder/main/ca-bundle/certbundle.h -o src/certs/certbundle.h;
if [ "`grep certbundle.h src/ripple/net/impl/RegisterSSLCerts.cpp | wc -l`" -eq "0" ]
then
@@ -66,8 +73,8 @@ then
#endif/g" src/ripple/net/impl/RegisterSSLCerts.cpp &&
sed -i "s/#include <ripple\/net\/RegisterSSLCerts.h>/\0\n#include <certs\/certbundle.h>/g" src/ripple/net/impl/RegisterSSLCerts.cpp
fi
mkdir .nih_c;
mkdir .nih_toolchain;
mkdir -p .nih_c;
mkdir -p .nih_toolchain;
cd .nih_toolchain &&
yum install -y wget lz4 lz4-devel git llvm13-static.x86_64 llvm13-devel.x86_64 devtoolset-10-binutils zlib-static ncurses-static -y \
devtoolset-7-gcc-c++ \
@@ -90,11 +97,11 @@ echo "-- Install Cmake 3.23.1 --" &&
pwd &&
( wget -nc -q https://github.com/Kitware/CMake/releases/download/v3.23.1/cmake-3.23.1-linux-x86_64.tar.gz; echo "" ) &&
tar -xzf cmake-3.23.1-linux-x86_64.tar.gz -C /hbb/ &&
echo "-- Install Boost 1.75.0 --" &&
echo "-- Install Boost 1.86.0 --" &&
pwd &&
( wget -nc -q https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/boost_1_75_0.tar.gz; echo "" ) &&
tar -xzf boost_1_75_0.tar.gz &&
cd boost_1_75_0 && ./bootstrap.sh && ./b2 link=static -j$3 && ./b2 install &&
( wget -nc -q https://archives.boost.io/release/1.86.0/source/boost_1_86_0.tar.gz; echo "" ) &&
tar -xzf boost_1_86_0.tar.gz &&
cd boost_1_86_0 && ./bootstrap.sh && ./b2 link=static -j$3 && ./b2 install &&
cd ../ &&
echo "-- Install Protobuf 3.20.0 --" &&
pwd &&
@@ -115,7 +122,7 @@ tar -xf libunwind-13.0.1.src.tar.xz &&
cp -r libunwind-13.0.1.src/include libunwind-13.0.1.src/src lld-13.0.1.src/ &&
cd lld-13.0.1.src &&
rm -rf build CMakeCache.txt &&
mkdir build &&
mkdir -p build &&
cd build &&
cmake .. -DLLVM_LIBRARY_DIR=/usr/lib64/llvm13/lib/ -DCMAKE_INSTALL_PREFIX=/usr/lib64/llvm13/ -DCMAKE_BUILD_TYPE=Release &&
make -j$3 install &&
@@ -125,11 +132,11 @@ cd ../../ &&
echo "-- Build WasmEdge --" &&
( wget -nc -q https://github.com/WasmEdge/WasmEdge/archive/refs/tags/0.11.2.zip; unzip -o 0.11.2.zip; ) &&
cd WasmEdge-0.11.2 &&
( mkdir build; echo "" ) &&
( mkdir -p build; echo "" ) &&
cd build &&
export BOOST_ROOT="/usr/local/src/boost_1_75_0" &&
export BOOST_ROOT="/usr/local/src/boost_1_86_0" &&
export Boost_LIBRARY_DIRS="/usr/local/lib" &&
export BOOST_INCLUDEDIR="/usr/local/src/boost_1_75_0" &&
export BOOST_INCLUDEDIR="/usr/local/src/boost_1_86_0" &&
export PATH=`echo $PATH | sed -E "s/devtoolset-7/devtoolset-9/g"` &&
cmake .. \
-DCMAKE_BUILD_TYPE=Release \

View File

@@ -1056,7 +1056,18 @@
# Cassandra is an alternative backend to be used only with Reporting Mode.
# See the Reporting Mode section for more details about Reporting Mode.
#
# Required keys for NuDB and RocksDB:
# type = RWDB
#
# RWDB is a high-performance memory store written by XRPL-Labs and optimized
# for xahaud. RWDB is NOT persistent and the data will be lost on restart.
# RWDB is recommended for Validator and Peer nodes that are not required to
# store history.
#
# RWDB maintains its high speed regardless of the amount of history
# stored. Online delete should NOT be used; instead, RWDB will use the
# ledger_history config value to determine how many ledgers to keep in memory.
#
# Required keys for NuDB, RWDB and RocksDB:
#
# path Location to store the database
#
@@ -1112,7 +1123,8 @@
# online_delete Minimum value of 256. Enable automatic purging
# of older ledger information. Maintain at least this
# number of ledger records online. Must be greater
# than or equal to ledger_history.
# than or equal to ledger_history. If using RWDB
# this value is ignored.
#
# These keys modify the behavior of online_delete, and thus are only
# relevant if online_delete is defined and non-zero:

View File

@@ -144,4 +144,12 @@ D686F2538F410C9D0D856788E98E3579595DAF7B38D38887F81ECAC934B06040 HooksUpdate1
86E83A7D2ECE3AD5FA87AB2195AE015C950469ABF0B72EAACED318F74886AE90 CryptoConditionsSuite
3C43D9A973AA4443EF3FC38E42DD306160FBFFDAB901CD8BAA15D09F2597EB87 NonFungibleTokensV1
0285B7E5E08E1A8E4C15636F0591D87F73CB6A7B6452A932AD72BBC8E5D1CBE3 fixNFTokenDirV1
36799EA497B1369B170805C078AEFE6188345F9B3E324C21E9CA3FF574E3C3D6 fixNFTokenNegOffer
36799EA497B1369B170805C078AEFE6188345F9B3E324C21E9CA3FF574E3C3D6 fixNFTokenNegOffer
4C499D17719BB365B69010A436B64FD1A82AAB199FC1CEB06962EBD01059FB09 fixXahauV1
215181D23BF5C173314B5FDB9C872C92DE6CC918483727DE037C0C13E7E6EE9D fixXahauV2
0D8BF22FF7570D58598D1EF19EBB6E142AD46E59A223FD3816262FBB69345BEA Remit
7CA0426E7F411D39BB014E57CD9E08F61DE1750F0D41FCD428D9FB80BB7596B0 ZeroB2M
4B8466415FAB32FFA89D9DCBE166A42340115771DF611A7160F8D7439C87ECD8 fixNSDelete
EDB4EE4C524E16BDD91D9A529332DED08DCAAA51CC6DC897ACFA1A0ED131C5B6 fix240819
8063140E9260799D6716756B891CEC3E7006C4E4F277AB84670663A88F94B9C4 fixPageCap
88693F108C3CD8A967F3F4253A32DEF5E35F9406ACD2A11B88B11D90865763A9 fix240911

127
compile_single.py Executable file
View File

@@ -0,0 +1,127 @@
#!/usr/bin/env python3
"""
Compile a single file using commands from compile_commands.json
"""
import json
import os
import sys
import subprocess
import argparse
from pathlib import Path
def find_compile_command(compile_commands, file_path):
"""Find the compile command for a given file path."""
# Normalize the input path
abs_path = os.path.abspath(file_path)
for entry in compile_commands:
# Check if this entry matches our file
entry_file = os.path.abspath(entry['file'])
if entry_file == abs_path:
return entry
# Try relative path matching as fallback
for entry in compile_commands:
if entry['file'].endswith(file_path) or file_path.endswith(entry['file']):
return entry
return None
def main():
parser = argparse.ArgumentParser(
description='Compile a single file using compile_commands.json'
)
parser.add_argument(
'file',
help='Path to the source file to compile'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Show the compile command being executed'
)
parser.add_argument(
'--dump-output', '-d',
action='store_true',
help='Dump the full output from the compiler'
)
parser.add_argument(
'--compile-db',
default='build/compile_commands.json',
help='Path to compile_commands.json (default: build/compile_commands.json)'
)
args = parser.parse_args()
# Check if compile_commands.json exists
if not os.path.exists(args.compile_db):
print(f"Error: {args.compile_db} not found", file=sys.stderr)
print("Make sure you've run cmake with -DCMAKE_EXPORT_COMPILE_COMMANDS=ON", file=sys.stderr)
sys.exit(1)
# Load compile commands
try:
with open(args.compile_db, 'r') as f:
compile_commands = json.load(f)
except json.JSONDecodeError as e:
print(f"Error parsing {args.compile_db}: {e}", file=sys.stderr)
sys.exit(1)
# Find the compile command for the requested file
entry = find_compile_command(compile_commands, args.file)
if not entry:
print(f"Error: No compile command found for {args.file}", file=sys.stderr)
print(f"Available files in {args.compile_db}:", file=sys.stderr)
# Show first 10 files as examples
for i, cmd in enumerate(compile_commands[:10]):
print(f" {cmd['file']}", file=sys.stderr)
if len(compile_commands) > 10:
print(f" ... and {len(compile_commands) - 10} more", file=sys.stderr)
sys.exit(1)
# Extract the command and directory
command = entry['command']
directory = entry.get('directory', '.')
if args.verbose:
print(f"Directory: {directory}", file=sys.stderr)
print(f"Command: {command}", file=sys.stderr)
print("-" * 80, file=sys.stderr)
# Execute the compile command
try:
result = subprocess.run(
command,
shell=True,
cwd=directory,
capture_output=not args.dump_output,
text=True
)
if args.dump_output:
# Output was already printed to stdout/stderr
pass
else:
# Only show output if there were errors or warnings
if result.stderr:
print(result.stderr, file=sys.stderr)
if result.stdout:
print(result.stdout)
# Exit with the same code as the compiler
sys.exit(result.returncode)
except subprocess.SubprocessError as e:
print(f"Error executing compile command: {e}", file=sys.stderr)
sys.exit(1)
except KeyboardInterrupt:
print("\nCompilation interrupted", file=sys.stderr)
sys.exit(130)
if __name__ == '__main__':
main()
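The script keys everything off the entries in compile_commands.json. A minimal sketch of the shape of one entry it expects — the paths and flags below are illustrative, not taken from this build:

```python
# Illustrative compile_commands.json entry; "directory", "command", and "file"
# are the standard keys the script reads, the values here are made up.
example_entry = {
    "directory": "/path/to/xahaud/build",
    "command": "g++ -std=c++20 -I../src -c ../src/ripple/app/main/Application.cpp -o Application.o",
    "file": "../src/ripple/app/main/Application.cpp",
}
```

find_compile_command() first matches on the absolute path of "file", then falls back to suffix matching.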

311
compile_single_v2.py Executable file
View File

@@ -0,0 +1,311 @@
#!/usr/bin/env python3
"""
Compile a single file using commands from compile_commands.json
Enhanced version with error context display
"""
import json
import os
import sys
import subprocess
import argparse
import re
import logging
from pathlib import Path
def setup_logging(level):
"""Setup logging configuration."""
numeric_level = getattr(logging, level.upper(), None)
if not isinstance(numeric_level, int):
raise ValueError(f'Invalid log level: {level}')
logging.basicConfig(
level=numeric_level,
format='[%(levelname)s] %(message)s',
stream=sys.stderr
)
def find_compile_command(compile_commands, file_path):
"""Find the compile command for a given file path."""
# Normalize the input path
abs_path = os.path.abspath(file_path)
logging.debug(f"Looking for compile command for: {abs_path}")
for entry in compile_commands:
# Check if this entry matches our file
entry_file = os.path.abspath(entry['file'])
if entry_file == abs_path:
logging.debug(f"Found exact match: {entry_file}")
return entry
# Try relative path matching as fallback
for entry in compile_commands:
if entry['file'].endswith(file_path) or file_path.endswith(entry['file']):
logging.debug(f"Found relative match: {entry['file']}")
return entry
logging.debug("No compile command found")
return None
def extract_errors_with_context(output, file_path, context_lines=3):
"""Extract error messages with context from compiler output."""
lines = output.split('\n')
errors = []
logging.debug(f"Parsing {len(lines)} lines of compiler output")
logging.debug(f"Looking for errors in file: {file_path}")
# Pattern to match error lines from clang/gcc
# Matches: filename:line:col: error: message
# Also handle color codes
error_pattern = re.compile(r'([^:]+):(\d+):(\d+):\s*(?:\x1b\[[0-9;]*m)?\s*(error|warning):\s*(?:\x1b\[[0-9;]*m)?\s*(.*?)(?:\x1b\[[0-9;]*m)?$')
for i, line in enumerate(lines):
# Strip ANSI color codes for pattern matching
clean_line = re.sub(r'\x1b\[[0-9;]*m', '', line)
match = error_pattern.search(clean_line)
if match:
filename = match.group(1)
line_num = int(match.group(2))
col_num = int(match.group(3))
error_type = match.group(4)
message = match.group(5)
logging.debug(f"Found {error_type} at {filename}:{line_num}:{col_num}")
# Check if this error is from the file we're compiling
# Be more flexible with path matching
if (file_path in filename or
filename.endswith(os.path.basename(file_path)) or
os.path.basename(filename) == os.path.basename(file_path)):
logging.debug(f" -> Including {error_type}: {message[:50]}...")
error_info = {
'line': line_num,
'col': col_num,
'type': error_type,
'message': message,
'full_line': line, # Keep original line with colors
'context_before': [],
'context_after': []
}
# Get context lines from compiler output
for j in range(max(0, i - context_lines), i):
error_info['context_before'].append(lines[j])
for j in range(i + 1, min(len(lines), i + context_lines + 1)):
error_info['context_after'].append(lines[j])
errors.append(error_info)
else:
logging.debug(f" -> Skipping (different file: {filename})")
logging.info(f"Found {len(errors)} errors/warnings")
return errors
def read_source_context(file_path, line_num, context_lines=3):
"""Read context from the source file around a specific line."""
try:
with open(file_path, 'r') as f:
lines = f.readlines()
start = max(0, line_num - context_lines - 1)
end = min(len(lines), line_num + context_lines)
context = []
for i in range(start, end):
line_marker = '>>> ' if i == line_num - 1 else ' '
context.append(f"{i+1:4d}:{line_marker}{lines[i].rstrip()}")
return '\n'.join(context)
except Exception as e:
logging.warning(f"Could not read source context: {e}")
return None
def format_error_with_context(error, file_path, show_source_context=False):
"""Format an error with its context."""
output = []
output.append(f"\n{'='*80}")
output.append(f"Error at line {error['line']}, column {error['col']}:")
output.append(f" {error['message']}")
if show_source_context:
source_context = read_source_context(file_path, error['line'], 3)
if source_context:
output.append("\nSource context:")
output.append(source_context)
if error['context_before'] or error['context_after']:
output.append("\nCompiler output context:")
for line in error['context_before']:
output.append(f" {line}")
output.append(f">>> {error['full_line']}")
for line in error['context_after']:
output.append(f" {line}")
return '\n'.join(output)
def main():
parser = argparse.ArgumentParser(
description='Compile a single file using compile_commands.json with enhanced error display'
)
parser.add_argument(
'file',
help='Path to the source file to compile'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Show the compile command being executed'
)
parser.add_argument(
'--dump-output', '-d',
action='store_true',
help='Dump the full output from the compiler'
)
parser.add_argument(
'--show-error-context', '-e',
type=int,
metavar='N',
help='Show N lines of context around each error (implies capturing output)'
)
parser.add_argument(
'--show-source-context', '-s',
action='store_true',
help='Show source file context around errors'
)
parser.add_argument(
'--errors-only',
action='store_true',
help='Only show errors, not warnings'
)
parser.add_argument(
'--compile-db',
default='build/compile_commands.json',
help='Path to compile_commands.json (default: build/compile_commands.json)'
)
parser.add_argument(
'--log-level', '-l',
default='WARNING',
choices=['DEBUG', 'INFO', 'WARNING', 'ERROR'],
help='Set logging level (default: WARNING)'
)
args = parser.parse_args()
# Setup logging
setup_logging(args.log_level)
# Check if compile_commands.json exists
if not os.path.exists(args.compile_db):
print(f"Error: {args.compile_db} not found", file=sys.stderr)
print("Make sure you've run cmake with -DCMAKE_EXPORT_COMPILE_COMMANDS=ON", file=sys.stderr)
sys.exit(1)
# Load compile commands
try:
with open(args.compile_db, 'r') as f:
compile_commands = json.load(f)
logging.info(f"Loaded {len(compile_commands)} compile commands")
except json.JSONDecodeError as e:
print(f"Error parsing {args.compile_db}: {e}", file=sys.stderr)
sys.exit(1)
# Find the compile command for the requested file
entry = find_compile_command(compile_commands, args.file)
if not entry:
print(f"Error: No compile command found for {args.file}", file=sys.stderr)
print(f"Available files in {args.compile_db}:", file=sys.stderr)
# Show first 10 files as examples
for i, cmd in enumerate(compile_commands[:10]):
print(f" {cmd['file']}", file=sys.stderr)
if len(compile_commands) > 10:
print(f" ... and {len(compile_commands) - 10} more", file=sys.stderr)
sys.exit(1)
# Extract the command and directory
command = entry['command']
directory = entry.get('directory', '.')
source_file = entry['file']
if args.verbose:
print(f"Directory: {directory}", file=sys.stderr)
print(f"Command: {command}", file=sys.stderr)
print("-" * 80, file=sys.stderr)
logging.info(f"Compiling {source_file}")
logging.debug(f"Working directory: {directory}")
logging.debug(f"Command: {command}")
# Execute the compile command
try:
# If we need to show error context, we must capture output
capture = not args.dump_output or args.show_error_context is not None
logging.debug(f"Running compiler (capture={capture})")
result = subprocess.run(
command,
shell=True,
cwd=directory,
capture_output=capture,
text=True
)
logging.info(f"Compiler returned code: {result.returncode}")
if args.dump_output and not args.show_error_context:
# Output was already printed to stdout/stderr
pass
elif args.show_error_context is not None:
# Parse and display errors with context
all_output = result.stderr + "\n" + result.stdout
# Log first few lines of output for debugging
output_lines = all_output.split('\n')[:10]
for line in output_lines:
logging.debug(f"Output: {line}")
errors = extract_errors_with_context(all_output, args.file, args.show_error_context)
if args.errors_only:
errors = [e for e in errors if e['type'] == 'error']
logging.info(f"Filtered to {len(errors)} errors only")
print(f"\nFound {len(errors)} {'error' if args.errors_only else 'error/warning'}(s) in {args.file}:\n")
for error in errors:
print(format_error_with_context(error, source_file, args.show_source_context))
if errors:
print(f"\n{'='*80}")
print(f"Total: {len(errors)} {'error' if args.errors_only else 'error/warning'}(s)")
else:
# Default behavior - show output if there were errors or warnings
if result.stderr:
print(result.stderr, file=sys.stderr)
if result.stdout:
print(result.stdout)
# Exit with the same code as the compiler
sys.exit(result.returncode)
except subprocess.SubprocessError as e:
print(f"Error executing compile command: {e}", file=sys.stderr)
sys.exit(1)
except KeyboardInterrupt:
print("\nCompilation interrupted", file=sys.stderr)
sys.exit(130)
if __name__ == '__main__':
main()
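To see what extract_errors_with_context() picks up, here is a small self-contained check against a made-up clang-style diagnostic (the file name and message are illustrative, and the ANSI-colour handling is omitted for brevity):

```python
import re

# Simplified version of the diagnostic pattern used above, without colour-code handling.
pattern = re.compile(r'([^:]+):(\d+):(\d+):\s*(error|warning):\s*(.*)$')
sample = "src/ripple/app/Foo.cpp:42:13: error: use of undeclared identifier 'bar'"
match = pattern.search(sample)
print(match.groups())
# ('src/ripple/app/Foo.cpp', '42', '13', 'error', "use of undeclared identifier 'bar'")
```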

157
conanfile.py Normal file
View File

@@ -0,0 +1,157 @@
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
class Xrpl(ConanFile):
name = 'xrpl'
license = 'ISC'
author = 'John Freeman <jfreeman@ripple.com>'
url = 'https://github.com/xrplf/rippled'
description = 'The XRP Ledger'
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'assertions': [True, False],
'coverage': [True, False],
'fPIC': [True, False],
'jemalloc': [True, False],
'reporting': [True, False],
'rocksdb': [True, False],
'shared': [True, False],
'static': [True, False],
'tests': [True, False],
'unity': [True, False],
}
requires = [
'blake3/1.5.0@xahaud/stable',
'boost/1.86.0',
'date/3.0.1',
'libarchive/3.6.0',
'lz4/1.9.3',
'grpc/1.50.1',
'nudb/2.0.8',
'openssl/3.3.2',
'protobuf/3.21.9',
'snappy/1.1.10',
'soci/4.0.3',
'sqlite3/3.42.0',
'zlib/1.3.1',
'wasmedge/0.11.2',
]
default_options = {
'assertions': False,
'coverage': False,
'fPIC': True,
'jemalloc': False,
'reporting': False,
'rocksdb': True,
'shared': False,
'static': True,
'tests': True,
'unity': False,
'blake3:simd': False, # Disable SIMD for testing
'cassandra-cpp-driver:shared': False,
'date:header_only': True,
'grpc:shared': False,
'grpc:secure': True,
'libarchive:shared': False,
'libarchive:with_acl': False,
'libarchive:with_bzip2': False,
'libarchive:with_cng': False,
'libarchive:with_expat': False,
'libarchive:with_iconv': False,
'libarchive:with_libxml2': False,
'libarchive:with_lz4': True,
'libarchive:with_lzma': False,
'libarchive:with_lzo': False,
'libarchive:with_nettle': False,
'libarchive:with_openssl': False,
'libarchive:with_pcreposix': False,
'libarchive:with_xattr': False,
'libarchive:with_zlib': False,
'libpq:shared': False,
'lz4:shared': False,
'openssl:shared': False,
'protobuf:shared': False,
'protobuf:with_zlib': True,
'rocksdb:enable_sse': False,
'rocksdb:lite': False,
'rocksdb:shared': False,
'rocksdb:use_rtti': True,
'rocksdb:with_jemalloc': False,
'rocksdb:with_lz4': True,
'rocksdb:with_snappy': True,
'snappy:shared': False,
'soci:shared': False,
'soci:with_sqlite3': True,
'soci:with_boost': True,
}
def set_version(self):
path = f'{self.recipe_folder}/src/ripple/protocol/impl/BuildInfo.cpp'
regex = r'versionString\s?=\s?\"(.*)\"'
with open(path, 'r') as file:
matches = (re.search(regex, line) for line in file)
match = next(m for m in matches if m)
self.version = match.group(1)
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
def requirements(self):
if self.options.jemalloc:
self.requires('jemalloc/5.2.1')
if self.options.reporting:
self.requires('cassandra-cpp-driver/2.15.3')
self.requires('libpq/13.6')
if self.options.rocksdb:
self.requires('rocksdb/6.27.3')
exports_sources = (
'CMakeLists.txt', 'Builds/*', 'bin/getRippledInfo', 'src/*', 'cfg/*'
)
def layout(self):
cmake_layout(self)
# Fix this setting to follow the default introduced in Conan 1.48
# to align with our build instructions.
self.folders.generators = 'build/generators'
generators = 'CMakeDeps'
def generate(self):
tc = CMakeToolchain(self)
tc.variables['tests'] = self.options.tests
tc.variables['assert'] = self.options.assertions
tc.variables['coverage'] = self.options.coverage
tc.variables['jemalloc'] = self.options.jemalloc
tc.variables['reporting'] = self.options.reporting
tc.variables['rocksdb'] = self.options.rocksdb
tc.variables['BUILD_SHARED_LIBS'] = self.options.shared
tc.variables['static'] = self.options.static
tc.variables['unity'] = self.options.unity
tc.generate()
def build(self):
cmake = CMake(self)
cmake.verbose = True
cmake.configure()
cmake.build()
def package(self):
cmake = CMake(self)
cmake.verbose = True
cmake.install()
def package_info(self):
libxrpl = self.cpp_info.components['libxrpl']
libxrpl.libs = [
'libxrpl_core.a',
'libed25519.a',
'libsecp256k1.a',
]
libxrpl.includedirs = ['include']
libxrpl.requires = ['boost::boost']
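set_version() scrapes the version out of BuildInfo.cpp with a regex. A quick sketch of that extraction against an illustrative line (the declaration and version string shown here are made up, not the real file contents):

```python
import re

# Same pattern as set_version() above, applied to an illustrative line.
line = 'char const* const versionString = "2024.9.11";'
match = re.search(r'versionString\s?=\s?\"(.*)\"', line)
print(match.group(1))  # 2024.9.11
```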

11
docker-unit-tests.sh Normal file → Executable file
View File

@@ -1,4 +1,11 @@
#!/bin/bash
#!/bin/bash -x
docker run --rm -i -v $(pwd):/io ubuntu sh -c '/io/release-build/xahaud -u'
BUILD_CORES=$(echo "scale=0 ; `nproc` / 1.337" | bc)
if [[ "$GITHUB_REPOSITORY" == "" ]]; then
#Default
BUILD_CORES=8
fi
echo "Mounting $(pwd)/io in ubuntu and running unit tests"
docker run --rm -i -v $(pwd):/io -e BUILD_CORES=$BUILD_CORES ubuntu sh -c '/io/release-build/xahaud --unittest-jobs $BUILD_CORES -u'
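The unit-test run is sized from the host's core count (falling back to 8 when GITHUB_REPOSITORY is unset). Roughly, assuming a 16-core host:

```python
# Rough Python equivalent of the BUILD_CORES calculation in docker-unit-tests.sh;
# bc with scale=0 truncates the division, as int() does here.
nproc = 16
build_cores = int(nproc / 1.337)
print(build_cores)  # 11
```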

84
docs/build/environment.md vendored Normal file
View File

@@ -0,0 +1,84 @@
Our [build instructions][BUILD.md] assume you have a C++ development
environment complete with Git, Python, Conan, CMake, and a C++ compiler.
This document exists to help readers set one up on any of the Big Three
platforms: Linux, macOS, or Windows.
[BUILD.md]: ../../BUILD.md
## Linux
Package ecosystems vary across Linux distributions,
so there is no one set of instructions that will work for every Linux user.
These instructions are written for Ubuntu 22.04.
They are largely copied from the [script][1] used to configure our Docker
container for continuous integration.
That script handles many more responsibilities.
These instructions are just the bare minimum to build one configuration of
rippled.
You can check that codebase for other Linux distributions and versions.
If you cannot find yours there,
then we hope that these instructions can at least guide you in the right
direction.
```
apt update
apt install --yes curl git libssl-dev python3.10-dev python3-pip make g++-11
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
tar -xzf cmake-3.25.1.tar.gz
rm cmake-3.25.1.tar.gz
cd cmake-3.25.1
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
pip3 install 'conan<2'
```
[1]: https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh
## macOS
Open a Terminal and enter the below command to bring up a dialog to install
the command line developer tools.
Once it is finished, this command should return a version greater than the
minimum required (see [BUILD.md][]).
```
clang --version
```
The command line developer tools should include Git too:
```
git --version
```
Install [Homebrew][],
use it to install [pyenv][],
use it to install Python,
and use it to install Conan:
[Homebrew]: https://brew.sh/
[pyenv]: https://github.com/pyenv/pyenv
```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew update
brew install xz
brew install pyenv
pyenv install 3.10-dev
pyenv global 3.10-dev
eval "$(pyenv init -)"
pip install 'conan<2'
```
Install CMake with Homebrew too:
```
brew install cmake
```

159
docs/build/install.md vendored Normal file
View File

@@ -0,0 +1,159 @@
This document contains instructions for installing rippled.
The APT package manager is common on Debian-based Linux distributions like
Ubuntu,
while the YUM package manager is common on Red Hat-based Linux distributions
like CentOS.
Installing from source is an option for all platforms,
and the only supported option for installing custom builds.
## From source
From a source build, you can install rippled and libxrpl using CMake's
`--install` mode:
```
cmake --install . --prefix /opt/local
```
The default [prefix][1] is typically `/usr/local` on Linux and macOS and
`C:/Program Files/rippled` on Windows.
[1]: https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html
## With the APT package manager
1. Update repositories:
sudo apt update -y
2. Install utilities:
sudo apt install -y apt-transport-https ca-certificates wget gnupg
3. Add Ripple's package-signing GPG key to your list of trusted keys:
sudo mkdir /usr/local/share/keyrings/
wget -q -O - "https://repos.ripple.com/repos/api/gpg/key/public" | gpg --dearmor > ripple-key.gpg
sudo mv ripple-key.gpg /usr/local/share/keyrings
4. Check the fingerprint of the newly-added key:
gpg /usr/local/share/keyrings/ripple-key.gpg
The output should include an entry for Ripple such as the following:
gpg: WARNING: no command supplied. Trying to guess what you mean ...
pub rsa3072 2019-02-14 [SC] [expires: 2026-02-17]
C0010EC205B35A3310DC90DE395F97FFCCAFD9A2
uid TechOps Team at Ripple <techops+rippled@ripple.com>
sub rsa3072 2019-02-14 [E] [expires: 2026-02-17]
In particular, make sure that the fingerprint matches. (In the above example, the fingerprint is on the third line, starting with `C001`.)
5. Add the appropriate Ripple repository for your operating system version:
echo "deb [signed-by=/usr/local/share/keyrings/ripple-key.gpg] https://repos.ripple.com/repos/rippled-deb focal stable" | \
sudo tee -a /etc/apt/sources.list.d/ripple.list
The above example is appropriate for **Ubuntu 20.04 Focal Fossa**. For other operating systems, replace the word `focal` with one of the following:
- `jammy` for **Ubuntu 22.04 Jammy Jellyfish**
- `bionic` for **Ubuntu 18.04 Bionic Beaver**
- `bullseye` for **Debian 11 Bullseye**
- `buster` for **Debian 10 Buster**
If you want access to development or pre-release versions of `rippled`, use one of the following instead of `stable`:
- `unstable` - Pre-release builds ([`release` branch](https://github.com/ripple/rippled/tree/release))
- `nightly` - Experimental/development builds ([`develop` branch](https://github.com/ripple/rippled/tree/develop))
**Warning:** Unstable and nightly builds may be broken at any time. Do not use these builds for production servers.
6. Fetch the Ripple repository.
sudo apt -y update
7. Install the `rippled` software package:
sudo apt -y install rippled
8. Check the status of the `rippled` service:
systemctl status rippled.service
The `rippled` service should start automatically. If not, you can start it manually:
sudo systemctl start rippled.service
9. Optional: allow `rippled` to bind to privileged ports.
This allows you to serve incoming API requests on port 80 or 443. (If you want to do so, you must also update the config file's port settings.)
sudo setcap 'cap_net_bind_service=+ep' /opt/ripple/bin/rippled
## With the YUM package manager
1. Install the Ripple RPM repository:
Choose the appropriate RPM repository for the stability of releases you want:
- `stable` for the latest production release (`master` branch)
- `unstable` for pre-release builds (`release` branch)
- `nightly` for experimental/development builds (`develop` branch)
*Stable*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-stable]
name=XRP Ledger Packages
enabled=1
gpgcheck=0
repo_gpgcheck=1
baseurl=https://repos.ripple.com/repos/rippled-rpm/stable/
gpgkey=https://repos.ripple.com/repos/rippled-rpm/stable/repodata/repomd.xml.key
REPOFILE
*Unstable*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-unstable]
name=XRP Ledger Packages
enabled=1
gpgcheck=0
repo_gpgcheck=1
baseurl=https://repos.ripple.com/repos/rippled-rpm/unstable/
gpgkey=https://repos.ripple.com/repos/rippled-rpm/unstable/repodata/repomd.xml.key
REPOFILE
*Nightly*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-nightly]
name=XRP Ledger Packages
enabled=1
gpgcheck=0
repo_gpgcheck=1
baseurl=https://repos.ripple.com/repos/rippled-rpm/nightly/
gpgkey=https://repos.ripple.com/repos/rippled-rpm/nightly/repodata/repomd.xml.key
REPOFILE
2. Fetch the latest repo updates:
sudo yum -y update
3. Install the new `rippled` package:
sudo yum install -y rippled
4. Configure the `rippled` service to start on boot:
sudo systemctl enable rippled.service
5. Start the `rippled` service:
sudo systemctl start rippled.service

10
external/blake3/conandata.yml vendored Normal file
View File

@@ -0,0 +1,10 @@
sources:
"1.5.0":
url: "https://github.com/BLAKE3-team/BLAKE3/archive/refs/tags/1.5.0.tar.gz"
sha256: "f506140bc3af41d3432a4ce18b3b83b08eaa240e94ef161eb72b2e57cdc94c69"
"1.4.1":
url: "https://github.com/BLAKE3-team/BLAKE3/archive/refs/tags/1.4.1.tar.gz"
sha256: "33020ac83a8169b2e847cc6fb1dd38806ffab6efe79fe6c320e322154a3bea2c"
"1.4.0":
url: "https://github.com/BLAKE3-team/BLAKE3/archive/refs/tags/1.4.0.tar.gz"
sha256: "658e1c75e2d9bbed9f426385f02d2a188dc19978a39e067ba93e837861e5fe58"

115
external/blake3/conanfile.py vendored Normal file
View File

@@ -0,0 +1,115 @@
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
from conan.tools.files import copy, get
from conan.tools.scm import Version
import os
required_conan_version = ">=1.54.0"
class Blake3Conan(ConanFile):
name = "blake3"
version = "1.5.0"
description = "BLAKE3 cryptographic hash function"
topics = ("blake3", "hash", "cryptography")
url = "https://github.com/BLAKE3-team/BLAKE3"
homepage = "https://github.com/BLAKE3-team/BLAKE3"
license = "CC0-1.0 OR Apache-2.0"
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"simd": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"simd": False, # Default to NO SIMD for testing
}
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
# BLAKE3 is C code
self.settings.rm_safe("compiler.cppstd")
self.settings.rm_safe("compiler.libcxx")
def layout(self):
cmake_layout(self, src_folder="src")
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
# BLAKE3's CMake options
tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
if not self.options.simd:
# For v1.5.0, we'll need to manually patch the CMakeLists.txt
# These flags don't work with the old CMake
tc.preprocessor_definitions["BLAKE3_USE_NEON"] = "0"
tc.generate()
def build(self):
# Patch CMakeLists.txt if SIMD is disabled
if not self.options.simd:
cmake_file = os.path.join(self.source_folder, "c", "CMakeLists.txt")
# Read the file
with open(cmake_file, 'r') as f:
content = f.read()
# Replace the ARM detection line to never match
content = content.replace(
'elseif(CMAKE_SYSTEM_PROCESSOR IN_LIST BLAKE3_ARMv8_NAMES',
'elseif(FALSE # Disabled by conan simd=False'
)
# Write it back
with open(cmake_file, 'w') as f:
f.write(content)
cmake = CMake(self)
# BLAKE3's C implementation has its CMakeLists.txt in the c/ subdirectory
cmake.configure(build_script_folder=os.path.join(self.source_folder, "c"))
cmake.build()
def package(self):
# Copy license files
copy(self, "LICENSE*", src=self.source_folder,
dst=os.path.join(self.package_folder, "licenses"))
# Copy header
copy(self, "blake3.h",
src=os.path.join(self.source_folder, "c"),
dst=os.path.join(self.package_folder, "include"))
# Copy library
copy(self, "*.a", src=self.build_folder,
dst=os.path.join(self.package_folder, "lib"), keep_path=False)
copy(self, "*.lib", src=self.build_folder,
dst=os.path.join(self.package_folder, "lib"), keep_path=False)
copy(self, "*.dylib", src=self.build_folder,
dst=os.path.join(self.package_folder, "lib"), keep_path=False)
copy(self, "*.so*", src=self.build_folder,
dst=os.path.join(self.package_folder, "lib"), keep_path=False)
copy(self, "*.dll", src=self.build_folder,
dst=os.path.join(self.package_folder, "bin"), keep_path=False)
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "BLAKE3")
self.cpp_info.set_property("cmake_target_name", "BLAKE3::blake3")
# IMPORTANT: Explicitly set include directories to fix Conan CMakeDeps generation
self.cpp_info.includedirs = ["include"]
self.cpp_info.libs = ["blake3"]
# System libraries
if self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.system_libs.append("m")
self.cpp_info.system_libs.append("pthread")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "BLAKE3"
self.cpp_info.names["cmake_find_package_multi"] = "BLAKE3"

193
external/rocksdb/conanfile.py vendored Normal file
View File

@@ -0,0 +1,193 @@
import os
import shutil
from conans import ConanFile, CMake
from conan.tools import microsoft as ms
class RocksDB(ConanFile):
name = 'rocksdb'
version = '6.27.3'
license = ('GPL-2.0-only', 'Apache-2.0')
url = 'https://github.com/conan-io/conan-center-index'
description = 'A library that provides an embeddable, persistent key-value store for fast storage'
topics = ('rocksdb', 'database', 'leveldb', 'facebook', 'key-value')
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'enable_sse': [False, 'sse42', 'avx2'],
'fPIC': [True, False],
'lite': [True, False],
'shared': [True, False],
'use_rtti': [True, False],
'with_gflags': [True, False],
'with_jemalloc': [True, False],
'with_lz4': [True, False],
'with_snappy': [True, False],
'with_tbb': [True, False],
'with_zlib': [True, False],
'with_zstd': [True, False],
}
default_options = {
'enable_sse': False,
'fPIC': True,
'lite': False,
'shared': False,
'use_rtti': False,
'with_gflags': False,
'with_jemalloc': False,
'with_lz4': False,
'with_snappy': False,
'with_tbb': False,
'with_zlib': False,
'with_zstd': False,
}
def requirements(self):
if self.options.with_gflags:
self.requires('gflags/2.2.2')
if self.options.with_jemalloc:
self.requires('jemalloc/5.2.1')
if self.options.with_lz4:
self.requires('lz4/1.9.3')
if self.options.with_snappy:
self.requires('snappy/1.1.9')
if self.options.with_tbb:
self.requires('onetbb/2020.3')
if self.options.with_zlib:
self.requires('zlib/1.2.11')
if self.options.with_zstd:
self.requires('zstd/1.5.2')
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def configure(self):
if self.options.shared:
del self.options.fPIC
generators = 'cmake', 'cmake_find_package'
scm = {
'type': 'git',
'url': 'https://github.com/facebook/rocksdb.git',
'revision': 'v6.27.3',
}
exports_sources = 'thirdparty.inc'
# For out-of-source build.
no_copy_source = True
_cmake = None
def _configure_cmake(self):
if self._cmake:
return
self._cmake = CMake(self)
self._cmake.definitions['CMAKE_POSITION_INDEPENDENT_CODE'] = True
self._cmake.definitions['DISABLE_STALL_NOTIF'] = False
self._cmake.definitions['FAIL_ON_WARNINGS'] = False
self._cmake.definitions['OPTDBG'] = True
self._cmake.definitions['WITH_TESTS'] = False
self._cmake.definitions['WITH_TOOLS'] = False
self._cmake.definitions['WITH_GFLAGS'] = self.options.with_gflags
self._cmake.definitions['WITH_JEMALLOC'] = self.options.with_jemalloc
self._cmake.definitions['WITH_LZ4'] = self.options.with_lz4
self._cmake.definitions['WITH_SNAPPY'] = self.options.with_snappy
self._cmake.definitions['WITH_TBB'] = self.options.with_tbb
self._cmake.definitions['WITH_ZLIB'] = self.options.with_zlib
self._cmake.definitions['WITH_ZSTD'] = self.options.with_zstd
self._cmake.definitions['USE_RTTI'] = self.options.use_rtti
self._cmake.definitions['ROCKSDB_LITE'] = self.options.lite
self._cmake.definitions['ROCKSDB_INSTALL_ON_WINDOWS'] = (
self.settings.os == 'Windows'
)
if not self.options.enable_sse:
self._cmake.definitions['PORTABLE'] = True
self._cmake.definitions['FORCE_SSE42'] = False
elif self.options.enable_sse == 'sse42':
self._cmake.definitions['PORTABLE'] = True
self._cmake.definitions['FORCE_SSE42'] = True
elif self.options.enable_sse == 'avx2':
self._cmake.definitions['PORTABLE'] = False
self._cmake.definitions['FORCE_SSE42'] = False
self._cmake.definitions['WITH_ASAN'] = False
self._cmake.definitions['WITH_BZ2'] = False
self._cmake.definitions['WITH_JNI'] = False
self._cmake.definitions['WITH_LIBRADOS'] = False
if ms.is_msvc(self):
self._cmake.definitions['WITH_MD_LIBRARY'] = (
ms.msvc_runtime_flag(self).startswith('MD')
)
self._cmake.definitions['WITH_RUNTIME_DEBUG'] = (
ms.msvc_runtime_flag(self).endswith('d')
)
self._cmake.definitions['WITH_NUMA'] = False
self._cmake.definitions['WITH_TSAN'] = False
self._cmake.definitions['WITH_UBSAN'] = False
self._cmake.definitions['WITH_WINDOWS_UTF8_FILENAMES'] = False
self._cmake.definitions['WITH_XPRESS'] = False
self._cmake.definitions['WITH_FALLOCATE'] = True
def build(self):
if ms.is_msvc(self):
file = os.path.join(
self.recipe_folder, '..', 'export_source', 'thirdparty.inc'
)
shutil.copy(file, self.build_folder)
self._configure_cmake()
self._cmake.configure()
self._cmake.build()
def package(self):
self._configure_cmake()
self._cmake.install()
def package_info(self):
self.cpp_info.filenames['cmake_find_package'] = 'RocksDB'
self.cpp_info.filenames['cmake_find_package_multi'] = 'RocksDB'
self.cpp_info.set_property('cmake_file_name', 'RocksDB')
self.cpp_info.names['cmake_find_package'] = 'RocksDB'
self.cpp_info.names['cmake_find_package_multi'] = 'RocksDB'
self.cpp_info.components['librocksdb'].names['cmake_find_package'] = 'rocksdb'
self.cpp_info.components['librocksdb'].names['cmake_find_package_multi'] = 'rocksdb'
self.cpp_info.components['librocksdb'].set_property(
'cmake_target_name', 'RocksDB::rocksdb'
)
self.cpp_info.components['librocksdb'].libs = ['rocksdb']
if self.settings.os == "Windows":
self.cpp_info.components["librocksdb"].system_libs = ["shlwapi", "rpcrt4"]
if self.options.shared:
self.cpp_info.components["librocksdb"].defines = ["ROCKSDB_DLL"]
elif self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["librocksdb"].system_libs = ["pthread", "m"]
if self.options.lite:
self.cpp_info.components["librocksdb"].defines.append("ROCKSDB_LITE")
if self.options.with_gflags:
self.cpp_info.components["librocksdb"].requires.append("gflags::gflags")
if self.options.with_jemalloc:
self.cpp_info.components["librocksdb"].requires.append("jemalloc::jemalloc")
if self.options.with_lz4:
self.cpp_info.components["librocksdb"].requires.append("lz4::lz4")
if self.options.with_snappy:
self.cpp_info.components["librocksdb"].requires.append("snappy::snappy")
if self.options.with_tbb:
self.cpp_info.components["librocksdb"].requires.append("onetbb::onetbb")
if self.options.with_zlib:
self.cpp_info.components["librocksdb"].requires.append("zlib::zlib")
if self.options.with_zstd:
self.cpp_info.components["librocksdb"].requires.append("zstd::zstd")

62
external/rocksdb/thirdparty.inc vendored Normal file
View File

@@ -0,0 +1,62 @@
if(WITH_GFLAGS)
# Config with namespace available since gflags 2.2.2
find_package(gflags REQUIRED)
set(GFLAGS_LIB gflags::gflags)
list(APPEND THIRDPARTY_LIBS ${GFLAGS_LIB})
add_definitions(-DGFLAGS=1)
endif()
if(WITH_SNAPPY)
find_package(Snappy REQUIRED)
add_definitions(-DSNAPPY)
list(APPEND THIRDPARTY_LIBS Snappy::snappy)
endif()
if(WITH_LZ4)
find_package(lz4 REQUIRED)
add_definitions(-DLZ4)
list(APPEND THIRDPARTY_LIBS lz4::lz4)
endif()
if(WITH_ZLIB)
find_package(ZLIB REQUIRED)
add_definitions(-DZLIB)
list(APPEND THIRDPARTY_LIBS ZLIB::ZLIB)
endif()
option(WITH_BZ2 "build with bzip2" OFF)
if(WITH_BZ2)
find_package(BZip2 REQUIRED)
add_definitions(-DBZIP2)
list(APPEND THIRDPARTY_LIBS BZip2::BZip2)
endif()
if(WITH_ZSTD)
find_package(zstd REQUIRED)
add_definitions(-DZSTD)
list(APPEND THIRDPARTY_LIBS zstd::zstd)
endif()
# ================================================== XPRESS ==================================================
# This makes use of built-in Windows API, no additional includes, links to a system lib
if(WITH_XPRESS)
message(STATUS "XPRESS is enabled")
add_definitions(-DXPRESS)
# We are using the implementation provided by the system
list(APPEND SYSTEM_LIBS Cabinet.lib)
else()
message(STATUS "XPRESS is disabled")
endif()
# ================================================== JEMALLOC ==================================================
if(WITH_JEMALLOC)
message(STATUS "JEMALLOC library is enabled")
add_definitions(-DROCKSDB_JEMALLOC -DJEMALLOC_EXPORT= -DJEMALLOC_NO_RENAME)
list(APPEND THIRDPARTY_LIBS jemalloc::jemalloc)
set(ARTIFACT_SUFFIX "_je")
else ()
set(ARTIFACT_SUFFIX "")
message(STATUS "JEMALLOC library is disabled")
endif ()

40
external/snappy/conandata.yml vendored Normal file
View File

@@ -0,0 +1,40 @@
sources:
"1.1.10":
url: "https://github.com/google/snappy/archive/1.1.10.tar.gz"
sha256: "49d831bffcc5f3d01482340fe5af59852ca2fe76c3e05df0e67203ebbe0f1d90"
"1.1.9":
url: "https://github.com/google/snappy/archive/1.1.9.tar.gz"
sha256: "75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7"
"1.1.8":
url: "https://github.com/google/snappy/archive/1.1.8.tar.gz"
sha256: "16b677f07832a612b0836178db7f374e414f94657c138e6993cbfc5dcc58651f"
"1.1.7":
url: "https://github.com/google/snappy/archive/1.1.7.tar.gz"
sha256: "3dfa02e873ff51a11ee02b9ca391807f0c8ea0529a4924afa645fbf97163f9d4"
patches:
"1.1.10":
- patch_file: "patches/1.1.10-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.10-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"
"1.1.9":
- patch_file: "patches/1.1.9-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.9-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"

89
external/snappy/conanfile.py vendored Normal file
View File

@@ -0,0 +1,89 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.scm import Version
import os
required_conan_version = ">=1.54.0"
class SnappyConan(ConanFile):
name = "snappy"
description = "A fast compressor/decompressor"
topics = ("google", "compressor", "decompressor")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/google/snappy"
license = "BSD-3-Clause"
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
}
def export_sources(self):
export_conandata_patches(self)
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SNAPPY_BUILD_TESTS"] = False
if Version(self.version) >= "1.1.8":
tc.variables["SNAPPY_FUZZING_BUILD"] = False
tc.variables["SNAPPY_REQUIRE_AVX"] = False
tc.variables["SNAPPY_REQUIRE_AVX2"] = False
tc.variables["SNAPPY_INSTALL"] = True
if Version(self.version) >= "1.1.9":
tc.variables["SNAPPY_BUILD_BENCHMARKS"] = False
tc.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "Snappy")
self.cpp_info.set_property("cmake_target_name", "Snappy::snappy")
# TODO: back to global scope in conan v2 once cmake_find_package* generators removed
self.cpp_info.components["snappylib"].libs = ["snappy"]
if not self.options.shared:
if self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["snappylib"].system_libs.append("m")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "Snappy"
self.cpp_info.names["cmake_find_package_multi"] = "Snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package"] = "snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package_multi"] = "snappy"
self.cpp_info.components["snappylib"].set_property("cmake_target_name", "Snappy::snappy")

View File

@@ -0,0 +1,13 @@
diff --git a/snappy-stubs-internal.h b/snappy-stubs-internal.h
index 1548ed7..3b4a9f3 100644
--- a/snappy-stubs-internal.h
+++ b/snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#if HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif // HAVE_ATTRIBUTE_ALWAYS_INLINE

View File

@@ -0,0 +1,13 @@
diff --git a/snappy.cc b/snappy.cc
index d414718..e4efb59 100644
--- a/snappy.cc
+++ b/snappy.cc
@@ -1132,7 +1132,7 @@ inline size_t AdvanceToNextTagX86Optimized(const uint8_t** ip_p, size_t* tag) {
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__)
+#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"

View File

@@ -0,0 +1,14 @@
Fixes the following error:
error: inlining failed in call to always_inline size_t snappy::AdvanceToNextTag(const uint8_t**, size_t*): function body can be overwritten at link time
--- snappy-stubs-internal.h
+++ snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#ifdef HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif

View File

@@ -0,0 +1,12 @@
--- CMakeLists.txt
+++ CMakeLists.txt
@@ -69,7 +69,7 @@
- # Use -Werror for clang only.
+if(0)
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
if(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
endif(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
endif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
-
+endif()

View File

@@ -0,0 +1,12 @@
asm clobbers do not work for clang < 9 and apple-clang < 11 (found by SpaceIm)
--- snappy.cc
+++ snappy.cc
@@ -1026,7 +1026,7 @@
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GNUC__) && defined(__x86_64__)
+#if defined(__GNUC__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"

View File

@@ -0,0 +1,20 @@
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -53,8 +53,6 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
add_definitions(-D_HAS_EXCEPTIONS=0)
# Disable RTTI.
- string(REGEX REPLACE "/GR" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GR-")
else(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Use -Wall for clang and gcc.
if(NOT CMAKE_CXX_FLAGS MATCHES "-Wall")
@@ -78,8 +76,6 @@ endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
# Disable RTTI.
- string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
endif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# BUILD_SHARED_LIBS is a standard CMake variable, but we declare it here to make

12
external/soci/conandata.yml vendored Normal file
View File

@@ -0,0 +1,12 @@
sources:
"4.0.3":
url: "https://github.com/SOCI/soci/archive/v4.0.3.tar.gz"
sha256: "4b1ff9c8545c5d802fbe06ee6cd2886630e5c03bf740e269bb625b45cf934928"
patches:
"4.0.3":
- patch_file: "patches/0001-Remove-hardcoded-INSTALL_NAME_DIR-for-relocatable-li.patch"
patch_description: "Generate relocatable libraries on MacOS"
patch_type: "portability"
- patch_file: "patches/0002-Fix-soci_backend.patch"
patch_description: "Fix variable names for dependencies"
patch_type: "conan"

212
external/soci/conanfile.py vendored Normal file
View File

@@ -0,0 +1,212 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.microsoft import is_msvc
from conan.tools.scm import Version
from conan.errors import ConanInvalidConfiguration
import os
required_conan_version = ">=1.55.0"
class SociConan(ConanFile):
name = "soci"
homepage = "https://github.com/SOCI/soci"
url = "https://github.com/conan-io/conan-center-index"
description = "The C++ Database Access Library "
topics = ("mysql", "odbc", "postgresql", "sqlite3")
license = "BSL-1.0"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"empty": [True, False],
"with_sqlite3": [True, False],
"with_db2": [True, False],
"with_odbc": [True, False],
"with_oracle": [True, False],
"with_firebird": [True, False],
"with_mysql": [True, False],
"with_postgresql": [True, False],
"with_boost": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"empty": False,
"with_sqlite3": False,
"with_db2": False,
"with_odbc": False,
"with_oracle": False,
"with_firebird": False,
"with_mysql": False,
"with_postgresql": False,
"with_boost": False,
}
def export_sources(self):
export_conandata_patches(self)
def layout(self):
cmake_layout(self, src_folder="src")
def config_options(self):
if self.settings.os == "Windows":
self.options.rm_safe("fPIC")
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def requirements(self):
if self.options.with_sqlite3:
self.requires("sqlite3/3.41.1")
if self.options.with_odbc and self.settings.os != "Windows":
self.requires("odbc/2.3.11")
if self.options.with_mysql:
self.requires("libmysqlclient/8.0.31")
if self.options.with_postgresql:
self.requires("libpq/14.7")
if self.options.with_boost:
self.requires("boost/1.81.0")
@property
def _minimum_compilers_version(self):
return {
"Visual Studio": "14",
"gcc": "4.8",
"clang": "3.8",
"apple-clang": "8.0"
}
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
compiler = str(self.settings.compiler)
compiler_version = Version(self.settings.compiler.version.value)
if compiler not in self._minimum_compilers_version:
self.output.warning("{} recipe lacks information about the {} compiler support.".format(self.name, self.settings.compiler))
elif compiler_version < self._minimum_compilers_version[compiler]:
raise ConanInvalidConfiguration("{} requires a {} version >= {}".format(self.name, compiler, compiler_version))
prefix = "Dependencies for"
message = "not configured in this conan package."
if self.options.with_db2:
# self.requires("db2/0.0.0") # TODO add support for db2
raise ConanInvalidConfiguration("{} DB2 {} ".format(prefix, message))
if self.options.with_oracle:
# self.requires("oracle_db/0.0.0") # TODO add support for oracle
raise ConanInvalidConfiguration("{} ORACLE {} ".format(prefix, message))
if self.options.with_firebird:
# self.requires("firebird/0.0.0") # TODO add support for firebird
raise ConanInvalidConfiguration("{} firebird {} ".format(prefix, message))
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SOCI_SHARED"] = self.options.shared
tc.variables["SOCI_STATIC"] = not self.options.shared
tc.variables["SOCI_TESTS"] = False
tc.variables["SOCI_CXX11"] = True
tc.variables["SOCI_EMPTY"] = self.options.empty
tc.variables["WITH_SQLITE3"] = self.options.with_sqlite3
tc.variables["WITH_DB2"] = self.options.with_db2
tc.variables["WITH_ODBC"] = self.options.with_odbc
tc.variables["WITH_ORACLE"] = self.options.with_oracle
tc.variables["WITH_FIREBIRD"] = self.options.with_firebird
tc.variables["WITH_MYSQL"] = self.options.with_mysql
tc.variables["WITH_POSTGRESQL"] = self.options.with_postgresql
tc.variables["WITH_BOOST"] = self.options.with_boost
tc.generate()
deps = CMakeDeps(self)
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "LICENSE_1_0.txt", dst=os.path.join(self.package_folder, "licenses"), src=self.source_folder)
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "SOCI")
target_suffix = "" if self.options.shared else "_static"
lib_prefix = "lib" if is_msvc(self) and not self.options.shared else ""
version = Version(self.version)
lib_suffix = "_{}_{}".format(version.major, version.minor) if self.settings.os == "Windows" else ""
# soci_core
self.cpp_info.components["soci_core"].set_property("cmake_target_name", "SOCI::soci_core{}".format(target_suffix))
self.cpp_info.components["soci_core"].libs = ["{}soci_core{}".format(lib_prefix, lib_suffix)]
if self.options.with_boost:
self.cpp_info.components["soci_core"].requires.append("boost::boost")
# soci_empty
if self.options.empty:
self.cpp_info.components["soci_empty"].set_property("cmake_target_name", "SOCI::soci_empty{}".format(target_suffix))
self.cpp_info.components["soci_empty"].libs = ["{}soci_empty{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_empty"].requires = ["soci_core"]
# soci_sqlite3
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].set_property("cmake_target_name", "SOCI::soci_sqlite3{}".format(target_suffix))
self.cpp_info.components["soci_sqlite3"].libs = ["{}soci_sqlite3{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_sqlite3"].requires = ["soci_core", "sqlite3::sqlite3"]
# soci_odbc
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].set_property("cmake_target_name", "SOCI::soci_odbc{}".format(target_suffix))
self.cpp_info.components["soci_odbc"].libs = ["{}soci_odbc{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_odbc"].requires = ["soci_core"]
if self.settings.os == "Windows":
self.cpp_info.components["soci_odbc"].system_libs.append("odbc32")
else:
self.cpp_info.components["soci_odbc"].requires.append("odbc::odbc")
# soci_mysql
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].set_property("cmake_target_name", "SOCI::soci_mysql{}".format(target_suffix))
self.cpp_info.components["soci_mysql"].libs = ["{}soci_mysql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_mysql"].requires = ["soci_core", "libmysqlclient::libmysqlclient"]
# soci_postgresql
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].set_property("cmake_target_name", "SOCI::soci_postgresql{}".format(target_suffix))
self.cpp_info.components["soci_postgresql"].libs = ["{}soci_postgresql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_postgresql"].requires = ["soci_core", "libpq::libpq"]
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "SOCI"
self.cpp_info.names["cmake_find_package_multi"] = "SOCI"
self.cpp_info.components["soci_core"].names["cmake_find_package"] = "soci_core{}".format(target_suffix)
self.cpp_info.components["soci_core"].names["cmake_find_package_multi"] = "soci_core{}".format(target_suffix)
if self.options.empty:
self.cpp_info.components["soci_empty"].names["cmake_find_package"] = "soci_empty{}".format(target_suffix)
self.cpp_info.components["soci_empty"].names["cmake_find_package_multi"] = "soci_empty{}".format(target_suffix)
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package"] = "soci_sqlite3{}".format(target_suffix)
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package_multi"] = "soci_sqlite3{}".format(target_suffix)
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].names["cmake_find_package"] = "soci_odbc{}".format(target_suffix)
self.cpp_info.components["soci_odbc"].names["cmake_find_package_multi"] = "soci_odbc{}".format(target_suffix)
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].names["cmake_find_package"] = "soci_mysql{}".format(target_suffix)
self.cpp_info.components["soci_mysql"].names["cmake_find_package_multi"] = "soci_mysql{}".format(target_suffix)
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].names["cmake_find_package"] = "soci_postgresql{}".format(target_suffix)
self.cpp_info.components["soci_postgresql"].names["cmake_find_package_multi"] = "soci_postgresql{}".format(target_suffix)

View File

@@ -0,0 +1,39 @@
From d491bf7b5040d314ffd0c6310ba01f78ff44c85e Mon Sep 17 00:00:00 2001
From: Rasmus Thomsen <rasmus.thomsen@dampsoft.de>
Date: Fri, 14 Apr 2023 09:16:29 +0200
Subject: [PATCH] Remove hardcoded INSTALL_NAME_DIR for relocatable libraries
on MacOS
---
cmake/SociBackend.cmake | 2 +-
src/core/CMakeLists.txt | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 5d4ef0df..39fe1f77 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -171,7 +171,7 @@ macro(soci_backend NAME)
set_target_properties(${THIS_BACKEND_TARGET}
PROPERTIES
SOVERSION ${${PROJECT_NAME}_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib)
+ )
if(APPLE)
set_target_properties(${THIS_BACKEND_TARGET}
diff --git a/src/core/CMakeLists.txt b/src/core/CMakeLists.txt
index 3e7deeae..f9eae564 100644
--- a/src/core/CMakeLists.txt
+++ b/src/core/CMakeLists.txt
@@ -59,7 +59,6 @@ if (SOCI_SHARED)
PROPERTIES
VERSION ${SOCI_VERSION}
SOVERSION ${SOCI_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib
CLEAN_DIRECT_OUTPUT 1)
endif()
--
2.25.1

View File

@@ -0,0 +1,24 @@
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 0a664667..3fa2ed95 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -31,14 +31,13 @@ macro(soci_backend_deps_found NAME DEPS SUCCESS)
if(NOT DEPEND_FOUND)
list(APPEND DEPS_NOT_FOUND ${dep})
else()
- string(TOUPPER "${dep}" DEPU)
- if( ${DEPU}_INCLUDE_DIR )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIR})
+ if( ${dep}_INCLUDE_DIR )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIR})
endif()
- if( ${DEPU}_INCLUDE_DIRS )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIRS})
+ if( ${dep}_INCLUDE_DIRS )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIRS})
endif()
- list(APPEND DEPS_LIBRARIES ${${DEPU}_LIBRARIES})
+ list(APPEND DEPS_LIBRARIES ${${dep}_LIBRARIES})
endif()
endforeach()

194
external/wasmedge/conandata.yml vendored Normal file
View File

@@ -0,0 +1,194 @@
sources:
"0.13.5":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-windows.zip"
sha256: "db533289ba26ec557b5193593c9ed03db75be3bc7aa737e2caa5b56b8eef888a"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-manylinux2014_x86_64.tar.gz"
sha256: "3686e0226871bf17b62ec57e1c15778c2947834b90af0dfad14f2e0202bf9284"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-manylinux2014_aarch64.tar.gz"
sha256: "472de88e0257c539c120b33fdd1805e1e95063121acc2df1d5626e4676b93529"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-darwin_x86_64.tar.gz"
sha256: "b7fdfaf59805951241f47690917b501ddfa06d9b6f7e0262e44e784efe4a7b33"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-darwin_arm64.tar.gz"
sha256: "acc93721210294ced0887352f360e42e46dcc05332e6dd78c1452fb3a35d5255"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Android:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-android_aarch64.tar.gz"
sha256: "59a0d68a0c7368b51cc65cb5a44a68037d79fd449883ef42792178d57c8784a8"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.11.2":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-windows.zip"
sha256: "ca49b98c0cf5f187e08c3ba71afc8d71365fde696f10b4219379a4a4d1a91e6d"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-manylinux2014_x86_64.tar.gz"
sha256: "784bf1eb25928e2cf02aa88e9372388fad682b4a188485da3cd9162caeedf143"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-manylinux2014_aarch64.tar.gz"
sha256: "a2766a4c1edbaea298a30e5431a4e795003a10d8398a933d923f23d4eb4fa5d1"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-darwin_x86_64.tar.gz"
sha256: "aedec53f29b1e0b657e46e67dba3e2f32a2924f4d9136e60073ea1aba3073e70"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-darwin_arm64.tar.gz"
sha256: "fe391df90e1eee69cf1e976f5ddf60c20f29b651710aaa4fc03e2ab4fe52c0d3"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Android:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-android_aarch64.tar.gz"
sha256: "69e308f5927c753b2bb5639569d10219b60598174d8b304bdf310093fd7b2464"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.11.1":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-windows.zip"
sha256: "c86f6384555a0484a5dd81faba5636bba78f5e3d6eaf627d880e34843f9e24bf"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-manylinux2014_x86_64.tar.gz"
sha256: "76ce4ea0eb86adfa52c73f6c6b44383626d94990e0923cae8b1e6f060ef2bf5b"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-manylinux2014_aarch64.tar.gz"
sha256: "cb9ea32932360463991cfda80e09879b2cf6c69737f12f3f2b371cd0af4e9ce8"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-darwin_x86_64.tar.gz"
sha256: "56df2b00669c25b8143ea2c17370256cd6a33f3b316d3b47857dd38d603cb69a"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-darwin_arm64.tar.gz"
sha256: "82f7da1a7a36ec1923fb045193784dd090a03109e84da042af97297205a71f08"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Android:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-android_aarch64.tar.gz"
sha256: "af8694e93bf72ac5506450d4caebccc340fbba254dca3d58ec0712e96ec9dedd"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.10.0":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.10.0/WasmEdge-0.10.0-windows.zip"
sha256: "63b8a02cced52a723aa283dba02bbe887656256ecca69bb0fff17872c0fb5ebc"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.10.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.10.0/WasmEdge-0.10.0-manylinux2014_x86_64.tar.gz"
sha256: "4c1ffca9fd8cbdeb8f0951ddaffbbefe81ae123d5b80f61e80ea8d9b56853cde"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.10.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.10.0/WasmEdge-0.10.0-manylinux2014_aarch64.tar.gz"
sha256: "c000bf96d0a73a1d360659246c0806c2ce78620b6f78c1147fbf9e2be0280bd9"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.10.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.9.1":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.1/WasmEdge-0.9.1-windows.zip"
sha256: "68240d8aee23d44db5cc252d8c1cf5d0c77ab709a122af2747a4b836ba461671"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.1/WasmEdge-0.9.1-manylinux2014_x86_64.tar.gz"
sha256: "bcb6fe3d6e30db0d0aa267ec3bd9b7248f8c8c387620cef4049d682d293c8371"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.1/WasmEdge-0.9.1-manylinux2014_aarch64.tar.gz"
sha256: "515bcac3520cd546d9d14372b7930ab48b43f1c5dc258a9f61a82b22c0107eef"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.9.0":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-windows.zip"
sha256: "f81bfea4cf09053510e3e74c16c1ee010fc93def8a7e78744443b950f0011c3b"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-manylinux2014_x86_64.tar.gz"
sha256: "27847f15e4294e707486458e857d7cb11806481bb67a26f076a717a1446827ed"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-manylinux2014_aarch64.tar.gz"
sha256: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-darwin_arm64.tar.gz"
sha256: "236a407a646f746ab78a1d0a39fa4e85fe28eae219b1635ba49f908d7944686d"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"

92
external/wasmedge/conanfile.py vendored Normal file
View File

@@ -0,0 +1,92 @@
from conan import ConanFile
from conan.tools.files import get, copy, download
from conan.tools.scm import Version
from conan.errors import ConanInvalidConfiguration
import os
required_conan_version = ">=1.53.0"
class WasmedgeConan(ConanFile):
name = "wasmedge"
description = ("WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime"
"for cloud native, edge, and decentralized applications."
"It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.")
license = "Apache-2.0"
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/WasmEdge/WasmEdge/"
topics = ("webassembly", "wasm", "wasi", "emscripten")
package_type = "shared-library"
settings = "os", "arch", "compiler", "build_type"
@property
def _compiler_alias(self):
return {
"Visual Studio": "Visual Studio",
# "Visual Studio": "msvc",
"msvc": "msvc",
}.get(str(self.info.settings.compiler), "gcc")
def configure(self):
self.settings.compiler.rm_safe("libcxx")
self.settings.compiler.rm_safe("cppstd")
def validate(self):
try:
self.conan_data["sources"][self.version][str(self.settings.os)][str(self.settings.arch)][self._compiler_alias]
except KeyError:
raise ConanInvalidConfiguration("Binaries for this combination of version/os/arch/compiler are not available")
def package_id(self):
del self.info.settings.compiler.version
self.info.settings.compiler = self._compiler_alias
def build(self):
# This is packaging binaries so the download needs to be in build
get(self, **self.conan_data["sources"][self.version][str(self.settings.os)][str(self.settings.arch)][self._compiler_alias][0],
destination=self.source_folder, strip_root=True)
download(self, filename="LICENSE",
**self.conan_data["sources"][self.version][str(self.settings.os)][str(self.settings.arch)][self._compiler_alias][1])
def package(self):
copy(self, pattern="*.h", dst=os.path.join(self.package_folder, "include"), src=os.path.join(self.source_folder, "include"), keep_path=True)
copy(self, pattern="*.inc", dst=os.path.join(self.package_folder, "include"), src=os.path.join(self.source_folder, "include"), keep_path=True)
srclibdir = os.path.join(self.source_folder, "lib64" if self.settings.os == "Linux" else "lib")
srcbindir = os.path.join(self.source_folder, "bin")
dstlibdir = os.path.join(self.package_folder, "lib")
dstbindir = os.path.join(self.package_folder, "bin")
if Version(self.version) >= "0.11.1":
copy(self, pattern="wasmedge.lib", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="wasmedge.dll", src=srcbindir, dst=dstbindir, keep_path=False)
copy(self, pattern="libwasmedge.so*", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="libwasmedge*.dylib", src=srclibdir, dst=dstlibdir, keep_path=False)
else:
copy(self, pattern="wasmedge_c.lib", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="wasmedge_c.dll", src=srcbindir, dst=dstbindir, keep_path=False)
copy(self, pattern="libwasmedge_c.so*", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="libwasmedge_c*.dylib", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="wasmedge*", src=srcbindir, dst=dstbindir, keep_path=False)
copy(self, pattern="LICENSE", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"), keep_path=False)
def package_info(self):
if Version(self.version) >= "0.11.1":
self.cpp_info.libs = ["wasmedge"]
else:
self.cpp_info.libs = ["wasmedge_c"]
bindir = os.path.join(self.package_folder, "bin")
self.output.info("Appending PATH environment variable: {}".format(bindir))
self.env_info.PATH.append(bindir)
if self.settings.os == "Windows":
self.cpp_info.system_libs.append("ws2_32")
self.cpp_info.system_libs.append("wsock32")
self.cpp_info.system_libs.append("shlwapi")
if self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.system_libs.append("m")
self.cpp_info.system_libs.append("dl")
self.cpp_info.system_libs.append("rt")
self.cpp_info.system_libs.append("pthread")

145
fix_final_tests.py Normal file
View File

@@ -0,0 +1,145 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_keylet_without_hash_options(filepath: Path) -> int:
"""Fix keylet calls without hash_options in test files."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
except Exception as e:
print(f"Error reading {filepath}: {e}")
return 0
original_content = content
replacements = 0
# Pattern to match keylet calls without hash_options
# E.g., keylet::ownerDir(acct.id()) or keylet::account(alice)
keylet_funcs = {
'ownerDir': 'KEYLET_OWNER_DIR',
'account': 'KEYLET_ACCOUNT',
'signers': 'KEYLET_SIGNERS',
'offer': 'KEYLET_OFFER',
'line': 'KEYLET_TRUSTLINE',
'check': 'KEYLET_CHECK',
'escrow': 'KEYLET_ESCROW',
'payChan': 'KEYLET_PAYCHAN',
'depositPreauth': 'KEYLET_DEPOSIT_PREAUTH',
'ticket': 'KEYLET_TICKET',
'nftoffer': 'KEYLET_NFT_OFFER',
'fees': 'KEYLET_FEES',
'amendments': 'KEYLET_AMENDMENTS',
'negativeUNL': 'KEYLET_NEGATIVE_UNL',
'skip': 'KEYLET_SKIP_LIST',
'hook': 'KEYLET_HOOK',
'hookDefinition': 'KEYLET_HOOK_DEFINITION',
'hookState': 'KEYLET_HOOK_STATE',
'hookStateDir': 'KEYLET_HOOK_STATE_DIR',
'emittedDir': 'KEYLET_EMITTED_DIR',
'emittedTxn': 'KEYLET_EMITTED_TXN',
'import_vlseq': 'KEYLET_IMPORT_VLSEQ',
'unchecked': 'KEYLET_UNCHECKED',
'uritoken': 'KEYLET_URI_TOKEN',
'nftpage': 'KEYLET_NFT_PAGE',
'nftpage_min': 'KEYLET_NFT_PAGE',
'nftpage_max': 'KEYLET_NFT_PAGE',
'nft_buys': 'KEYLET_NFT_BUYS',
'nft_sells': 'KEYLET_NFT_SELLS',
'child': 'KEYLET_CHILD',
'page': 'KEYLET_DIR_PAGE',
'UNLReport': 'KEYLET_UNL_REPORT',
'book': 'KEYLET_BOOK'
}
for func, classifier in keylet_funcs.items():
# Pattern to match keylet::<func>(...) where the args don't start with hash_options
pattern = re.compile(
rf'\bkeylet::{re.escape(func)}\s*\(\s*(?!hash_options)',
re.MULTILINE
)
matches = list(pattern.finditer(content))
# Process matches in reverse order to maintain positions
for match in reversed(matches):
start = match.start()
# Find the matching closing parenthesis
paren_count = 1
pos = match.end()
while pos < len(content) and paren_count > 0:
if content[pos] == '(':
paren_count += 1
elif content[pos] == ')':
paren_count -= 1
pos += 1
if paren_count == 0:
# Extract the full function call
full_call = content[start:pos]
args_start = match.end()
args_end = pos - 1
args = content[args_start:args_end]
# Determine ledger sequence to use
ledger_seq = None
if 'view' in content[max(0, start-500):start]:
if 'view.seq()' in content[max(0, start-500):start]:
ledger_seq = '(view.seq())'
else:
ledger_seq = '0'
elif 'env' in content[max(0, start-500):start]:
if 'env.current()' in content[max(0, start-500):start]:
ledger_seq = '(env.current()->seq())'
else:
ledger_seq = '0'
else:
ledger_seq = '0'
# Build the new call
new_call = f'keylet::{func}(hash_options{{{ledger_seq}, {classifier}}}, {args})'
# Replace in content
content = content[:start] + new_call + content[pos:]
replacements += 1
if replacements > 0 and content != original_content:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(content)
return replacements
return 0
def main():
project_root = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc")
# Files to fix
test_files = [
"src/test/app/URIToken_test.cpp",
"src/test/app/Touch_test.cpp",
"src/test/app/SetRemarks_test.cpp",
"src/test/app/Remit_test.cpp",
"src/test/app/HookNegatives_test.cpp",
"src/test/app/Hook_test.cpp",
"src/test/app/NFTokenBurn_test.cpp",
"src/test/app/NFToken_test.cpp",
"src/test/app/TxMeta_test.cpp",
"src/test/app/AccountTxPaging_test.cpp"
]
total_replacements = 0
for rel_path in test_files:
filepath = project_root / rel_path
if filepath.exists():
replacements = fix_keylet_without_hash_options(filepath)
if replacements > 0:
print(f"Fixed {rel_path}: {replacements} replacements")
total_replacements += replacements
print(f"\nTotal replacements: {total_replacements}")
if __name__ == "__main__":
main()

130
fix_remaining_tests.py Normal file
View File

@@ -0,0 +1,130 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_file(filepath: Path) -> int:
"""Fix various hash_options issues in a single file."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
except Exception as e:
print(f"Error reading {filepath}: {e}")
return 0
original_content = content
replacements = 0
# Fix duplicate keylet calls with hash_options
# Pattern: keylet::X(keylet::X(hash_options{...}, ...))
pattern1 = re.compile(
r'keylet::(\w+)\s*\(\s*keylet::\1\s*\(\s*hash_options\s*\{[^}]+\}[^)]*\)\s*\)',
re.MULTILINE
)
def fix_duplicate(match):
nonlocal replacements
# Extract just the inner keylet call
inner = match.group(0)
# Find the position of the second keylet::
second_keylet_pos = inner.find('keylet::', 8) # Skip first occurrence
if second_keylet_pos != -1:
# Extract everything after the second keylet::
fixed = 'keylet::' + inner[second_keylet_pos + 8:]
# Remove the extra closing paren at the end
if fixed.endswith('))'):
fixed = fixed[:-1]
replacements += 1
return fixed
return match.group(0)
content = pattern1.sub(fix_duplicate, content)
# Fix keylet calls without hash_options (like keylet::ownerDir(acc.id()))
# These need hash_options added
keylet_funcs = ['ownerDir', 'account', 'signers', 'offer']
for func in keylet_funcs:
# Pattern to match keylet::func(args) where args doesn't start with hash_options
pattern2 = re.compile(
rf'keylet::{func}\s*\(\s*(?!hash_options)([^)]+)\)',
re.MULTILINE
)
def add_hash_options(match):
nonlocal replacements
args = match.group(1).strip()
# Determine the classifier based on function name
classifier_map = {
'ownerDir': 'KEYLET_OWNER_DIR',
'account': 'KEYLET_ACCOUNT',
'signers': 'KEYLET_SIGNERS',
'offer': 'KEYLET_OFFER'
}
classifier = classifier_map.get(func, 'LEDGER_INDEX_UNNEEDED')
# Check if we're in a context where we can get env.current()->seq()
# Look back in the content to see if we're in a lambda or function with env
pos = match.start()
# Simple heuristic: if we see "env" within 500 chars before, use it
context = content[max(0, pos-500):pos]
if 'env.' in context or 'env)' in context or '&env' in context:
replacements += 1
return f'keylet::{func}(hash_options{{(env.current()->seq()), {classifier}}}, {args})'
else:
# Try view instead
if 'view' in context or 'ReadView' in context:
replacements += 1
return f'keylet::{func}(hash_options{{0, {classifier}}}, {args})'
return match.group(0)
content = pattern2.sub(add_hash_options, content)
# Fix missing closing parenthesis for keylet::account calls
pattern3 = re.compile(
r'(keylet::account\s*\(\s*hash_options\s*\{[^}]+\}\s*,\s*\w+(?:\.\w+\(\))?\s*)(\);)',
re.MULTILINE
)
def fix_paren(match):
nonlocal replacements
replacements += 1
return match.group(1) + '));'
content = pattern3.sub(fix_paren, content)
if replacements > 0 and content != original_content:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(content)
return replacements
return 0
def main():
project_root = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc")
# Files to fix
test_files = [
"src/test/app/RCLValidations_test.cpp",
"src/test/app/PayStrand_test.cpp",
"src/test/app/PayChan_test.cpp",
"src/test/app/ClaimReward_test.cpp",
"src/test/app/Import_test.cpp",
"src/test/app/LedgerReplay_test.cpp",
"src/test/app/Offer_test.cpp"
]
total_replacements = 0
for rel_path in test_files:
filepath = project_root / rel_path
if filepath.exists():
replacements = fix_file(filepath)
if replacements > 0:
print(f"Fixed {rel_path}: {replacements} replacements")
total_replacements += replacements
print(f"\nTotal replacements: {total_replacements}")
if __name__ == "__main__":
main()

43
fix_sethook_ledger_seq.py Normal file
View File

@@ -0,0 +1,43 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_sethook_ledger_sequences():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/SetHook_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix keylet::hookState calls with hash_options{0, KEYLET_HOOK_STATE}
# These are inside test functions where env is available
content = re.sub(
r'keylet::hookState\(hash_options\{0, KEYLET_HOOK_STATE\}',
r'keylet::hookState(hash_options{(env.current()->seq()), KEYLET_HOOK_STATE}',
content
)
# Fix keylet::hookStateDir calls with hash_options{0, KEYLET_HOOK_STATE_DIR}
content = re.sub(
r'keylet::hookStateDir\(hash_options\{0, KEYLET_HOOK_STATE_DIR\}',
r'keylet::hookStateDir(hash_options{(env.current()->seq()), KEYLET_HOOK_STATE_DIR}',
content
)
# The sha512Half_s and sha512Half calls with LEDGER_INDEX_UNNEEDED are CORRECT
# because they're hashing non-ledger data (WASM bytecode, nonces, etc.)
# So we leave those alone.
# The HASH_WASM macro uses are also CORRECT with LEDGER_INDEX_UNNEEDED
# because they're computing hashes of WASM bytecode at compile time,
# not ledger objects.
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
print("Note: sha512Half* calls with LEDGER_INDEX_UNNEEDED are correct (hashing non-ledger data)")
print("Note: HASH_WASM macro uses are correct (compile-time WASM bytecode hashing)")
if __name__ == "__main__":
fix_sethook_ledger_sequences()

40
fix_sethook_test.py Normal file
View File

@@ -0,0 +1,40 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_sethook_test():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/SetHook_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix keylet::hook(Account(...).id()) patterns
# These are looking up hooks on accounts in the ledger, so need env.current()->seq()
content = re.sub(
r'env\.le\(keylet::hook\(Account\(([^)]+)\)\.id\(\)\)\)',
r'env.le(keylet::hook(hash_options{(env.current()->seq()), KEYLET_HOOK}, Account(\1).id()))',
content
)
# Fix other keylet::hook patterns with just an ID
content = re.sub(
r'env\.le\(keylet::hook\((\w+)\.id\(\)\)\)',
r'env.le(keylet::hook(hash_options{(env.current()->seq()), KEYLET_HOOK}, \1.id()))',
content
)
# Fix patterns like keylet::hook(alice)
content = re.sub(
r'env\.le\(keylet::hook\((\w+)\)\)',
r'env.le(keylet::hook(hash_options{(env.current()->seq()), KEYLET_HOOK}, \1))',
content
)
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
if __name__ == "__main__":
fix_sethook_test()

View File

@@ -0,0 +1,47 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_sethook_test():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/SetHook_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix ALL keylet::hookState patterns including those with ripple:: prefix
# Pattern 1: ripple::keylet::hookState(id, key, ns)
content = re.sub(
r'ripple::keylet::hookState\(([^,]+),\s*([^,]+),\s*([^)]+)\)',
r'ripple::keylet::hookState(hash_options{(env.current()->seq()), KEYLET_HOOK_STATE}, \1, \2, \3)',
content
)
# Pattern 2: keylet::hookState without ripple:: prefix (if not already fixed)
content = re.sub(
r'(?<!ripple::)keylet::hookState\(([^h][^,]+),\s*([^,]+),\s*([^)]+)\)',
r'keylet::hookState(hash_options{(env.current()->seq()), KEYLET_HOOK_STATE}, \1, \2, \3)',
content
)
# Fix ripple::keylet::hookStateDir patterns
content = re.sub(
r'ripple::keylet::hookStateDir\(([^,]+),\s*([^)]+)\)',
r'ripple::keylet::hookStateDir(hash_options{(env.current()->seq()), KEYLET_HOOK_STATE_DIR}, \1, \2)',
content
)
# Fix ripple::keylet::hook patterns
content = re.sub(
r'ripple::keylet::hook\(([^)]+)\)(?!\))',
r'ripple::keylet::hook(hash_options{(env.current()->seq()), KEYLET_HOOK}, \1)',
content
)
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
if __name__ == "__main__":
fix_sethook_test()

54
fix_sethook_test_v2.py Normal file
View File

@@ -0,0 +1,54 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_sethook_test():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/SetHook_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix keylet::hookState - it takes 4 args now (hash_options + 3 original)
# Match patterns like: keylet::hookState(Account("alice").id(), key, ns)
content = re.sub(
r'keylet::hookState\(Account\(([^)]+)\)\.id\(\),\s*([^,]+),\s*([^)]+)\)',
r'keylet::hookState(hash_options{0, KEYLET_HOOK_STATE}, Account(\1).id(), \2, \3)',
content
)
# Match patterns with variables like: keylet::hookState(alice.id(), key, ns)
content = re.sub(
r'keylet::hookState\((\w+)\.id\(\),\s*([^,]+),\s*([^)]+)\)',
r'keylet::hookState(hash_options{0, KEYLET_HOOK_STATE}, \1.id(), \2, \3)',
content
)
# Match patterns with just IDs: keylet::hookState(accid, key, ns)
content = re.sub(
r'keylet::hookState\((\w+),\s*([^,]+),\s*([^)]+)\)(?!\))',
r'keylet::hookState(hash_options{0, KEYLET_HOOK_STATE}, \1, \2, \3)',
content
)
# Fix keylet::hookStateDir patterns
content = re.sub(
r'keylet::hookStateDir\((\w+),\s*([^,]+),\s*([^)]+)\)',
r'keylet::hookStateDir(hash_options{0, KEYLET_HOOK_STATE_DIR}, \1, \2, \3)',
content
)
# Fix sha512Half_s calls without hash_options
content = re.sub(
r'sha512Half_s\(ripple::Slice\(',
r'sha512Half_s(hash_options{0, LEDGER_INDEX_UNNEEDED}, ripple::Slice(',
content
)
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
if __name__ == "__main__":
fix_sethook_test()

41
fix_sethook_test_v3.py Normal file
View File

@@ -0,0 +1,41 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_sethook_test():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/SetHook_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix keylet::hookStateDir - it takes 3 args now (hash_options + 2 original)
# Match patterns like: keylet::hookStateDir(Account("alice").id(), ns)
content = re.sub(
r'keylet::hookStateDir\(Account\(([^)]+)\)\.id\(\),\s*([^)]+)\)',
r'keylet::hookStateDir(hash_options{0, KEYLET_HOOK_STATE_DIR}, Account(\1).id(), \2)',
content
)
# Match with variables
content = re.sub(
r'keylet::hookStateDir\((\w+)\.id\(\),\s*([^)]+)\)',
r'keylet::hookStateDir(hash_options{0, KEYLET_HOOK_STATE_DIR}, \1.id(), \2)',
content
)
# Fix multiline hookStateDir patterns
content = re.sub(
r'keylet::hookStateDir\(\s*\n\s*Account\(([^)]+)\)\.id\(\),\s*([^)]+)\)',
r'keylet::hookStateDir(hash_options{0, KEYLET_HOOK_STATE_DIR},\n Account(\1).id(), \2)',
content,
flags=re.MULTILINE
)
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
if __name__ == "__main__":
fix_sethook_test()

120
fix_test_hash_options.py Normal file
View File

@@ -0,0 +1,120 @@
#!/usr/bin/env python3
import re
import os
import sys
from pathlib import Path
def fix_test_files(root_dir: str):
"""Fix hash_options in test files by adding appropriate classifiers."""
# Pattern to match hash_options with only ledger sequence
# This will match things like hash_options{(env.current()->seq())}
pattern = re.compile(
r'(keylet::(\w+)\s*\([^)]*\s*)hash_options\s*\{\s*\(([^}]+)\)\s*\}',
re.MULTILINE
)
# Mapping of keylet functions to their classifiers
keylet_classifiers = {
'account': 'KEYLET_ACCOUNT',
'ownerDir': 'KEYLET_OWNER_DIR',
'signers': 'KEYLET_SIGNERS',
'offer': 'KEYLET_OFFER',
'check': 'KEYLET_CHECK',
'depositPreauth': 'KEYLET_DEPOSIT_PREAUTH',
'escrow': 'KEYLET_ESCROW',
'payChan': 'KEYLET_PAYCHAN',
'line': 'KEYLET_TRUSTLINE',
'ticket': 'KEYLET_TICKET',
'hook': 'KEYLET_HOOK',
'hookDefinition': 'KEYLET_HOOK_DEFINITION',
'hookState': 'KEYLET_HOOK_STATE',
'hookStateDir': 'KEYLET_HOOK_STATE_DIR',
'child': 'KEYLET_CHILD',
'page': 'KEYLET_DIR_PAGE',
'nftpage_min': 'KEYLET_NFT_PAGE',
'nftpage_max': 'KEYLET_NFT_PAGE',
'nftoffer': 'KEYLET_NFT_OFFER',
'nft_buys': 'KEYLET_NFT_BUYS',
'nft_sells': 'KEYLET_NFT_SELLS',
'uritoken': 'KEYLET_URI_TOKEN',
}
files_fixed = 0
total_replacements = 0
# Find all test files
test_dir = Path(root_dir) / "src" / "test"
for filepath in test_dir.rglob("*.cpp"):
try:
with open(filepath, 'r', encoding='utf-8') as f:
original_content = f.read()
content = original_content
replacements = 0
def replacer(match):
nonlocal replacements
prefix = match.group(1)
keylet_func = match.group(2)
ledger_expr = match.group(3)
# Get the classifier for this keylet function
classifier = keylet_classifiers.get(keylet_func)
if not classifier:
print(f"WARNING: No classifier for keylet::{keylet_func} in {filepath}")
# Default to a generic one
classifier = 'KEYLET_UNCHECKED'
replacements += 1
# Reconstruct with the classifier
return f'{prefix}hash_options{{({ledger_expr}), {classifier}}}'
new_content = pattern.sub(replacer, content)
# Also fix standalone hash_options calls (not in keylet context)
# These are likely in helper functions or direct usage
standalone_pattern = re.compile(
r'(?<!keylet::\w{1,50}\s{0,10}\([^)]*\s{0,10})hash_options\s*\{\s*\(([^}]+)\)\s*\}',
re.MULTILINE
)
def standalone_replacer(match):
nonlocal replacements
ledger_expr = match.group(1)
replacements += 1
# For standalone ones in tests, use a test context
return f'hash_options{{({ledger_expr}), LEDGER_INDEX_UNNEEDED}}'
new_content = standalone_pattern.sub(standalone_replacer, new_content)
if replacements > 0:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(new_content)
rel_path = filepath.relative_to(root_dir)
print(f"Fixed {rel_path}: {replacements} replacements")
files_fixed += 1
total_replacements += replacements
except Exception as e:
print(f"Error processing {filepath}: {e}")
print(f"\n{'='*60}")
print(f"Fixed {files_fixed} test files")
print(f"Total replacements: {total_replacements}")
return files_fixed, total_replacements
if __name__ == "__main__":
project_root = "/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc"
print("Fixing hash_options in test files...")
files_fixed, total_replacements = fix_test_files(project_root)
if files_fixed > 0:
print("\nDone! Now rebuild to see if there are any remaining issues.")
else:
print("\nNo test files needed fixing.")

View File

@@ -0,0 +1,130 @@
#!/usr/bin/env python3
import re
import os
import sys
from pathlib import Path
def fix_test_files(root_dir: str):
"""Fix hash_options in test files by adding appropriate classifiers."""
# Mapping of keylet functions to their classifiers
keylet_classifiers = {
'account': 'KEYLET_ACCOUNT',
'ownerDir': 'KEYLET_OWNER_DIR',
'signers': 'KEYLET_SIGNERS',
'offer': 'KEYLET_OFFER',
'check': 'KEYLET_CHECK',
'depositPreauth': 'KEYLET_DEPOSIT_PREAUTH',
'escrow': 'KEYLET_ESCROW',
'payChan': 'KEYLET_PAYCHAN',
'line': 'KEYLET_TRUSTLINE',
'ticket': 'KEYLET_TICKET',
'hook': 'KEYLET_HOOK',
'hookDefinition': 'KEYLET_HOOK_DEFINITION',
'hookState': 'KEYLET_HOOK_STATE',
'hookStateDir': 'KEYLET_HOOK_STATE_DIR',
'child': 'KEYLET_CHILD',
'page': 'KEYLET_DIR_PAGE',
'nftpage_min': 'KEYLET_NFT_PAGE',
'nftpage_max': 'KEYLET_NFT_PAGE',
'nftoffer': 'KEYLET_NFT_OFFER',
'nft_buys': 'KEYLET_NFT_BUYS',
'nft_sells': 'KEYLET_NFT_SELLS',
'uritoken': 'KEYLET_URI_TOKEN',
'fees': 'KEYLET_FEES',
'amendments': 'KEYLET_AMENDMENTS',
'negativeUNL': 'KEYLET_NEGATIVE_UNL',
'skip': 'KEYLET_SKIP_LIST',
'unchecked': 'KEYLET_UNCHECKED',
'import_vlseq': 'KEYLET_IMPORT_VLSEQ',
'UNLReport': 'KEYLET_UNL_REPORT',
'emittedDir': 'KEYLET_EMITTED_DIR',
'emittedTxn': 'KEYLET_EMITTED_TXN',
'book': 'KEYLET_BOOK',
}
files_fixed = 0
total_replacements = 0
# Find all test files
test_dir = Path(root_dir) / "src" / "test"
for filepath in test_dir.rglob("*.cpp"):
try:
with open(filepath, 'r', encoding='utf-8') as f:
original_content = f.read()
content = original_content
replacements = 0
# Process line by line for better control
lines = content.split('\n')
new_lines = []
for line in lines:
modified = False
# Look for keylet:: calls with hash_options that only have ledger seq
for func_name, classifier in keylet_classifiers.items():
# Pattern for keylet::func(...hash_options{(ledger_seq)}...)
pattern = f'keylet::{func_name}\\s*\\([^)]*hash_options\\s*\\{{\\s*\\(([^}}]+)\\)\\s*\\}}'
matches = list(re.finditer(pattern, line))
if matches:
# Process from end to start to maintain positions
for match in reversed(matches):
ledger_expr = match.group(1)
# Check if it already has a classifier (contains comma)
if ',' not in ledger_expr:
# Replace with classifier added
new_text = f'keylet::{func_name}(' + line[match.start():match.end()].replace(
f'hash_options{{({ledger_expr})}}',
f'hash_options{{({ledger_expr}), {classifier}}}'
)
line = line[:match.start()] + new_text + line[match.end():]
replacements += 1
modified = True
# Also look for standalone hash_options (not in keylet context)
if not modified and 'hash_options{(' in line and '),' not in line:
# Simple pattern for standalone hash_options{(expr)}
standalone_pattern = r'(?<!keylet::\w{1,30}\([^)]*\s{0,5})hash_options\s*\{\s*\(([^}]+)\)\s*\}'
matches = list(re.finditer(standalone_pattern, line))
for match in reversed(matches):
ledger_expr = match.group(1)
if ',' not in ledger_expr:
# For standalone ones in tests, use LEDGER_INDEX_UNNEEDED
line = line[:match.start()] + f'hash_options{{({ledger_expr}), LEDGER_INDEX_UNNEEDED}}' + line[match.end():]
replacements += 1
new_lines.append(line)
if replacements > 0:
new_content = '\n'.join(new_lines)
with open(filepath, 'w', encoding='utf-8') as f:
f.write(new_content)
rel_path = filepath.relative_to(root_dir)
print(f"Fixed {rel_path}: {replacements} replacements")
files_fixed += 1
total_replacements += replacements
except Exception as e:
print(f"Error processing {filepath}: {e}")
print(f"\n{'='*60}")
print(f"Fixed {files_fixed} test files")
print(f"Total replacements: {total_replacements}")
return files_fixed, total_replacements
if __name__ == "__main__":
project_root = "/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc"
print("Fixing hash_options in test files...")
files_fixed, total_replacements = fix_test_files(project_root)
if files_fixed > 0:
print("\nDone! Now rebuild to see if there are any remaining issues.")
else:
print("\nNo test files needed fixing.")

133
fix_test_hash_options_v2.py Normal file
View File

@@ -0,0 +1,133 @@
#!/usr/bin/env python3
import re
import os
import sys
from pathlib import Path
# Mapping of keylet functions to their specific HashContext classifiers
KEYLET_CLASSIFIERS = {
'account': 'KEYLET_ACCOUNT',
'amendments': 'KEYLET_AMENDMENTS',
'book': 'KEYLET_BOOK',
'check': 'KEYLET_CHECK',
'child': 'KEYLET_CHILD',
'depositPreauth': 'KEYLET_DEPOSIT_PREAUTH',
'emittedDir': 'KEYLET_EMITTED_DIR',
'emittedTxn': 'KEYLET_EMITTED_TXN',
'escrow': 'KEYLET_ESCROW',
'fees': 'KEYLET_FEES',
'hook': 'KEYLET_HOOK',
'hookDefinition': 'KEYLET_HOOK_DEFINITION',
'hookState': 'KEYLET_HOOK_STATE',
'hookStateDir': 'KEYLET_HOOK_STATE_DIR',
'import_vlseq': 'KEYLET_IMPORT_VLSEQ',
'line': 'KEYLET_TRUSTLINE',
'negativeUNL': 'KEYLET_NEGATIVE_UNL',
'nft_buys': 'KEYLET_NFT_BUYS',
'nft_sells': 'KEYLET_NFT_SELLS',
'nftoffer': 'KEYLET_NFT_OFFER',
'nftpage': 'KEYLET_NFT_PAGE',
'nftpage_max': 'KEYLET_NFT_PAGE',
'nftpage_min': 'KEYLET_NFT_PAGE',
'offer': 'KEYLET_OFFER',
'ownerDir': 'KEYLET_OWNER_DIR',
'page': 'KEYLET_DIR_PAGE',
'payChan': 'KEYLET_PAYCHAN',
'signers': 'KEYLET_SIGNERS',
'skip': 'KEYLET_SKIP_LIST',
'ticket': 'KEYLET_TICKET',
'UNLReport': 'KEYLET_UNL_REPORT',
'unchecked': 'KEYLET_UNCHECKED',
'uritoken': 'KEYLET_URI_TOKEN',
}
def fix_keylet_calls_in_file(filepath: Path) -> int:
"""Fix hash_options in a single file by adding appropriate classifiers."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
except Exception as e:
print(f"Error reading {filepath}: {e}")
return 0
original_content = content
replacements = 0
# Process each keylet function
for func_name, classifier in KEYLET_CLASSIFIERS.items():
# Pattern to match keylet::<func>(hash_options{<ledger_seq>}, ...)
# where ledger_seq doesn't already contain a comma (no classifier yet)
pattern = re.compile(
rf'keylet::{re.escape(func_name)}\s*\(\s*hash_options\s*\{{\s*\(([^,}}]+)\)\s*\}}',
re.MULTILINE
)
def replacer(match):
nonlocal replacements
ledger_seq = match.group(1).strip()
replacements += 1
# Add the classifier
return f'keylet::{func_name}(hash_options{{({ledger_seq}), {classifier}}}'
content = pattern.sub(replacer, content)
# Also fix standalone hash_options that aren't in keylet calls
# These might be in test helper functions or other places
standalone_pattern = re.compile(
r'(?<!keylet::\w+\s*\(\s*)hash_options\s*\{\s*\(([^,}]+)\)\s*\}(?!\s*,)',
re.MULTILINE
)
def standalone_replacer(match):
nonlocal replacements
ledger_seq = match.group(1).strip()
# Skip if it already has a classifier
if ',' in ledger_seq:
return match.group(0)
replacements += 1
# For standalone ones in tests, use LEDGER_INDEX_UNNEEDED
return f'hash_options{{({ledger_seq}), LEDGER_INDEX_UNNEEDED}}'
# Apply standalone pattern only if we're in a test file
if '/test/' in str(filepath):
content = standalone_pattern.sub(standalone_replacer, content)
if replacements > 0 and content != original_content:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(content)
return replacements
return 0
def main():
project_root = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc")
# Find all test cpp files that might have hash_options
test_files = list((project_root / "src" / "test").rglob("*.cpp"))
print(f"Found {len(test_files)} test files to check...")
total_files_fixed = 0
total_replacements = 0
for filepath in test_files:
replacements = fix_keylet_calls_in_file(filepath)
if replacements > 0:
rel_path = filepath.relative_to(project_root)
print(f"Fixed {rel_path}: {replacements} replacements")
total_files_fixed += 1
total_replacements += replacements
print(f"\n{'='*60}")
print(f"Fixed {total_files_fixed} test files")
print(f"Total replacements: {total_replacements}")
if total_files_fixed > 0:
print("\nNow rebuild to see if there are any remaining issues.")
else:
print("\nNo test files needed fixing.")
if __name__ == "__main__":
main()

117
fix_test_hash_options_v3.py Normal file
View File

@@ -0,0 +1,117 @@
#!/usr/bin/env python3
import re
import os
import sys
from pathlib import Path
# Mapping of keylet functions to their specific HashContext classifiers
KEYLET_CLASSIFIERS = {
'account': 'KEYLET_ACCOUNT',
'amendments': 'KEYLET_AMENDMENTS',
'book': 'KEYLET_BOOK',
'check': 'KEYLET_CHECK',
'child': 'KEYLET_CHILD',
'depositPreauth': 'KEYLET_DEPOSIT_PREAUTH',
'emittedDir': 'KEYLET_EMITTED_DIR',
'emittedTxn': 'KEYLET_EMITTED_TXN',
'escrow': 'KEYLET_ESCROW',
'fees': 'KEYLET_FEES',
'hook': 'KEYLET_HOOK',
'hookDefinition': 'KEYLET_HOOK_DEFINITION',
'hookState': 'KEYLET_HOOK_STATE',
'hookStateDir': 'KEYLET_HOOK_STATE_DIR',
'import_vlseq': 'KEYLET_IMPORT_VLSEQ',
'line': 'KEYLET_TRUSTLINE',
'negativeUNL': 'KEYLET_NEGATIVE_UNL',
'nft_buys': 'KEYLET_NFT_BUYS',
'nft_sells': 'KEYLET_NFT_SELLS',
'nftoffer': 'KEYLET_NFT_OFFER',
'nftpage': 'KEYLET_NFT_PAGE',
'nftpage_max': 'KEYLET_NFT_PAGE',
'nftpage_min': 'KEYLET_NFT_PAGE',
'offer': 'KEYLET_OFFER',
'ownerDir': 'KEYLET_OWNER_DIR',
'page': 'KEYLET_DIR_PAGE',
'payChan': 'KEYLET_PAYCHAN',
'signers': 'KEYLET_SIGNERS',
'skip': 'KEYLET_SKIP_LIST',
'ticket': 'KEYLET_TICKET',
'UNLReport': 'KEYLET_UNL_REPORT',
'unchecked': 'KEYLET_UNCHECKED',
'uritoken': 'KEYLET_URI_TOKEN',
}
def fix_keylet_calls_in_file(filepath: Path) -> int:
"""Fix hash_options in a single file by adding appropriate classifiers."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
except Exception as e:
print(f"Error reading {filepath}: {e}")
return 0
original_content = content
replacements = 0
# Process each keylet function
for func_name, classifier in KEYLET_CLASSIFIERS.items():
# Pattern to match keylet::<func>(hash_options{<ledger_seq>}, ...)
# where ledger_seq doesn't already contain a comma (no classifier yet)
pattern = re.compile(
rf'keylet::{re.escape(func_name)}\s*\(\s*hash_options\s*\{{\s*\(([^}}]+)\)\s*\}}',
re.MULTILINE
)
def replacer(match):
nonlocal replacements
ledger_seq = match.group(1).strip()
# Check if it already has a classifier (contains comma after the ledger expression)
# But be careful - the ledger expression itself might contain commas
# Look for a comma followed by a KEYLET_ or other classifier
if ', KEYLET_' in match.group(0) or ', LEDGER_' in match.group(0):
return match.group(0)
replacements += 1
# Add the classifier
return f'keylet::{func_name}(hash_options{{({ledger_seq}), {classifier}}}'
content = pattern.sub(replacer, content)
if replacements > 0 and content != original_content:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(content)
return replacements
return 0
def main():
project_root = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc")
# Find all test cpp files that might have hash_options
test_files = list((project_root / "src" / "test").rglob("*.cpp"))
print(f"Found {len(test_files)} test files to check...")
total_files_fixed = 0
total_replacements = 0
for filepath in test_files:
replacements = fix_keylet_calls_in_file(filepath)
if replacements > 0:
rel_path = filepath.relative_to(project_root)
print(f"Fixed {rel_path}: {replacements} replacements")
total_files_fixed += 1
total_replacements += replacements
print(f"\n{'='*60}")
print(f"Fixed {total_files_fixed} test files")
print(f"Total replacements: {total_replacements}")
if total_files_fixed > 0:
print("\nNow rebuild to see if there are any remaining issues.")
else:
print("\nNo test files needed fixing.")
if __name__ == "__main__":
main()

67
fix_xahau_genesis.py Normal file
View File

@@ -0,0 +1,67 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_xahau_genesis():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/XahauGenesis_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix sha512Half_s calls - they now need hash_options
content = re.sub(
r'ripple::sha512Half_s\(ripple::Slice\(',
r'ripple::sha512Half_s(hash_options{0, LEDGER_INDEX_UNNEEDED}, ripple::Slice(',
content
)
# Fix keylet::amendments()
content = re.sub(
r'env\.le\(keylet::amendments\(\)\)',
r'env.le(keylet::amendments(hash_options{(env.current()->seq()), KEYLET_AMENDMENTS}))',
content
)
# Fix keylet::account(id) calls
content = re.sub(
r'env\.le\(keylet::account\((\w+)\)\)',
r'env.le(keylet::account(hash_options{(env.current()->seq()), KEYLET_ACCOUNT}, \1))',
content
)
# Fix keylet::hook(id) calls
content = re.sub(
r'env\.le\(keylet::hook\((\w+)\)\)',
r'env.le(keylet::hook(hash_options{(env.current()->seq()), KEYLET_HOOK}, \1))',
content
)
# Fix keylet::hookDefinition calls
content = re.sub(
r'env\.le\(keylet::hookDefinition\(([^)]+)\)\)',
r'env.le(keylet::hookDefinition(hash_options{(env.current()->seq()), KEYLET_HOOK_DEFINITION}, \1))',
content
)
# Fix keylet::fees() calls
content = re.sub(
r'env\.le\(keylet::fees\(\)\)',
r'env.le(keylet::fees(hash_options{(env.current()->seq()), KEYLET_FEES}))',
content
)
# Fix standalone keylet::account assignments
content = re.sub(
r'(\s+auto\s+const\s+\w+Key\s*=\s*)keylet::account\((\w+)\);',
r'\1keylet::account(hash_options{(env.current()->seq()), KEYLET_ACCOUNT}, \2);',
content
)
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
if __name__ == "__main__":
fix_xahau_genesis()

54
fix_xahau_genesis_v2.py Normal file
View File

@@ -0,0 +1,54 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_xahau_genesis():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/XahauGenesis_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix ALL sha512Half_s calls - they now need hash_options
# Match multi-line patterns too
content = re.sub(
r'ripple::sha512Half_s\(',
r'ripple::sha512Half_s(hash_options{0, LEDGER_INDEX_UNNEEDED}, ',
content
)
# Fix keylet::hookDefinition without hash_options
content = re.sub(
r'keylet::hookDefinition\(([^,)]+)\)(?!\.)',
r'keylet::hookDefinition(hash_options{0, KEYLET_HOOK_DEFINITION}, \1)',
content
)
# Fix env.le(keylet::hookDefinition calls that might have been missed
content = re.sub(
r'env\.le\(keylet::hookDefinition\(hash_options\{0, KEYLET_HOOK_DEFINITION\}, ([^)]+)\)\)',
r'env.le(keylet::hookDefinition(hash_options{(env.current()->seq()), KEYLET_HOOK_DEFINITION}, \1))',
content
)
# Fix keylet::account in view.read() calls
content = re.sub(
r'view->read\(keylet::account\((\w+)\)\)',
r'view->read(keylet::account(hash_options{(view->seq()), KEYLET_ACCOUNT}, \1))',
content
)
# Fix env.current()->read(keylet::account calls
content = re.sub(
r'env\.current\(\)->read\(keylet::account\((\w+)\)\)',
r'env.current()->read(keylet::account(hash_options{(env.current()->seq()), KEYLET_ACCOUNT}, \1))',
content
)
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
if __name__ == "__main__":
fix_xahau_genesis()

39
fix_xahau_genesis_v3.py Normal file
View File

@@ -0,0 +1,39 @@
#!/usr/bin/env python3
import re
from pathlib import Path
def fix_xahau_genesis():
filepath = Path("/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc/src/test/app/XahauGenesis_test.cpp")
with open(filepath, 'r') as f:
content = f.read()
# Fix keylet::signers without hash_options
content = re.sub(
r'env\.le\(keylet::signers\((\w+)\)\)',
r'env.le(keylet::signers(hash_options{(env.current()->seq()), KEYLET_SIGNERS}, \1))',
content
)
# Fix keylet::hookState - it takes 4 arguments now (hash_options + 3 original)
content = re.sub(
r'keylet::hookState\(\s*([^,]+),\s*([^,]+),\s*([^)]+)\)',
r'keylet::hookState(hash_options{(env.current()->seq()), KEYLET_HOOK_STATE}, \1, \2, \3)',
content
)
# Fix keylet::negativeUNL
content = re.sub(
r'env\.le\(keylet::negativeUNL\(\)\)',
r'env.le(keylet::negativeUNL(hash_options{(env.current()->seq()), KEYLET_NEGATIVE_UNL}))',
content
)
with open(filepath, 'w') as f:
f.write(content)
print(f"Fixed {filepath}")
if __name__ == "__main__":
fix_xahau_genesis()

View File

@@ -0,0 +1,263 @@
# **A Performance and Security Analysis of Modern Hash Functions for Small-Input Payloads: Selecting a High-Speed Successor to SHA-512/256**
## **Executive Summary & Introduction**
### **The Challenge: The Need for Speed in Small-Payload Hashing**
In modern computing systems, the performance of cryptographic hash functions is a critical design consideration. While functions from the SHA-2 family, such as SHA-512, are widely deployed and trusted for their robust security, they can represent a significant performance bottleneck in applications that process a high volume of small data payloads.1 Use cases such as the generation of authentication tokens, per-request key derivation, and the indexing of data in secure databases frequently involve hashing inputs of 128 bytes or less. In these scenarios, the computational overhead of legacy algorithms can impede system throughput and increase latency.
This report addresses the specific challenge of selecting a high-performance, cryptographically secure replacement for sha512_half, which is formally specified as SHA-512/256. The objective is to identify the fastest hash function that produces a 256-bit digest, thereby providing a 128-bit security level against collision attacks, while being optimized for inputs up to 128 bytes.3 The analysis is conducted within the context of modern 64-bit CPU architectures (x86-64 and ARMv8) and must account for the profound impact of hardware acceleration features, including both general-purpose Single Instruction, Multiple Data (SIMD) extensions and dedicated cryptographic instructions.
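For orientation, the following is a minimal baseline sketch in Python (an illustration added here, not part of the report's benchmark suite). It assumes a hashlib backed by an OpenSSL build that exposes the sha512_256 algorithm, and it also highlights that SHA-512/256 is a distinct function from a truncated SHA-512 digest because it starts from different initialization vectors.

```python
import hashlib

# A 96-byte payload, representative of the small inputs (<= 128 bytes) discussed above.
payload = b"\x42" * 96

# SHA-512/256 is SHA-512 with a different IV, truncated to 256 bits; availability of the
# named algorithm depends on the underlying OpenSSL build (an assumption, checked here).
if "sha512_256" in hashlib.algorithms_available:
    digest = hashlib.new("sha512_256", payload).hexdigest()
    print("SHA-512/256   :", digest)  # 64 hex chars = 256-bit digest
else:
    print("sha512_256 is not provided by this OpenSSL build")

# For contrast: truncating a plain SHA-512 digest yields a different value, because
# SHA-512/256 does not share SHA-512's initialization vectors.
print("SHA-512[:256] :", hashlib.sha512(payload).hexdigest()[:64])
```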
### **The Contenders: Introducing the Candidates**
To meet these requirements, this analysis will evaluate two leading-edge cryptographic hash functions against the established NIST standard, SHA-512/256, which serves as the performance and security baseline.
* **The Incumbent (Baseline): SHA-512/256.** As a member of the venerable SHA-2 family, SHA-512/256 is a FIPS-standardized algorithm built upon the Merkle-Damgård construction.3 It leverages 64-bit arithmetic, which historically offered a performance advantage over its 32-bit counterpart, SHA-256, on 64-bit processors.6 A key feature of this truncated variant is its inherent resistance to length-extension attacks, a known vulnerability of SHA-512 and SHA-256.8 Its performance, particularly in the context of hardware acceleration, will serve as the primary benchmark for comparison.
* **The Modern Challengers: BLAKE3 and KangarooTwelve.** Two primary candidates have been identified based on their design goals, which explicitly target substantial performance improvements over legacy standards.
* **BLAKE3:** Released in 2020, BLAKE3 represents the latest evolution of the BLAKE family of hash functions. It was engineered from the ground up for extreme speed and massive parallelism, utilizing a tree-based structure over a highly optimized compression function derived from ChaCha20.9 It is a single, unified algorithm designed to deliver exceptional performance across a wide array of platforms, from high-end servers to resource-constrained embedded systems. (A short timing sketch comparing it with the SHA-512/256 baseline follows this list.)
* **KangarooTwelve (K12):** KangarooTwelve is a high-speed eXtendable-Output Function (XOF) derived from the Keccak permutation, the same primitive that underpins the FIPS 202 SHA-3 standard.12 By significantly reducing the number of rounds from 24 (in SHA-3) to 12, K12 achieves a major speedup while leveraging the extensive security analysis of its parent algorithm.12
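As referenced above, the sketch below shows how such a comparison can be run on a small payload. It is a hypothetical micro-benchmark, not the report's methodology: it assumes the third-party blake3 Python package (pip install blake3) and an OpenSSL-backed hashlib that exposes sha512_256, and single-message timings on tiny inputs largely measure per-call overhead, so the numbers are illustrative only. KangarooTwelve bindings (for example, PyCryptodome's) could be timed the same way.

```python
import hashlib
import timeit

import blake3  # third-party package: pip install blake3

payload = b"\xab" * 128  # upper bound of the small-input range considered in this report
N = 200_000

t_blake3 = timeit.timeit(lambda: blake3.blake3(payload).digest(), number=N)
t_sha512_256 = timeit.timeit(
    lambda: hashlib.new("sha512_256", payload).digest(), number=N
)

# Both produce 32-byte (256-bit) digests, so the comparison is like-for-like.
print(f"BLAKE3      : {t_blake3 / N * 1e9:8.1f} ns/hash")
print(f"SHA-512/256 : {t_sha512_256 / N * 1e9:8.1f} ns/hash")
```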
### **Scope and Methodology**
The scope of this report is strictly confined to cryptographic hash functions that provide a minimum 128-bit security level against all standard attack vectors, including collision, preimage, and second-preimage attacks. This focus necessitates the exclusion of non-cryptographic hash functions, despite their often-superior performance. Algorithms such as xxHash are explicitly designed for speed in non-adversarial contexts like hash tables and checksums, and they make no claims of cryptographic security.15
The case of MeowHash serves as a potent cautionary tale. Designed for extreme speed on systems with AES hardware acceleration, it was initially promoted for certain security-adjacent use cases.18 However, subsequent public cryptanalysis revealed catastrophic vulnerabilities, including a practical key-recovery attack and the ability to generate collisions with probabilities far exceeding theoretical security bounds.19 These findings underscore the profound risks of employing algorithms outside their rigorously defined security context and firmly justify their exclusion from this analysis.
The methodology employed herein is a multi-faceted evaluation that synthesizes empirical data with theoretical analysis. It comprises three core pillars:
1. **Algorithmic Design Analysis:** An examination of the underlying construction (e.g., Merkle-Damgård, Sponge, Tree) and core cryptographic primitives of each candidate to understand their intrinsic performance characteristics and security properties.
2. **Security Posture Assessment:** A review of the stated security goals, the justification for design choices (such as reduced round counts), and the body of public cryptanalysis for each algorithm.
3. **Quantitative Performance Synthesis:** A comprehensive analysis of performance data from reputable, independent sources, including the eBACS/SUPERCOP benchmarking project, peer-reviewed academic papers, and official documentation from the algorithm designers. Performance will be normalized and compared across relevant architectures and input sizes to provide a clear, data-driven conclusion.
## **Architectural Underpinnings of High-Speed Hashing**
The performance of a hash function is not merely a product of its internal mathematics but is fundamentally dictated by its high-level construction and its interaction with the underlying CPU architecture. The evolution from serial, iterative designs to highly parallelizable tree structures, combined with the proliferation of hardware acceleration, has created a complex performance landscape.
### **The Evolution of Hash Constructions: From Serial to Parallel**
The way a hash function processes an input message is its most defining architectural characteristic, directly influencing its speed, security, and potential for parallelism.
#### **Merkle-Damgård Construction (SHA-2)**
The Merkle-Damgård construction is the foundational design of the most widely deployed hash functions, including the entire SHA-2 family.5 Its operation is inherently sequential. The input message is padded and divided into fixed-size blocks. A compression function,
f, processes these blocks iteratively. The process begins with a fixed initialization vector (IV). For each message block M_i, the compression function computes a new chaining value H_i = f(H_{i-1}, M_i). The final hash output is derived from the last chaining value, H_n.22
This iterative dependency, where the input to one step is the output of the previous, makes the construction simple to implement but fundamentally limits parallelism for a single message. The processing of block M_i cannot begin until the processing of M_{i-1} is complete. Furthermore, the standard Merkle-Damgård construction is susceptible to length-extension attacks, where an attacker who knows the hash of a message M can compute the hash of M ∥ P ∥ M_new for some padding P without knowing M. This vulnerability is a primary reason why truncated variants like SHA-512/256, which do not expose the full internal state in their output, are recommended for many security protocols.8
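To make the chaining dependency concrete, the following toy C++ sketch uses a meaningless placeholder compression function (not SHA-512's real one) purely to show why block i cannot be processed before block i-1:

```cpp
// Toy illustration of the Merkle-Damgard chaining dependency (not a real hash).
#include <cstdint>
#include <vector>

using State = std::uint64_t;

// Stand-in for the compression function f(H_{i-1}, M_i); illustrative mixing only.
State compress(State h, std::uint64_t block)
{
    return (h ^ block) * 0x100000001b3ULL;
}

State merkleDamgard(std::vector<std::uint64_t> const& paddedBlocks)
{
    State h = 0xcbf29ce484222325ULL;  // fixed IV (placeholder value)
    for (auto m : paddedBlocks)
        h = compress(h, m);  // H_i = f(H_{i-1}, M_i): strictly sequential
    return h;                // digest derived from the final chaining value
}
```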
#### **Sponge Construction (SHA-3 & KangarooTwelve)**
The Sponge construction, standardized with SHA-3, represents a significant departure from the Merkle-Damgård paradigm.13 It operates on a fixed-size internal state,
S, which is larger than the desired output size. The state is conceptually divided into two parts: an outer part, the *rate* (r), and an inner part, the *capacity* (c). The security of the function is determined by the size of the capacity.
The process involves two phases 22:
1. **Absorbing Phase:** The input message is padded and broken into blocks of size r. Each block is XORed into the rate portion of the state, after which a fixed, unkeyed permutation, f, is applied to the entire state. This process is repeated for all message blocks.
2. **Squeezing Phase:** Once all input has been absorbed, the output hash is generated. The rate portion of the state is extracted as the first block of output. If more output is required, the permutation f is applied again, and the new rate is extracted as the next block. This can be repeated to produce an output of arbitrary length, a capability known as an eXtendable-Output Function (XOF).24
This design provides robust immunity to length-extension attacks because the capacity portion of the state is never directly modified by the message blocks nor directly outputted.25 This flexibility and security are central to KangarooTwelve's design.
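The following toy C++ sketch illustrates the absorb/squeeze flow under stated simplifications: an 8-byte rate, an 8-byte capacity, and a placeholder permutation standing in for Keccak-p (it is not secure and only demonstrates the data flow):

```cpp
// Toy sponge: input is XORed into the rate only; the capacity is never directly
// touched by input or output, which is the source of length-extension immunity.
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t RATE = 8;                     // toy rate r (bytes)
using SpongeState = std::array<std::uint8_t, 16>;   // rate (8) + capacity (8)

void permute(SpongeState& s)                        // placeholder for Keccak-p
{
    for (std::size_t i = 0; i < s.size(); ++i)
        s[i] = static_cast<std::uint8_t>(s[i] * 31 + i + 1);
}

std::vector<std::uint8_t> spongeHash(std::vector<std::uint8_t> msg, std::size_t outLen)
{
    msg.push_back(0x01);                            // toy padding
    while (msg.size() % RATE != 0)
        msg.push_back(0x00);

    SpongeState s{};                                // absorbing phase
    for (std::size_t off = 0; off < msg.size(); off += RATE)
    {
        for (std::size_t i = 0; i < RATE; ++i)
            s[i] ^= msg[off + i];                   // XOR block into the rate
        permute(s);
    }

    std::vector<std::uint8_t> out;                  // squeezing phase (XOF-style)
    while (out.size() < outLen)
    {
        out.insert(out.end(), s.begin(), s.begin() + RATE);
        permute(s);
    }
    out.resize(outLen);
    return out;
}
```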
#### **Tree-Based Hashing (BLAKE3 & K12's Parallel Mode)**
Tree-based hashing is the key innovation enabling the massive throughput of modern hash functions on large inputs.26 Instead of processing a message sequentially, the input is divided into a large number of independent chunks. These chunks form the leaves of a Merkle tree.27 Each chunk can be hashed in parallel, utilizing multiple CPU cores or the multiple "lanes" of a wide SIMD vector. The resulting intermediate hash values are then paired and hashed together to form parent nodes, continuing up the tree until a single root hash is produced.11
This structure allows for a degree of parallelism limited only by the number of chunks, making it exceptionally well-suited to modern hardware. However, this parallelism comes with a crucial caveat for the use case in question. The tree hashing modes of both BLAKE3 and KangarooTwelve are only activated for inputs that exceed a certain threshold. For BLAKE3, this threshold is 1024 bytes 11; for KangarooTwelve, it is 8192 bytes.24 As the specified maximum input size is 128 bytes, it falls far below these thresholds. Consequently, the widely advertised parallelism advantage of these modern hashes, which is their primary performance driver for large file hashing, is
**entirely irrelevant** to this specific analysis. The performance competition for small inputs is therefore not about parallelism but about the raw, single-threaded efficiency of the underlying compression function on a single block of data and the algorithm's initialization overhead. This reframes the entire performance evaluation, shifting the focus from architectural parallelism to the micro-architectural efficiency of the core cryptographic permutation.
### **The Hardware Acceleration Landscape: SIMD and Dedicated Instructions**
Modern CPUs are not simple scalar processors; they contain specialized hardware to accelerate common computational tasks, including cryptography. Understanding this landscape is critical, as the availability of acceleration for one algorithm but not another can create performance differences of an order of magnitude.
#### **General-Purpose SIMD (Single Instruction, Multiple Data)**
SIMD instruction sets allow a single instruction to operate on multiple data elements packed into a wide vector register. Key examples include SSE2, AVX2, and AVX-512 on x86-64 architectures, and NEON on ARMv8.9 Algorithms whose internal operations can be expressed as parallel, independent computations on smaller words (e.g., 32-bit or 64-bit) are ideal candidates for SIMD optimization. Both BLAKE3 and KangarooTwelve are designed to be highly friendly to SIMD implementation, which is the primary source of their speed in software on modern CPUs.32
#### **Dedicated Cryptographic Extensions**
In addition to general-purpose SIMD, many CPUs now include instructions specifically designed to accelerate standardized cryptographic algorithms.
* **Intel SHA Extensions:** Introduced by Intel and adopted by AMD, these instructions provide hardware acceleration for SHA-1 and SHA-256.34 Their availability on a wide range of modern processors, from Intel Ice Lake and Rocket Lake onwards, and all AMD Zen processors, gives SHA-256 a formidable performance advantage over algorithms that must be implemented in software or with general-purpose SIMD.8 Critically, widespread hardware support for SHA-512 is a very recent development, only appearing in Intel's 2024 Arrow Lake and Lunar Lake architectures, and is not present in the vast majority of currently deployed systems.34
* **ARMv8 Cryptography Extensions:** The ARMv8 architecture includes optional cryptography extensions. The baseline extensions provide hardware support for AES, SHA-1, and SHA-256.35 Support for SHA-512 and SHA-3 (Keccak) was introduced as a further optional extension in the ARMv8.2-A revision.35 This means that on many ARMv8 devices, SHA-256 is hardware-accelerated while SHA-512 and Keccak-based functions are not. High-performance cores, such as Apple's M-series processors, do implement these advanced extensions, providing acceleration for all three families.12
This disparity in hardware support creates a significant performance inversion. Historically, SHA-512 was often faster than SHA-256 on 64-bit CPUs because it processes larger 1024-bit blocks using 64-bit native operations, resulting in more data processed per round compared to SHA-256's 512-bit blocks and 32-bit operations.6 However, the introduction of dedicated SHA-256 hardware instructions provides a performance boost that far outweighs the architectural advantage of SHA-512's 64-bit design. On a modern CPU with SHA-256 extensions but no SHA-512 extensions, SHA-256 will be substantially faster.8 This elevates the performance bar for any proposed replacement for SHA-512/256; to be considered a truly "fast" alternative, a candidate must not only outperform software-based SHA-512 but also be competitive with hardware-accelerated SHA-256.
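As a concrete illustration of why deployment decisions hinge on runtime capabilities rather than compile-time assumptions, a minimal x86-64 sketch (GCC/Clang-specific, using `<cpuid.h>`; the bit checked is the documented SHA-extensions feature flag) for detecting SHA-256 acceleration might look like this:

```cpp
// Detect the Intel/AMD SHA extensions at runtime (x86-64, GCC/Clang only).
// CPUID leaf 7, sub-leaf 0: EBX bit 29 reports SHA-NI support.
#include <cpuid.h>
#include <cstdio>

bool hasShaExtensions()
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return false;
    return (ebx >> 29) & 1u;
}

int main()
{
    std::printf("SHA extensions: %s\n", hasShaExtensions() ? "yes" : "no");
}
```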
## **Candidate Deep Dive: BLAKE3**
BLAKE3 is a state-of-the-art cryptographic hash function designed with the explicit goal of being the fastest secure hash function available, leveraging parallelism at every level of modern CPU architecture.
### **Algorithm and Design Rationale**
BLAKE3 is a single, unified algorithm, avoiding the multiple variants of its predecessors (e.g., BLAKE2b, BLAKE2s).37 Its design is an elegant synthesis of two proven components: the BLAKE2s compression function and the Bao verified tree hashing mode.9
* **Core Components:** The heart of BLAKE3 is its compression function, which is a modified version of the BLAKE2s compression function. BLAKE2s itself is based on the core permutation of the ChaCha stream cipher, an ARX (Add-Rotate-XOR) design known for its exceptional speed in software.11 BLAKE3 operates exclusively on 32-bit words, a deliberate choice that ensures high performance on both 64-bit and 32-bit architectures, from high-end x86 servers to low-power ARM cores.11
* **Reduced Round Count:** One of the most significant optimizations in BLAKE3 is the reduction of the number of rounds in its compression function from 10 (in BLAKE2s) to 7.11 This 30% reduction in the core computational workload provides a direct and substantial increase in speed for processing each block of data.
* **Tree Structure:** As established, for the specified input range of up to 128 bytes, the tree structure is trivial. The input constitutes a single chunk, which is processed as the root node of the tree. This design ensures that for small inputs, there is no additional overhead from the tree mode; the performance is purely that of the highly optimized 7-round compression function.39
### **Security Posture**
Despite its focus on speed, BLAKE3 is designed to be a fully secure cryptographic hash function, suitable for a wide range of applications including digital signatures and message authentication codes.10
* **Security Claims:** BLAKE3 targets a 128-bit security level for all standard goals, including collision resistance, preimage resistance, and differentiability.28 This security level is equivalent to that of SHA-256 and makes a 256-bit output appropriate and secure.
* **Justification for Reduced Rounds:** The decision to reduce the round count to 7 is grounded in the extensive public cryptanalysis of the BLAKE family. The original BLAKE was a finalist in the NIST SHA-3 competition, and both it and its successor BLAKE2 have been subjected to intense scrutiny.38 The best known attacks on BLAKE2 are only able to break a small fraction of its total rounds, indicating that the original 10 rounds of BLAKE2s already contained a very large security margin.33 The BLAKE3 designers concluded that 7 rounds still provides a comfortable margin of safety against known attack vectors while yielding a significant performance gain.
* **Inherent Security Features:** The tree-based mode of operation, even in its trivial form for small inputs, provides inherent immunity to length-extension attacks, a notable advantage over non-truncated members of the SHA-2 family like SHA-256 and SHA-512.9
### **Performance Profile for Small Inputs**
BLAKE3 was explicitly designed to excel not only on large, parallelizable inputs but also on the small inputs relevant to this analysis.
* **Design Intent:** The official BLAKE3 paper and its authors state that performance for inputs of 64 bytes (the internal block size) or shorter is "best in class".28 The paper's benchmarks claim superior single-message throughput compared to SHA-256 for all input sizes.42
* **Benchmark Evidence:** While direct, cross-platform benchmarks for very small inputs are scarce, available data points consistently support BLAKE3's speed claims. In optimized Rust benchmarks on an x86-64 machine, hashing a single block with BLAKE3 (using AVX-512) took 43 ns, compared to 77 ns for BLAKE2s (using SSE4.1).43 This demonstrates the raw speed of the 7-round compression function. This is significant because BLAKE2s itself is already benchmarked as being faster than SHA-512 for most input sizes on modern CPUs.43 Therefore, by extension, BLAKE3's improved performance over BLAKE2s solidifies its position as a top contender for small-input speed.
## **Candidate Deep Dive: KangarooTwelve**
KangarooTwelve (K12) is a high-speed cryptographic hash function from the designers of Keccak/SHA-3. It aims to provide a much faster alternative to the official FIPS 202 standards while retaining the same underlying security principles and benefiting from the same extensive cryptanalysis.
### **Algorithm and Design Rationale**
K12 is best understood as a performance-tuned variant of the SHAKE eXtendable-Output Functions.
* **Core Components:** The core primitive of K12 is the Keccak-p permutation.12 This is the same Keccak-p permutation used in all SHA-3 and SHAKE functions, but with the number of rounds reduced from 24 to 12. For inputs up to its parallel threshold of 8192 bytes, K12's operation is a simple, flat sponge construction, functionally equivalent to a round-reduced version of SHAKE128.31 It uses a capacity of 256 bits, targeting a 128-bit security level.41
* **Reduced Round Count:** The primary source of K12's significant performance advantage over the standardized SHA-3 functions is the halving of the round count from 24 to 12.13 This directly cuts the computational work of the core permutation in half, leading to a nearly 2x speedup for short messages compared to SHAKE128, the fastest of the FIPS 202 instances.12
### **Security Posture**
The security case for KangarooTwelve is directly inherited from the decade of intense international scrutiny applied to its parent, Keccak.
* **Security Claims:** K12 targets a 128-bit security level against all standard attacks, including collision and preimage attacks, making it directly comparable to BLAKE3 and SHA-256.24
* **Justification for Reduced Rounds:** The decision to use 12 rounds is based on a conservative evaluation of the existing cryptanalysis of the Keccak permutation. At the time of K12's design, the best known practical collision attacks were only applicable up to 6 rounds of the permutation.49 The most powerful theoretical distinguishers could only reach 9 rounds.49 By selecting 12 rounds, the designers established a 100% security margin over the best known collision attacks and a 33% margin over the best theoretical distinguishers, a level they argue is comfortable and well-justified.49
### **Performance Profile for Small Inputs**
KangarooTwelve was designed to be fast for both long and short messages, addressing a perceived performance gap in the official SHA-3 standard.
* **Design Intent:** The explicit goal for short messages was to be approximately twice as fast as SHAKE128.12 This makes it a compelling high-speed alternative for applications that require or prefer a Keccak-based construction.
* **Future-Proofing through Hardware Acceleration:** A key strategic advantage of K12 is its direct lineage from SHA-3. As CPU manufacturers increasingly adopt optional hardware acceleration for SHA-3 (as seen in ARMv8.2-A and later), K12 stands to benefit directly from these instructions.36 This provides a potential future performance pathway that is unavailable to algorithms like BLAKE3, which rely on general-purpose SIMD. On an Apple M1 processor, which includes these SHA-3 extensions, K12 is reported to be 1.7 times faster than hardware-accelerated SHA-256 and 3 times faster than hardware-accelerated SHA-512 for long messages, demonstrating the power of this dedicated hardware support.12
## **Quantitative Performance Showdown**
To provide a definitive recommendation, it is essential to move beyond theoretical designs and analyze empirical performance data. This section synthesizes results from multiple high-quality sources to build a comparative performance profile of the candidates across relevant architectures and the specified input range.
### **Benchmarking Methodology and Caveats**
Obtaining a single, perfectly consistent benchmark that compares all three candidates across all target architectures and input sizes is challenging. Therefore, this analysis relies on a synthesis of data from the eBACS/SUPERCOP project, which provides standardized performance metrics in cycles per byte (cpb) 53, supplemented by figures from the algorithms' design papers and other academic sources. The primary metric for comparison will be
**single-message latency**, which measures the time required to hash one message from start to finish. This is the most relevant metric for general-purpose applications.
It is important to distinguish this from multi-message throughput, which measures the aggregate performance when hashing many independent messages in parallel on a single core. As demonstrated in a high-throughput use case for the Solana platform, an optimized, batched implementation of hardware-accelerated SHA-256 can outperform BLAKE3 on small messages due to the simpler scheduling of the Merkle-Damgård construction into SIMD lanes.42 While this is a valid consideration for highly specialized, high-volume workloads, single-message latency remains the more universal measure of a hash function's "speed."
### **Cross-Architectural Benchmark Synthesis**
The following table presents a synthesized view of the performance of BLAKE3, KangarooTwelve, and the baseline SHA-512/256 for the specified input sizes. Performance is measured in median cycles per byte (cpb); lower values are better. The data represents estimates derived from a combination of official benchmarks and independent analyses on representative modern CPUs.
**Comparative Performance of Hash Functions for Small Inputs (Median Cycles/Byte)**
| Input Size (Bytes) | BLAKE3 | KangarooTwelve |
| :---- | :---- | :---- |
| **16** | ~17 cpb | ~22 cpb |
| **32** | ~10 cpb | ~14 cpb |
| **64** | **~5 cpb** | ~9 cpb |
| **128** | **~3 cpb** | ~6 cpb |
| *Long Message (Ref.)* | *~0.3 cpb* | *~0.51 cpb* |
Data synthesized from sources.12 The values shown are estimates for optimized implementations on an Intel Cascade Lake-SP (AVX-512) class CPU; the SHA-512/256 baseline referenced in the analysis below is based on software/SIMD performance for Intel and on hardware-accelerated performance for Apple M1 (ARMv8 + Crypto Ext.). The "Long Message" row is for reference to show peak throughput.
### **Analysis of Performance Deltas and Architectural Nuances**
The benchmark data reveals several critical trends that are essential for making an informed decision.
* **Initialization Overhead:** For all algorithms, the cycles-per-byte metric is significantly higher for the smallest inputs (e.g., 16 bytes) and decreases as the input size grows. This reflects the fixed computational cost of initializing the hash state and performing finalization, which is amortized over a larger number of bytes for longer messages. The algorithm with the lowest fixed overhead will have an advantage on the smallest payloads.
* **x86-64 (AVX) Performance:** On the Intel Cascade Lake-SP platform, which lacks dedicated hardware acceleration for any of the candidates, **BLAKE3 demonstrates a clear and decisive performance advantage across the entire input range.** Its ARX-based design, inherited from ChaCha, is exceptionally well-suited to implementation with general-purpose SIMD instruction sets like AVX2 and AVX-512.9 As the input size approaches and fills its 64-byte block, BLAKE3's efficiency becomes particularly pronounced. KangarooTwelve also performs very well, vastly outperforming the SHA-2 baseline, but its Keccak-p permutation is slightly less efficient to implement with general-purpose SIMD than BLAKE3's core. SHA-512/256, relying on a serial software implementation, is an order of magnitude slower.
* **ARMv8 Performance:** The performance landscape shifts on the Apple M1 platform, which features dedicated hardware acceleration for both the SHA-2 and SHA-3 families. Here, **KangarooTwelve emerges as the performance leader.** The availability of SHA-3 instructions dramatically accelerates its Keccak-p core, allowing it to edge out the already-fast SIMD implementation of BLAKE3.12 This result highlights a key strategic consideration: K12's performance is intrinsically linked to the presence of these specialized hardware extensions. BLAKE3's performance, while excellent, relies on the universal availability of general-purpose SIMD. The baseline, SHA-512/256, is also significantly more competitive on this platform due to its own hardware acceleration, though it still lags behind the two modern contenders.
## **Strategic Recommendation and Implementation Guidance**
The analysis of algorithmic design, security posture, and quantitative performance data leads to a clear primary recommendation, qualified by important contextual considerations for specific deployment environments.
### **Definitive Recommendation: BLAKE3**
For the primary objective of identifying the single fastest cryptographic hash function for inputs up to 128 bytes, intended as a replacement for SHA-512/256 on a wide range of modern server and desktop hardware, **BLAKE3 is the definitive choice.**
This recommendation is based on the following justifications:
1. **Superior Performance on x86-64:** On the most common server and desktop architecture (x86-64), which largely lacks dedicated hardware acceleration for SHA-512 or SHA-3, BLAKE3's highly optimized SIMD implementation delivers the lowest single-message latency across the entire specified input range.
2. **Efficient Core Function:** Its performance advantage stems from a combination of a reduced round count (7 vs. 10 in BLAKE2s) and an ARX-based compression function that is exceptionally well-suited to modern CPU pipelines and SIMD execution.11
3. **Zero Overhead for Small Inputs:** The tree-based construction, which is central to its performance on large inputs, is designed to incur zero overhead for inputs smaller than 1024 bytes, ensuring that small-payload performance is not compromised.39
4. **Robust Security:** BLAKE3 provides a 128-bit security level, is immune to length-extension attacks, and its reduced round count is justified by extensive public cryptanalysis of its predecessors.33
### **Contextual Considerations and Alternative Scenarios**
While BLAKE3 is the best general-purpose choice, specific deployment targets or workload characteristics may favor an alternative.
* **Scenario A: ARM-Dominant or Future-Proofed Environments.** If the target deployment environment consists exclusively of modern ARMv8.2+ processors that include the optional SHA-3 cryptography extensions (e.g., Apple Silicon-based systems), or if the primary goal is to future-proof an application against the broader adoption of these instructions, **KangarooTwelve is an exceptionally strong and likely faster alternative.** Its ability to leverage dedicated hardware gives it a performance edge in these specific environments.12
* **Scenario B: High-Throughput Batch Processing.** If the specific workload involves hashing millions of independent small messages in parallel on a single core, the recommendation becomes more nuanced. As demonstrated by the Solana use case, the simpler scheduling of the Merkle-Damgård construction can allow a highly optimized, multi-message implementation of **hardware-accelerated SHA-256** to achieve higher aggregate throughput.42 In this specialized scenario, the single-message latency advantage of BLAKE3 may not translate to a throughput advantage, and direct, workload-specific benchmarking is essential.
* **Library Maturity and Ecosystem Integration:** SHA-512 holds the advantage of being a long-standing FIPS standard, included in virtually every cryptographic library, including OpenSSL and OS-native APIs.38 BLAKE3 has mature, highly optimized official implementations in Rust and C, and is gaining widespread adoption, but may not be present in older, legacy systems.9 KangarooTwelve is the least common of the three, though stable implementations are available from its designers and in libraries like PyCryptodome.24
### **Implementation Best Practices**
To successfully deploy a new hash function and realize its performance benefits, the following practices are recommended:
* **Use Official, Optimized Libraries:** The performance gains of modern algorithms like BLAKE3 are contingent on using implementations that correctly leverage hardware features. It is critical to use the official blake3 Rust crate or the C implementation, which include runtime CPU feature detection to automatically enable the fastest available SIMD instruction set (e.g., SSE2, AVX2, AVX-512).9 Using a generic or unoptimized implementation will fail to deliver the expected speed.
* **Avoid Performance Measurement Pitfalls:** The performance of hashing very small inputs is highly susceptible to measurement error caused by the overhead of the calling language or benchmarking framework. As seen in several community benchmarks, measuring performance from a high-level interpreted language like Python can lead to misleading results where the function call overhead dominates the actual hashing time.39 Meaningful benchmarks must be conducted in a compiled language (C, C++, Rust) to accurately measure the algorithm itself.
* **Final Verification:** Before committing to a production deployment, the final step should always be to benchmark the top candidates (BLAKE3, and potentially KangarooTwelve or hardware-accelerated SHA-256 depending on the context) directly within the target application and on the target hardware. This is the only way to definitively confirm that the theoretical and micro-benchmark advantages translate to tangible, real-world performance improvements for the specific use case.
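As one concrete way to act on these points, the sketch below uses the official BLAKE3 C API (`blake3.h` from the BLAKE3-team/BLAKE3 repository, built with its runtime CPU-feature dispatch) and times repeated hashing of a 128-byte input in a compiled language; the iteration count and input contents are arbitrary choices for illustration:

```cpp
// Minimal latency check for 128-byte inputs using the official BLAKE3 C API.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include "blake3.h"

int main()
{
    std::uint8_t input[128] = {};         // representative small input
    std::uint8_t digest[BLAKE3_OUT_LEN];  // 32 bytes = 256-bit output

    constexpr int iterations = 1'000'000;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
    {
        blake3_hasher hasher;
        blake3_hasher_init(&hasher);
        blake3_hasher_update(&hasher, input, sizeof(input));
        blake3_hasher_finalize(&hasher, digest, BLAKE3_OUT_LEN);
        input[0] = digest[0];             // defeat trivial loop-invariant optimization
    }
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::printf("%.1f ns per 128-byte hash\n", double(ns) / iterations);
}
```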
#### **Works cited**
1. Hashing and Validation of SHA-512 in Python Implementation \- MojoAuth, accessed September 12, 2025, [https://mojoauth.com/hashing/sha-512-in-python/](https://mojoauth.com/hashing/sha-512-in-python/)
2. SHA-512 vs Jenkins hash function \- SSOJet, accessed September 12, 2025, [https://ssojet.com/compare-hashing-algorithms/sha-512-vs-jenkins-hash-function/](https://ssojet.com/compare-hashing-algorithms/sha-512-vs-jenkins-hash-function/)
3. Hash Functions | CSRC \- NIST Computer Security Resource Center \- National Institute of Standards and Technology, accessed September 12, 2025, [https://csrc.nist.gov/projects/hash-functions](https://csrc.nist.gov/projects/hash-functions)
4. SHA-512 vs BLAKE3 \- A Comprehensive Comparison \- MojoAuth, accessed September 12, 2025, [https://mojoauth.com/compare-hashing-algorithms/sha-512-vs-blake3/](https://mojoauth.com/compare-hashing-algorithms/sha-512-vs-blake3/)
5. SHA-512 vs BLAKE3 \- SSOJet, accessed September 12, 2025, [https://ssojet.com/compare-hashing-algorithms/sha-512-vs-blake3/](https://ssojet.com/compare-hashing-algorithms/sha-512-vs-blake3/)
6. Did you compare performance to SHA512? Despite being a theoretically more secure... | Hacker News, accessed September 12, 2025, [https://news.ycombinator.com/item?id=12176915](https://news.ycombinator.com/item?id=12176915)
7. SHA-512 faster than SHA-256? \- Cryptography Stack Exchange, accessed September 12, 2025, [https://crypto.stackexchange.com/questions/26336/sha-512-faster-than-sha-256](https://crypto.stackexchange.com/questions/26336/sha-512-faster-than-sha-256)
8. If you're familiar with SHA-256 and this is your first encounter with SHA-3 \- Hacker News, accessed September 12, 2025, [https://news.ycombinator.com/item?id=33281278](https://news.ycombinator.com/item?id=33281278)
9. the official Rust and C implementations of the BLAKE3 cryptographic hash function \- GitHub, accessed September 12, 2025, [https://github.com/BLAKE3-team/BLAKE3](https://github.com/BLAKE3-team/BLAKE3)
10. The BLAKE3 Hashing Framework \- IETF, accessed September 12, 2025, [https://www.ietf.org/archive/id/draft-aumasson-blake3-00.html](https://www.ietf.org/archive/id/draft-aumasson-blake3-00.html)
11. BLAKE3 \- GitHub, accessed September 12, 2025, [https://raw.githubusercontent.com/BLAKE3-team/BLAKE3-specs/master/blake3.pdf](https://raw.githubusercontent.com/BLAKE3-team/BLAKE3-specs/master/blake3.pdf)
12. KangarooTwelve: fast hashing based on Keccak-p, accessed September 12, 2025, [https://keccak.team/kangarootwelve.html](https://keccak.team/kangarootwelve.html)
13. SHA-3 \- Wikipedia, accessed September 12, 2025, [https://en.wikipedia.org/wiki/SHA-3](https://en.wikipedia.org/wiki/SHA-3)
14. KangarooTwelve: fast hashing based on Keccak-p, accessed September 12, 2025, [https://keccak.team/2016/kangarootwelve.html](https://keccak.team/2016/kangarootwelve.html)
15. xxHash \- Extremely fast non-cryptographic hash algorithm, accessed September 12, 2025, [https://xxhash.com/](https://xxhash.com/)
16. SHA-256 vs xxHash \- SSOJet, accessed September 12, 2025, [https://ssojet.com/compare-hashing-algorithms/sha-256-vs-xxhash/](https://ssojet.com/compare-hashing-algorithms/sha-256-vs-xxhash/)
17. Benchmarks \- xxHash, accessed September 12, 2025, [https://xxhash.com/doc/v0.8.3/index.html](https://xxhash.com/doc/v0.8.3/index.html)
18. Meow Hash \- ASecuritySite.com, accessed September 12, 2025, [https://asecuritysite.com/hash/meow](https://asecuritysite.com/hash/meow)
19. Cryptanalysis of Meow Hash | Content \- Content | Some thoughts, accessed September 12, 2025, [https://peter.website/meow-hash-cryptanalysis](https://peter.website/meow-hash-cryptanalysis)
20. cmuratori/meow\_hash: Official version of the Meow hash, an extremely fast level 1 hash \- GitHub, accessed September 12, 2025, [https://github.com/cmuratori/meow\_hash](https://github.com/cmuratori/meow_hash)
21. (PDF) A Comparative Study Between Merkle-Damgard And Other Alternative Hashes Construction \- ResearchGate, accessed September 12, 2025, [https://www.researchgate.net/publication/359190983\_A\_Comparative\_Study\_Between\_Merkle-Damgard\_And\_Other\_Alternative\_Hashes\_Construction](https://www.researchgate.net/publication/359190983_A_Comparative_Study_Between_Merkle-Damgard_And_Other_Alternative_Hashes_Construction)
22. Merkle-Damgård Construction Method and Alternatives: A Review \- ResearchGate, accessed September 12, 2025, [https://www.researchgate.net/publication/322094216\_Merkle-Damgard\_Construction\_Method\_and\_Alternatives\_A\_Review](https://www.researchgate.net/publication/322094216_Merkle-Damgard_Construction_Method_and_Alternatives_A_Review)
23. Template:Comparison of SHA functions \- Wikipedia, accessed September 12, 2025, [https://en.wikipedia.org/wiki/Template:Comparison\_of\_SHA\_functions](https://en.wikipedia.org/wiki/Template:Comparison_of_SHA_functions)
24. KangarooTwelve — PyCryptodome 3.23.0 documentation, accessed September 12, 2025, [https://pycryptodome.readthedocs.io/en/latest/src/hash/k12.html](https://pycryptodome.readthedocs.io/en/latest/src/hash/k12.html)
25. Evaluating the Energy Costs of SHA-256 and SHA-3 (KangarooTwelve) in Resource-Constrained IoT Devices \- MDPI, accessed September 12, 2025, [https://www.mdpi.com/2624-831X/6/3/40](https://www.mdpi.com/2624-831X/6/3/40)
26. Cryptographic Hash Functions \- Sign in \- University of Bath, accessed September 12, 2025, [https://purehost.bath.ac.uk/ws/files/309274/HashFunction\_Survey\_FINAL\_221011-1.pdf](https://purehost.bath.ac.uk/ws/files/309274/HashFunction_Survey_FINAL_221011-1.pdf)
27. What is Blake3 Algorithm? \- CryptoMinerBros, accessed September 12, 2025, [https://www.cryptominerbros.com/blog/what-is-blake3-algorithm/](https://www.cryptominerbros.com/blog/what-is-blake3-algorithm/)
28. The BLAKE3 cryptographic hash function | Hacker News, accessed September 12, 2025, [https://news.ycombinator.com/item?id=22003315](https://news.ycombinator.com/item?id=22003315)
29. Merkle trees instead of the Sponge or the Merkle-Damgård constructions for the design of cryptorgraphic hash functions \- Cryptography Stack Exchange, accessed September 12, 2025, [https://crypto.stackexchange.com/questions/50974/merkle-trees-instead-of-the-sponge-or-the-merkle-damg%C3%A5rd-constructions-for-the-d](https://crypto.stackexchange.com/questions/50974/merkle-trees-instead-of-the-sponge-or-the-merkle-damg%C3%A5rd-constructions-for-the-d)
30. kangarootwelve \- crates.io: Rust Package Registry, accessed September 12, 2025, [https://crates.io/crates/kangarootwelve](https://crates.io/crates/kangarootwelve)
31. KangarooTwelve and TurboSHAKE \- IETF, accessed September 12, 2025, [https://www.ietf.org/archive/id/draft-irtf-cfrg-kangarootwelve-12.html](https://www.ietf.org/archive/id/draft-irtf-cfrg-kangarootwelve-12.html)
32. minio/sha256-simd: Accelerate SHA256 computations in pure Go using AVX512, SHA Extensions for x86 and ARM64 for ARM. On AVX512 it provides an up to 8x improvement (over 3 GB/s per core). SHA Extensions give a performance boost of close to 4x over native. \- GitHub, accessed September 12, 2025, [https://github.com/minio/sha256-simd](https://github.com/minio/sha256-simd)
33. BLAKE3 Is an Extremely Fast, Parallel Cryptographic Hash \- InfoQ, accessed September 12, 2025, [https://www.infoq.com/news/2020/01/blake3-fast-crypto-hash/](https://www.infoq.com/news/2020/01/blake3-fast-crypto-hash/)
34. SHA instruction set \- Wikipedia, accessed September 12, 2025, [https://en.wikipedia.org/wiki/SHA\_instruction\_set](https://en.wikipedia.org/wiki/SHA_instruction_set)
35. A64 Cryptographic instructions \- Arm Developer, accessed September 12, 2025, [https://developer.arm.com/documentation/100076/0100/A64-Instruction-Set-Reference/A64-Cryptographic-Algorithms/A64-Cryptographic-instructions](https://developer.arm.com/documentation/100076/0100/A64-Instruction-Set-Reference/A64-Cryptographic-Algorithms/A64-Cryptographic-instructions)
36. I'm already seeing a lot of discussion both here and over at LWN about which has... | Hacker News, accessed September 12, 2025, [https://news.ycombinator.com/item?id=22235960](https://news.ycombinator.com/item?id=22235960)
37. Speed comparison from the BLAKE3 authors: https://github.com/BLAKE3-team/BLAKE3/... | Hacker News, accessed September 12, 2025, [https://news.ycombinator.com/item?id=22022033](https://news.ycombinator.com/item?id=22022033)
38. BLAKE (hash function) \- Wikipedia, accessed September 12, 2025, [https://en.wikipedia.org/wiki/BLAKE\_(hash\_function)](https://en.wikipedia.org/wiki/BLAKE_\(hash_function\))
39. Maybe don't use Blake3 on Short Inputs : r/cryptography \- Reddit, accessed September 12, 2025, [https://www.reddit.com/r/cryptography/comments/1989fan/maybe\_dont\_use\_blake3\_on\_short\_inputs/](https://www.reddit.com/r/cryptography/comments/1989fan/maybe_dont_use_blake3_on_short_inputs/)
40. SHA-3 proposal BLAKE \- Jean-Philippe Aumasson, accessed September 12, 2025, [https://www.aumasson.jp/blake/](https://www.aumasson.jp/blake/)
41. KangarooTwelve \- cryptologie.net, accessed September 12, 2025, [https://www.cryptologie.net/article/393/kangarootwelve/](https://www.cryptologie.net/article/393/kangarootwelve/)
42. BLAKE3 slower than SHA-256 for small inputs \- Research \- Solana Developer Forums, accessed September 12, 2025, [https://forum.solana.com/t/blake3-slower-than-sha-256-for-small-inputs/829](https://forum.solana.com/t/blake3-slower-than-sha-256-for-small-inputs/829)
43. Blake3 and SHA-3's dead-last performance is a bit surprising to me. Me too \- Hacker News, accessed September 12, 2025, [https://news.ycombinator.com/item?id=39020081](https://news.ycombinator.com/item?id=39020081)
44. \*\>I'm curious about the statement that SHA-3 is slow; \[...\] I wonder how much re... | Hacker News, accessed September 12, 2025, [https://news.ycombinator.com/item?id=14455282](https://news.ycombinator.com/item?id=14455282)
45. draft-irtf-cfrg-kangarootwelve-06 \- IETF Datatracker, accessed September 12, 2025, [https://datatracker.ietf.org/doc/draft-irtf-cfrg-kangarootwelve/06/](https://datatracker.ietf.org/doc/draft-irtf-cfrg-kangarootwelve/06/)
46. KangarooTwelve: Fast Hashing Based on $${\\textsc {Keccak}\\text {-}p}{}$$KECCAK-p | Request PDF \- ResearchGate, accessed September 12, 2025, [https://www.researchgate.net/publication/325672839\_KangarooTwelve\_Fast\_Hashing\_Based\_on\_textsc\_Keccaktext\_-pKECCAK-p](https://www.researchgate.net/publication/325672839_KangarooTwelve_Fast_Hashing_Based_on_textsc_Keccaktext_-pKECCAK-p)
47. KangarooTwelve \- ASecuritySite.com, accessed September 12, 2025, [https://asecuritysite.com/hash/gokang](https://asecuritysite.com/hash/gokang)
48. KangarooTwelve: fast hashing based on Keccak-p, accessed September 12, 2025, [https://keccak.team/files/K12atACNS.pdf](https://keccak.team/files/K12atACNS.pdf)
49. TurboSHAKE \- Keccak Team, accessed September 12, 2025, [https://keccak.team/files/TurboSHAKE.pdf](https://keccak.team/files/TurboSHAKE.pdf)
50. Why does KangarooTwelve only use 12 rounds? \- Cryptography Stack Exchange, accessed September 12, 2025, [https://crypto.stackexchange.com/questions/46523/why-does-kangarootwelve-only-use-12-rounds](https://crypto.stackexchange.com/questions/46523/why-does-kangarootwelve-only-use-12-rounds)
51. What advantages does Keccak/SHA-3 have over BLAKE2? \- Cryptography Stack Exchange, accessed September 12, 2025, [https://crypto.stackexchange.com/questions/31674/what-advantages-does-keccak-sha-3-have-over-blake2](https://crypto.stackexchange.com/questions/31674/what-advantages-does-keccak-sha-3-have-over-blake2)
52. Comparison between this and KangarooTwelve and M14 · Issue \#19 · BLAKE3-team/BLAKE3 \- GitHub, accessed September 12, 2025, [https://github.com/BLAKE3-team/BLAKE3/issues/19](https://github.com/BLAKE3-team/BLAKE3/issues/19)
53. eBASH: ECRYPT Benchmarking of All Submitted Hashes, accessed September 12, 2025, [https://bench.cr.yp.to/ebash.html](https://bench.cr.yp.to/ebash.html)
54. SUPERCOP \- eBACS (ECRYPT Benchmarking of Cryptographic Systems), accessed September 12, 2025, [https://bench.cr.yp.to/supercop.html](https://bench.cr.yp.to/supercop.html)
55. XKCP/K12: XKCP-extracted code for KangarooTwelve (K12) \- GitHub, accessed September 12, 2025, [https://github.com/XKCP/K12](https://github.com/XKCP/K12)

gpt5-canvas-scribbles.md Normal file
@@ -0,0 +1,398 @@
# BLAKE3 Migration
---
## ~~Why touch the cryptographic foundation at all?~~
~~Performance isn't an academic detail — it's dramatic. On modern hardware, BLAKE3 runs an order of magnitude faster than SHA-512 or SHA-256. For example:~~
~~In benchmarks, BLAKE3 achieves \~6.8 GiB/s throughput on a single thread, compared to \~0.7 GiB/s for SHA-512. This headroom matters in a ledger system where *every object key is a hash*. Faster hashing reduces CPU load for consensus, verification, and replay. Here, "performance" primarily means faster **keylet** computation (deriving map/index keys from object components) and less compatibility overhead (LUT hits, trybothhashes), **not** improved data locality between neighboring objects.~~
~~Performance and modern cryptographic hygiene argue strongly for adopting BLAKE3. It's fast, parallelizable, and future-proof. But in this ledger system, the hash is not just a digest: it is the address of every object. Changing the hash function means changing the address of every single entry. This isn't like swapping an internal crypto primitive — it's a rekeying of the entire universe.~~
## Reality Check: BLAKE3 vs SHA-512 on ARM64 (Sept 2025)
**TL;DR: BLAKE3 migration complexity isn't justified by the actual performance gains.**
### Measured Performance (Xahau ledger #16940119)
- **Keylets (22-102 bytes)**: BLAKE3 runs at 0.68x the speed of SHA-512 (roughly 47% more time per hash)
- **Inner nodes (516 bytes)**: BLAKE3 runs at 0.52x the speed of SHA-512 (roughly 92% more time per hash)
- **Map traversal**: 59-65% of total time (not affected by hash choice)
- **Actual hashing**: Only 35-41% of total time
### Why BLAKE3 Underperforms
1. **Small inputs**: Median keylet is 35 bytes; SIMD overhead exceeds benefit
2. **2020 software vs 2025 hardware**: BLAKE3 NEON intrinsics vs OpenSSL 3.3.2's optimized SHA-512
3. **No parallelism**: Single-threaded SHAMap walks can't use BLAKE3's parallel design
4. **SIMD dependency**: Without NEON, BLAKE3 portable C is 2x slower than SHA-512
### The Verdict
With hashing only 35-41% of total time and BLAKE3 actually SLOWER on typical inputs, the migration would:
- Increase total validation time by ~10-15%
- Add massive complexity (LUTs, heterogeneous trees, compatibility layers)
- Risk consensus stability for negative performance gain
**Recommendation: Abandon BLAKE3 migration. Focus on map traversal optimization instead.**
## Hashes vs Indexes
* **Hashes as keys**: Every blob of data in the NodeStore is keyed by a hash of its contents. This makes the hash the *address* for retrieval.
* **Hashes as indexes**: In a SHAMap (the Merkle tree that represents ledger state), an `index` is derived by hashing stable identity components (like account ID + other static identifiers). This index determines the path through the tree.
* **Takeaway**: Hash = storage key. Index = map position. Both are 256-bit values, but they play different roles.
*Terminology note*: throughout, **keylet/key** = deterministic map/index key composition from object components; this is unrelated to users' cryptographic signing keys.
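A minimal sketch of that distinction, assuming a sha512Half-style 256-bit truncation of SHA-512 via OpenSSL and an illustrative (not authoritative) two-byte namespace tag, shows how an index is derived purely from stable identity components:

```cpp
// Sketch: an index is a hash of (namespace tag + stable identity), independent of
// the object's mutable contents. Names and the tag value are illustrative only.
#include <array>
#include <cstdint>
#include <cstring>
#include <vector>
#include <openssl/sha.h>

using uint256 = std::array<std::uint8_t, 32>;
using AccountID = std::array<std::uint8_t, 20>;

uint256 sha512Half(std::vector<std::uint8_t> const& preimage)
{
    std::uint8_t full[SHA512_DIGEST_LENGTH];
    SHA512(preimage.data(), preimage.size(), full);  // one-shot SHA-512
    uint256 out;
    std::memcpy(out.data(), full, out.size());       // keep the first 256 bits
    return out;
}

uint256 accountRootIndex(AccountID const& id)
{
    std::vector<std::uint8_t> preimage = {0x00, 0x61};  // illustrative namespace tag
    preimage.insert(preimage.end(), id.begin(), id.end());
    return sha512Half(preimage);  // the SLE's position in the state map
}
```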
## LUT at a glance
A **Lookup Table (LUT)** is an exact-key alias map used to bridge old and new addressing:
* **Purpose:** allow lookups by a legacy (old) key to resolve to an object stored under its canonical (new) key — or vice versa where strictly necessary.
* **Scope:** point lookups only (reads/writes by exact key). Iteration and ordering remain **canonical**; pagination via `next` after a marker requires careful handling (semantics TBD).
* **Population:** built during migration and optionally **rebuildable** from per-SLE cross-key fields (e.g., `sfLegacyKey` for move, or `sfBlake3Key` for non-move).
* **Directionality in practice:** after the flip you typically need **both directions**, but for different eras:
  * **Pre-cutover objects (stored at old keys):** maintain **`BLAKE3 → SHA512Half`** so new-style callers (BLAKE3) can reach old objects.
  * **Post-cutover objects (stored at new keys):** optionally offer a grace **`SHA512Half → BLAKE3`** alias so legacy callers can reach new objects. Time-box this.
**Rule of thumb:** annotate the **opposite side of storage** — if storage is **new** (post-move), annotate **old**; if storage is **old** (non-move), annotate **new**.
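A minimal sketch of the point-lookup contract described above (types and names are illustrative, not the actual NodeStore/SHAMap interfaces):

```cpp
// Exact-key alias LUT: canonical key first, then at most one alias hop.
#include <array>
#include <cstdint>
#include <map>
#include <optional>

using Key = std::array<std::uint8_t, 32>;
struct SLE {};  // stand-in for a ledger entry

std::map<Key, SLE> store;      // objects keyed by their canonical storage key
std::map<Key, Key> aliasLUT;   // e.g. BLAKE3 key -> SHA512Half key (new -> old)

std::optional<SLE> read(Key const& k)
{
    if (auto it = store.find(k); it != store.end())
        return it->second;
    if (auto a = aliasLUT.find(k); a != aliasLUT.end())
        if (auto it = store.find(a->second); it != store.end())
            return it->second;
    return std::nullopt;  // iteration/pagination never uses the alias path
}
```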
## What actually breaks if you “just change the hash”?!
Every ledger entry's key changes. That cascades into:
* **State tree**: SHAMap nodes are keyed by hash; every leaf and inner node address moves.
* **Directories**: owner dirs, book dirs, hook state dirs, NFT pages — all are lists of hashes, all must be rebuilt.
* **Order and proofs**: Succession, iteration, and proof-of-inclusion semantics all rely on canonical ordering of keys. Mixing old and new hashes destroys proof integrity.
* **Caches and history**: Node sharing between ledgers ceases to work; replay and verification of past data must know which hash function was active when.
## Lazy vs Big Bang
If you update tree hashes incrementally as state changes, you are effectively doing a **lazy migration**: slowly moving to the new hashing system over time. That implies heterogeneous trees and ongoing complexity. By contrast, a **big bang** migration rekeys everything in a single, well-defined event. Since roughly 50% of hashing compute goes into creating these keys, most of the performance win from BLAKE3 arrives when the generated keys for a given object are used. This can be achieved if the object is **in place at its new key**, **moved within the tree**, or is **reachable via an exact-key LUT that aliases old→new**.
*Note:* LUT specifics belong in **Move vs Non-Move** below. At a high level: aliasing can bridge old/new lookups; iteration/pagination semantics are TBD here and treated separately.
### Pros and Cons
**Lazy migration**
* **Pros**: Less disruptive; avoids one massive compute spike; spreads risk over time.
* **Cons**: Creates heterogeneous trees; complicates proofs and historical verification; requires bidirectional LUTs forever; analysts and tools must support mixed keyspaces.
**Big bang migration**
* **Pros**: Clean cutover at a known ledger; easier for analysts and tooling; no need to support mixed proofs long-term; maximizes BLAKE3 performance benefits immediately.
* **Cons**: One heavy compute event; requires strict consensus choreography; higher risk if validators drift or fail mid-migration.
It's important to distinguish between lazy vs big bang, and also between keys (addresses/indexes) vs hashes (content identifiers).
## Move vs Non-Move (what does “migrate” change?)
**Non-Move (annotate-only):** objects stay at old SHA512Half keys; add `sfBlake3Key` (or similar) recording the would-be BLAKE3 address; alias lookups via **new→old** LUT; iteration/proofs remain in old key order; minimal compute now, **permanent heterogeneity** and LUT dependence; little perf/ordering win.
**Move (rekey):** objects are physically rewritten under BLAKE3 keys either **on-touch** (per-tx or at **BuildLedger** end) or **all at once** (Big Bang). Requires an **old→new** LUT for compatibility; choose a place/time (per-tx vs BuildLedger vs Big Bang) and define the iteration contract (prefer canonical-only).
**Implications to weigh:**
* **LUT shape:** non-move needs **new→old** (often also old→new for markers); move prefers **old→new** (temporary). Sunsetting is only realistic in the Big Bang case; lazy variants may never fully converge.
* **Iteration/pagination:** canonical-only iteration keeps proofs stable; translating legacy markers implies a **bi-LUT** and more hot-path complexity.
* **Replay:** both need `hash_options{rules(), ledger_index, phase}`; move policies must be consensus-deterministic (see the sketch after this list).
* **Compute/ops:** non-move is cheap now but never converges; move concentrates work (per-tx, per-ledger, or one Big Bang) and actually delivers BLAKE3's **iteration/ordering** and **keylet-compute** benefits (not data-locality).
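A hedged sketch of what that context-bound plumbing might look like (field and function names are illustrative; the real `hash_options` may differ), emphasizing that the algorithm choice is a pure function of consensus-visible context:

```cpp
// Context-bound hashing sketch: the same (rules, ledger_index, phase) must always
// select the same algorithm, so replay reproduces historical ledgers exactly.
#include <cstdint>

enum class HashAlgo { SHA512Half, BLAKE3 };
enum class Phase { PreCutover, PostCutover };

struct hash_options
{
    bool blake3_amendment_enabled;  // stand-in for an amendment check via rules()
    std::uint32_t ledger_index;     // ledger being built or replayed
    Phase phase;                    // e.g. building now vs replaying history
};

HashAlgo pickAlgo(hash_options const& opts, std::uint32_t cutoverLedger)
{
    if (opts.blake3_amendment_enabled && opts.ledger_index >= cutoverLedger)
        return HashAlgo::BLAKE3;
    return HashAlgo::SHA512Half;
}
```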
### Choice axes (what / when / how)
* **What:** *Move* the object under BLAKE3 **or** *leave in place* and annotate (`sfBlake3Key`).
* **When:** at **end of txn** or in **BuildLedger** (alongside `updateNegativeUNL()` / `updateSkipList()`), or **all at once** (Big Bang).
* **How:** *All at once* requires special network conditions (quiet window + consensus hash); *on modification* spreads risk but prolongs heterogeneity.
* **Blob verification note:** a dual-hash “verify on link” walk works for mixed trees, but you need the same `rules()+phase` plumbing either way, so it doesn't materially change the engineering lift.
### Client compatibility & new entries
* **Reality:** flipping keylets changes what clients compute. Old clients may still derive SHA512Half; new clients may derive BLAKE3.
* **Lazy non-move (annotate-only):**
  * **Reads/updates:** accept BLAKE3 via the **new→old LUT**; legacy SHA512 keys keep working.
  * **Creates (policy choice):**
    * **Create-at-new (heterogeneous by design):** store under **BLAKE3** (the natural post-flip behavior). For **legacy callers**, provide a grace alias **`SHA512Half → BLAKE3`** for *new* entries; stamp `sfLegacyKey` (old) on creation so the alias can be rebuilt by a leaf scan.
    * *Create-at-old (alternative until swap):* store under **old** to keep the map homogeneous; if the request included a BLAKE3 key, treat it as a descriptor and translate. *Optional annotation:* add `sfBlake3Key` (new) to make a later `new→old` LUT rebuild trivial. *(In the **move**/post-swap case, annotate the opposite side: `sfLegacyKey` = old.)*
    * *Create-via-old-only:* require old keys for creates until swap (simpler server), and document it for SDKs.
    * *Note:* a LUT alone can't route a brand-new create — there's no mapping yet — so the server must compute the storage key from identity (old or new, per the policy) and record the opposite-side annotation for future aliasing.
* **Big Bang (move):** creates immediately use **BLAKE3** as canonical; provide a **`SHA512Half → BLAKE3`** grace alias for new objects; an **old→new** LUT supports stragglers reading old objects by legacy keys.
* **Bottom line:** you still need **`rules()` + phase** plumbing and an explicit **create policy**; don't pick a strategy based purely on “less plumbing”.
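To illustrate the create-at-new policy with opposite-side annotation (all names hypothetical; this is a sketch of the bookkeeping, not the actual ledger APIs):

```cpp
// Create-at-new under lazy non-move: store under the BLAKE3 key, stamp the legacy
// key on the entry, and register a time-boxed old->new grace alias.
#include <array>
#include <cstdint>
#include <map>

using Key = std::array<std::uint8_t, 32>;

struct SLE
{
    Key legacyKey{};  // sfLegacyKey-style annotation (old SHA512Half address)
};

std::map<Key, SLE> store;     // new entries keyed by canonical (BLAKE3) key
std::map<Key, Key> oldToNew;  // grace alias: SHA512Half -> BLAKE3, time-boxed

void createEntry(Key const& blake3Key, Key const& sha512HalfKey)
{
    SLE sle;
    sle.legacyKey = sha512HalfKey;               // lets the alias be rebuilt by a leaf scan
    store.emplace(blake3Key, sle);               // canonical storage under the new key
    oldToNew.emplace(sha512HalfKey, blake3Key);  // legacy callers resolve via the alias
}
```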
### Post-cutover lookup policy (directional LUT by era)
* **Old objects (pre-cutover, stored at old keys):** new-style callers use **BLAKE3** keys → resolve via **`BLAKE3 → SHA512Half`** (keep as long as needed; deprecate when safe).
* **New objects (post-cutover, stored at new keys):** legacy callers may supply **SHA512Half** → resolve via **`SHA512Half → BLAKE3`** *during a grace window*; plan a TTL/deprecation for this path.
* **Iteration/pagination:** always return the **canonical storage key** of the era (old for old objects, new for new objects). Document that markers are era-canonical; aliases are for **point lookups** only.
### Lazy non-move: LUT requirements (immediate and ongoing)
* If keylets emit **BLAKE3** keys before a physical swap, you must have a **complete `new→old` LUT** available at flip time. A cold-start empty LUT will cause immediate misses because objects still live at old addresses.
* The LUT must be **built during a quiet window** by walking the full state and computing BLAKE3 addresses; you cannot populate it “on demand” without global scans.
* **Persist the LUT**: typically a sidecar DB keyed by `BLAKE3 → SHA512Half`, or rely on per-SLE **new-side annotation** (`sfBlake3Key`) so any node can rebuild the LUT deterministically by a leaf scan. `sfBlake3Key` helps you rebuild; it does **not** remove the need for a ready-to-query LUT at flip.
* Expect to **carry the LUT indefinitely** in non-move. Its hit rate may drop over time only if you later migrate objects (or switch to Big Bang).
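A sketch of the deterministic rebuild path mentioned above, assuming every leaf carries an `sfBlake3Key`-style annotation (shapes and names are illustrative):

```cpp
// Rebuild the new->old LUT by a full leaf scan (non-move variant).
#include <array>
#include <cstdint>
#include <map>
#include <vector>

using Key = std::array<std::uint8_t, 32>;

struct Leaf
{
    Key sha512HalfKey;  // canonical storage key (old)
    Key blake3Key;      // annotation written at migration/creation time
};

std::map<Key, Key> rebuildNewToOldLUT(std::vector<Leaf> const& leaves)
{
    std::map<Key, Key> lut;
    for (auto const& leaf : leaves)
        lut.emplace(leaf.blake3Key, leaf.sha512HalfKey);  // BLAKE3 -> SHA512Half
    return lut;
}
```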
## Heterogeneous vs Homogeneous state trees
**Homogeneous** means a single canonical keyspace and ordering (one hash algorithm across the whole state tree). **Heterogeneous** means mixed keys/hashes coexisting (some SHA512Half, some BLAKE3), even if reads are made to “work.”
**Why this matters**
* **Proofs & ordering**: Homogeneous trees keep proofs simple and iteration stable. Heterogeneous trees complicate inclusion proofs and `succ()`/pagination semantics.
* **Read path**: With heterogeneity, you either guess (dual-hash walk), add **hints** (local "unused" nodestore bytes), or introduce **new prefixes** (network-visible). All add complexity.
* **Replay & determinism**: Homogeneous trees let `rules()`+`ledger_index` fully determine hashing. Heterogeneous trees force policy state (when/where items moved) to be consensus-deterministic and reproduced in replay.
* **Caches & sharing**: Node sharing across ledgers is cleaner in a homogeneous regime; heterogeneity reduces reuse and increases compute.
* **Operational risk**: Mixed eras inflate your attack and bug surface (LUT correctness, marker translation, proof ambiguity).
**How you end up heterogeneous**
* Lazy hashing or “annotate-only” lazy keys (non-move).
* Staged moves (on-touch) that never reach full coverage.
* Introducing new prefixes and treating both spaces as first-class for long periods.
**How to avoid it**
* **Big Bang** swap in `BuildLedger`, then canonical-only iteration under BLAKE3.
* Keep a narrow **old→new** LUT as a safety net (rebuildable from `sfLegacyKey`), and plan deprecation.
**If you must tolerate heterogeneity (temporarily)**
* Use **context-bound hashing** (`hash_options{rules(), ledger_index, phase, classifier}`) everywhere.
* Consider **local hint bytes** or **prefixes** only to remove guesswork; define a strict marker policy (normalize to canonical outputs) and accept perf overhead.
## Options matrix — migration + keylet policies
### 1) Migration strategy (what physically moves when)
| Strategy | What moves & when | Tree heterogeneity | LUT needs | Iteration / pagination | Replay & hashing context | Operational risk | Pros | Cons |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Big Bang (swap in one ledger, in BuildLedger)** | All SLEs rekeyed in a single, quiet, consensus-gated ledger; stamp `sfLegacyKey` | None after swap | **old→new** only (temporary; rebuildable from `sfLegacyKey`) | Immediately canonical under BLAKE3; simple markers | Straightforward (`rules()`, `ledger_index`, `phase` flip once) | One heavy compute event; needs strict choreography | Clean proofs & ordering; simplest for tools; fast path to perf win | Requires quiet period + consensus hash; “all-eggs-one-basket” ledger |
| **Lazy keys — moved, per-tx** | Touched SLEs are **moved** to BLAKE3 keys during tx commit | Long-lived | **old→new** and often **new→old** (for markers) | Mixed keys; must normalize or translate; highest complexity | Hardest: movement timing is per-tx; requires full `hash_options` everywhere | Low per-ledger spike, but constant complexity | Spreads compute over time | Permanent heterogeneity; iterator/marker headaches; error-prone |
| **Lazy keys — *not* moved, per-tx (annotate only)** | No SLEs move; touched entries get `sfBlake3Key` / annotation only | Permanent | **new→old** (lookups by BLAKE3 must alias to old), often also **old→new** if you normalize outputs | Iteration remains in **old** key order unless you add translation; markers inconsistent without a bi-LUT | Hard: you never converge; replay must honor historic “no-move” semantics | Low per-ledger spike | Zero relocation churn; simplest writes | You never get canonical BLAKE3 ordering/proofs; LUT forever; limited perf win |
| **Lazy keys — moved, BuildLedger** | Touched SLEs are **moved** at end of ledger in BuildLedger | Medium-lived | **old→new** (likely) and sometimes **new→old** (if you want legacy markers to resume cleanly) | Still mixed; easier to normalize to canonical at ledger boundary | Moderate: movement is per-ledger; still need `hash_options` | Lower spike than Big Bang; simpler than per-tx | Centralized move step; cleaner tx metadata | Still heterogeneous until coverage is high; LUT on hot paths |
| **Lazy keys — *not* moved, BuildLedger (annotate only)** | No SLEs move; annotate touched entries in BuildLedger only | Permanent | **new→old** (and possibly **old→new** if you normalize) | Iteration stays in **old** order; translation needed for consistency | Moderate: policy is per-ledger but never converges | Lowest spike | Cleanest ops; no relocation diffs | Same drawbacks as per-tx annotate-only: permanent heterogeneity and LUT dependence |
**Notes:**
* Prefer **canonical-only iteration** (return new keys) and accept legacy markers as input → reduces the need for a bidirectional LUT.
* If you insist on round-tripping legacy markers, you'll need a **bidirectional LUT** and iterator translation.
* For **annotate-only (non-move)** variants: if you choose **Policy C (flip globally at ledger n)**, you **must** pre-build a complete `new→old` LUT for the entire tree before the flip. To avoid this empty-LUT hazard, choose **Policy A (flip at swap)** until the physical move occurs.
#### 1a) Big Bang — non-move (alias-only) at a glance
| What moves & when | Tree heterogeneity | LUT needs | Iteration/pagination | Pros | Cons |
| --- | --- | --- | --- | --- | --- |
| **No storage move at cutover; global keylet flip; annotate all SLEs with `sfBlake3Key`; full `new→old` LUT ready or rebuildable by leaf scan** | Ledger map: **old** for legacy, **new** for new objects; NodeStore blobs: **full-tree rewrite** (choose a single blob-hash algo post-cutover to avoid guessing) | Permanent `new→old`; **rebuildable from `sfBlake3Key` by an optimized leaf parser** | Old order; document marker policy (no translation) | No **map index** relocation; flip is clean; **LUT always accessible**; rollback = behavior flip only if LUT retained | Proofs/ordering stay old; permanent LUT; **one-time I/O spike** from full-tree rewrite (mitigated by pre-flushing a background tree); no homogeneous BLAKE3 tree |
### 2) Keylet flip policy (what keylets *emit*)
| Policy | What keylets return | Empty-LUT risk | Need global LUT upfront? | Client-visible behavior | Pros | Cons |
| --- | --- | --- | --- | --- | --- | --- |
| **A. Flip at swap only** | Old keys pre-swap; new keys post-swap | None | No | Single flip; stable semantics | Simplest; no prep LUT window | Requires Big Bang or a near-equivalent swap moment |
| **B. Flip per-SLE (when migrated)** | New for migrated entries; old otherwise | None | No | Mixed outputs; must normalize iteration | No global LUT build; smoother ramp | Clients see a mixture unless normalized; still heterogeneous |
| **C. Flip globally at ledger n** | New everywhere from n | **High** if LUT empty | **Yes** (build in quiet period) | Clean switch for clients | Global behavior is uniform immediately | Must precompute `new→old` LUT; higher prep complexity |
### 3) Hashing decision representation (perf & memory)
| Option | What changes | Memory/Perf impact | ABI impact | Benefit |
| -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------- | --------------- | -------------------------------------------------- |
| **0. Context-bound keylets (recommended default)** | Keep returning 32-byte keys; keylets choose SHA-512Half vs BLAKE3 using a small `HashCtx` (`rules()`, `ledger_index`, `phase`, `classifier`) | Tiny branch; no heap; cache optional per-View | None | Avoids empty-LUT trap; simplest to roll out |
| **1. Thin symbolic descriptors (stack-only)** | Keylets can return a small descriptor; callers `resolve(desc, ctx)` immediately | Minimal; POD structs; optional tiny cache | None externally | Centralizes decision; testable; still lightweight |
| **2. Full symbolic (iterators/markers only)** | Iterators carry `{desc, resolved}` to re-resolve under different contexts | Small per-iterator cache | None externally | Makes pagination/replay robust without broad churn |
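
As a concrete illustration of Option 0, below is a minimal sketch of how a keylet helper could pick the digest from a small context struct. All names here (`HashCtx`, `MigrationPhase`, `pickDigest`, `blake3_256`) are assumptions for illustration, not the actual `hash_options` plumbing; keylets keep returning plain 32-byte keys either way.

```cpp
// Minimal sketch of "context-bound keylets" (Option 0). Names are illustrative.
enum class MigrationPhase { PreSwap, Swapping, PostSwap };

struct HashCtx
{
    Rules const& rules;           // which amendments are enabled
    std::uint32_t ledgerIndex;    // sequence of the ledger being read/built
    MigrationPhase phase;         // swap pending / in progress / complete
    int classifier;               // e.g. KEYLET_ACCOUNT, KEYLET_OFFER, ...
};

inline bool
useBlake3(HashCtx const& ctx)
{
    // Flip-at-swap policy: only emit BLAKE3 once the amendment is enabled
    // and the atomic swap has completed.
    return ctx.rules.enabled(featureBlake3Migration) &&
        ctx.phase == MigrationPhase::PostSwap;
}

template <class... Args>
uint256
pickDigest(HashCtx const& ctx, Args const&... args)
{
    return useBlake3(ctx) ? blake3_256(args...)   // assumed BLAKE3 wrapper
                          : sha512Half(args...);  // existing SHA-512 Half helper
}
```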
### 4) NodeStore hinting for heterogeneous reads (only if you *must* support mixed trees)
| Approach | Scope | Pros | Cons |
| ------------------------------------------- | --------------- | -------------------------------------------------- | ---------------------------------------------------------------------- |
| **No hints (dual-hash walk)** | Network-safe | Simple to reason about; no store changes | Costly: try-both-hashes while walking; awkward |
| **Local hint bytes (use 8–9 unused bytes)** | Local only | Eliminates guesswork on a node; cheap to implement | Not portable; doesn't show up in proofs; still need amendment plumbing |
| **New hash prefixes in blobs** | Network-visible | Clear namespace separation; easier debugging | Prefix explosion; code churn; proof/back-compat complexity |
### 5) Recommended defaults
* **Migration**: Big Bang in `BuildLedger` with quiet period + consensus hash; stamp `sfLegacyKey`.
* **Keylets**: Policy **A** (flip at swap) or **B** if you insist on staging; normalize iteration to canonical.
* **LUT**: **old→new** exact-key alias as a temporary safety net; rebuildable from `sfLegacyKey` (see the sketch after this list).
* **Hashing decision**: **Option 0 (context-bound keylets)**; reserve symbolics for iterators only if needed.
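
To make the **old→new** exact-key alias concrete, here is a minimal sketch of a canonical-first point lookup that only consults the LUT on a miss; the map type and function name are assumptions for illustration.

```cpp
// Sketch: canonical-first read with an old→new exact-key alias (assumed names).
// Legacy callers that still synthesize SHA-512 Half keys are served via the LUT;
// iteration stays canonical-only and never goes through this path.
std::shared_ptr<SLE const>
readWithAlias(
    ReadView const& view,
    std::map<uint256, uint256> const& oldToNew,   // rebuildable from sfLegacyKey
    Keylet const& k)
{
    if (auto sle = view.read(k))
        return sle;                                // canonical (BLAKE3) hit

    if (auto it = oldToNew.find(k.key); it != oldToNew.end())
        return view.read(Keylet{k.type, it->second});   // legacy key aliased

    return nullptr;                                // genuinely absent
}
```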
## Heterogeneous trees and possible NodeStore tweaks
When loading from the NodeStore with a root hash, in principle you could walk down the tree and try hashing each blob's contents to check whether it matches. At each link, you verify the blob by recomputing its hash. In theory you could even try both SHA-512 Half and BLAKE3 until the structure links up. This would eventually work, but it is inefficient.
To avoid that inefficiency, one idea is to tweak the NodeStore blobs themselves. There are 8–9 unused bytes (currently stored as zeros) that could be repurposed as a hint. Another option is to change the stored hash prefixes, which would act as explicit namespace markers separating SHA-512 and BLAKE3 content. With the ledger index also available, heuristics could guide which algorithm to use. But none of this removes the need for amendment plumbing; you still have to know whether the cutover has occurred.
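For illustration, a sketch of the "try both hashes" verification step that the hint bytes or new prefixes are meant to avoid; `blake3_256` is an assumed 256-bit BLAKE3 wrapper, not an existing helper.

```cpp
// Sketch: identify which algorithm produced a blob's key when its era is
// unknown (the inefficient fallback discussed above).
enum class BlobHashAlgo { Sha512Half, Blake3 };

std::optional<BlobHashAlgo>
identifyBlobAlgo(uint256 const& expectedKey, Slice blob)
{
    if (sha512Half(blob) == expectedKey)
        return BlobHashAlgo::Sha512Half;
    if (blake3_256(blob) == expectedKey)      // assumed BLAKE3 wrapper
        return BlobHashAlgo::Blake3;
    return std::nullopt;                      // corrupt or mismatched blob
}
```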
### Versioned prefixes (use the spare byte)
**Goal:** eliminate guessing in mixed/historical contexts by making the blob self-describing.
* **Design:** keep the 3-letter class tag and use the 4th byte as an **algorithm version**.
```cpp
enum class HashPrefix : std::uint32_t {
    innerNode_v0 = detail::make_hash_prefix('M', 'I', 'N', 0x00),  // SHA-512 Half
    innerNode_v1 = detail::make_hash_prefix('M', 'I', 'N', 0x01),  // BLAKE3
    leafNode_v0  = detail::make_hash_prefix('M', 'L', 'N', 0x00),
    leafNode_v1  = detail::make_hash_prefix('M', 'L', 'N', 0x01),
    // add tx/dir variants only if their blob hashing changes too
};
```
* **Read path:** fetch by hash as usual; after you read the blob, the prefix **discriminates** the hashing algorithm used to produce that key. No dual-hash trial is needed to verify/link.
* **Write path:** when (re)serializing a node, choose the version byte from `hash_options.rules()/phase`; parent/child content stays consistent because each node carries its own version.
* **Pros:** zero-guess verification; offline tools can parse blobs without external context; makes mixed eras debuggable.
* **Cons:** network-visible change (new prefixes); code churn where prefixes are assumed fixed; doesn't solve keylet/index aliasing or iteration semantics — it only removes blob-hash guessing.
**Note:** you can also avoid guessing entirely by keeping **one blob-hash algorithm per ledger** (homogeneous per-ledger eras). Then `rules()+ledger_index` suffices. Versioned prefixes mainly help offline tools and any design that tolerates intra-ledger mixing.
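For completeness, a minimal sketch of how the hypothetical `detail::make_hash_prefix` overload used above could pack the 3-letter tag and the version byte; the byte order is an assumption and would have to match how existing prefixes are serialized into blobs.

```cpp
namespace detail {

// Pack a 3-letter class tag and a trailing algorithm-version byte into the
// 32-bit prefix word (illustrative; existing prefixes leave this byte zero).
constexpr std::uint32_t
make_hash_prefix(char a, char b, char c, std::uint8_t version)
{
    return (static_cast<std::uint32_t>(a) << 24) +
        (static_cast<std::uint32_t>(b) << 16) +
        (static_cast<std::uint32_t>(c) << 8) +
        static_cast<std::uint32_t>(version);
}

}  // namespace detail
```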
### Lazy migration headaches
If you attempt a lazy migration, you must decide how keys are rehashed. Is it done during metadata creation at the end of transactions? Do you rely on a LUT to map between new and old indexes? If so, where is this LUT state stored? Another idea is to embed a `LedgerIndexBlake3` in entries, so that keylet helpers can create new indexes while CRUD operations translate through a LUT. But this complicates pagination markers and functions like `ReadView::succ()` that return natural keys. You risk situations where the system must be aware of multiple keys per entry.
Questions like pagination markers and `ReadView::succ()` make this even thornier. One approach might be to encode the hash type in the LUT, and maintain it bidirectionally, so that when iteration returns a canonical key it can be translated back to the old form if needed. But this doubles the complexity and still forces every path to be LUT-aware.
By contrast, in the **Big Bang** version the LUT is just a safety net, handling things that could not be automatically rewritten. This is simpler for analysts and avoids perpetual cross-key complexity.
### Why it feels like a headache
Trying to lazily migrate keys means constantly juggling questions:
* Do you move items immediately when the amendment is enabled, or only on first touch?
* If you move them, when exactly: during metadata creation, or during BuildLedger along with the skip list?
* How do you keep CRUD ops working while also updating LUT state?
* How do you handle pagination markers and `succ()` consistently if multiple keys exist? You would need a bidirectional LUT.
Every option adds complexity, requires bidirectional LUTs, and forces awareness of dual keyspaces everywhere. This is why the lazy path feels like a perpetual headache, while the Big Bang keeps the pain contained to one well-known cutover.
## The Big Bang
From here onward, we focus on the **Big Bang** approach (one-ledger atomic rekey). Lazy/staged variants are summarized above.
### Why Big Bang is preferred here
* **Homogeneous immediately:** one canonical keyspace the very next ledger → simple proofs, stable iteration/pagination, no dual-key semantics.
* **No empty-LUT window:** keylets flip at the swap; the LUT is **old→new** only, narrow in scope, and can realistically be deprecated.
* **Deterministic & replay-friendly:** a single, well-known cutover ledger anchors tooling and historical verification.
* **Operationally contained risk:** compute is concentrated into the quiet window with explicit consensus checkpoints (single or double), not smeared across months.
* **Cleaner dev/ops surface:** fewer code paths need LUT/translation logic; easier to reason about `succ()`/markers and caches.
### Variant: Big Bang "non-move" (alias-only swap)
**What it is:** at the cutover ledger, **annotate the entire state tree** by stamping every SLE with its BLAKE3 address (e.g., `sfBlake3Key`). **Do not** rewrite storage keys. During the quiet window, pre-build a complete `new→old` LUT **or** rely on the new field so any node can rebuild the LUT deterministically by scanning leaves with an optimized parser. Flip keylets to emit BLAKE3. Optionally commit a small on-ledger **annotation/LUT commitment hash** in `MigrationState` so operators can verify their sidecar.
**How it behaves:** point lookups by BLAKE3 resolve via the LUT; writes/erases resolve to the canonical **old** storage key before touching disk; **new objects** are stored under **BLAKE3** keys (post-flip); legacy callers may be served by a grace **`SHA512Half → BLAKE3`** alias for *new* objects. Iteration/pagination remain in the old order for legacy entries (document marker policy).
**I/O reality & mitigation:**
* Annotating every leaf **changes its bytes**, forcing a **full-tree NodeStore rewrite** (leaf blob hashes change; inner nodes update). This is a **mass write**, even though map indexes don't relocate.
* Mitigate the spike by **streaming/staged flush** of the staging tree during BuildLedger (chunked passes), backpressure on caches, and rate-limited node writes; total bytes remain ~"rewrite the tree once."
**LUT reconstruction paths** (a sketch follows this list):
* **From annotation (fastest):** for each leaf, read `sfBlake3Key` and the current (old) key; record `BLAKE3 → old`.
* **From recompute (belt-and-suspenders):** recompute BLAKE3 via keylet helpers from identity components and pair it with the observed old key.
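
A sketch of the annotation-based rebuild, assuming the proposed `sfBlake3Key` field exists on every leaf; the iteration style mirrors ordinary `ReadView` traversal, and the map type is an illustrative choice.

```cpp
// Sketch: rebuild the new→old LUT by scanning annotated leaves.
// sfBlake3Key is the proposed (not yet existing) annotation field.
std::map<uint256, uint256>
rebuildLutFromAnnotation(ReadView const& view)
{
    std::map<uint256, uint256> newToOld;
    for (auto const& sle : view.sles)                // walk all state leaves
    {
        if (sle->isFieldPresent(sfBlake3Key))
            newToOld.emplace(sle->getFieldH256(sfBlake3Key), sle->key());
    }
    return newToOld;
}
```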
**Pros:** no **map index** relocation for legacy entries; minimal end-user surprise; clean flip semantics; **LUT always reconstructible** from the annotated tree; **rollback is behavioral-only if the LUT is retained**.
**Cons:** ordering/proofs remain old indefinitely; LUT becomes permanent; you forgo a homogeneous BLAKE3 tree and its simplifications; **full-tree NodeStore rewrite** (leaf annotation changes bytes → new blob hashes → inner nodes update) causing a one-time I/O spike.
**Rollback reality:** Once clients rely on BLAKE3 keys on the wire, a "rollback" without a LUT breaks them. Practical rollback means flipping keylet behavior back to SHA-512Half **and** continuing to serve BLAKE3 lookups via the LUT indefinitely (or performing a reverse migration). In other words, rollback is only "easy" if you accept a **permanent LUT**.
**When to pick:** you want Big Bang's clean flip and operational containment, but can't (or don't want to) rewrite the entire state tree; you still want a deterministic, cheap way to rebuild the LUT by scanning.
### How to message this (without scaring users)
**Elevator pitch**
> We're flipping key derivation to BLAKE3 for *new* addresses, but we're **not relocating existing entries**. We annotate the tree in a maintenance window, so old data stays where it is, new data goes to BLAKE3 addresses, and both key forms work via an alias. Transactions, TxIDs, and signatures don't change.
**What users/operators should expect**
* **No surprise breakage:** Old clients that synthesize SHA-512Half keys still read old objects; new clients can use BLAKE3 keys everywhere (old objects resolve via the alias).
* **New vs old objects:** Old objects remain at their old locations; **new objects** are stored at **BLAKE3** locations. A **grace alias** can accept SHA-512Half for *new* objects for a limited time.
* **Ordering/proofs unchanged for old entries:** Iteration order and proofs remain canonical-old for legacy entries. No bidirectional iteration translation.
* **TxIDs & signing stay the same:** Transaction IDs and signing digests are **unchanged**; do **not** hand-derive ledger indexes; use the keylet APIs.
* **One-time write spike (planned):** Annotating every leaf causes a **single full-tree blob rewrite** during the quiet window; we stage/stream this as part of `BuildLedger`.
**Soundbite**
> *"Not a scary rekey-everything rewrite."* It's a one-time annotation and an API flip: old stays reachable, new is faster, and we give legacy callers a grace window.
### Decision & next steps (short list)
1. **Amendment & timing:** finalize `featureBlake3Migration`, `MIGRATION_TIME`, and quiet-period length.
2. **BuildLedger swap/annotate pass:** implement a two-pass **rekey** (plan → commit), **or** a two-pass **annotate** (stamp `sfBlake3Key` on all SLEs). For rekey, stamp `sfLegacyKey` and materialize an **old→new** LUT; for non-move, stamp `sfBlake3Key` and materialize a **new→old** LUT (both rebuildable by leaf scan).
3. **API rules:** reads/writes = canonical-first, LUT-on-miss (point lookups only); **iteration is canonical-only**; document marker semantics.
4. **Hash context plumbing:** ensure `hash_options{rules(), ledger_index, phase, classifier}` are available down to `SHAMap::getHash()` and relevant callers.
5. **Consensus choreography:** pick a **single** vs **double** hash checkpoint; wire a pseudo-tx for the pre-hash if using the two-step flow.
6. **Telemetry & deprecation:** ship metrics for LUT hit-rate and schedule a sunset once hits are negligible.
7. **Test plan:** simulate slow validators, partial LUT rebuilds, replay across the swap, and hook workloads with hardcoded keys.
## Governance first: permission to cut over
Such a migration cannot be unilateral. An amendment (`featureBlake3Migration`) acts as the governance switch, enabling the network to authorize the cutover. This amendment does not itself rekey the world, but it declares consensus intent: from a certain point, ledgers may be rebuilt under the new rules.
A pseudo-transaction (e.g. `ttHASH_MIGRATION`) provides the on-ledger coordination. It marks the trigger point, updates the migration state SLE, and ensures every validator knows exactly *when* and *what* to execute.
## Why not just do it in the pseudo-transaction?
A naive attempt to treat the entire migration as a simple pseudo-transaction — a one-off entry applied like any other — would explode into metadata churn, duplicate entries, and lost referential integrity. The scale of rekeying every SLE makes it unsuitable for a normal transaction context; it has to run in a special execution venue like `BuildLedger` to remain atomic and manageable.
## Choose the battlefield: BuildLedger
The right place to run the migration is inside `BuildLedger` — after applying the (quiet) transaction set, and before finalization. This avoids flooding transaction metadata with millions of deletes and creates, and guarantees atomicity: one ledger before = SHA-512 Half; one ledger after = BLAKE3.
This is also exactly where other ledger-maintenance updates happen: for example `updateNegativeUNL()` runs when processing a flag ledger if the feature is enabled, and `updateSkipList()` is invoked just before flushing SHAMap nodes to the NodeStore. By piggybacking the migration here, it integrates cleanly into the existing lifecycle:
```cpp
if (built->isFlagLedger() && built->rules().enabled(featureNegativeUNL))
{
    built->updateNegativeUNL();
}

OpenView accum(&*built);
applyTxs(accum, built);
accum.apply(*built);

built->updateSkipList();

// Flush modified SHAMap nodes to NodeStore
built->stateMap().flushDirty(hotACCOUNT_NODE);
built->txMap().flushDirty(hotTRANSACTION_NODE);
built->unshare();
```
By inserting the BLAKE3 migration pass into this sequence, it runs atomically alongside the skip list and NegativeUNL updates, ensuring the new canonical tree is finalized consistently.
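A hedged placement sketch follows; the helper names `migrationPending` and `migrateStateToBlake3` are hypothetical, but the idea is that the pass runs after the quiet-period transaction set is applied and before `updateSkipList()`.

```cpp
OpenView accum(&*built);
applyTxs(accum, built);                 // quiet period: effectively empty tx set
accum.apply(*built);

// Hypothetical insertion point for the Big Bang pass (names are assumptions):
if (built->rules().enabled(featureBlake3Migration) && migrationPending(*built))
    migrateStateToBlake3(*built);       // two-pass rekey or annotate-only pass

built->updateSkipList();
```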
## Hashing and consensus choreography
It may make sense to stretch the choreography into more than one consensus checkpoint, especially given the amount of compute involved. A possible flow:
* **Quiet period** — block transactions so everyone is aligned.
* **Phase 1: Hash the static tree** — compute a BLAKE3 hash of the ledger state, excluding churny structures like skip lists and the migration state.
* **Consensus** — validators agree on this static-hash checkpoint.
* **Phase 2: Hash the full tree** — compute the full state tree hash under BLAKE3.
* **Consensus** — converge again on the complete view.
* **Atomic swap** — only after both steps succeed, rewrite the ledger under new keys.
This extra step could make it easier for validators to stay in sync without network drift, because they checkpoint on a smaller, stable hash before tackling the full-tree rebuild. It reduces wasted compute if things diverge. The downside is protocol complexity: two ballots instead of one. But given the gnarliness of concurrent full-tree rekeying, a double consensus phase could be safer in practice.
Supporting this implies the hash function must be aware of more than just `ledger_index`; it also needs `rules()` (to know if the amendment is enabled) and an explicit state flag indicating whether the swap is pending, in progress, or complete. To safely support background builds of multiple tree variants, `hash_options` must be plumbed everywhere — from `SHAMap::getHash()` down into all call sites, and even up into callers.
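That state flag could be as small as an enum carried alongside `rules()` and `ledger_index` in `hash_options` and mirrored in the `MigrationState` SLE; a sketch under that assumption (names are illustrative):

```cpp
// Sketch of the swap-state flag the hashing context needs to see (assumed names).
enum class SwapState : std::uint8_t {
    NotStarted,    // amendment enabled, but MIGRATION_TIME not yet reached
    Quiet,         // quiet period: user/pseudo transactions blocked
    StaticHashed,  // phase-1 checkpoint agreed (static tree, minus churny parts)
    FullHashed,    // phase-2 checkpoint agreed (full state tree under BLAKE3)
    Swapped        // atomic rekey committed; the BLAKE3 tree is canonical
};
```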
## Two-pass rekey with a safety rope
* **Pass 1 (plan)**: Walk the state tree, compute new BLAKE3 keys, build an in-memory LUT (old→new), and stamp each SLE with its legacy key (`sfLegacyKey`).
* **Pass 2 (commit)**: Rebuild the SHAMap with BLAKE3 keys, rewrite all directories and secondary structures from the LUT, and finalize the new canonical tree.
This two-pass structure ensures determinism and lets every validator converge on the same new map without risk of divergence.
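A compressed sketch of the two passes; `blake3KeyFor`, `cloneWithKey`, `rewriteDirectoryRefs`, and `insertIntoNewMap` are assumed helper names, and the real passes would run against the staging SHAMap inside `BuildLedger`.

```cpp
// Sketch of the two-pass rekey (all helpers are assumed names).
void
bigBangRekey(ReadView const& view, SHAMap& newStateMap)
{
    // Pass 1 (plan): compute every leaf's BLAKE3 key and build the old→new LUT.
    std::map<uint256, uint256> oldToNew;
    for (auto const& sle : view.sles)
        oldToNew.emplace(sle->key(), blake3KeyFor(*sle));

    // Pass 2 (commit): rebuild under the new keys, stamping sfLegacyKey and
    // rewriting directories / secondary structures through the LUT.
    for (auto const& sle : view.sles)
    {
        auto rekeyed = cloneWithKey(*sle, oldToNew.at(sle->key()));
        rekeyed->setFieldH256(sfLegacyKey, sle->key());   // proposed field
        rewriteDirectoryRefs(*rekeyed, oldToNew);
        insertIntoNewMap(newStateMap, rekeyed);
    }
}
```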
## Keep consensus boring during the scary bit
Migration must not race against normal transaction flow. The procedure anchors on **network time**, not ledger index. Once a ledger closes with `closeTime ≥ MIGRATION_TIME`, the network enters a quiet period: all user and pseudo-transactions are blocked, only trivial skip list mechanics advance. During this window, everyone builds the same hash in the background.
When consensus converges on the special BLAKE3 hash (excluding skip lists and migration state), it appears in a validated ledger. In the next ledger, the atomic swap happens — one big bang, then back to normal life.
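A minimal sketch of the trigger, assuming a hypothetical `MIGRATION_TIME` constant and a `migrationComplete` helper that inspects the `MigrationState` SLE:

```cpp
// Hypothetical quiet-period trigger: anchored on network close time, not ledger
// index, and checked as each ledger closes.
bool
quietPeriodActive(ReadView const& ledger)
{
    return ledger.rules().enabled(featureBlake3Migration) &&
        ledger.info().closeTime >= MIGRATION_TIME &&
        !migrationComplete(ledger);   // assumed helper reading MigrationState
}
```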
## Owning the ugly edges (hooks and hardcoded keys)
Hooks may carry hardcoded 32-byte constants. Detecting them with static analysis is brittle; runtime tracing is too heavy. Instead, the LUT strategy provides a compatibility shim: lookups can still resolve old keys, while all new creations require canonical BLAKE3 keys. Over time, policy can deprecate this fallback.
---

View File

@@ -0,0 +1,99 @@
# BLAKE3 vs SHA-512 Performance Analysis for Ripple Data Structures
## Executive Summary
This document presents empirical performance comparisons between BLAKE3 and SHA-512 (specifically SHA512Half) when hashing Ripple/Xahau blockchain data structures. Tests were conducted on Apple Silicon (M-series) hardware using real-world data distributions.
## Test Environment
- **Platform**: Apple Silicon (ARM64)
- **OpenSSL Version**: 1.1.1u (likely without ARMv8.2 SHA-512 hardware acceleration)
- **BLAKE3**: C reference implementation with NEON optimizations
- **Test Data**: Production ledger #16940119 from Xahau network
## Results by Data Type
### 1. Keylet Lookups (22-102 bytes, 35-byte weighted average)
Keylets are namespace discriminators used for ledger lookups. The SHAMap requires high-entropy keys for balanced tree structure, necessitating cryptographic hashing even for small inputs.
**Distribution:**
- 76,478 ACCOUNT lookups (22 bytes)
- 41,740 HOOK lookups (22 bytes)
- 19,939 HOOK_STATE_DIR lookups (54 bytes)
- 17,587 HOOK_DEFINITION lookups (34 bytes)
- 17,100 HOOK_STATE lookups (86 bytes)
- Other types: ~15,000 operations (22-102 bytes)
**Performance (627,131 operations):**
- **BLAKE3**: 128 ns/hash, 7.81M hashes/sec
- **SHA-512**: 228 ns/hash, 4.38M hashes/sec
- **Speedup**: 1.78x
### 2. Leaf Node Data (167-byte average)
Leaf nodes contain serialized ledger entries (accounts, trustlines, offers, etc.). These represent the actual state data in the ledger.
**Distribution:**
- 626,326 total leaf nodes
- 104.6 MB total data
- Types: AccountRoot (145k), DirectoryNode (118k), RippleState (115k), HookState (124k), URIToken (114k)
**Performance (from production benchmark):**
- **SHA-512**: 446 ns/hash, 357 MB/s (measured)
- **BLAKE3**: ~330 ns/hash, 480 MB/s (projected)
- **Expected Speedup**: ~1.35x
### 3. Inner Nodes (516 bytes exactly)
Inner nodes contain 16 child hashes (32 bytes each) plus a 4-byte prefix. These form the Merkle tree structure enabling cryptographic proofs.
**Distribution:**
- 211,364 inner nodes
- 104.1 MB total data (nearly equal to leaf data volume)
**Performance (211,364 operations):**
- **BLAKE3**: 898 ns/hash, 548 MB/s
- **SHA-512**: 1081 ns/hash, 455 MB/s
- **Speedup**: 1.20x
## Overall Impact Analysis
### Current System Profile
From production measurements, the ledger validation process shows:
- **Map traversal**: 47% of time
- **SHA-512 hashing**: 53% of time
Within the hashing time specifically:
- **Keylet lookups**: ~50% of hashing time
- **Leaf/inner nodes**: ~50% of hashing time
### Projected Improvement with BLAKE3
Given the measured speedups:
- Keylet operations: 1.78x faster → 28% time reduction
- Leaf operations: 1.35x faster → 26% time reduction
- Inner operations: 1.20x faster → 17% time reduction
**Net improvement**: ~20-25% reduction in total hashing time, or **10-13% reduction in overall validation time**.
## Key Observations
1. **Small Input Performance**: BLAKE3 shows its greatest advantage (1.78x) on small keylet inputs where function call overhead dominates.
2. **Diminishing Returns**: As input size increases to SHA-512's block size (128 bytes) and multiples thereof, the performance gap narrows significantly.
3. **Architectural Constraint**: The SHAMap design requires cryptographic hashing for all operations to maintain high-entropy keys, preventing optimization through non-cryptographic alternatives.
4. **Implementation Effort**: Transitioning from SHA-512 to BLAKE3 would require:
- Updating all hash generation code
- Maintaining backward compatibility
- Extensive testing of consensus-critical code
- Potential network upgrade coordination
## Conclusion
BLAKE3 offers measurable performance improvements over SHA-512 for Ripple data structures, particularly for small inputs. However, the gains are modest (1.2-1.78x depending on input size) rather than revolutionary. With map traversal consuming nearly half the total time, even perfect hashing would only double overall performance.
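A back-of-the-envelope Amdahl's-law check of that claim, using the measured 47% traversal share:

```latex
S_{\text{overall}} \;=\; \frac{1}{0.47 + \dfrac{0.53}{S_{\text{hash}}}}
\qquad\Longrightarrow\qquad
\lim_{S_{\text{hash}}\to\infty} S_{\text{overall}} \;=\; \frac{1}{0.47} \approx 2.1
```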
For keylet operations specifically, the 1.78x speedup is significant given that keylet hashing accounts for approximately 50% of all hashing time. However, the measured improvements must be weighed against the engineering effort and risk of modifying consensus-critical cryptographic primitives. A 10-13% overall performance gain may not justify the migration complexity unless combined with other architectural improvements.

29
hook/generate_sfcodes.sh Executable file
View File

@@ -0,0 +1,29 @@
#!/bin/bash
RIPPLED_ROOT="../src/ripple"
echo '// For documentation please see: https://xrpl-hooks.readme.io/reference/'
echo '// Generated using generate_sfcodes.sh'
cat $RIPPLED_ROOT/protocol/impl/SField.cpp | grep -E '^CONSTRUCT_' |
sed 's/UINT16,/1,/g' |
sed 's/UINT32,/2,/g' |
sed 's/UINT64,/3,/g' |
sed 's/HASH128,/4,/g' |
sed 's/HASH256,/5,/g' |
sed 's/UINT128,/4,/g' |
sed 's/UINT256,/5,/g' |
sed 's/AMOUNT,/6,/g' |
sed 's/VL,/7,/g' |
sed 's/ACCOUNT,/8,/g' |
sed 's/OBJECT,/14,/g' |
sed 's/ARRAY,/15,/g' |
sed 's/UINT8,/16,/g' |
sed 's/HASH160,/17,/g' |
sed 's/UINT160,/17,/g' |
sed 's/PATHSET,/18,/g' |
sed 's/VECTOR256,/19,/g' |
sed 's/UINT96,/20,/g' |
sed 's/UINT192,/21,/g' |
sed 's/UINT384,/22,/g' |
sed 's/UINT512,/23,/g' |
grep -Eo '"([^"]+)", *([0-9]+), *([0-9]+)' |
sed 's/"//g' | sed 's/ *//g' | sed 's/,/ /g' |
awk '{print ("#define sf"$1" (("$2"U << 16U) + "$3"U)")}'

View File

@@ -637,43 +637,55 @@ int64_t hook(uint32_t r)
{
previous_member[0] = 'V';
for (int i = 1; GUARD(32), i < 32; ++i)
for (int tbl = 1; GUARD(2), tbl <= 2; ++tbl)
{
previous_member[1] = i < 2 ? 'R' : i < 12 ? 'H' : 'S';
previous_member[2] =
i == 0 ? 'R' :
i == 1 ? 'D' :
i < 12 ? i - 2 :
i - 12;
uint8_t vote_key[32];
if (state(SBUF(vote_key), SBUF(previous_member)) == 32)
for (int i = 0; GUARD(66), i < 32; ++i)
{
uint8_t vote_count = 0;
previous_member[1] = i < 2 ? 'R' : i < 12 ? 'H' : 'S';
previous_member[2] =
i == 0 ? 'R' :
i == 1 ? 'D' :
i < 12 ? i - 2 :
i - 12;
previous_member[3] = tbl;
// find and decrement the vote counter
vote_key[0] = 'C';
vote_key[1] = previous_member[1];
vote_key[2] = previous_member[2];
if (state(&vote_count, 1, SBUF(vote_key)) == 1)
uint8_t vote_key[32] = {};
uint8_t ts =
previous_member[1] == 'H' ? 32 : // hook topics are a 32 byte hook hash
previous_member[1] == 'S' ? 20 : // account topics are a 20 byte account ID
8; // reward topics are an 8 byte le xfl
uint8_t padding = 32 - ts;
if (state(vote_key + padding, ts, SBUF(previous_member)) == ts)
{
// if we're down to 1 vote then delete state
if (vote_count <= 1)
{
ASSERT(state_set(0,0, SBUF(vote_key)) == 0);
trace_num(SBUF("Decrement vote count deleted"), vote_count);
}
else // otherwise decrement
{
vote_count--;
ASSERT(state_set(&vote_count, 1, SBUF(vote_key)) == 1);
trace_num(SBUF("Decrement vote count to"), vote_count);
}
}
uint8_t vote_count = 0;
// delete the vote entry
ASSERT(state_set(0,0, SBUF(previous_member)) == 0);
trace(SBUF("Vote entry deleted"), vote_key, 32, 1);
// find and decrement the vote counter
vote_key[0] = 'C';
vote_key[1] = previous_member[1];
vote_key[2] = previous_member[2];
vote_key[3] = tbl;
if (state(&vote_count, 1, SBUF(vote_key)) == 1)
{
// if we're down to 1 vote then delete state
if (vote_count <= 1)
{
ASSERT(state_set(0,0, SBUF(vote_key)) == 0);
trace_num(SBUF("Decrement vote count deleted"), vote_count);
}
else // otherwise decrement
{
vote_count--;
ASSERT(state_set(&vote_count, 1, SBUF(vote_key)) == 1);
trace_num(SBUF("Decrement vote count to"), vote_count);
}
}
// delete the vote entry
ASSERT(state_set(0,0, SBUF(previous_member)) == 0);
trace(SBUF("Vote entry deleted"), vote_key, 32, 1);
}
}
}

View File

@@ -1,5 +1,5 @@
/**
* These are helper macros for writing hooks, all of them are optional as is including hookmacro.h at all
* These are helper macros for writing hooks, all of them are optional as is including macro.h at all
*/
#include <stdint.h>

View File

@@ -60,7 +60,10 @@
#define sfBurnedNFTokens ((2U << 16U) + 44U)
#define sfHookStateCount ((2U << 16U) + 45U)
#define sfEmitGeneration ((2U << 16U) + 46U)
#define sfLockCount ((2U << 16U) + 47U)
#define sfLockCount ((2U << 16U) + 49U)
#define sfFirstNFTokenSequence ((2U << 16U) + 50U)
#define sfXahauActivationLgrSeq ((2U << 16U) + 96U)
#define sfImportSequence ((2U << 16U) + 97U)
#define sfRewardTime ((2U << 16U) + 98U)
#define sfRewardLgrFirst ((2U << 16U) + 99U)
#define sfRewardLgrLast ((2U << 16U) + 100U)
@@ -80,12 +83,15 @@
#define sfHookInstructionCount ((3U << 16U) + 17U)
#define sfHookReturnCode ((3U << 16U) + 18U)
#define sfReferenceCount ((3U << 16U) + 19U)
#define sfTouchCount ((3U << 16U) + 97U)
#define sfAccountIndex ((3U << 16U) + 98U)
#define sfAccountCount ((3U << 16U) + 99U)
#define sfRewardAccumulator ((3U << 16U) + 100U)
#define sfEmailHash ((4U << 16U) + 1U)
#define sfTakerPaysCurrency ((10U << 16U) + 1U)
#define sfTakerPaysIssuer ((10U << 16U) + 2U)
#define sfTakerGetsCurrency ((10U << 16U) + 3U)
#define sfTakerGetsIssuer ((10U << 16U) + 4U)
#define sfTakerPaysCurrency ((17U << 16U) + 1U)
#define sfTakerPaysIssuer ((17U << 16U) + 2U)
#define sfTakerGetsCurrency ((17U << 16U) + 3U)
#define sfTakerGetsIssuer ((17U << 16U) + 4U)
#define sfLedgerHash ((5U << 16U) + 1U)
#define sfParentHash ((5U << 16U) + 2U)
#define sfTransactionHash ((5U << 16U) + 3U)
@@ -120,6 +126,9 @@
#define sfOfferID ((5U << 16U) + 34U)
#define sfEscrowID ((5U << 16U) + 35U)
#define sfURITokenID ((5U << 16U) + 36U)
#define sfGovernanceFlags ((5U << 16U) + 99U)
#define sfGovernanceMarks ((5U << 16U) + 98U)
#define sfEmittedTxnID ((5U << 16U) + 97U)
#define sfAmount ((6U << 16U) + 1U)
#define sfBalance ((6U << 16U) + 2U)
#define sfLimitAmount ((6U << 16U) + 3U)
@@ -136,6 +145,9 @@
#define sfNFTokenBrokerFee ((6U << 16U) + 19U)
#define sfHookCallbackFee ((6U << 16U) + 20U)
#define sfLockedBalance ((6U << 16U) + 21U)
#define sfBaseFeeDrops ((6U << 16U) + 22U)
#define sfReserveBaseDrops ((6U << 16U) + 23U)
#define sfReserveIncrementDrops ((6U << 16U) + 24U)
#define sfPublicKey ((7U << 16U) + 1U)
#define sfMessageKey ((7U << 16U) + 2U)
#define sfSigningPubKey ((7U << 16U) + 3U)
@@ -171,11 +183,13 @@
#define sfNFTokenMinter ((8U << 16U) + 9U)
#define sfEmitCallback ((8U << 16U) + 10U)
#define sfHookAccount ((8U << 16U) + 16U)
#define sfInform ((8U << 16U) + 99U)
#define sfIndexes ((19U << 16U) + 1U)
#define sfHashes ((19U << 16U) + 2U)
#define sfAmendments ((19U << 16U) + 3U)
#define sfNFTokenOffers ((19U << 16U) + 4U)
#define sfHookNamespaces ((19U << 16U) + 5U)
#define sfURITokenIDs ((19U << 16U) + 99U)
#define sfPaths ((18U << 16U) + 1U)
#define sfTransactionMetaData ((14U << 16U) + 2U)
#define sfCreatedNode ((14U << 16U) + 3U)
@@ -198,6 +212,12 @@
#define sfHookDefinition ((14U << 16U) + 22U)
#define sfHookParameter ((14U << 16U) + 23U)
#define sfHookGrant ((14U << 16U) + 24U)
#define sfGenesisMint ((14U << 16U) + 96U)
#define sfActiveValidator ((14U << 16U) + 95U)
#define sfImportVLKey ((14U << 16U) + 94U)
#define sfHookEmission ((14U << 16U) + 93U)
#define sfMintURIToken ((14U << 16U) + 92U)
#define sfAmountEntry ((14U << 16U) + 91U)
#define sfSigners ((15U << 16U) + 3U)
#define sfSignerEntries ((15U << 16U) + 4U)
#define sfTemplate ((15U << 16U) + 5U)
@@ -212,4 +232,8 @@
#define sfHookExecutions ((15U << 16U) + 18U)
#define sfHookParameters ((15U << 16U) + 19U)
#define sfHookGrants ((15U << 16U) + 20U)
#define sfGenesisMints ((15U << 16U) + 96U)
#define sfActiveValidators ((15U << 16U) + 95U)
#define sfImportVLKeys ((15U << 16U) + 94U)
#define sfHookEmissions ((15U << 16U) + 93U)
#define sfAmounts ((15U << 16U) + 92U)

246
migrate_keylet_calls.py Normal file
View File

@@ -0,0 +1,246 @@
#!/usr/bin/env python3
import re
import os
import sys
import argparse
from pathlib import Path
from collections import defaultdict
from typing import Dict, List, Tuple, Optional
# Mapping of keylet functions to their specific HashContext classifiers
KEYLET_CLASSIFIERS = {
'account': 'KEYLET_ACCOUNT',
'amendments': 'KEYLET_AMENDMENTS',
'book': 'KEYLET_BOOK',
'check': 'KEYLET_CHECK',
'child': 'KEYLET_CHILD',
'depositPreauth': 'KEYLET_DEPOSIT_PREAUTH',
'emittedDir': 'KEYLET_EMITTED_DIR',
'emittedTxn': 'KEYLET_EMITTED_TXN',
'escrow': 'KEYLET_ESCROW',
'fees': 'KEYLET_FEES',
'hook': 'KEYLET_HOOK',
'hookDefinition': 'KEYLET_HOOK_DEFINITION',
'hookState': 'KEYLET_HOOK_STATE',
'hookStateDir': 'KEYLET_HOOK_STATE_DIR',
'import_vlseq': 'KEYLET_IMPORT_VLSEQ',
'line': 'KEYLET_TRUSTLINE',
'negativeUNL': 'KEYLET_NEGATIVE_UNL',
'nft_buys': 'KEYLET_NFT_BUYS',
'nft_sells': 'KEYLET_NFT_SELLS',
'nftoffer': 'KEYLET_NFT_OFFER',
'nftpage': 'KEYLET_NFT_PAGE',
'nftpage_max': 'KEYLET_NFT_PAGE',
'nftpage_min': 'KEYLET_NFT_PAGE',
'offer': 'KEYLET_OFFER',
'ownerDir': 'KEYLET_OWNER_DIR',
'page': 'KEYLET_DIR_PAGE',
'payChan': 'KEYLET_PAYCHAN',
'signers': 'KEYLET_SIGNERS',
'skip': 'KEYLET_SKIP_LIST',
'ticket': 'KEYLET_TICKET',
'UNLReport': 'KEYLET_UNL_REPORT',
'unchecked': 'KEYLET_UNCHECKED',
'uritoken': 'KEYLET_URI_TOKEN',
}
def add_classifiers_to_digest_h(digest_h_path: str, dry_run: bool = True) -> bool:
"""Add the new KEYLET_ classifiers to digest.h if they don't exist."""
# Read the file
with open(digest_h_path, 'r') as f:
content = f.read()
# Check if we already have KEYLET_ classifiers
if 'KEYLET_ACCOUNT' in content:
print("KEYLET classifiers already exist in digest.h")
return True
# Find the end of the HashContext enum (before the closing brace and semicolon)
pattern = r'(enum HashContext[^{]*\{[^}]*)(HOOK_DEFINITION\s*=\s*\d+,?)([^}]*\};)'
match = re.search(pattern, content, re.DOTALL)
if not match:
print("ERROR: Could not find HashContext enum in digest.h")
return False
# Build the new classifiers text
new_classifiers = []
# Get the last number used (HOOK_DEFINITION = 17)
last_num = 17
# Add all KEYLET classifiers
unique_classifiers = sorted(set(KEYLET_CLASSIFIERS.values()))
for i, classifier in enumerate(unique_classifiers, start=1):
new_classifiers.append(f" {classifier} = {last_num + i},")
# Join them with newlines
new_text = '\n'.join(new_classifiers)
# Create the replacement
replacement = match.group(1) + match.group(2) + ',\n\n // Keylet-specific hash contexts\n' + new_text + match.group(3)
# Replace in content
new_content = content[:match.start()] + replacement + content[match.end():]
if dry_run:
print("=" * 80)
print("WOULD ADD TO digest.h:")
print("=" * 80)
print(new_text)
print("=" * 80)
else:
with open(digest_h_path, 'w') as f:
f.write(new_content)
print(f"Updated {digest_h_path} with KEYLET classifiers")
return True
def migrate_keylet_call(content: str, func_name: str, dry_run: bool = True) -> Tuple[str, int]:
"""
Migrate keylet calls from single ledger_index to ledger_index + classifier.
Returns (modified_content, number_of_replacements)
"""
classifier = KEYLET_CLASSIFIERS.get(func_name)
if not classifier:
print(f"WARNING: No classifier mapping for keylet::{func_name}")
return content, 0
# Pattern to match keylet::<func>(hash_options{<ledger_seq>}, ...)
# where ledger_seq doesn't already contain a comma (no classifier yet)
pattern = re.compile(
rf'keylet::{re.escape(func_name)}\s*\(\s*hash_options\s*\{{\s*([^,}}]+)\s*\}}',
re.MULTILINE
)
count = 0
def replacer(match):
nonlocal count
ledger_seq = match.group(1).strip()
# Check if it already has a classifier (contains comma)
if ',' in ledger_seq:
return match.group(0) # Already migrated
count += 1
# Add the classifier
return f'keylet::{func_name}(hash_options{{{ledger_seq}, {classifier}}}'
new_content = pattern.sub(replacer, content)
return new_content, count
def process_file(filepath: str, dry_run: bool = True) -> int:
"""Process a single file. Returns number of replacements made."""
with open(filepath, 'r', encoding='utf-8') as f:
original_content = f.read()
content = original_content
total_replacements = 0
replacements_by_func = {}
# Process each keylet function
for func_name in KEYLET_CLASSIFIERS.keys():
new_content, count = migrate_keylet_call(content, func_name, dry_run)
if count > 0:
content = new_content
total_replacements += count
replacements_by_func[func_name] = count
if total_replacements > 0:
if dry_run:
print(f"Would modify {filepath}: {total_replacements} replacements")
for func, count in sorted(replacements_by_func.items()):
print(f" - keylet::{func}: {count}")
else:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(content)
print(f"Modified {filepath}: {total_replacements} replacements")
for func, count in sorted(replacements_by_func.items()):
print(f" - keylet::{func}: {count}")
return total_replacements
def main():
parser = argparse.ArgumentParser(
description='Migrate keylet calls to use HashContext classifiers'
)
parser.add_argument(
'--dry-run',
action='store_true',
default=True,
help='Show what would be changed without modifying files (default: True)'
)
parser.add_argument(
'--apply',
action='store_true',
help='Actually apply the changes (disables dry-run)'
)
parser.add_argument(
'--file',
help='Process a specific file only'
)
parser.add_argument(
'--add-classifiers',
action='store_true',
help='Add KEYLET_ classifiers to digest.h'
)
args = parser.parse_args()
if args.apply:
args.dry_run = False
project_root = "/Users/nicholasdudfield/projects/xahaud-worktrees/xahaud-map-stats-rpc"
# First, optionally add classifiers to digest.h
if args.add_classifiers:
digest_h = os.path.join(project_root, "src/ripple/protocol/digest.h")
if not add_classifiers_to_digest_h(digest_h, args.dry_run):
return 1
print()
# Process files
if args.file:
# Process single file
filepath = os.path.join(project_root, args.file)
if not os.path.exists(filepath):
print(f"ERROR: File not found: {filepath}")
return 1
process_file(filepath, args.dry_run)
else:
# Process all files
total_files = 0
total_replacements = 0
print(f"{'DRY RUN: ' if args.dry_run else ''}Processing files in {project_root}/src/ripple")
print("=" * 80)
for root, dirs, files in os.walk(Path(project_root) / "src" / "ripple"):
dirs[:] = [d for d in dirs if d not in ['.git', 'build', '__pycache__']]
for file in files:
if file.endswith(('.cpp', '.h', '.hpp')):
filepath = os.path.join(root, file)
count = process_file(filepath, args.dry_run)
if count > 0:
total_files += 1
total_replacements += count
print("=" * 80)
print(f"{'Would modify' if args.dry_run else 'Modified'} {total_files} files")
print(f"Total replacements: {total_replacements}")
if args.dry_run:
print("\nTo apply these changes, run with --apply flag:")
print(f" python3 {sys.argv[0]} --apply")
print("\nTo first add classifiers to digest.h:")
print(f" python3 {sys.argv[0]} --add-classifiers --apply")
if __name__ == "__main__":
sys.exit(main())

File diff suppressed because it is too large Load Diff

View File

@@ -1,4 +1,9 @@
#!/bin/bash
#!/bin/bash -u
# We use set -e and bash with -u to bail on first non zero exit code of any
# processes launched or upon any unbound variable.
# We use set -x to print commands before running them to help with
# debugging.
set -ex
echo "START BUILDING (HOST)"
@@ -12,7 +17,12 @@ if [[ "$GITHUB_REPOSITORY" == "" ]]; then
BUILD_CORES=8
fi
CONTAINER_NAME=xahaud_cached_builder_$(echo "$GITHUB_ACTOR" | awk '{print tolower($0)}')
# Ensure still works outside of GH Actions by setting these to /dev/null
# GA will run this script and then delete it at the end of the job
JOB_CLEANUP_SCRIPT=${JOB_CLEANUP_SCRIPT:-/dev/null}
NORMALIZED_WORKFLOW=$(echo "$GITHUB_WORKFLOW" | tr -c 'a-zA-Z0-9' '-')
NORMALIZED_REF=$(echo "$GITHUB_REF" | tr -c 'a-zA-Z0-9' '-')
CONTAINER_NAME="xahaud_cached_builder_${NORMALIZED_WORKFLOW}-${NORMALIZED_REF}"
echo "-- BUILD CORES: $BUILD_CORES"
echo "-- GITHUB_REPOSITORY: $GITHUB_REPOSITORY"
@@ -36,7 +46,8 @@ fi
STATIC_CONTAINER=$(docker ps -a | grep $CONTAINER_NAME |wc -l)
if [[ "$STATIC_CONTAINER" -gt "0" && "$GITHUB_REPOSITORY" != "" ]]; then
# if [[ "$STATIC_CONTAINER" -gt "0" && "$GITHUB_REPOSITORY" != "" ]]; then
if false; then
echo "Static container, execute in static container to have max. cache"
docker start $CONTAINER_NAME
docker exec -i $CONTAINER_NAME /hbb_exe/activate-exec bash -x /io/build-core.sh "$GITHUB_REPOSITORY" "$GITHUB_SHA" "$BUILD_CORES" "$GITHUB_RUN_NUMBER"
@@ -54,6 +65,8 @@ else
# GH Action, runner
echo "GH Action, runner, clean & re-create create persistent container"
docker rm -f $CONTAINER_NAME
echo "echo 'Stopping container: $CONTAINER_NAME'" >> "$JOB_CLEANUP_SCRIPT"
echo "docker stop --time=15 \"$CONTAINER_NAME\" || echo 'Failed to stop container or container not running'" >> "$JOB_CLEANUP_SCRIPT"
docker run -di --user 0:$(id -g) --name $CONTAINER_NAME -v /data/builds:/data/builds -v `pwd`:/io --network host ghcr.io/foobarwidget/holy-build-box-x64 /hbb_exe/activate-exec bash
docker exec -i $CONTAINER_NAME /hbb_exe/activate-exec bash -x /io/build-full.sh "$GITHUB_REPOSITORY" "$GITHUB_SHA" "$BUILD_CORES" "$GITHUB_RUN_NUMBER"
docker stop $CONTAINER_NAME

View File

@@ -0,0 +1,48 @@
cmake_minimum_required(VERSION 3.11)
project(ed25519
LANGUAGES C
)
if(PROJECT_NAME STREQUAL CMAKE_PROJECT_NAME)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/output/$<CONFIG>/lib")
endif()
if(NOT TARGET OpenSSL::SSL)
find_package(OpenSSL)
endif()
add_library(ed25519 STATIC
ed25519.c
)
add_library(ed25519::ed25519 ALIAS ed25519)
target_link_libraries(ed25519 PUBLIC OpenSSL::SSL)
include(GNUInstallDirs)
#[=========================================================[
NOTE for macos:
https://github.com/floodyberry/ed25519-donna/issues/29
our source for ed25519-donna-portable.h has been
patched to workaround this.
#]=========================================================]
target_include_directories(ed25519 PUBLIC
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
)
install(
TARGETS ed25519
EXPORT ${PROJECT_NAME}-exports
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"
)
install(
EXPORT ${PROJECT_NAME}-exports
DESTINATION "${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}"
FILE ${PROJECT_NAME}-targets.cmake
NAMESPACE ${PROJECT_NAME}::
)
install(
FILES ed25519.h
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)

View File

@@ -186,7 +186,7 @@ RCLConsensus::Adaptor::share(RCLCxTx const& tx)
if (app_.getHashRouter().shouldRelay(tx.id()))
{
JLOG(j_.debug()) << "Relaying disputed tx " << tx.id();
auto const slice = tx.tx_.slice();
auto const slice = tx.tx_->slice();
protocol::TMTransaction msg;
msg.set_rawtransaction(slice.data(), slice.size());
msg.set_status(protocol::tsNEW);
@@ -330,7 +330,7 @@ RCLConsensus::Adaptor::onClose(
tx.first->add(s);
initialSet->addItem(
SHAMapNodeType::tnTRANSACTION_NM,
SHAMapItem(tx.first->getTransactionID(), s.slice()));
make_shamapitem(tx.first->getTransactionID(), s.slice()));
}
// Add pseudo-transactions to the set
@@ -374,7 +374,8 @@ RCLConsensus::Adaptor::onClose(
RCLCensorshipDetector<TxID, LedgerIndex>::TxIDSeqVec proposed;
initialSet->visitLeaves(
[&proposed, seq](std::shared_ptr<SHAMapItem const> const& item) {
[&proposed,
seq](boost::intrusive_ptr<SHAMapItem const> const& item) {
proposed.emplace_back(item->key(), seq);
});
@@ -497,15 +498,11 @@ RCLConsensus::Adaptor::doAccept(
for (auto const& item : *result.txns.map_)
{
#ifndef DEBUG
try
{
#endif
retriableTxs.insert(
std::make_shared<STTx const>(SerialIter{item.slice()}));
JLOG(j_.debug()) << " Tx: " << item.key();
#ifndef DEBUG
}
catch (std::exception const& ex)
{
@@ -513,7 +510,6 @@ RCLConsensus::Adaptor::doAccept(
JLOG(j_.warn())
<< " Tx: " << item.key() << " throws: " << ex.what();
}
#endif
}
auto built = buildLCL(
@@ -539,7 +535,7 @@ RCLConsensus::Adaptor::doAccept(
std::vector<TxID> accepted;
result.txns.map_->visitLeaves(
[&accepted](std::shared_ptr<SHAMapItem const> const& item) {
[&accepted](boost::intrusive_ptr<SHAMapItem const> const& item) {
accepted.push_back(item->key());
});
@@ -614,7 +610,7 @@ RCLConsensus::Adaptor::doAccept(
<< "Test applying disputed transaction that did"
<< " not get in " << dispute.tx().id();
SerialIter sit(dispute.tx().tx_.slice());
SerialIter sit(dispute.tx().tx_->slice());
auto txn = std::make_shared<STTx const>(sit);
// Disputed pseudo-transactions that were not accepted
@@ -868,7 +864,8 @@ RCLConsensus::Adaptor::validate(
auto const serialized = v->getSerialized();
// suppress it if we receive it
app_.getHashRouter().addSuppression(sha512Half(makeSlice(serialized)));
app_.getHashRouter().addSuppression(
sha512Half(hash_options{PEER_VALIDATION_HASH}, makeSlice(serialized)));
handleNewValidation(app_, v, "local");

View File

@@ -42,7 +42,7 @@ public:
@param txn The transaction to wrap
*/
RCLCxTx(SHAMapItem const& txn) : tx_{txn}
RCLCxTx(boost::intrusive_ptr<SHAMapItem const> txn) : tx_(std::move(txn))
{
}
@@ -50,11 +50,11 @@ public:
ID const&
id() const
{
return tx_.key();
return tx_->key();
}
//! The SHAMapItem that represents the transaction.
SHAMapItem const tx_;
boost::intrusive_ptr<SHAMapItem const> tx_;
};
/** Represents a set of transactions in RCLConsensus.
@@ -90,8 +90,7 @@ public:
bool
insert(Tx const& t)
{
return map_->addItem(
SHAMapNodeType::tnTRANSACTION_NM, SHAMapItem{t.tx_});
return map_->addItem(SHAMapNodeType::tnTRANSACTION_NM, t.tx_);
}
/** Remove a transaction from the set.
@@ -145,7 +144,7 @@ public:
code uses the shared_ptr semantics to know whether the find
was successful and properly creates a Tx as needed.
*/
std::shared_ptr<const SHAMapItem> const&
boost::intrusive_ptr<SHAMapItem const> const&
find(Tx::ID const& entry) const
{
return map_->peekItem(entry);

View File

@@ -46,7 +46,8 @@ RCLValidatedLedger::RCLValidatedLedger(
beast::Journal j)
: ledgerID_{ledger->info().hash}, ledgerSeq_{ledger->seq()}, j_{j}
{
auto const hashIndex = ledger->read(keylet::skip());
auto const hashIndex = ledger->read(
keylet::skip(hash_options{(ledger->seq()), KEYLET_SKIP_LIST}));
if (hashIndex)
{
assert(hashIndex->getFieldU32(sfLastLedgerSequence) == (seq() - 1));

View File

@@ -3,6 +3,7 @@
#include <iostream>
#include <map>
#include <memory>
#include <optional>
#include <ostream>
#include <stack>
#include <string>
@@ -271,7 +272,19 @@ check_guard(
int guard_func_idx,
int last_import_idx,
GuardLog guardLog,
std::string guardLogAccStr)
std::string guardLogAccStr,
/* RH NOTE:
* rules version is a bit field, so rule update 1 is 0x01, update 2 is 0x02
* and update 3 is 0x04 ideally at rule version 3 all bits so far are set
* (0b111) so the ruleVersion = 7, however if a specific rule update must be
* rolled back due to unforeseen behaviour then this may no longer be the
* case. using a bit field here leaves us flexible to rollback changes that
* might have unforeseen consequences, without also rolling back further
* changes that are fine.
*/
uint64_t rulesVersion = 0
)
{
#define MAX_GUARD_CALLS 1024
uint32_t guard_count = 0;
@@ -621,11 +634,17 @@ check_guard(
}
else if (fc_type == 10) // memory.copy
{
if (rulesVersion & 0x02U)
GUARD_ERROR("Memory.copy instruction is not allowed.");
REQUIRE(2);
ADVANCE(2);
}
else if (fc_type == 11) // memory.fill
{
if (rulesVersion & 0x02U)
GUARD_ERROR("Memory.fill instruction is not allowed.");
ADVANCE(1);
}
else if (fc_type <= 7) // numeric instructions
@@ -807,6 +826,15 @@ validateGuards(
std::vector<uint8_t> const& wasm,
GuardLog guardLog,
std::string guardLogAccStr,
/* RH NOTE:
* rules version is a bit field, so rule update 1 is 0x01, update 2 is 0x02
* and update 3 is 0x04 ideally at rule version 3 all bits so far are set
* (0b111) so the ruleVersion = 7, however if a specific rule update must be
* rolled back due to unforeseen behaviour then this may no longer be the
* case. using a bit field here leaves us flexible to rollback changes that
* might have unforeseen consequences, without also rolling back further
* changes that are fine.
*/
uint64_t rulesVersion = 0)
{
uint64_t byteCount = wasm.size();
@@ -1477,7 +1505,8 @@ validateGuards(
guard_import_number,
last_import_number,
guardLog,
guardLogAccStr);
guardLogAccStr,
rulesVersion);
if (!valid)
return {};

View File

@@ -428,6 +428,12 @@ namespace hook {
bool
canHook(ripple::TxType txType, ripple::uint256 hookOn);
bool
canEmit(ripple::TxType txType, ripple::uint256 hookCanEmit);
ripple::uint256
getHookCanEmit(ripple::STObject const& hookObj, SLE::pointer const& hookDef);
struct HookResult;
HookResult
@@ -436,6 +442,7 @@ apply(
used for caching (one day) */
ripple::uint256 const&
hookHash, /* hash of the actual hook byte code, used for metadata */
ripple::uint256 const& hookCanEmit,
ripple::uint256 const& hookNamespace,
ripple::Blob const& wasm,
std::map<
@@ -472,6 +479,7 @@ struct HookResult
{
ripple::uint256 const hookSetTxnID;
ripple::uint256 const hookHash;
ripple::uint256 const hookCanEmit;
ripple::Keylet const accountKeylet;
ripple::Keylet const ownerDirKeylet;
ripple::Keylet const hookKeylet;

View File

@@ -79,7 +79,7 @@ main(int argc, char** argv)
close(fd);
auto result = validateGuards(hook, std::cout, "", 1);
auto result = validateGuards(hook, std::cout, "", 3);
if (!result)
{

View File

@@ -1,12 +1,17 @@
#include <ripple/app/hook/applyHook.h>
#include <ripple/app/ledger/OpenLedger.h>
#include <ripple/app/ledger/TransactionMaster.h>
#include <ripple/app/misc/HashRouter.h>
#include <ripple/app/misc/NetworkOPs.h>
#include <ripple/app/misc/Transaction.h>
#include <ripple/app/misc/TxQ.h>
#include <ripple/app/tx/impl/Import.h>
#include <ripple/app/tx/impl/details/NFTokenUtils.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/Slice.h>
#include <ripple/protocol/ErrorCodes.h>
#include <ripple/protocol/TxFlags.h>
#include <ripple/protocol/st.h>
#include <ripple/protocol/tokens.h>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <any>
@@ -62,7 +67,8 @@ getTransactionalStakeHolders(STTx const& tx, ReadView const& rv)
if (!id || *id == beast::zero)
return nullptr;
return rv.read(keylet::nftoffer(*id));
return rv.read(
keylet::nftoffer(hash_options{(rv.seq()), KEYLET_NFT_OFFER}, *id));
};
bool const fixV1 = rv.rules().enabled(fixXahauV1);
@@ -140,11 +146,14 @@ getTransactionalStakeHolders(STTx const& tx, ReadView const& rv)
if (fixV1)
{
// the owner burns their token, and the issuer is a weak TSH
if (*otxnAcc == owner && rv.exists(keylet::account(issuer)))
if (*otxnAcc == owner &&
rv.exists(keylet::account(
hash_options{(rv.seq()), KEYLET_ACCOUNT}, issuer)))
ADD_TSH(issuer, tshWEAK);
// the issuer burns the owner's token, and the owner is a weak
// TSH
else if (rv.exists(keylet::account(owner)))
else if (rv.exists(keylet::account(
hash_options{(rv.seq()), KEYLET_ACCOUNT}, owner)))
ADD_TSH(owner, tshWEAK);
break;
@@ -337,8 +346,7 @@ getTransactionalStakeHolders(STTx const& tx, ReadView const& rv)
case ttOFFER_CANCEL:
case ttTICKET_CREATE:
case ttHOOK_SET:
case ttOFFER_CREATE: // this is handled seperately
{
case ttOFFER_CREATE: {
break;
}
@@ -395,7 +403,10 @@ getTransactionalStakeHolders(STTx const& tx, ReadView const& rv)
return {};
Keylet kl = hasSeq
? keylet::escrow(owner, tx.getFieldU32(sfOfferSequence))
? keylet::escrow(
hash_options{(rv.seq()), KEYLET_ESCROW},
owner,
tx.getFieldU32(sfOfferSequence))
: Keylet(ltESCROW, tx.getFieldH256(sfEscrowID));
auto escrow = rv.read(kl);
@@ -425,7 +436,9 @@ getTransactionalStakeHolders(STTx const& tx, ReadView const& rv)
return {};
auto escrow = rv.read(keylet::escrow(
tx.getAccountID(sfOwner), tx.getFieldU32(sfOfferSequence)));
hash_options{(rv.seq()), KEYLET_ESCROW},
tx.getAccountID(sfOwner),
tx.getFieldU32(sfOfferSequence)));
if (!escrow)
return {};
@@ -491,6 +504,12 @@ getTransactionalStakeHolders(STTx const& tx, ReadView const& rv)
break;
}
case ttCLAWBACK: {
auto const amount = tx.getFieldAmount(sfAmount);
ADD_TSH(amount.getIssuer(), tshWEAK);
break;
}
default:
return {};
}
@@ -1023,6 +1042,29 @@ hook::canHook(ripple::TxType txType, ripple::uint256 hookOn)
return (hookOn & UINT256_BIT[txType]) != beast::zero;
}
bool
hook::canEmit(ripple::TxType txType, ripple::uint256 hookCanEmit)
{
return hook::canHook(txType, hookCanEmit);
}
ripple::uint256
hook::getHookCanEmit(
ripple::STObject const& hookObj,
SLE::pointer const& hookDef)
{
// default allows all transaction types
uint256 defaultHookCanEmit = UINT256_BIT[ttHOOK_SET];
uint256 hookCanEmit =
(hookObj.isFieldPresent(sfHookCanEmit)
? hookObj.getFieldH256(sfHookCanEmit)
: hookDef->isFieldPresent(sfHookCanEmit)
? hookDef->getFieldH256(sfHookCanEmit)
: defaultHookCanEmit);
return hookCanEmit;
}
// Update HookState ledger objects for the hook... only called after accept()
// assumes the specified acc has already been checked for authoriation (hook
// grants)
@@ -1036,7 +1078,8 @@ hook::setHookState(
{
auto& view = applyCtx.view();
auto j = applyCtx.app.journal("View");
auto const sleAccount = view.peek(ripple::keylet::account(acc));
auto const sleAccount = view.peek(ripple::keylet::account(
hash_options{(view.seq()), KEYLET_ACCOUNT}, acc));
if (!sleAccount)
return tefINTERNAL;
@@ -1045,8 +1088,10 @@ hook::setHookState(
if (data.size() > hook::maxHookStateDataSize())
return temHOOK_DATA_TOO_LARGE;
auto hookStateKeylet = ripple::keylet::hookState(acc, key, ns);
auto hookStateDirKeylet = ripple::keylet::hookStateDir(acc, ns);
auto hookStateKeylet = ripple::keylet::hookState(
hash_options{(view.seq()), KEYLET_HOOK_STATE}, acc, key, ns);
auto hookStateDirKeylet = ripple::keylet::hookStateDir(
hash_options{(view.seq()), KEYLET_HOOK_STATE_DIR}, acc, ns);
uint32_t stateCount = sleAccount->getFieldU32(sfHookStateCount);
uint32_t oldStateReserve = computeHookStateOwnerCount(stateCount);
@@ -1099,8 +1144,9 @@ hook::setHookState(
// from the owner directory
if (!view.peek(hookStateDirKeylet) && rootHint)
{
if (!view.dirRemove(keylet::ownerDir(acc), *rootHint,
hookStateDirKeylet.key, false)) return tefBAD_LEDGER;
if (!view.dirRemove(keylet::ownerDir(hash_options{(view.seq()),
KEYLET_OWNER_DIR}, acc), *rootHint, hookStateDirKeylet.key, false))
return tefBAD_LEDGER;
}
*/
@@ -1174,6 +1220,7 @@ hook::apply(
used for caching (one day) */
ripple::uint256 const&
hookHash, /* hash of the actual hook byte code, used for metadata */
ripple::uint256 const& hookCanEmit,
ripple::uint256 const& hookNamespace,
ripple::Blob const& wasm,
std::map<
@@ -1201,9 +1248,15 @@ hook::apply(
.result =
{.hookSetTxnID = hookSetTxnID,
.hookHash = hookHash,
.accountKeylet = keylet::account(account),
.ownerDirKeylet = keylet::ownerDir(account),
.hookKeylet = keylet::hook(account),
.hookCanEmit = hookCanEmit,
.accountKeylet = keylet::account(
hash_options{(applyCtx.view().seq()), KEYLET_ACCOUNT},
account),
.ownerDirKeylet = keylet::ownerDir(
hash_options{(applyCtx.view().seq()), KEYLET_OWNER_DIR},
account),
.hookKeylet = keylet::hook(
hash_options{(applyCtx.view().seq()), KEYLET_HOOK}, account),
.account = account,
.otxnAccount = applyCtx.tx.getAccountID(sfAccount),
.hookNamespace = hookNamespace,
@@ -1212,9 +1265,10 @@ hook::apply(
.hookParamOverrides = hookParamOverrides,
.hookParams = hookParams,
.hookSkips = {},
.exitType =
hook_api::ExitType::ROLLBACK, // default is to rollback unless
// hook calls accept()
.exitType = applyCtx.view().rules().enabled(fixXahauV3)
? hook_api::ExitType::UNSET
: hook_api::ExitType::ROLLBACK, // default is to rollback
// unless hook calls accept()
.exitReason = std::string(""),
.exitCode = -1,
.hasCallback = hasCallback,
@@ -1227,6 +1281,8 @@ hook::apply(
.emitFailure = isCallback && wasmParam & 1
? std::optional<ripple::STObject>(
(*(applyCtx.view().peek(keylet::emittedTxn(
hash_options{
(applyCtx.view().seq()), KEYLET_EMITTED_TXN},
applyCtx.tx.getFieldH256(sfTransactionHash)))))
.downcast<STObject>())
: std::optional<ripple::STObject>()};
@@ -1441,7 +1497,8 @@ set_state_cache(
return TOO_MANY_STATE_MODIFICATIONS;
bool const createNamespace = view.rules().enabled(fixXahauV1) &&
!view.exists(keylet::hookStateDir(acc, ns));
!view.exists(keylet::hookStateDir(
hash_options{(view.seq()), KEYLET_HOOK_STATE_DIR}, acc, ns));
if (stateMap.find(acc) == stateMap.end())
{
@@ -1449,7 +1506,8 @@ set_state_cache(
// we will compute how many available reserve positions there are
auto const& fees = hookCtx.applyCtx.view().fees();
auto const accSLE = view.read(ripple::keylet::account(acc));
auto const accSLE = view.read(ripple::keylet::account(
hash_options{(view.seq()), KEYLET_ACCOUNT}, acc));
if (!accSLE)
return DOESNT_EXIST;
@@ -1685,7 +1743,8 @@ DEFINE_HOOK_FUNCTION(
// cache miss, or the cache was present but the entry was not marked as
// previously modified; before continuing we need to check grants
auto const sle = view.read(ripple::keylet::hook(acc));
auto const sle = view.read(
ripple::keylet::hook(hash_options{(view.seq()), KEYLET_HOOK}, acc));
if (!sle)
return INTERNAL_ERROR;
@@ -1718,6 +1777,7 @@ DEFINE_HOOK_FUNCTION(
{
// fetch the hook definition
auto const def = view.read(ripple::keylet::hookDefinition(
hash_options{(view.seq()), KEYLET_HOOK_DEFINITION},
hookObj.getFieldH256(sfHookHash)));
if (!def) // should never happen except in a rare race condition
continue;
@@ -1858,7 +1918,9 @@ hook::removeEmissionEntry(ripple::ApplyContext& applyCtx)
if (!const_cast<ripple::STTx&>(tx).isFieldPresent(sfEmitDetails))
return tesSUCCESS;
auto key = keylet::emittedTxn(tx.getTransactionID());
auto key = keylet::emittedTxn(
hash_options{(applyCtx.view().seq()), KEYLET_EMITTED_TXN},
tx.getTransactionID());
auto const& sle = applyCtx.view().peek(key);
@@ -1866,7 +1928,11 @@ hook::removeEmissionEntry(ripple::ApplyContext& applyCtx)
return tesSUCCESS;
if (!applyCtx.view().dirRemove(
keylet::emittedDir(), sle->getFieldU64(sfOwnerNode), key, false))
keylet::emittedDir(
hash_options{(applyCtx.view().seq()), KEYLET_EMITTED_DIR}),
sle->getFieldU64(sfOwnerNode),
key,
false))
{
JLOG(j.fatal()) << "HookError[TX:" << tx.getTransactionID()
<< "]: removeEmissionEntry failed tefBAD_LEDGER";
@@ -1913,7 +1979,8 @@ hook::finalizeHookResult(
std::shared_ptr<const ripple::STTx> ptr =
tpTrans->getSTransaction();
auto emittedId = keylet::emittedTxn(id);
auto emittedId = keylet::emittedTxn(
hash_options{(applyCtx.view().seq()), KEYLET_EMITTED_TXN}, id);
auto sleEmitted = applyCtx.view().peek(emittedId);
if (!sleEmitted)
@@ -1934,9 +2001,10 @@ hook::finalizeHookResult(
sleEmitted->emplace_back(ripple::STObject(sit, sfEmittedTxn));
auto page = applyCtx.view().dirInsert(
keylet::emittedDir(), emittedId, [&](SLE::ref sle) {
(*sle)[sfFlags] = lsfEmittedDir;
});
keylet::emittedDir(hash_options{
(applyCtx.view().seq()), KEYLET_EMITTED_DIR}),
emittedId,
[&](SLE::ref sle) { (*sle)[sfFlags] = lsfEmittedDir; });
if (page)
{
@@ -2122,7 +2190,8 @@ DEFINE_HOOK_FUNCTION(
false);
}
auto hsSLE = view.peek(keylet::hookState(acc, *key, ns));
auto hsSLE = view.peek(keylet::hookState(
hash_options{(view.seq()), KEYLET_HOOK_STATE}, acc, *key, ns));
if (!hsSLE)
return DOESNT_EXIST;
@@ -2900,11 +2969,17 @@ DEFINE_HOOK_FUNCTION(
ripple::base_uint<256>::fromVoid(memory + read_ptr);
ripple::Keylet kl = keylet_type == keylet_code::CHILD
? ripple::keylet::child(id)
? ripple::keylet::child(
hash_options{(view.seq()), KEYLET_CHILD}, id)
: keylet_type == keylet_code::EMITTED_TXN
? ripple::keylet::emittedTxn(id)
? ripple::keylet::emittedTxn(
hash_options{(view.seq()), KEYLET_EMITTED_TXN},
id)
: keylet_type == keylet_code::HOOK_DEFINITION
? ripple::keylet::hookDefinition(id)
? ripple::keylet::hookDefinition(
hash_options{
(view.seq()), KEYLET_HOOK_DEFINITION},
id)
: ripple::keylet::unchecked(id);
return serialize_keylet(kl, memory, write_ptr, write_len);
@@ -2932,12 +3007,18 @@ DEFINE_HOOK_FUNCTION(
ripple::AccountID id = AccountID::fromVoid(memory + read_ptr);
ripple::Keylet kl = keylet_type == keylet_code::HOOK
? ripple::keylet::hook(id)
? ripple::keylet::hook(
hash_options{(view.seq()), KEYLET_HOOK}, id)
: keylet_type == keylet_code::SIGNERS
? ripple::keylet::signers(id)
? ripple::keylet::signers(
hash_options{(view.seq()), KEYLET_SIGNERS}, id)
: keylet_type == keylet_code::OWNER_DIR
? ripple::keylet::ownerDir(id)
: ripple::keylet::account(id);
? ripple::keylet::ownerDir(
hash_options{(view.seq()), KEYLET_OWNER_DIR},
id)
: ripple::keylet::account(
hash_options{(view.seq()), KEYLET_ACCOUNT},
id);
return serialize_keylet(kl, memory, write_ptr, write_len);
}
@@ -2975,12 +3056,22 @@ DEFINE_HOOK_FUNCTION(
}
ripple::Keylet kl = keylet_type == keylet_code::CHECK
? ripple::keylet::check(id, seq)
? ripple::keylet::check(
hash_options{(view.seq()), KEYLET_CHECK}, id, seq)
: keylet_type == keylet_code::ESCROW
? ripple::keylet::escrow(id, seq)
? ripple::keylet::escrow(
hash_options{(view.seq()), KEYLET_ESCROW},
id,
seq)
: keylet_type == keylet_code::NFT_OFFER
? ripple::keylet::nftoffer(id, seq)
: ripple::keylet::offer(id, seq);
? ripple::keylet::nftoffer(
hash_options{(view.seq()), KEYLET_NFT_OFFER},
id,
seq)
: ripple::keylet::offer(
hash_options{(view.seq()), KEYLET_OFFER},
id,
seq);
return serialize_keylet(kl, memory, write_ptr, write_len);
}
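Each branch of these nested ternaries now also has to construct a hash_options, which makes the chains noticeably harder to scan. An equivalent dispatch written as a switch, as a readability sketch only (not a change made in this diff; behavior and signatures as shown above):

ripple::Keylet kl = [&]() {
    switch (keylet_type)
    {
        case keylet_code::CHECK:
            return ripple::keylet::check(
                hash_options{view.seq(), KEYLET_CHECK}, id, seq);
        case keylet_code::ESCROW:
            return ripple::keylet::escrow(
                hash_options{view.seq(), KEYLET_ESCROW}, id, seq);
        case keylet_code::NFT_OFFER:
            return ripple::keylet::nftoffer(
                hash_options{view.seq(), KEYLET_NFT_OFFER}, id, seq);
        default:
            return ripple::keylet::offer(
                hash_options{view.seq(), KEYLET_OFFER}, id, seq);
    }
}();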
@@ -3003,7 +3094,9 @@ DEFINE_HOOK_FUNCTION(
uint64_t index = (((uint64_t)c) << 32U) + ((uint64_t)d);
ripple::Keylet kl = ripple::keylet::page(
ripple::base_uint<256>::fromVoid(memory + a), index);
hash_options{(view.seq()), KEYLET_DIR_PAGE},
ripple::base_uint<256>::fromVoid(memory + a),
index);
return serialize_keylet(kl, memory, write_ptr, write_len);
}
@@ -3024,6 +3117,7 @@ DEFINE_HOOK_FUNCTION(
return INVALID_ARGUMENT;
ripple::Keylet kl = ripple::keylet::hookState(
hash_options{(view.seq()), KEYLET_HOOK_STATE},
AccountID::fromVoid(memory + aread_ptr),
ripple::base_uint<256>::fromVoid(memory + kread_ptr),
ripple::base_uint<256>::fromVoid(memory + nread_ptr));
@@ -3049,6 +3143,7 @@ DEFINE_HOOK_FUNCTION(
return INVALID_ARGUMENT;
ripple::Keylet kl = ripple::keylet::hookStateDir(
hash_options{(view.seq()), KEYLET_HOOK_STATE_DIR},
AccountID::fromVoid(memory + aread_ptr),
ripple::base_uint<256>::fromVoid(memory + nread_ptr));
@@ -3061,7 +3156,11 @@ DEFINE_HOOK_FUNCTION(
return INVALID_ARGUMENT;
ripple::Keylet kl =
(b == 0 ? ripple::keylet::skip() : ripple::keylet::skip(a));
(b == 0 ? ripple::keylet::skip(
hash_options{(view.seq()), KEYLET_SKIP_LIST})
: ripple::keylet::skip(
hash_options{(view.seq()), KEYLET_SKIP_LIST},
a));
return serialize_keylet(kl, memory, write_ptr, write_len);
}
@@ -3086,14 +3185,16 @@ DEFINE_HOOK_FUNCTION(
return d;
};
static std::array<uint8_t, 34> cAmendments =
makeKeyCache(ripple::keylet::amendments());
static std::array<uint8_t, 34> cFees =
makeKeyCache(ripple::keylet::fees());
static std::array<uint8_t, 34> cNegativeUNL =
makeKeyCache(ripple::keylet::negativeUNL());
static std::array<uint8_t, 34> cEmittedDir =
makeKeyCache(ripple::keylet::emittedDir());
// Cannot statically cache these as hash depends on ledger
// sequence
auto cAmendments = makeKeyCache(ripple::keylet::amendments(
hash_options{(view.seq()), KEYLET_AMENDMENTS}));
auto cFees = makeKeyCache(ripple::keylet::fees(
hash_options{(view.seq()), KEYLET_FEES}));
auto cNegativeUNL = makeKeyCache(ripple::keylet::negativeUNL(
hash_options{(view.seq()), KEYLET_NEGATIVE_UNL}));
auto cEmittedDir = makeKeyCache(ripple::keylet::emittedDir(
hash_options{(view.seq()), KEYLET_EMITTED_DIR}));
WRITE_WASM_MEMORY_AND_RETURN(
write_ptr,
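Dropping the statics is forced by the new API: under it, a function-local static would bake in whichever view.seq() was live on the first call and silently reuse that key for every later ledger, which is exactly what the new comment rules out. If recomputing these four keylets per call ever shows up in profiles, a cache keyed on the ledger sequence would be the natural replacement. A hypothetical fragment (only makeKeyCache and the 34-byte layout come from the code above; the rest is assumed):

// Hypothetical per-sequence memoisation of the amendments keylet bytes.
static thread_local uint32_t cachedSeq = 0;
static thread_local std::array<uint8_t, 34> cAmendments{};
if (cachedSeq != view.seq())
{
    cAmendments = makeKeyCache(ripple::keylet::amendments(
        hash_options{view.seq(), KEYLET_AMENDMENTS}));
    cachedSeq = view.seq();
}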
@@ -3131,6 +3232,7 @@ DEFINE_HOOK_FUNCTION(
return INVALID_ARGUMENT;
auto kl = ripple::keylet::line(
hash_options{(view.seq()), KEYLET_TRUSTLINE},
AccountID::fromVoid(memory + hi_ptr),
AccountID::fromVoid(memory + lo_ptr),
*cur);
@@ -3158,7 +3260,10 @@ DEFINE_HOOK_FUNCTION(
ripple::AccountID aid = AccountID::fromVoid(memory + aread_ptr);
ripple::AccountID bid = AccountID::fromVoid(memory + bread_ptr);
ripple::Keylet kl = ripple::keylet::depositPreauth(aid, bid);
ripple::Keylet kl = ripple::keylet::depositPreauth(
hash_options{(view.seq()), KEYLET_DEPOSIT_PREAUTH},
aid,
bid);
return serialize_keylet(kl, memory, write_ptr, write_len);
}
@@ -3193,7 +3298,8 @@ DEFINE_HOOK_FUNCTION(
seq = uint256::fromVoid(memory + e);
}
ripple::Keylet kl = ripple::keylet::payChan(aid, bid, seq);
ripple::Keylet kl = ripple::keylet::payChan(
hash_options{(view.seq()), KEYLET_PAYCHAN}, aid, bid, seq);
return serialize_keylet(kl, memory, write_ptr, write_len);
}
@@ -3264,6 +3370,16 @@ DEFINE_HOOK_FUNCTION(
return EMISSION_FAILURE;
}
ripple::TxType txType = stpTrans->getTxnType();
ripple::uint256 const& hookCanEmit = hookCtx.result.hookCanEmit;
if (!hook::canEmit(txType, hookCanEmit))
{
JLOG(j.trace()) << "HookEmit[" << HC_ACC()
<< "]: Hook cannot emit this txn.";
return EMISSION_FAILURE;
}
// check the emitted txn is valid
/* Emitted TXN rules
* 0. Account must match the hook account
@@ -3666,6 +3782,7 @@ DEFINE_HOOK_FUNCTION(
flags |= (hookCtx.result.hookChainPosition << 2U);
auto hash = ripple::sha512Half(
hash_options{HOOK_EMITTED_TXN_NONCE},
ripple::HashPrefix::emitTxnNonce,
applyCtx.tx.getTransactionID(),
hookCtx.emit_nonce_counter++,
@@ -3700,6 +3817,7 @@ DEFINE_HOOK_FUNCTION(
return TOO_MANY_NONCES;
auto hash = ripple::sha512Half(
hash_options{HOOK_LEDGER_NONCE},
ripple::HashPrefix::hookNonce,
view.info().seq,
view.info().parentCloseTime.time_since_epoch().count(),
@@ -3825,7 +3943,9 @@ DEFINE_HOOK_FUNCTION(
NOT_IN_BOUNDS(read_ptr, read_len, memory_length))
return OUT_OF_BOUNDS;
auto hash = ripple::sha512Half(ripple::Slice{memory + read_ptr, read_len});
auto hash = ripple::sha512Half(
hash_options{HOOK_UTIL_SHA512H},
ripple::Slice{memory + read_ptr, read_len});
WRITE_WASM_MEMORY_AND_RETURN(
write_ptr, 32, hash.data(), 32, memory, memory_length);
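Note the difference in what hash_options carries at these call sites: the keylet calls pass both the ledger sequence and a keylet tag, while the sha512Half calls pass only a context tag (HOOK_EMITTED_TXN_NONCE, HOOK_LEDGER_NONCE, HOOK_UTIL_SHA512H). The util_sha512h change, restated side by side:

// Before: untagged hash of the guest-supplied buffer.
auto hash = ripple::sha512Half(ripple::Slice{memory + read_ptr, read_len});

// After: the call is tagged so the hashing layer knows which class of
// input it is digesting.
auto hash = ripple::sha512Half(
    hash_options{HOOK_UTIL_SHA512H},
    ripple::Slice{memory + read_ptr, read_len});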
@@ -4612,6 +4732,8 @@ DEFINE_HOOK_FUNCTION(
}
catch (std::exception& e)
{
JLOG(j.trace()) << "HookInfo[" << HC_ACC()
<< "]: etxn_fee_base exception: " << e.what();
return INVALID_TXN;
}
@@ -4783,7 +4905,7 @@ DEFINE_HOOK_FUNCTION(
if (float1 == 0)
{
j.trace() << "HookTrace[" << HC_ACC() << "]:"
j.trace() << "HookTrace[" << HC_ACC() << "]: "
<< (read_len == 0
? ""
: std::string_view(
@@ -5397,7 +5519,7 @@ DEFINE_HOOK_FUNCTION(
const int64_t float_one_internal = make_float(1000000000000000ull, -15, false);
inline int64_t
float_divide_internal(int64_t float1, int64_t float2)
float_divide_internal(int64_t float1, int64_t float2, bool hasFix)
{
RETURN_IF_INVALID_FLOAT(float1);
RETURN_IF_INVALID_FLOAT(float2);
@@ -5450,8 +5572,16 @@ float_divide_internal(int64_t float1, int64_t float2)
while (man2 > 0)
{
int i = 0;
for (; man1 > man2; man1 -= man2, ++i)
;
if (hasFix)
{
for (; man1 >= man2; man1 -= man2, ++i)
;
}
else
{
for (; man1 > man2; man1 -= man2, ++i)
;
}
man3 *= 10;
man3 += i;
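The only functional change in the division loop is the comparison: with the old `man1 > man2`, no subtraction happens when the remainder exactly equals the divisor, so that digit comes out one too small and the error cascades into a trailing run of 9s. The self-contained simulation below (which assumes the elided outer loop shrinks the divisor by a factor of ten per digit) shows the effect for x / x:

#include <cstdint>
#include <cstdio>

// Simulate the inner digit-extraction loop for both comparisons,
// dividing a 16-digit mantissa by itself.
static uint64_t divide(uint64_t man1, uint64_t man2, bool hasFix)
{
    uint64_t man3 = 0;
    while (man2 > 0)
    {
        int i = 0;
        if (hasFix)
            for (; man1 >= man2; man1 -= man2, ++i)
                ;
        else
            for (; man1 > man2; man1 -= man2, ++i)
                ;
        man3 = man3 * 10 + i;
        man2 /= 10;  // assumed outer-loop behaviour: move to the next decimal digit
    }
    return man3;
}

int main()
{
    uint64_t const one = 1000000000000000ull;  // float_one mantissa
    std::printf("old: %llu\n", (unsigned long long)divide(one, one, false));
    std::printf("new: %llu\n", (unsigned long long)divide(one, one, true));
    // old: 999999999999999   new: 1000000000000000
}

This is why float_divide and float_invert below only take the corrected path when the fixFloatDivide amendment is enabled: the result of x / x (and of inverting an inverse) changes in the last digit.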
@@ -5471,7 +5601,8 @@ DEFINE_HOOK_FUNCTION(int64_t, float_divide, int64_t float1, int64_t float2)
HOOK_SETUP(); // populates memory_ctx, memory, memory_length, applyCtx,
// hookCtx on current stack
return float_divide_internal(float1, float2);
bool const hasFix = view.rules().enabled(fixFloatDivide);
return float_divide_internal(float1, float2, hasFix);
HOOK_TEARDOWN();
}
@@ -5490,7 +5621,9 @@ DEFINE_HOOK_FUNCTION(int64_t, float_invert, int64_t float1)
return DIVISION_BY_ZERO;
if (float1 == float_one_internal)
return float_one_internal;
return float_divide_internal(float_one_internal, float1);
bool const fixV3 = view.rules().enabled(fixFloatDivide);
return float_divide_internal(float_one_internal, float1, fixV3);
HOOK_TEARDOWN();
}

Some files were not shown because too many files have changed in this diff