Compare commits

...

168 Commits

Author SHA1 Message Date
tequ
01e7ee4f03 fix: update comment for HookParameterValue size limit 2026-01-06 23:17:43 +09:00
tequ
222772d99b fix get_stobject_length to work STI_PATHSET correctly 2026-01-06 23:14:47 +09:00
tequ
9e6135ba42 refactor get_stobject_length 2026-01-06 20:21:12 +09:00
tequ
f0329b4054 Merge branch 'dev' into HookAPISerializedType240 2026-01-06 18:21:01 +09:00
Niq Dudfield
a8d7b2619e fix: restore [ips_fixed] to use addFixedPeer instead of addFallbackStrings (#641) 2026-01-05 13:46:02 +10:00
Niq Dudfield
775fb3a8b2 fix: increment manifest sequence for client code cache invalidation (#631) 2025-12-24 11:16:00 +10:00
tequ
5a9baed9d0 HookAPISerializedType240 Amendment 2025-12-17 19:39:31 +09:00
tequ
8d1aadd23d Update CMake version to 3.25.3 in macOS workflow 2025-12-17 12:53:20 +09:00
tequ
8d2a5e3c4e Merge branch 'dev' into sync-2.4.0 2025-12-17 12:38:43 +09:00
Niq Dudfield
5a118a4e2b fix(logs): formatting fixes, color handling, and debug build defaults (#607) 2025-12-17 09:45:41 +10:00
tequ
960f87857e Self hosted macos runner (#652) 2025-12-17 09:43:25 +10:00
tequ
f731bcfeba Increase ccache size from 10G to 100G in release-builder.sh for improved build performance (#643) 2025-12-16 14:45:45 +10:00
tequ
374b361daa Use Self hosted runner (#639) 2025-12-16 14:16:36 +10:00
tequ
52ccf27aa3 Hook API Refactor1: whitelist api at Enum.h (#605) 2025-12-10 19:32:03 +10:00
tequ
f6fe33103c Fix differences such as LedgerHash that occurred due to NetworkID in ltFeeSettings 2025-12-01 19:28:23 +09:00
tequ
b9d966dd32 Merge remote-tracking branch 'upstream/dev' into sync-2.4.0 2025-12-01 18:08:31 +09:00
tequ
e3ccddfaca Remove HookAPI test file HookAPI_test.cpp as unintentionally included. (#650) 2025-12-01 18:59:59 +10:00
Niq Dudfield
36e51662fe build: suppress openssl deprecation warnings (#606) 2025-12-01 18:58:48 +10:00
tequ
e319619dce Combine 3 Hook Api fix amendments (#648) 2025-12-01 16:26:15 +10:00
tequ
7e92374436 Merge remote-tracking branch 'upstream/dev' into sync-2.4.0 2025-12-01 13:06:58 +09:00
tequ
7ef8473c85 Merge remote-tracking branch 'upstream/dev' into sync-2.4.0 2025-12-01 12:53:49 +09:00
tequ
2073b562f0 Fix genesis feesettings NetworkID (#649) 2025-12-01 12:55:00 +10:00
tequ
39353a6557 Fix: Ensure sto_subfield correctly handles STO field values of 16 or more. (#647) 2025-12-01 12:48:30 +10:00
tequ
64fb39d033 Merge remote-tracking branch 'upstream/dev' into sync-2.4.0 2025-11-30 13:53:35 +09:00
tequ
1bfae1a296 fixStoEmplaceFieldIdCheck Amendment (#637) 2025-11-28 18:31:15 +10:00
Niq Dudfield
f6a4e8f36d Wind back macOS runner version (#635) 2025-11-27 09:39:27 +10:00
tequ
70bbe83525 Revert "Update workers to self hosted" (#638) 2025-11-27 09:38:45 +10:00
tequ
bbff5e29d8 Enhance GitHub Actions workflow by escaping "double quotes in PR title" (#640) 2025-11-27 09:36:02 +10:00
Wietse Wind
c42cb0df62 Update workers to self hosted 2025-11-25 15:42:01 +01:00
Niq Dudfield
8efc02b2d4 refactor(ci): fix caching and improve [ci-] tag handling (#633) 2025-11-25 16:23:41 +10:00
tequ
ffcb203ce1 fixEtxnFeeBase Amendment (#630) 2025-11-24 09:52:53 +10:00
tequ
859391327d Merge branch 'dev' into sync-2.4.0 2025-11-20 10:47:16 +09:00
tequ
4a65401448 Fix Cron stacking (#627) 2025-11-15 17:41:07 +10:00
tequ
9ec631b1d8 Merge remote-tracking branch 'upstream/dev' into sync-2.4.0 2025-11-06 14:46:53 +09:00
tequ
8bcebdea42 Support 'cron' type for account_objects (#624) 2025-11-06 15:19:15 +10:00
Alloy Networks
4cc63c028a Change validators.txt to validators-xahau.txt (#619) 2025-11-01 15:26:56 +10:00
tequ
b9ed90e08b fix InvalidTxFlags Amendment to default Yes 2025-10-29 16:49:37 +09:00
tequ
066f8ed9ef Merge branch 'dev' into sync-2.4.0 2025-10-27 15:38:14 +09:00
tequ
9ed20a4f1c Refactor: SetCron to CronSet (#609) 2025-10-27 14:38:40 +10:00
tequ
89ffc1969b Add Previous fields to ltCron (#611) 2025-10-27 14:36:57 +10:00
tequ
79fdafe638 Support Cron in util_keylet Hook API (#612) 2025-10-27 14:35:01 +10:00
tequ
2a10013dfc Support 'cron' with ledger_entry RPC (#608) 2025-10-24 17:05:14 +10:00
tequ
6f148a8ac7 ExtendedHookState (#406) 2025-10-23 18:57:38 +10:00
tequ
96222baf5e Add hook header generators and CI verification workflow (#597) 2025-10-22 15:25:38 +10:00
Niq Dudfield
74477d2c13 added configurable NuDB block size support in xahaud (#601) 2025-10-22 14:15:12 +10:00
Alloy Networks
9378f1a0ad Update CONTRIBUTING.md (#599) 2025-10-21 14:20:10 +10:00
tequ
6fa6a96e3a Introduce StartTime in CronSet and improve next execution scheduling (#596) 2025-10-21 14:17:53 +10:00
RichardAH
b0fcd36bcd import_vl_keys logic fix (flap fix) (#588) 2025-10-18 16:27:05 +10:00
RichardAH
1ec31e79c9 Cron (on ledger cronjobs) (#590)
Co-authored-by: tequ <git@tequ.dev>
2025-10-17 18:45:16 +10:00
tequ
3487e2de67 Merge branch 'dev' into sync-2.4.0 2025-10-17 13:21:01 +09:00
tequ
9c8b005406 fix: improve logging for transaction preflight failures in applyHook.cpp (#566) 2025-10-15 12:33:32 +10:00
tequ
687ccf4203 Remove unused variable enabled in MultiSign_test.cpp (#592) 2025-10-15 12:32:31 +10:00
Niq Dudfield
83f09fd8ab ci: add clang to build matrix [ci-nix-full-matrix] (#569) 2025-10-15 11:26:31 +10:00
tequ
1da00892d3 Merge remote-tracking branch 'upstream/dev' into sync-2.4.0 2025-10-14 17:22:41 +09:00
tequ
15c7ad6f78 Fix Invalid Tx flags (#514) 2025-10-14 15:35:48 +10:00
Niq Dudfield
1f12b9ec5a feat(logs): add -DBEAST_ENHANCED_LOGGING with file:line numbers for JLOG macro (#552) 2025-10-14 10:44:03 +10:00
Niq Dudfield
ad0531ad6c chore: fix warnings (#509)
Co-authored-by: Denis Angell <dangell@transia.co>
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-10-11 11:47:13 +10:00
tequ
e580f7cfc0 chore(vscode): enable format on save in settings.json (#578) 2025-10-11 11:43:50 +10:00
tequ
094f011006 Fix emit Hook API testcase name (#580) 2025-10-11 11:43:09 +10:00
Niq Dudfield
39d1c43901 build: upgrade openssl from 1.1.1u to 3.6.0 (#587)
Updates OpenSSL dependency to the latest 3.x series available on Conan Center.
2025-10-10 19:53:35 +10:00
J. Scott Branson
b3e6a902cb Update Sample Configuration Files in /cfg for Congruence with xahaud (#584) 2025-10-10 14:59:39 +11:00
Niq Dudfield
fa1b93bfd8 build: migrate to conan 2 (#585)
Migrates the build system from Conan 1 to Conan 2
2025-10-10 14:57:46 +11:00
tequ
92e3a927fc refactor KEYLET_LINE in utils_keylet (#502)
Fixes the use of high and low in variable names, as these are determined by ripple::keylet::line processing.

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-10-09 21:02:14 +11:00
tequ
8f7ebf0377 Optimize github action cache (#544)
* optimize github action cache

* fix

* refactor: improve github actions cache optimization (#3)

- move ccache configuration logic to dedicated action
- rename conanfile-changed to should-save-conan-cache for clarity

---------

Co-authored-by: Niq Dudfield <ndudfield@gmail.com>
2025-09-08 15:53:40 +10:00
Niq Dudfield
46cf6785ab fix(tests): prevent buffer corruption from concurrent log writes (#565)
std::endl triggers flush() which calls sync() on the shared log buffer.
Multiple threads racing in sync() cause str()/str("") operations to
corrupt buffer state, leading to crashes and double frees.

Added mutex to serialize access to suite.log, preventing concurrent
sync() calls on the same buffer.
2025-09-08 13:57:49 +10:00
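A minimal sketch of the locking pattern described in the commit above, assuming a std::stringstream-backed buffer (illustrative names only, not the actual suite.log code):

```cpp
#include <mutex>
#include <sstream>
#include <string>

// Illustrative only: a shared log buffer whose flush/drain path is serialized
// with a mutex so concurrent sync() calls cannot interleave str()/str("")
// and corrupt the buffer state.
class SerializedLogBuffer
{
    std::mutex mutex_;
    std::stringstream buffer_;

public:
    void
    append(std::string const& text)
    {
        std::lock_guard lock(mutex_);
        buffer_ << text;
    }

    // Atomically read and clear the buffer; racing threads here without the
    // lock is what led to the crashes and double frees described above.
    std::string
    drain()
    {
        std::lock_guard lock(mutex_);
        std::string out = buffer_.str();
        buffer_.str("");
        return out;
    }
};
```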
Niq Dudfield
3c4c9c87c5 Fix rwdb memory leak with online_delete and remove flatmap (#570)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-08-26 14:00:58 +10:00
tequ
75636ee5c4 Merge branch 'dev' into sync-2.4.0 2025-08-20 14:11:26 +09:00
tequ
d1528021e2 Add ltORACLE for Remarks target (#562) 2025-08-18 16:17:49 +09:00
Niq Dudfield
7a790246fb fix: upgrade CI to GCC 13 and fix compilation issues, fixes #557 (#559) 2025-08-14 17:41:49 +10:00
tequ
d1395d0f41 Merge remote-tracking branch 'upstream/dev' into sync-2.4.0 2025-08-14 15:45:50 +09:00
tequ
b4f79257cb Conan Release Builder (2.4.0 sync) (#528) 2025-08-14 15:29:04 +09:00
Niq Dudfield
1a3d2db8ef fix(ci): export correct snappy version (#546) 2025-08-14 14:01:32 +10:00
tequ
2fc912d54d Make release build use conan deps where possible and hbb 4.0.1 (#516)
Co-authored-by: Denis Angell <dangell@transia.co>
Co-authored-by: Niq Dudfield <ndudfield@gmail.com>
2025-08-14 12:59:57 +10:00
tequ
43a4a3a3e2 Add Remit test to AMM Account 2025-07-19 19:18:03 +09:00
tequ
117bdb1c42 Optimize AccountDelete and Credentials tests, Update tests priority 2025-07-19 04:49:25 +09:00
tequ
87e41a7888 Update Hook headers 2025-07-16 19:38:51 +09:00
tequ
df3bf8a958 VoteBehavior::DefaultYes for new fix Amendments
- NFToken related fix Amendments remain as `DefaultNo`.
2025-07-14 19:27:09 +09:00
tequ
c5fa112e16 Add TSH processing for AMM, AMMClawback, Clawback, Oracle (#532)
* Add TSH processing for `AMM`, `AMMClawback`, `Oracle`, `Clawback`

* Add empty TSH processing for other transaction types

* Add AMMTsh tests
2025-07-10 23:26:32 +09:00
tequ
d2e21da7a3 Merge branch 'dev' into sync-2.4.0 2025-07-09 13:38:41 +09:00
Niq Dudfield
849d447a20 docs(freeze): canceling escrows with deep frozen assets is allowed (#540) 2025-07-09 13:48:59 +10:00
tequ
ee27049687 IOUIssuerWeakTSH (#388) 2025-07-09 13:48:26 +10:00
tequ
60dec74baf Add DeepFreeze test for URIToken (#539) 2025-07-09 12:49:47 +10:00
Denis Angell
9abea13649 Feature Clawback (#534) 2025-07-09 12:48:46 +10:00
Denis Angell
810e15319c Feature DeepFreeze (#536)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-09 10:33:08 +10:00
tequ
1f0bbdb288 Merge branch 'dev' into sync-2.4.0 2025-07-08 18:12:35 +09:00
Niq Dudfield
d593f3bef5 fix: provisional PreviousTxn{Id,LedgerSeq} double threading (#515)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-08 18:04:39 +10:00
tequ
5cc56d5f15 Support Hook execution in simulate RPC (#531) 2025-07-05 14:32:42 +09:00
tequ
0a9d3d3d75 Merge branch 'dev' into sync-2.4.0 2025-07-03 10:14:42 +09:00
Niq Dudfield
1233694b6c chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter (#525)
* chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter

* rm: kill annoying checkpatterns job

* chore: cleanup

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-07-01 20:58:06 +10:00
tequ
0d192d48ce add tests for DeepFreeze 2025-07-01 19:35:35 +09:00
tequ
574dc20641 Supported::No for featurePermissionedDomains 2025-07-01 16:08:01 +09:00
tequ
3367f40ef5 Supported::No for featureDynamicNFT 2025-07-01 16:01:27 +09:00
tequ
7080d292e6 Supported::No for featureCredentials 2025-07-01 15:54:07 +09:00
tequ
85a1eb5dba Supported::No for featureMPTokensV1 2025-07-01 15:22:52 +09:00
tequ
534ed875a2 Supported::No for featureNFTokenMintOffer 2025-07-01 15:14:08 +09:00
tequ
72b85d75c9 Supported::No for featureDID 2025-07-01 15:09:00 +09:00
tequ
9229ed779f Supported::No for featureXChainBridge 2025-07-01 14:52:24 +09:00
tequ
e9f671043d Combine AMM Amendments (#521)
* fixAMMv1_2
* fixAMMv1_1
* fixAMMOverflowOffer
* fixLPTokenTransfer
* suppress AMM test logs
* exclude `ltAMM` from `fixPreviousTxnID` Amendment
    - make `sfPreviousTxnID` and `sfPreviousTxnLgrSeq` required for ltAMM
2025-07-01 13:51:32 +09:00
tequ
37669452f6 Combine XChainBridge Amendments (#523) 2025-06-30 19:12:33 +09:00
tequ
d4fd40c471 Combine fixInnerObjTemplate Amendments (#524) 2025-06-30 18:14:04 +09:00
tequ
51aae2ce36 fix to DefaultNo for featureDeletableAccounts 2025-06-30 17:15:46 +09:00
tequ
c4106a2752 Disable instrumentation-build workflow (#530) 2025-06-30 16:54:11 +09:00
tequ
e955909a40 Merge branch 'dev' into sync-2.4.0 2025-06-30 13:33:12 +09:00
Niq Dudfield
396587c160 fix: prevent SOCI from linking ALL boost libraries (#529)
SOCI's vendored conanfile was using boost::boost which links against
every single Boost library (40+ libraries) when only boost::headers
is needed for SOCI's template specializations (boost::optional and
boost::gregorian::date support).

This was causing excessive linking and potential symbol conflicts,
particularly on Linux CI where boost_stacktrace_from_exception was
causing multiple definition errors with libstdc++.

Changed SOCI's boost dependency from boost::boost to boost::headers
since SOCI only needs Boost headers for its template specializations,
not the compiled libraries. The project already provides all necessary
Boost libraries through the ripple_boost target.

This reduces the linked libraries from 40+ down to just the ~14 that
the project actually uses, fixing the Linux CI build failures and
reducing binary size.

Note: The SOCI Conan recipe for Conan 2.0 already implements this
fix correctly.
2025-06-30 13:16:10 +09:00
tequ
a1d42b7380 Improve unittests (#494)
* Match unit tests on start of test name (#4634)

* For example, without this change, to run the TxQ tests, must specify
  `--unittest=TxQ1,TxQ2` on the command line. With this change, can use
  `--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
  name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
  might be useful. If not, the shorter-named test(s) can be renamed. For
  example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
  up parallel tests by a factor of 5.

* SetHook_test, SetHookTSH_test, XahauGenesis_test

---------

Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-30 10:03:02 +10:00
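A hedged sketch of the matching rule described in the commit above (not the actual test-runner code): an exact name match wins and suppresses partial matching, otherwise every test whose name starts with the pattern is selected.

```cpp
#include <string>
#include <vector>

// Illustrative helper: returns the tests selected for a --unittest pattern.
std::vector<std::string>
selectTests(std::vector<std::string> const& allTests, std::string const& pattern)
{
    // An exact match prevents any further partial matching.
    for (auto const& name : allTests)
        if (name == pattern)
            return {name};

    // Otherwise match on the start of the test name, so --unittest=TxQ
    // selects both TxQ1 and TxQ2.
    std::vector<std::string> matched;
    for (auto const& name : allTests)
        if (name.rfind(pattern, 0) == 0)
            matched.push_back(name);
    return matched;
}
```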
tequ
c065bc4938 Reduce numFeatures for DID Amendments combine 2025-06-28 21:36:25 +09:00
Niq Dudfield
2470926a1d fix: remove vestigial -DBOOST_ASIO_DISABLE_CONCEPTS usage (#526) 2025-06-27 16:41:30 +09:00
Denis Angell
c35890d5f8 fix cmake & xrpl_core 2025-06-25 10:30:30 +02:00
Denis Angell
248d485aed Update build-full.sh 2025-06-25 10:03:13 +02:00
Denis Angell
846965e77c fix cmake 2025-06-25 09:52:53 +02:00
Denis Angell
bf1f4e1a6f Update build-full.sh 2025-06-25 09:26:48 +02:00
Denis Angell
f8c4639ff4 add DeepFreeze to trustTransferAllowed 2025-06-25 09:15:05 +02:00
tequ
2451d78ae0 fix release-builder, workflow building 2025-06-24 21:27:20 +09:00
tequ
092f907724 remove checkpatterns workflow 2025-06-24 19:57:41 +09:00
tequ
33d4a989a2 Merge branch 'dev' into sync-2.4.0 2025-06-24 19:33:30 +09:00
tequ
348dab7491 Combine DID Amendments (#522)
fixEmptyDID -> featureDID
2025-06-23 21:13:52 +09:00
tequ
6728221831 Additional support for HookDefinition, HookState, ImportVLSequence at fixPreviousTxnID Amendment 2025-06-23 17:59:40 +09:00
Mark Travis
65f4945f22 Log detailed correlated consensus data together (#5302)
Combine multiple related debug log data points into a single
message. Allows quick correlation of events that
previously were either not logged or, if logged, strewn
across multiple lines, making correlation difficult.
The Heartbeat Timer and consensus ledger accept processing
each have this capability.

Also guarantees that log entries will be written if the
node is a validator, regardless of log severity level.
Otherwise, the level of these messages is at INFO severity.
2025-06-20 15:30:36 +09:00
Mark Travis
aff89c3457 fix: Acquire previously failed transaction set from network as new proposal arrives (#5318)
Reset the failure variable.
2025-06-20 15:12:58 +09:00
Bronek Kozicki
52e1766fb3 Fix Replace assert with XRPL_ASSERT (#5312) 2025-06-20 15:12:20 +09:00
Bronek Kozicki
3166ddc460 fix: Remove 'new parent hash' assert (#5313)
This assert is known to occasionally trigger, without causing errors
downstream. It is replaced with a log message.
2025-06-20 14:58:59 +09:00
Ed Hennis
db1591950d Add logging and improve counting of amendment votes from UNL (#5173)
* Add logging for amendment voting decision process
* When counting "received validations" to determine quorum, count the number of validators actually voting, not the total number of possible votes.
2025-06-20 14:58:48 +09:00
Bart
0d8c997867 docs: Revert peer port to 51235 (#5299)
Reverts the [port_peer] back to the legacy port 51235 rather than to the default port 2459, to avoid potentially inconveniencing existing operators.
2025-06-20 14:58:32 +09:00
Olek
41405706b0 fix: Switch Permissioned Domain to Supported::yes (#5287)
Switch Permissioned Domain feature's supported flag from Supported::no to Supported::yes for it to be votable.
2025-06-20 14:58:20 +09:00
Bart
d7480c6474 docs: Clarifies default port of hosts (#5290)
The current comment in the example cfg file incorrectly mentions both "may" and "must". This change fixes this comment to clarify that the default port of hosts is 2459 and that specifying it is therefore optional. It further sets the default port to 2459 instead of the legacy 51235.
2025-06-20 14:58:07 +09:00
Mark Travis
7b46e26d78 Log proposals and validations (#5291)
Adds detailed log messages for each validation and proposal received from the network.
2025-06-20 14:57:59 +09:00
Bart
5f5a73acbc Support canonical ledger entry names (#5271)
This change enhances the filtering in the ledger, ledger_data, and account_objects methods by also supporting filtering by the canonical name of the LedgerEntryType using case-insensitive matching.
2025-06-20 14:18:08 +09:00
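As a small illustration of the filtering change above (an assumed helper, not the rippled implementation), case-insensitive matching of a filter against a canonical LedgerEntryType name could look like:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Illustrative only: compare a user-supplied filter against a canonical
// ledger entry name, ignoring case.
bool
matchesCanonicalName(std::string const& filter, std::string const& canonical)
{
    if (filter.size() != canonical.size())
        return false;
    return std::equal(
        filter.begin(), filter.end(), canonical.begin(), [](char a, char b) {
            return std::tolower(static_cast<unsigned char>(a)) ==
                std::tolower(static_cast<unsigned char>(b));
        });
}

// e.g. matchesCanonicalName("depositpreauth", "DepositPreauth") == true
```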
Ed Hennis
c24f5b10b8 refactor: Change recursive_mutex to mutex in DatabaseRotatingImp (#5276)
Rewrites the code so that the lock is not held during the callback. Instead it locks twice, once before, and once after. This is safe due to the structure of the code, but is checked after the second lock. This allows mutex_ to be changed back to a regular mutex.
2025-06-20 14:17:06 +09:00
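A simplified sketch of the lock-twice pattern described above (illustrative structure, not DatabaseRotatingImp itself): the lock is released around the callback and the state is re-checked after re-acquiring it, so a plain std::mutex suffices.

```cpp
#include <functional>
#include <memory>
#include <mutex>

// Illustrative only: rotate a stored object without holding the lock across
// the user callback, so the member can be a plain std::mutex rather than a
// std::recursive_mutex.
struct RotatingStore
{
    std::mutex mutex_;
    std::shared_ptr<int> writable_ = std::make_shared<int>(0);

    void
    rotate(std::function<std::shared_ptr<int>(std::shared_ptr<int>)> const& makeNew)
    {
        std::shared_ptr<int> current;
        {
            std::lock_guard lock(mutex_);  // first lock: snapshot state
            current = writable_;
        }

        auto replacement = makeNew(current);  // callback runs unlocked

        {
            std::lock_guard lock(mutex_);  // second lock: verify and commit
            if (writable_ == current)      // safe by code structure; checked anyway
                writable_ = std::move(replacement);
        }
    }
};
```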
Bart
601bb7ed0f fix: Replace charge() by fee_.update() in OnMessage functions (#5269)
In PeerImpl.cpp, if the function is a message handler (onMessage) or called directly from a message handler, then it should use fee_, since when the handler returns (OnMessageEnd) then the charge function is called. If the function is not a message handler, such as a job queue item, it should remain charge.
2025-06-20 13:44:30 +09:00
Elliot Lee
0d3dd400f0 docs: ensure build_type and CMAKE_BUILD_TYPE match (#5274) 2025-06-20 13:43:59 +09:00
code0xff
4d763b7340 chore: Fix small typos in protocol files (#5279) 2025-06-20 13:43:42 +09:00
Ed Hennis
63665a6673 docs: Add a summary of the git commit message rules (#5283) 2025-06-20 13:43:03 +09:00
Olek
cbd7d5dc3a fix: Amendment to add transaction flag checking functionality for Credentials (#5250)
CredentialCreate / CredentialAccept / CredentialDelete transactions will check sfFlags field in preflight() when the amendment is enabled.
2025-06-20 13:42:18 +09:00
Donovan Hide
3e49ee604e fix: Omit superfluous setCurrentThreadName call in GRPCServer.cpp (#5280) 2025-06-20 11:21:55 +09:00
Bronek Kozicki
5e542f5215 fix: Do not allow creating Permissioned Domains if credentials are not enabled (#5275)
If the permissioned domains amendment XLS-80 is enabled before credentials XLS-70, then the permissioned domain users will not be able to match any credentials. The changes here prevent the creation of any permissioned domain objects if credentials are not enabled.
2025-06-20 11:21:45 +09:00
Mayukha Vadari
bdc404837c fix: issues in simulate RPC (#5265)
Make `simulate` RPC easier to use:
* Prevent the use of `seed`, `secret`, `seed_hex`, and `passphrase` fields (to avoid confusing with the signing methods).
* Add autofilling of the `NetworkID` field.
2025-06-20 11:21:33 +09:00
Bart
a62919a9cc Updates Conan dependencies (#5256)
This PR updates several Conan dependencies:
* boost
* date
* libarchive
* libmysqlclient
* libpq
* lz4
* onetbb
* openssl
* sqlite3
* zlib
* zstd
2025-06-20 11:21:21 +09:00
Shawn Xie
41dcc0fb23 Amendment fixFrozenLPTokenTransfer (#5227)
Prohibits LPToken holders from sending LPToken to others if they have been frozen by one of the assets in AMM pool.
2025-06-20 11:04:00 +09:00
Ed Hennis
b109dbf10f Improve git commit hash lookup (#5225)
- Also get the branch name.
- Use rev-parse instead of describe to get a clean hash.
- Return the git hash and branch name in server_info for admin
  connections.
- Include git hash and branch name on separate lines in --version.
2025-06-20 10:58:40 +09:00
Vlad
01372a67a8 Add deep freeze feature (XLS-77d) (#5187)
- spec: XRPLF/XRPL-Standards#220
- amendment: "DeepFreeze"
- implemented deep freeze spec to allow token issuers to prevent currency holders from being able to acquire more of these tokens.
- in combination with normal freeze, deep freeze effectively prevents any trust line balance change of a currency holder (except direct issuer <-> holder payments).
- added 2 new invariant checks to verify that deep freeze cannot be enacted without normal freeze and transfer is not frozen.
- made some fixes to existing freeze handling.

Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
2025-06-20 10:52:31 +09:00
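A toy sketch of the first invariant mentioned above, using hypothetical flag names (the real lsf* constants and invariant classes differ): deep freeze on a trust line side must imply the ordinary freeze on that side.

```cpp
#include <cstdint>

// Hypothetical flag values for illustration only; the actual lsf* constants
// live in the ledger format headers.
constexpr std::uint32_t kFreeze = 0x01;
constexpr std::uint32_t kDeepFreeze = 0x02;

// Invariant sketch: a side of a trust line may not be deep-frozen unless it
// is also frozen.
bool
deepFreezeImpliesFreeze(std::uint32_t sideFlags)
{
    bool const frozen = (sideFlags & kFreeze) != 0;
    bool const deepFrozen = (sideFlags & kDeepFreeze) != 0;
    return !deepFrozen || frozen;
}
```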
Mayukha Vadari
2b59176cfd Add RPC "simulate" to execute a dry run of a transaction (#5069)
- Spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate
- Also update signing methods to autofill fees better and properly handle transactions that require a non-standard fee.
2025-06-20 10:03:49 +09:00
Olek
a0505ce47d Fix CI unit tests (#5196)
- Add retries for rpc client
- Add dynamic port allocation for rpc servers
2025-06-20 02:28:17 +09:00
Michael Legleux
91aabaa4aa Update secp256k1 library to 0.6.0 (#5254) 2025-06-20 01:35:15 +09:00
Bronek Kozicki
a63008b1be Add [validator_list_threshold] to validators.txt to improve UNL security (#5112) 2025-06-20 01:34:38 +09:00
Bronek Kozicki
33b5ed931c Switch from assert to XRPL_ASSERT (#5245) 2025-06-20 01:29:51 +09:00
tequ
ce5c3c98c9 Add missing space character to a log message (#5251) 2025-06-20 01:29:43 +09:00
Bronek Kozicki
aeadad26cb Cleanup API-CHANGELOG.md (#5207) 2025-06-20 01:29:24 +09:00
Ed Hennis
74c50ebdab test: Unit tests to recreate invalid index logic error (#5242)
* One hits the global cache, one does not.
* Also some extra checking.

Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
2025-06-20 01:29:02 +09:00
Sergey Kuznetsov
1e2c92290d fix: Error consistency in LedgerEntry::parsePermissionedDomains() (#5252)
Update errors for parsing permissioned domains in the LedgerEntry handler to make them consistent with other parsers.
2025-06-20 00:56:42 +09:00
Ed Hennis
0617dc221d fix: Use consistent CMake settings for all modules (#5228)
* Resolves an issue introduced in #5111, which inadvertently removed the
  -Wno-maybe-uninitialized compiler option from some xrpl.libxrpl
  modules. This resulted in new "may be used uninitialized" build
  warnings, first noticed in the "protocol" module. When compiling with
  derr=TRUE, those warnings became errors, which made the build fail.
* Github CI actions will build with the assert and werr options turned
  on. This will cause CI jobs to fail if a developer introduces a new
  compiler warning, or causes an assert to fail in release builds.
* Includes the OS and compiler version in the linux dependencies jobs in
  the "check environment" step.
* Translates the `unity` build option into `CMAKE_UNITY_BUILD` setting.
2025-06-20 00:56:18 +09:00
Valentin Balaschenko
a4a8295567 Fix levelization script to ignore commented includes (#5194)
Check to ignore single-line comments during dependency analysis.
2025-06-20 00:45:59 +09:00
tequ
2a836cbbb8 Fix the flag processing of NFTokenModify (#5246)
Adds checks for invalid flags.
2025-06-20 00:45:51 +09:00
Mayukha Vadari
79935d4db8 Fix failing assert in connect RPC (#5235) 2025-06-20 00:45:43 +09:00
Olek
7088c64427 Permissioned Domains (XLS-80d) (#5161) 2025-06-20 00:45:28 +09:00
tequ
27ddfae5e1 XLS-46: DynamicNFT (#5048)
This Amendment adds functionality to update the URI of NFToken objects as described in the XLS-46d: Dynamic Non Fungible Tokens (dNFTs) spec.
2025-06-20 00:27:50 +09:00
Shawn Xie
cf957db8da prefix Uint384 and Uint512 with Hash in server_definitions (#5231) 2025-06-20 00:10:23 +09:00
Mayukha Vadari
ac532d9d16 refactor: add rpcName to LEDGER_ENTRY macro (#5202)
The LEDGER_ENTRY macro now takes an additional parameter, which makes it easier to avoid missing including the new field in jss.h and to the list of account_objects/ledger_data filters.
2025-06-20 00:08:47 +09:00
Michael Legleux
37614773bb fix: Add header for set_difference (#5197)
Fix `error C2039: 'set_difference': is not a member of 'std'`
2025-06-19 23:42:21 +09:00
Mayukha Vadari
c329d71717 fix: allow overlapping types in Expected (#5218)
For example, Expected<std::uint32_t, Json::Value> will now build even though there is an implicit conversion from unsigned int to Json::Value.
2025-06-19 23:42:12 +09:00
Gregory Tsipenyuk
7de6a70221 Add MPTIssue to STIssue (#5200)
Replace Issue in STIssue with Asset. STIssue with MPTIssue is only used in MPT tests.
Will be used in Vault and in transactions with STIssue fields once MPT is integrated into DEX.
2025-06-19 23:41:59 +09:00
Bronek Kozicki
0fa542f672 Antithesis instrumentation improvements (#5213)
* Rename ASSERT to XRPL_ASSERT
* Upgrade to Antithesis SDK 0.4.4, and use new 0.4.4 features
  * automatic cast to bool, like assert
* Add instrumentation workflow to verify build with instrumentation enabled
2025-06-19 23:27:49 +09:00
John Freeman
68705eee2c Enforce levelization in libxrpl with CMake (#5111)
Adds two CMake functions:

* add_module(library subdirectory): Declares an OBJECT "library" (a CMake abstraction for a collection of object files) with sources from the given subdirectory of the given library, representing a module. Isolates the module's headers by creating a subdirectory in the build directory, e.g. .build/tmp123, that contains just a symlink, e.g. .build/tmp123/basics, to the module's header directory, e.g. include/xrpl/basics, in the source directory, and putting .build/tmp123 (but not include/xrpl) on the include path of the module sources. This prevents the module sources from including headers not explicitly linked to the module in CMake with target_link_libraries.
* target_link_modules(library scope modules...): Links the library target to each of the module targets, and removes their sources from its source list (so they are not compiled and linked twice).

Uses these functions to separate and explicitly link modules in libxrpl:

    Level 01: beast
    Level 02: basics
    Level 03: json, crypto
    Level 04: protocol
    Level 05: resource, server
2025-06-19 23:06:46 +09:00
Mayukha Vadari
fdbb24d898 refactor: clean up LedgerEntry.cpp (#5199)
Refactors LedgerEntry to make it easier to read and understand.
2025-06-19 21:49:47 +09:00
Ed Hennis
60a8f3c05b test: Add more test cases for Base58 parser (#5174)
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
2025-06-19 19:57:12 +09:00
Ed Hennis
5a3a71ecb8 test: Check for some unlikely null dereferences in tests (#5004) 2025-06-19 19:57:04 +09:00
Bronek Kozicki
16b3221f80 Add Antithesis instrumentation (#5042)
* Copy Antithesis SDK version 0.4.0 to directory external/
* Add build option `voidstar` to enable instrumentation with Antithesis SDK
* Define instrumentation macros ASSERT and UNREACHABLE in terms of regular C assert
* Replace asserts with named ASSERT or UNREACHABLE
* Add UNREACHABLE to LogicError
* Document instrumentation macros in CONTRIBUTING.md
2025-06-19 19:56:21 +09:00
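A hedged sketch of what "define instrumentation macros in terms of regular C assert" can look like when the Antithesis SDK is not enabled (names follow the commit above; the actual definitions in the repository are more involved, and a later commit in this range renames ASSERT to XRPL_ASSERT):

```cpp
#include <cassert>

// Illustrative fallback definitions only: when the voidstar/Antithesis build
// option is off, the named macros reduce to plain assert so call sites stay
// unchanged.
#define ASSERT(cond, name) assert((cond) && (name))
#define UNREACHABLE(name) assert(false && (name))

int
classify(int value)
{
    ASSERT(value >= 0, "classify : non-negative input");
    if (value == 0)
        return 0;
    if (value > 0)
        return 1;
    UNREACHABLE("classify : negative value after assert");
    return -1;
}
```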
Valentin Balaschenko
dd4b060f09 Reduce the peer charges for well-behaved peers:
- Fix an erroneous high fee penalty that peers could incur for sending
  older transactions.
- Update to the fees charged for imposing a load on the server.
- Prevent the relaying of internal pseudo-transactions.
  - Before: Pseudo-transactions received from a peer will fail the signature
    check, even if they were requested (using TMGetObjectByHash), because
    they have no signature. This causes the peer to be charged for an
    invalid signature.
  - After: Pseudo-transactions are put into the global cache
    (TransactionMaster) only. If the transaction is not part of
    a TMTransactions batch, the peer is charged an unwanted data fee.
    These fees will not be a problem in the normal course of operations,
    but should dissuade peers from behaving badly by sending a bunch of
    junk.
- Improve logging: include the reason for fees charged to a peer.

Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-19 17:05:23 +09:00
tequ
f6d2bf819d Fix governance vote purge (#221)
governance hook should be independently and deterministically recompiled before being voted in
2025-06-16 17:12:06 +10:00
694 changed files with 44149 additions and 27491 deletions

View File

@@ -1,31 +0,0 @@
name: 'Configure ccache'
description: 'Sets up ccache with consistent configuration'
inputs:
max_size:
description: 'Maximum cache size'
required: false
default: '2G'
hash_dir:
description: 'Whether to include directory paths in hash'
required: false
default: 'true'
compiler_check:
description: 'How to check compiler for changes'
required: false
default: 'content'
runs:
using: 'composite'
steps:
- name: Configure ccache
shell: bash
run: |
mkdir -p ~/.ccache
export CONF_PATH="${CCACHE_CONFIGPATH:-${CCACHE_DIR:-$HOME/.ccache}/ccache.conf}"
mkdir -p $(dirname "$CONF_PATH")
echo "max_size = ${{ inputs.max_size }}" > "$CONF_PATH"
echo "hash_dir = ${{ inputs.hash_dir }}" >> "$CONF_PATH"
echo "compiler_check = ${{ inputs.compiler_check }}" >> "$CONF_PATH"
ccache -p # Print config for verification
ccache -z # Zero statistics before the build

View File

@@ -21,13 +21,17 @@ inputs:
required: false
default: ''
compiler-id:
description: 'Unique identifier for compiler/version combination used for cache keys'
description: 'Unique identifier: compiler-version-stdlib[-gccversion] (e.g. clang-14-libstdcxx-gcc11, gcc-13-libstdcxx)'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
gha_cache_enabled:
description: 'Whether to use actions/cache (disable for self-hosted with volume mounts)'
required: false
default: 'true'
ccache_enabled:
description: 'Whether to use ccache'
required: false
@@ -36,6 +40,29 @@ inputs:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
stdlib:
description: 'C++ standard library to use'
required: true
type: choice
options:
- libstdcxx
- libcxx
clang_gcc_toolchain:
description: 'GCC version to use for Clang toolchain (e.g. 11, 13)'
required: false
default: ''
ccache_max_size:
description: 'Maximum ccache size'
required: false
default: '2G'
ccache_hash_dir:
description: 'Whether to include directory paths in hash'
required: false
default: 'true'
ccache_compiler_check:
description: 'How to check compiler for changes'
required: false
default: 'content'
runs:
using: 'composite'
@@ -48,18 +75,37 @@ runs:
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | tr -c 'a-zA-Z0-9_.-' '-')
echo "name=${SAFE_BRANCH}" >> $GITHUB_OUTPUT
- name: Restore ccache directory
- name: Configure ccache
if: inputs.ccache_enabled == 'true'
id: ccache-restore
uses: actions/cache/restore@v4
with:
path: ~/.ccache
key: ${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ steps.safe-branch.outputs.name }}
restore-keys: |
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ inputs.main_branch }}
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-
shell: bash
run: |
# Create cache directories
mkdir -p ~/.ccache-cache
# Keep config separate from cache_dir so configs aren't swapped when CCACHE_DIR changes between steps
mkdir -p ~/.config/ccache
export CCACHE_CONFIGPATH="$HOME/.config/ccache/ccache.conf"
echo "CCACHE_CONFIGPATH=$CCACHE_CONFIGPATH" >> $GITHUB_ENV
# Keep config separate from cache_dir so configs aren't swapped when CCACHE_DIR changes between steps
mkdir -p ~/.config/ccache
export CCACHE_CONFIGPATH="$HOME/.config/ccache/ccache.conf"
echo "CCACHE_CONFIGPATH=$CCACHE_CONFIGPATH" >> $GITHUB_ENV
# Configure ccache settings AFTER cache restore (prevents stale cached config)
ccache --set-config=max_size=${{ inputs.ccache_max_size }}
ccache --set-config=hash_dir=${{ inputs.ccache_hash_dir }}
ccache --set-config=compiler_check=${{ inputs.ccache_compiler_check }}
ccache --set-config=cache_dir="$HOME/.ccache-cache"
echo "CCACHE_DIR=$HOME/.ccache-cache" >> $GITHUB_ENV
echo "📦 using ~/.ccache-cache as ccache cache directory"
# Print config for verification
echo "=== ccache configuration ==="
ccache -p
# Zero statistics before the build
ccache -z
- name: Configure project
shell: bash
@@ -75,36 +121,97 @@ runs:
if [ -n "${{ inputs.cxx }}" ]; then
export CXX="${{ inputs.cxx }}"
fi
# Configure ccache launcher args
CCACHE_ARGS=""
# Create wrapper toolchain that overlays ccache on top of Conan's toolchain
# This enables ccache for the main app build without affecting Conan dependency builds
if [ "${{ inputs.ccache_enabled }}" = "true" ]; then
CCACHE_ARGS="-DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
cat > wrapper_toolchain.cmake <<'EOF'
# Include Conan's generated toolchain first (sets compiler, flags, etc.)
# Note: CMAKE_CURRENT_LIST_DIR is the directory containing this wrapper (.build/)
include(${CMAKE_CURRENT_LIST_DIR}/build/generators/conan_toolchain.cmake)
# Overlay ccache configuration for main application build
# This does NOT affect Conan dependency builds (already completed)
set(CMAKE_C_COMPILER_LAUNCHER ccache CACHE STRING "C compiler launcher" FORCE)
set(CMAKE_CXX_COMPILER_LAUNCHER ccache CACHE STRING "C++ compiler launcher" FORCE)
EOF
TOOLCHAIN_FILE="wrapper_toolchain.cmake"
echo "✅ Created wrapper toolchain with ccache enabled"
else
TOOLCHAIN_FILE="build/generators/conan_toolchain.cmake"
echo " Using Conan toolchain directly (ccache disabled)"
fi
# Configure C++ standard library if specified
# libstdcxx used for clang-14/16 to work around missing lexicographical_compare_three_way in libc++
# libcxx can be used with clang-17+ which has full C++20 support
# Note: -stdlib flag is Clang-specific, GCC always uses libstdc++
CMAKE_CXX_FLAGS=""
if [[ "${{ inputs.cxx }}" == clang* ]]; then
# Only Clang needs the -stdlib flag
if [ "${{ inputs.stdlib }}" = "libstdcxx" ]; then
CMAKE_CXX_FLAGS="-stdlib=libstdc++"
elif [ "${{ inputs.stdlib }}" = "libcxx" ]; then
CMAKE_CXX_FLAGS="-stdlib=libc++"
fi
fi
# GCC always uses libstdc++ and doesn't need/support the -stdlib flag
# Configure GCC toolchain for Clang if specified
if [ -n "${{ inputs.clang_gcc_toolchain }}" ] && [[ "${{ inputs.cxx }}" == clang* ]]; then
# Extract Clang version from compiler executable name (e.g., clang++-14 -> 14)
clang_version=$(echo "${{ inputs.cxx }}" | grep -oE '[0-9]+$')
# Clang 16+ supports --gcc-install-dir (precise path specification)
# Clang <16 only has --gcc-toolchain (uses discovery heuristics)
if [ -n "$clang_version" ] && [ "$clang_version" -ge "16" ]; then
# Clang 16+ uses --gcc-install-dir (canonical, precise)
CMAKE_CXX_FLAGS="$CMAKE_CXX_FLAGS --gcc-install-dir=/usr/lib/gcc/x86_64-linux-gnu/${{ inputs.clang_gcc_toolchain }}"
else
# Clang 14-15 uses --gcc-toolchain (deprecated but necessary)
# Note: This still uses discovery, so we hide newer GCC versions in the workflow
CMAKE_CXX_FLAGS="$CMAKE_CXX_FLAGS --gcc-toolchain=/usr"
fi
fi
# Run CMake configure
# Note: conanfile.py hardcodes 'build/generators' as the output path.
# If we're in a 'build' folder, Conan detects this and uses just 'generators/'
# If we're in '.build' (non-standard), Conan adds the full 'build/generators/'
# So we get: .build/build/generators/ with our non-standard folder name
cmake .. \
-G "${{ inputs.generator }}" \
$CCACHE_ARGS \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${{ inputs.configuration }}
${CMAKE_CXX_FLAGS:+-DCMAKE_CXX_FLAGS="$CMAKE_CXX_FLAGS"} \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=${TOOLCHAIN_FILE} \
-DCMAKE_BUILD_TYPE=${{ inputs.configuration }} \
-Dtests=TRUE \
-Dxrpld=TRUE
- name: Show ccache config before build
if: inputs.ccache_enabled == 'true'
shell: bash
run: |
echo "=========================================="
echo "ccache configuration before build"
echo "=========================================="
ccache -p
echo ""
- name: Build project
shell: bash
run: |
cd ${{ inputs.build_dir }}
cmake --build . --config ${{ inputs.configuration }} --parallel $(nproc)
# Check for verbose build flag in commit message
VERBOSE_FLAG=""
if echo "${XAHAU_GA_COMMIT_MSG}" | grep -q '\[ci-ga-cmake-verbose\]'; then
echo "🔊 [ci-ga-cmake-verbose] detected - enabling verbose output"
VERBOSE_FLAG="-- -v"
fi
cmake --build . --config ${{ inputs.configuration }} --parallel $(nproc) ${VERBOSE_FLAG}
- name: Show ccache statistics
if: inputs.ccache_enabled == 'true'
shell: bash
run: ccache -s
- name: Save ccache directory
if: inputs.ccache_enabled == 'true'
uses: actions/cache/save@v4
with:
path: ~/.ccache
key: ${{ steps.ccache-restore.outputs.cache-primary-key }}

View File

@@ -0,0 +1,146 @@
name: 'Cache Restore'
description: 'Restores cache with optional clearing based on commit message tags'
inputs:
path:
description: 'A list of files, directories, and wildcard patterns to cache'
required: true
key:
description: 'An explicit key for restoring the cache'
required: true
restore-keys:
description: 'An ordered list of prefix-matched keys to use for restoring stale cache if no cache hit occurred for key'
required: false
default: ''
cache-type:
description: 'Type of cache (for logging purposes, e.g., "ccache-main", "Conan")'
required: false
default: 'cache'
fail-on-cache-miss:
description: 'Fail the workflow if cache entry is not found'
required: false
default: 'false'
lookup-only:
description: 'Check if a cache entry exists for the given input(s) without downloading it'
required: false
default: 'false'
additional-clear-keys:
description: 'Additional cache keys to clear (newline separated)'
required: false
default: ''
outputs:
cache-hit:
description: 'A boolean value to indicate an exact match was found for the primary key'
value: ${{ steps.restore-cache.outputs.cache-hit }}
cache-primary-key:
description: 'The key that was used to restore the cache'
value: ${{ steps.restore-cache.outputs.cache-primary-key }}
cache-matched-key:
description: 'The key that was used to restore the cache (exact or prefix match)'
value: ${{ steps.restore-cache.outputs.cache-matched-key }}
runs:
using: 'composite'
steps:
- name: Clear cache if requested via commit message
shell: bash
env:
GH_TOKEN: ${{ github.token }}
run: |
echo "=========================================="
echo "${{ inputs.cache-type }} cache clear tag detection"
echo "=========================================="
echo "Searching for: [ci-ga-clear-cache] or [ci-ga-clear-cache:*]"
echo ""
CACHE_KEY="${{ inputs.key }}"
# Extract search terms if present (e.g., "ccache" from "[ci-ga-clear-cache:ccache]")
SEARCH_TERMS=$(echo "${XAHAU_GA_COMMIT_MSG}" | grep -o '\[ci-ga-clear-cache:[^]]*\]' | sed 's/\[ci-ga-clear-cache://;s/\]//' || echo "")
SHOULD_CLEAR=false
if [ -n "${SEARCH_TERMS}" ]; then
# Search terms provided - check if THIS cache key matches ALL terms (AND logic)
echo "🔍 [ci-ga-clear-cache:${SEARCH_TERMS}] detected"
echo "Checking if cache key matches search terms..."
echo " Cache key: ${CACHE_KEY}"
echo " Search terms: ${SEARCH_TERMS}"
echo ""
MATCHES=true
for term in ${SEARCH_TERMS}; do
if ! echo "${CACHE_KEY}" | grep -q "${term}"; then
MATCHES=false
echo " ✗ Key does not contain '${term}'"
break
else
echo " ✓ Key contains '${term}'"
fi
done
if [ "${MATCHES}" = "true" ]; then
echo ""
echo "✅ Cache key matches all search terms - will clear cache"
SHOULD_CLEAR=true
else
echo ""
echo "⏭️ Cache key doesn't match search terms - skipping cache clear"
fi
elif echo "${XAHAU_GA_COMMIT_MSG}" | grep -q '\[ci-ga-clear-cache\]'; then
# No search terms - always clear this job's cache
echo "🗑️ [ci-ga-clear-cache] detected in commit message"
echo "Clearing ${{ inputs.cache-type }} cache for key: ${CACHE_KEY}"
SHOULD_CLEAR=true
fi
if [ "${SHOULD_CLEAR}" = "true" ]; then
echo ""
echo "Deleting ${{ inputs.cache-type }} caches via GitHub API..."
# Delete primary cache key
echo "Checking for cache: ${CACHE_KEY}"
if gh cache list --key "${CACHE_KEY}" --json key --jq '.[].key' | grep -q "${CACHE_KEY}"; then
echo " Deleting: ${CACHE_KEY}"
gh cache delete "${CACHE_KEY}" || true
echo " ✓ Deleted"
else
echo " Not found"
fi
# Delete additional keys if provided
if [ -n "${{ inputs.additional-clear-keys }}" ]; then
echo ""
echo "Checking additional keys..."
while IFS= read -r key; do
[ -z "${key}" ] && continue
echo "Checking for cache: ${key}"
if gh cache list --key "${key}" --json key --jq '.[].key' | grep -q "${key}"; then
echo " Deleting: ${key}"
gh cache delete "${key}" || true
echo " ✓ Deleted"
else
echo " Not found"
fi
done <<< "${{ inputs.additional-clear-keys }}"
fi
echo ""
echo "✅ ${{ inputs.cache-type }} cache cleared successfully"
echo "Build will proceed from scratch"
else
echo ""
echo " No ${{ inputs.cache-type }} cache clear requested"
fi
echo "=========================================="
- name: Restore cache
id: restore-cache
uses: actions/cache/restore@v4
with:
path: ${{ inputs.path }}
key: ${{ inputs.key }}
restore-keys: ${{ inputs.restore-keys }}
fail-on-cache-miss: ${{ inputs.fail-on-cache-miss }}
lookup-only: ${{ inputs.lookup-only }}

View File

@@ -10,21 +10,46 @@ inputs:
required: false
default: '.build'
compiler-id:
description: 'Unique identifier for compiler/version combination used for cache keys'
description: 'Unique identifier: compiler-version-stdlib[-gccversion] (e.g. clang-14-libstdcxx-gcc11, gcc-13-libstdcxx)'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
cache_enabled:
description: 'Whether to use caching'
required: false
default: 'true'
main_branch:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
os:
description: 'Operating system (Linux, Macos)'
required: false
default: 'Linux'
arch:
description: 'Architecture (x86_64, armv8)'
required: false
default: 'x86_64'
compiler:
description: 'Compiler type (gcc, clang, apple-clang)'
required: true
compiler_version:
description: 'Compiler version (11, 13, 14, etc.)'
required: true
cc:
description: 'C compiler executable (gcc-13, clang-14, etc.), empty for macOS'
required: false
default: ''
cxx:
description: 'C++ compiler executable (g++-14, clang++-14, etc.), empty for macOS'
required: false
default: ''
stdlib:
description: 'C++ standard library for Conan configuration (note: also in compiler-id)'
required: true
type: choice
options:
- libstdcxx
- libcxx
outputs:
cache-hit:
@@ -34,36 +59,89 @@ outputs:
runs:
using: 'composite'
steps:
- name: Generate safe branch name
if: inputs.cache_enabled == 'true'
id: safe-branch
- name: Configure Conan cache paths
if: inputs.os == 'Linux'
shell: bash
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | tr -c 'a-zA-Z0-9_.-' '-')
echo "name=${SAFE_BRANCH}" >> $GITHUB_OUTPUT
mkdir -p /.conan-cache/conan2 /.conan-cache/conan2_download /.conan-cache/conan2_sources
echo 'core.cache:storage_path=/.conan-cache/conan2' > ~/.conan2/global.conf
echo 'core.download:download_cache=/.conan-cache/conan2_download' >> ~/.conan2/global.conf
echo 'core.sources:download_cache=/.conan-cache/conan2_sources' >> ~/.conan2/global.conf
- name: Restore Conan cache
if: inputs.cache_enabled == 'true'
id: cache-restore-conan
uses: actions/cache/restore@v4
with:
path: |
~/.conan
~/.conan2
key: ${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-${{ inputs.configuration }}
restore-keys: |
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
${{ runner.os }}-conan-v${{ inputs.cache_version }}-
- name: Configure Conan cache paths
if: inputs.gha_cache_enabled == 'false'
shell: bash
# For self-hosted runners, register cache paths to be used as volumes
# This allows the cache to be shared between containers
run: |
mkdir -p /.conan-cache/conan2 /.conan-cache/conan2_download /.conan-cache/conan2_sources
echo 'core.cache:storage_path=/.conan-cache/conan2' > ~/.conan2/global.conf
echo 'core.download:download_cache=/.conan-cache/conan2_download' >> ~/.conan2/global.conf
echo 'core.sources:download_cache=/.conan-cache/conan2_sources' >> ~/.conan2/global.conf
- name: Configure Conan
shell: bash
run: |
# Create the default profile directory if it doesn't exist
mkdir -p ~/.conan2/profiles
# Determine the correct libcxx based on stdlib parameter
if [ "${{ inputs.stdlib }}" = "libcxx" ]; then
LIBCXX="libc++"
else
LIBCXX="libstdc++11"
fi
# Create profile with our specific settings
# This overwrites any cached profile to ensure fresh configuration
cat > ~/.conan2/profiles/default <<EOF
[settings]
arch=${{ inputs.arch }}
build_type=${{ inputs.configuration }}
compiler=${{ inputs.compiler }}
compiler.cppstd=20
compiler.libcxx=${LIBCXX}
compiler.version=${{ inputs.compiler_version }}
os=${{ inputs.os }}
EOF
# Add buildenv and conf sections for Linux (not needed for macOS)
if [ "${{ inputs.os }}" = "Linux" ] && [ -n "${{ inputs.cc }}" ]; then
cat >> ~/.conan2/profiles/default <<EOF
[buildenv]
CC=/usr/bin/${{ inputs.cc }}
CXX=/usr/bin/${{ inputs.cxx }}
[conf]
tools.build:compiler_executables={"c": "/usr/bin/${{ inputs.cc }}", "cpp": "/usr/bin/${{ inputs.cxx }}"}
EOF
fi
# Add macOS-specific conf if needed
if [ "${{ inputs.os }}" = "Macos" ]; then
cat >> ~/.conan2/profiles/default <<EOF
[conf]
# Workaround for gRPC with newer Apple Clang
tools.build:cxxflags=["-Wno-missing-template-arg-list-after-template-kw"]
EOF
fi
# Display profile for verification
conan profile show
- name: Export custom recipes
shell: bash
run: |
conan export external/snappy snappy/1.1.10@
conan export external/soci soci/4.0.3@
conan export external/snappy --version 1.1.10 --user xahaud --channel stable
conan export external/soci --version 4.0.3 --user xahaud --channel stable
conan export external/wasmedge --version 0.11.2 --user xahaud --channel stable
- name: Install dependencies
shell: bash
env:
CONAN_REQUEST_TIMEOUT: 180 # Increase timeout to 3 minutes for slow mirrors
run: |
# Create build directory
mkdir -p ${{ inputs.build_dir }}
@@ -74,15 +152,4 @@ runs:
--output-folder . \
--build missing \
--settings build_type=${{ inputs.configuration }} \
-Dtests=TRUE \
-Dxrpld=TRUE \
..
- name: Save Conan cache
if: inputs.cache_enabled == 'true' && steps.cache-restore-conan.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.conan
~/.conan2
key: ${{ steps.cache-restore-conan.outputs.cache-primary-key }}

View File

@@ -0,0 +1,74 @@
name: 'Get Commit Message'
description: 'Gets commit message for both push and pull_request events and sets XAHAU_GA_COMMIT_MSG env var'
inputs:
event-name:
description: 'The event name (push or pull_request)'
required: true
head-commit-message:
description: 'The head commit message (for push events)'
required: false
default: ''
pr-head-sha:
description: 'The PR head SHA (for pull_request events)'
required: false
default: ''
runs:
using: 'composite'
steps:
- name: Get commit message and set environment variable
shell: python
env:
GH_TOKEN: ${{ github.token }}
run: |
import json
import os
import secrets
import urllib.request
event_name = "${{ inputs.event-name }}"
pr_head_sha = "${{ inputs.pr-head-sha }}"
repository = "${{ github.repository }}"
print("==========================================")
print("Setting XAHAU_GA_COMMIT_MSG environment variable")
print("==========================================")
print(f"Event: {event_name}")
if event_name == 'push':
# For push events, use the input directly
message = """${{ inputs.head-commit-message }}"""
print("Source: workflow input (github.event.head_commit.message)")
elif event_name == 'pull_request' and pr_head_sha:
# For PR events, fetch via GitHub API
print(f"Source: GitHub API (fetching commit {pr_head_sha})")
try:
url = f"https://api.github.com/repos/{repository}/commits/{pr_head_sha}"
req = urllib.request.Request(url, headers={
"Accept": "application/vnd.github.v3+json",
"Authorization": f"Bearer {os.environ.get('GH_TOKEN', '')}"
})
with urllib.request.urlopen(req) as response:
data = json.load(response)
message = data["commit"]["message"]
except Exception as e:
print(f"Failed to fetch commit message: {e}")
message = ""
else:
message = ""
print(f"Warning: Unknown event type: {event_name}")
print(f"Commit message (first 100 chars): {message[:100]}")
# Write to GITHUB_ENV using heredoc with random delimiter (prevents injection attacks)
# See: https://securitylab.github.com/resources/github-actions-untrusted-input/
delimiter = f"EOF_{secrets.token_hex(16)}"
with open(os.environ['GITHUB_ENV'], 'a') as f:
f.write(f'XAHAU_GA_COMMIT_MSG<<{delimiter}\n')
f.write(message)
f.write(f'\n{delimiter}\n')
print(f"✓ XAHAU_GA_COMMIT_MSG set (available to all subsequent steps)")
print("==========================================")

View File

@@ -32,19 +32,9 @@ jobs:
clean: true
fetch-depth: 2 # Only get the last 2 commits, to avoid fetching all history
checkpatterns:
runs-on: [self-hosted, vanity]
needs: checkout
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Check for suspicious patterns
run: /bin/bash suspicious_patterns.sh
build:
runs-on: [self-hosted, vanity]
needs: [checkpatterns, checkout]
runs-on: [self-hosted, xahaud-build]
needs: [checkout]
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
@@ -84,7 +74,7 @@ jobs:
fi
tests:
runs-on: [self-hosted, vanity]
runs-on: [self-hosted, xahaud-build]
needs: [build, checkout]
defaults:
run:
@@ -94,7 +84,7 @@ jobs:
run: /bin/bash docker-unit-tests.sh
cleanup:
runs-on: [self-hosted, vanity]
runs-on: [self-hosted, xahaud-build]
needs: [tests, checkout]
if: always()
steps:

View File

@@ -1,20 +0,0 @@
name: checkpatterns
on: [push, pull_request]
jobs:
checkpatterns:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Check for suspicious patterns
run: |
if [ -f "suspicious_patterns.sh" ]; then
bash suspicious_patterns.sh
else
echo "Warning: suspicious_patterns.sh not found, skipping check"
# Still exit with success for compatibility with dependent jobs
exit 0
fi

.github/workflows/instrumentation.yml (new file, 104 lines)
View File

@@ -0,0 +1,104 @@
name: instrumentation
on:
pull_request:
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows, instrumentation)
branches:
# Always build the package branches
- develop
- release
- master
# Branches that opt-in to running
- 'ci/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
# NOTE we are not using dependencies built inside nix because nix is lagging
# with compiler versions. Instrumentation requires clang version 16 or later
instrumentation-build:
if: false # disable for now
env:
CLANG_RELEASE: 16
strategy:
fail-fast: false
runs-on: [self-hosted, heavy]
container: debian:bookworm
steps:
- name: install prerequisites
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update
apt-get install --yes --no-install-recommends \
clang-${CLANG_RELEASE} clang++-${CLANG_RELEASE} \
python3-pip python-is-python3 make cmake git wget
apt-get clean
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-${CLANG_RELEASE} 100 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-${CLANG_RELEASE}
update-alternatives --auto clang
pip install --no-cache --break-system-packages "conan<2"
- name: checkout
uses: actions/checkout@v4
- name: prepare environment
run: |
mkdir ${GITHUB_WORKSPACE}/.build
echo "SOURCE_DIR=$GITHUB_WORKSPACE" >> $GITHUB_ENV
echo "BUILD_DIR=$GITHUB_WORKSPACE/.build" >> $GITHUB_ENV
echo "CC=/usr/bin/clang" >> $GITHUB_ENV
echo "CXX=/usr/bin/clang++" >> $GITHUB_ENV
- name: configure Conan
run: |
conan profile new --detect default
conan profile update settings.compiler=clang default
conan profile update settings.compiler.version=${CLANG_RELEASE} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update settings.compiler.cppstd=20 default
conan profile update options.rocksdb=False default
conan profile update \
'conf.tools.build:compiler_executables={"c": "/usr/bin/clang", "cpp": "/usr/bin/clang++"}' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
conan export external/snappy snappy/1.1.10@
conan export external/soci soci/4.0.3@
- name: build dependencies
run: |
cd ${BUILD_DIR}
conan install ${SOURCE_DIR} \
--output-folder ${BUILD_DIR} \
--install-folder ${BUILD_DIR} \
--build missing \
--settings build_type=Debug
- name: build with instrumentation
run: |
cd ${BUILD_DIR}
cmake -S ${SOURCE_DIR} -B ${BUILD_DIR} \
-Dvoidstar=ON \
-Dtests=ON \
-Dxrpld=ON \
-DCMAKE_BUILD_TYPE=Debug \
-DSECP256K1_BUILD_BENCHMARK=OFF \
-DSECP256K1_BUILD_TESTS=OFF \
-DSECP256K1_BUILD_EXHAUSTIVE_TESTS=OFF \
-DCMAKE_TOOLCHAIN_FILE=${BUILD_DIR}/build/generators/conan_toolchain.cmake
cmake --build . --parallel $(nproc)
- name: verify instrumentation enabled
run: |
cd ${BUILD_DIR}
./rippled --version | grep libvoidstar
- name: run unit tests
run: |
cd ${BUILD_DIR}
./rippled -u --unittest-jobs $(( $(nproc)/4 ))

View File

@@ -0,0 +1,36 @@
name: Verify Generated Hook Headers
on:
push:
pull_request:
jobs:
verify-generated-headers:
strategy:
fail-fast: false
matrix:
include:
- target: hook/error.h
generator: ./hook/generate_error.sh
- target: hook/extern.h
generator: ./hook/generate_extern.sh
- target: hook/sfcodes.h
generator: bash ./hook/generate_sfcodes.sh
- target: hook/tts.h
generator: ./hook/generate_tts.sh
runs-on: ubuntu-latest
name: ${{ matrix.target }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Verify ${{ matrix.target }}
run: |
set -euo pipefail
chmod +x hook/generate_*.sh || true
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT
${{ matrix.generator }} > "$tmp"
diff -u ${{ matrix.target }} "$tmp"

View File

@@ -5,6 +5,8 @@ on:
branches: ["dev", "candidate", "release"]
pull_request:
branches: ["dev", "candidate", "release"]
schedule:
- cron: '0 0 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -18,62 +20,39 @@ jobs:
- Ninja
configuration:
- Debug
runs-on: macos-15
runs-on: [self-hosted, macOS]
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 1
CACHE_VERSION: 3
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Conan
- name: Add Homebrew to PATH
run: |
brew install conan@1
# Add Conan 1 to the PATH for this job
echo "$(brew --prefix conan@1)/bin" >> $GITHUB_PATH
echo "/opt/homebrew/bin" >> "$GITHUB_PATH"
echo "/opt/homebrew/sbin" >> "$GITHUB_PATH"
- name: Install Coreutils
run: |
brew install coreutils
echo "Num proc: $(nproc)"
- name: Install Ninja
if: matrix.generator == 'Ninja'
run: brew install ninja
- name: Install Python
run: |
if which python3 > /dev/null 2>&1; then
echo "Python 3 executable exists"
python3 --version
else
brew install python@3.12
fi
# Create 'python' symlink if it doesn't exist (for tools expecting 'python')
if ! which python > /dev/null 2>&1; then
sudo ln -sf $(which python3) /usr/local/bin/python
fi
- name: Install CMake
run: |
if which cmake > /dev/null 2>&1; then
echo "cmake executable exists"
cmake --version
else
brew install cmake
fi
- name: Install ccache
run: brew install ccache
- name: Configure ccache
uses: ./.github/actions/xahau-configure-ccache
# To isolate environments for each Runner, instead of installing globally with brew,
# use mise to isolate environments for each Runner directory.
- name: Setup toolchain (mise)
uses: jdx/mise-action@v2
with:
max_size: 2G
hash_dir: true
compiler_check: content
install: true
- name: Install tools via mise
run: |
mise install
mise use cmake@3.25.3 python@3.12 pipx@latest conan@2 ninja@latest ccache@latest
mise reshim
echo "$HOME/.local/share/mise/shims" >> "$GITHUB_PATH"
- name: Check environment
run: |
@@ -87,11 +66,20 @@ jobs:
echo "---- Full Environment ----"
env
- name: Configure Conan
- name: Get commit message
id: get-commit-message
uses: ./.github/actions/xahau-ga-get-commit-message
with:
event-name: ${{ github.event_name }}
head-commit-message: ${{ github.event.head_commit.message }}
pr-head-sha: ${{ github.event.pull_request.head.sha }}
- name: Detect compiler version
id: detect-compiler
run: |
conan profile new default --detect || true # Ignore error if profile exists
conan profile update settings.compiler.cppstd=20 default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
COMPILER_VERSION=$(clang --version | grep -oE 'version [0-9]+' | grep -oE '[0-9]+')
echo "compiler_version=${COMPILER_VERSION}" >> $GITHUB_OUTPUT
echo "Detected Apple Clang version: ${COMPILER_VERSION}"
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
@@ -101,6 +89,11 @@ jobs:
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
os: Macos
arch: armv8
compiler: apple-clang
compiler_version: ${{ steps.detect-compiler.outputs.compiler_version }}
stdlib: libcxx
- name: Build
uses: ./.github/actions/xahau-ga-build
@@ -111,6 +104,8 @@ jobs:
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
stdlib: libcxx
ccache_max_size: '100G'
- name: Test
run: |

View File

@@ -5,30 +5,195 @@ on:
branches: ["dev", "candidate", "release"]
pull_request:
branches: ["dev", "candidate", "release"]
schedule:
- cron: '0 0 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build-job:
runs-on: ubuntu-latest
matrix-setup:
runs-on: [self-hosted, generic, 20.04]
container: python:3-slim
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: escape double quotes
id: escape
shell: bash
env:
PR_TITLE: ${{ github.event.pull_request.title }}
run: |
ESCAPED_PR_TITLE="${PR_TITLE//\"/\\\"}"
echo "title=${ESCAPED_PR_TITLE}" >> "$GITHUB_OUTPUT"
- name: Generate build matrix
id: set-matrix
shell: python
env:
GH_TOKEN: ${{ github.token }}
run: |
import json
import os
import urllib.request
# Full matrix with all 6 compiler configurations
# Each configuration includes all parameters needed by the build job
full_matrix = [
{
"compiler_id": "gcc-11-libstdcxx",
"compiler": "gcc",
"cc": "gcc-11",
"cxx": "g++-11",
"compiler_version": 11,
"stdlib": "libstdcxx",
"configuration": "Debug"
},
{
"compiler_id": "gcc-13-libstdcxx",
"compiler": "gcc",
"cc": "gcc-13",
"cxx": "g++-13",
"compiler_version": 13,
"stdlib": "libstdcxx",
"configuration": "Debug"
},
{
"compiler_id": "clang-14-libstdcxx-gcc11",
"compiler": "clang",
"cc": "clang-14",
"cxx": "clang++-14",
"compiler_version": 14,
"stdlib": "libstdcxx",
"clang_gcc_toolchain": 11,
"configuration": "Debug"
},
{
"compiler_id": "clang-16-libstdcxx-gcc13",
"compiler": "clang",
"cc": "clang-16",
"cxx": "clang++-16",
"compiler_version": 16,
"stdlib": "libstdcxx",
"clang_gcc_toolchain": 13,
"configuration": "Debug"
},
{
"compiler_id": "clang-17-libcxx",
"compiler": "clang",
"cc": "clang-17",
"cxx": "clang++-17",
"compiler_version": 17,
"stdlib": "libcxx",
"configuration": "Debug"
},
{
# Clang 18 - testing if it's faster than Clang 17 with libc++
# Requires patching Conan v1 settings.yml to add version 18
"compiler_id": "clang-18-libcxx",
"compiler": "clang",
"cc": "clang-18",
"cxx": "clang++-18",
"compiler_version": 18,
"stdlib": "libcxx",
"configuration": "Debug"
}
]
# Minimal matrix for PRs and feature branches
minimal_matrix = [
full_matrix[1], # gcc-13 (middle-ground gcc)
full_matrix[2] # clang-14 (mature, stable clang)
]
# Determine which matrix to use based on the target branch
ref = "${{ github.ref }}"
base_ref = "${{ github.base_ref }}" # For PRs, this is the target branch
event_name = "${{ github.event_name }}"
pr_title = """${{ steps.escape.outputs.title }}"""
pr_head_sha = "${{ github.event.pull_request.head.sha }}"
# Get commit message - for PRs, fetch via API since head_commit.message is empty
if event_name == "pull_request" and pr_head_sha:
try:
url = f"https://api.github.com/repos/${{ github.repository }}/commits/{pr_head_sha}"
req = urllib.request.Request(url, headers={
"Accept": "application/vnd.github.v3+json",
"Authorization": f"Bearer {os.environ.get('GH_TOKEN', '')}"
})
with urllib.request.urlopen(req) as response:
data = json.load(response)
commit_message = data["commit"]["message"]
except Exception as e:
print(f"Failed to fetch commit message: {e}")
commit_message = ""
else:
commit_message = """${{ github.event.head_commit.message }}"""
# Debug logging
print(f"Event: {event_name}")
print(f"Ref: {ref}")
print(f"Base ref: {base_ref}")
print(f"PR head SHA: {pr_head_sha}")
print(f"PR title: {pr_title}")
print(f"Commit message: {commit_message}")
# Check for override tags in commit message or PR title
force_full = "[ci-nix-full-matrix]" in commit_message or "[ci-nix-full-matrix]" in pr_title
print(f"Force full matrix: {force_full}")
# Check if this is targeting a main branch
# For PRs: check base_ref (target branch)
# For pushes: check ref (current branch)
main_branches = ["refs/heads/dev", "refs/heads/release", "refs/heads/candidate"]
if force_full:
# Override: always use full matrix if tag is present
use_full = True
elif event_name == "pull_request":
# For PRs, base_ref is just the branch name (e.g., "dev", not "refs/heads/dev")
# Check if the PR targets release or candidate (more critical branches)
use_full = base_ref in ["release", "candidate"]
else:
# For pushes, ref is the full reference (e.g., "refs/heads/dev")
use_full = ref in main_branches
# Select the appropriate matrix
if use_full:
if force_full:
print(f"Using FULL matrix (6 configs) - forced by [ci-nix-full-matrix] tag")
else:
print(f"Using FULL matrix (6 configs) - targeting main branch")
matrix = full_matrix
else:
print(f"Using MINIMAL matrix (2 configs) - feature branch/PR")
matrix = minimal_matrix
# Output the matrix as JSON
output = json.dumps({"include": matrix})
with open(os.environ['GITHUB_OUTPUT'], 'a') as f:
f.write(f"matrix={output}\n")
build:
needs: matrix-setup
runs-on: [self-hosted, generic, 20.04]
container:
image: ubuntu:24.04
volumes:
- /home/runner/.conan-cache:/.conan-cache
- /home/runner/.ccache-cache:/github/home/.ccache-cache
defaults:
run:
shell: bash
outputs:
artifact_name: ${{ steps.set-artifact-name.outputs.artifact_name }}
strategy:
fail-fast: false
matrix:
compiler: [gcc]
configuration: [Debug]
include:
- compiler: gcc
cc: gcc-11
cxx: g++-11
compiler_id: gcc-11
matrix: ${{ fromJSON(needs.matrix-setup.outputs.matrix) }}
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 1
CACHE_VERSION: 3
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
@@ -36,36 +201,80 @@ jobs:
- name: Install build dependencies
run: |
sudo apt-get update
sudo apt-get install -y ninja-build ${{ matrix.cc }} ${{ matrix.cxx }} ccache
# Install specific Conan version needed
pip install --upgrade "conan<2"
apt-get update
apt-get install -y software-properties-common
add-apt-repository ppa:ubuntu-toolchain-r/test -y
apt-get update
apt-get install -y python3 python-is-python3 pipx
pipx ensurepath
apt-get install -y cmake ninja-build ${{ matrix.cc }} ${{ matrix.cxx }} ccache
apt-get install -y perl # for openssl build
apt-get install -y libsqlite3-dev # for xahaud build
- name: Configure ccache
uses: ./.github/actions/xahau-configure-ccache
with:
max_size: 2G
hash_dir: true
compiler_check: content
# Install the specific GCC version needed for Clang
if [ -n "${{ matrix.clang_gcc_toolchain }}" ]; then
echo "=== Installing GCC ${{ matrix.clang_gcc_toolchain }} for Clang ==="
apt-get install -y gcc-${{ matrix.clang_gcc_toolchain }} g++-${{ matrix.clang_gcc_toolchain }} libstdc++-${{ matrix.clang_gcc_toolchain }}-dev
- name: Configure Conan
run: |
conan profile new default --detect || true # Ignore error if profile exists
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler=${{ matrix.compiler }} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update env.CC=/usr/bin/${{ matrix.cc }} default
conan profile update env.CXX=/usr/bin/${{ matrix.cxx }} default
conan profile update conf.tools.build:compiler_executables='{"c": "/usr/bin/${{ matrix.cc }}", "cpp": "/usr/bin/${{ matrix.cxx }}"}' default
# Set correct compiler version based on matrix.compiler
if [ "${{ matrix.compiler }}" = "gcc" ]; then
conan profile update settings.compiler.version=11 default
elif [ "${{ matrix.compiler }}" = "clang" ]; then
conan profile update settings.compiler.version=14 default
echo "=== GCC versions available after installation ==="
ls -la /usr/lib/gcc/x86_64-linux-gnu/ | grep -E "^d"
fi
# Display profile for verification
conan profile show default
# For Clang < 16 with --gcc-toolchain, hide newer GCC versions
# This is needed because --gcc-toolchain still picks the highest version
#
# THE GREAT GCC HIDING TRICK (for Clang < 16):
# Clang versions before 16 don't have --gcc-install-dir, only --gcc-toolchain
# which is deprecated and still uses discovery heuristics that ALWAYS pick
# the highest version number. So we play a sneaky game...
#
# We rename newer GCC versions to very low integers (1, 2, 3...) which makes
# Clang think they're ancient GCC versions. Since 11 > 3 > 2 > 1, Clang will
# pick GCC 11 over our renamed versions. It's dumb but it works!
#
# Example: GCC 12→1, GCC 13→2, GCC 14→3, so Clang picks 11 (highest number)
if [ -n "${{ matrix.clang_gcc_toolchain }}" ] && [ "${{ matrix.compiler_version }}" -lt "16" ]; then
echo "=== Hiding GCC versions newer than ${{ matrix.clang_gcc_toolchain }} for Clang < 16 ==="
target_version=${{ matrix.clang_gcc_toolchain }}
counter=1 # Start with 1 - these will be seen as "GCC version 1, 2, 3" etc
for dir in /usr/lib/gcc/x86_64-linux-gnu/*/; do
if [ -d "$dir" ]; then
version=$(basename "$dir")
# Check if version is numeric and greater than target
if [[ "$version" =~ ^[0-9]+$ ]] && [ "$version" -gt "$target_version" ]; then
echo "Hiding GCC $version -> renaming to $counter (will be seen as GCC version $counter)"
# Safety check: ensure target doesn't already exist
if [ ! -e "/usr/lib/gcc/x86_64-linux-gnu/$counter" ]; then
mv "$dir" "/usr/lib/gcc/x86_64-linux-gnu/$counter"
else
echo "ERROR: Cannot rename GCC $version - /usr/lib/gcc/x86_64-linux-gnu/$counter already exists"
exit 1
fi
counter=$((counter + 1))
fi
fi
done
fi
# Verify what Clang will use
if [ -n "${{ matrix.clang_gcc_toolchain }}" ]; then
echo "=== Verifying GCC toolchain selection ==="
echo "Available GCC versions:"
ls -la /usr/lib/gcc/x86_64-linux-gnu/ | grep -E "^d.*[0-9]+$" || true
echo ""
echo "Clang's detected GCC installation:"
${{ matrix.cxx }} -v -E -x c++ /dev/null -o /dev/null 2>&1 | grep "Found candidate GCC installation" || true
fi
# Install libc++ dev packages if using libc++ (not needed for libstdc++)
if [ "${{ matrix.stdlib }}" = "libcxx" ]; then
apt-get install -y libc++-${{ matrix.compiler_version }}-dev libc++abi-${{ matrix.compiler_version }}-dev
fi
# Install Conan 2
pipx install "conan>=2.0,<3"
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check environment
run: |
@@ -79,6 +288,14 @@ jobs:
echo "---- Full Environment ----"
env
- name: Get commit message
id: get-commit-message
uses: ./.github/actions/xahau-ga-get-commit-message
with:
event-name: ${{ github.event_name }}
head-commit-message: ${{ github.event.head_commit.message }}
pr-head-sha: ${{ github.event.pull_request.head.sha }}
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
with:
@@ -87,6 +304,12 @@ jobs:
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
compiler: ${{ matrix.compiler }}
compiler_version: ${{ matrix.compiler_version }}
cc: ${{ matrix.cc }}
cxx: ${{ matrix.cxx }}
stdlib: ${{ matrix.stdlib }}
gha_cache_enabled: 'false' # Disable caching for self hosted runner
- name: Build
uses: ./.github/actions/xahau-ga-build
@@ -99,6 +322,9 @@ jobs:
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
stdlib: ${{ matrix.stdlib }}
clang_gcc_toolchain: ${{ matrix.clang_gcc_toolchain || '' }}
ccache_max_size: '100G'
- name: Set artifact name
id: set-artifact-name
@@ -120,4 +346,4 @@ jobs:
else
echo "Error: rippled executable not found in ${{ env.build_dir }}"
exit 1
fi
fi

5
.gitignore vendored
View File

@@ -24,6 +24,11 @@ bin/project-cache.jam
build/docker
# Ignore release builder files
.env
release-build
cmake-*.tar.gz
# Ignore object files.
*.o
build

View File

@@ -8,6 +8,6 @@
"editor.semanticHighlighting.enabled": true,
"editor.tabSize": 4,
"editor.defaultFormatter": "xaver.clang-format",
"editor.formatOnSave": false
"editor.formatOnSave": true
}
}

View File

@@ -83,28 +83,52 @@ The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conve
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.2.0
## XRP Ledger server version 2.4.0
The following is a non-breaking addition to the API.
As of 2025-01-28, version 2.4.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
### Additions and bugfixes in 2.4.0
### Breaking change in 2.3
- `ledger_entry`: `state` is added as an alias for `ripple_state`.
- `ledger_entry`: Enables case-insensitive filtering by canonical name in addition to case-sensitive filtering by RPC name.
- `validators`: Added new field `validator_list_threshold` in response.
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate)
- Signing methods autofill fees better and properly handle transactions that don't have a base fee, and will also autofill the `NetworkID` field.
## XRP Ledger server version 2.3.0
[Version 2.3.0](https://github.com/XRPLF/rippled/releases/tag/2.3.0) was released on Nov 25, 2024.
### Breaking changes in 2.3.0
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Addition in 2.3
### Additions and bugfixes in 2.3.0
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
The following additions are non-breaking (because they are purely additive).
## XRP Ledger server version 2.2.0
[Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive):
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.0
[Version 2.0.0](https://github.com/XRPLF/rippled/releases/tag/2.0.0) was released on Jan 9, 2024. The following additions are non-breaking (because they are purely additive):
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting).
## XRP Ledger server version 2.2.0
The following is a non-breaking addition to the API.
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive).

144
BUILD.md
View File

@@ -36,7 +36,7 @@ See [System Requirements](https://xrpl.org/system-requirements.html).
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.60](https://conan.io/downloads.html)[^1]
- [Conan 2.x](https://conan.io/downloads)
- [CMake 3.16](https://cmake.org/download/)
[^1]: It is possible to build with Conan 2.x,
@@ -89,13 +89,24 @@ If you are unfamiliar with Conan, then please read [this crash course](./docs/bu
You'll need at least one Conan profile:
```
conan profile new default --detect
conan profile detect --force
```
Update the compiler settings:
For Conan 2, you can edit the profile directly at `~/.conan2/profiles/default`,
or use the Conan CLI. Ensure C++20 is set:
```
conan profile update settings.compiler.cppstd=20 default
conan profile show
```
Look for `compiler.cppstd=20` in the output. If it's not set, edit the profile:
```
# Edit ~/.conan2/profiles/default and ensure these settings exist:
[settings]
compiler.cppstd=20
```
Configure Conan (1.x only) to use recipe revisions:
@@ -110,7 +121,9 @@ If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
# In ~/.conan2/profiles/default, ensure:
[settings]
compiler.libcxx=libstdc++11
```
@@ -135,73 +148,50 @@ Prompt" for the version of Visual Studio that you have installed.
architecture:
```
conan profile update settings.arch=x86_64 default
# In ~/.conan2/profiles/default, ensure:
[settings]
arch=x86_64
```
### Multiple compilers
When `/usr/bin/g++` exists on a platform, it is the default cpp compiler. This
default works for some users.
However, if this compiler cannot build rippled or its dependencies, then you can
install another compiler and set Conan and CMake to use it.
Update the `conf.tools.build:compiler_executables` setting in order to set the correct variables (`CMAKE_<LANG>_COMPILER`) in the
generated CMake toolchain file.
For example, on Ubuntu 20, you may have gcc at `/usr/bin/gcc` and g++ at `/usr/bin/g++`; if that is the case, you can select those compilers with:
```
conan profile update 'conf.tools.build:compiler_executables={"c": "/usr/bin/gcc", "cpp": "/usr/bin/g++"}' default
```
Replace `/usr/bin/gcc` and `/usr/bin/g++` with paths to the desired compilers.
It should choose the compiler for dependencies as well,
but not all of them have a Conan recipe that respects this setting (yet).
For the rest, you can set these environment variables.
Replace `<path>` with paths to the desired compilers:
- `conan profile update env.CC=<path> default`
- `conan profile update env.CXX=<path> default`
Export our [Conan recipe for Snappy](./external/snappy).
It does not explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
3. (Optional) If you have multiple compilers installed on your platform,
make sure that Conan and CMake select the one you want to use.
This setting will set the correct variables (`CMAKE_<LANG>_COMPILER`)
in the generated CMake toolchain file.
```
# Conan 1.x
conan export external/snappy snappy/1.1.10@
# Conan 2.x
conan export --version 1.1.10 external/snappy
# In ~/.conan2/profiles/default, add under [conf] section:
[conf]
tools.build:compiler_executables={"c": "<path>", "cpp": "<path>"}
```
For setting environment variables for dependencies:
```
# In ~/.conan2/profiles/default, add under [buildenv] section:
[buildenv]
CC=<path>
CXX=<path>
```
4. Export our [Conan recipe for Snappy](./external/snappy).
It doesn't explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
```
conan export external/snappy --version 1.1.10 --user xahaud --channel stable
```
Export our [Conan recipe for RocksDB](./external/rocksdb).
It does not override paths to dependencies when building with Visual Studio.
```
# Conan 1.x
conan export external/rocksdb rocksdb/6.29.5@
# Conan 2.x
conan export --version 6.29.5 external/rocksdb
conan export external/soci --version 4.0.3 --user xahaud --channel stable
```
Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
6. Export our [Conan recipe for WasmEdge](./external/wasmedge).
```
# Conan 1.x
conan export external/soci soci/4.0.3@
# Conan 2.x
conan export --version 4.0.3 external/soci
```
Export our [Conan recipe for NuDB](./external/nudb).
It fixes some source files to add missing `#include`s.
```
# Conan 1.x
conan export external/nudb nudb/2.0.8@
# Conan 2.x
conan export --version 2.0.8 external/nudb
conan export external/wasmedge --version 0.11.2 --user xahaud --channel stable
```
### Build and Test
@@ -222,13 +212,15 @@ It fixes some source files to add missing `#include`s.
the `install-folder` or `-if` option to every `conan install` command
in the next step.
2. Generate CMake files for every configuration you want to build.
2. Use conan to generate CMake files for every configuration you want to build:
```
conan install .. --output-folder . --build missing --settings build_type=Release
conan install .. --output-folder . --build missing --settings build_type=Debug
```
To build Debug, in the next step, be sure to set `-DCMAKE_BUILD_TYPE=Debug`
For a single-configuration generator, e.g. `Unix Makefiles` or `Ninja`,
you only need to run this command once.
For a multi-configuration generator, e.g. `Visual Studio`, you may want to
@@ -258,13 +250,16 @@ It fixes some source files to add missing `#include`s.
Single-config generators:
Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches the one of the `build_type` settings
you chose in the previous step.
For example, to build Debug, replace "Release" with "Debug" in the next command.
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
```
Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches the `build_type` setting you chose in the previous
step.
Multi-config generators:
@@ -274,7 +269,7 @@ It fixes some source files to add missing `#include`s.
**Note:** You can pass build options for `rippled` in this step.
4. Build `rippled`.
5. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
@@ -293,7 +288,7 @@ It fixes some source files to add missing `#include`s.
cmake --build . --config Debug
```
5. Test rippled.
6. Test rippled.
Single-config generators:
@@ -396,23 +391,26 @@ and can be helpful for detecting `#include` omissions.
If you have trouble building dependencies after changing Conan settings,
try removing the Conan cache.
For Conan 2:
```
rm -rf ~/.conan/data
rm -rf ~/.conan2/p
```
Or clear the entire Conan 2 cache:
```
conan cache clean "*"
```
### no std::result_of
### macOS compilation with Apple Clang 17+
If your compiler version is recent enough to have removed `std::result_of` as
part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor
definition to your build.
If you're on macOS with Apple Clang 17 or newer, you need to add a compiler flag to work around a compilation error in gRPC dependencies.
Edit `~/.conan2/profiles/default` and add under the `[conf]` section:
```
conan profile update 'options.boost:extra_b2_flags="define=BOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'conf.tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
[conf]
tools.build:cxxflags=["-Wno-missing-template-arg-list-after-template-kw"]
```

View File

@@ -21,7 +21,7 @@ mkdir results
includes="$( pwd )/results/rawincludes.txt"
pushd ../..
echo Raw includes:
grep -r '#include.*/.*\.h' include src | \
grep -r '^[ ]*#include.*/.*\.h' include src | \
grep -v boost | tee ${includes}
popd
pushd results

View File

@@ -10,9 +10,6 @@ Loop: test.jtx test.toplevel
Loop: test.jtx test.unit_test
test.unit_test == test.jtx
Loop: xrpl.basics xrpl.json
xrpl.json == xrpl.basics
Loop: xrpl.protocol xrpld.app
xrpld.app > xrpl.protocol

View File

@@ -1,5 +1,4 @@
libxrpl.basics > xrpl.basics
libxrpl.basics > xrpl.protocol
libxrpl.crypto > xrpl.basics
libxrpl.json > xrpl.basics
libxrpl.json > xrpl.json
@@ -19,6 +18,7 @@ test.app > xrpl.basics
test.app > xrpld.app
test.app > xrpld.core
test.app > xrpld.ledger
test.app > xrpld.nodestore
test.app > xrpld.overlay
test.app > xrpld.rpc
test.app > xrpl.hook
@@ -85,6 +85,7 @@ test.nodestore > xrpld.core
test.nodestore > xrpld.nodestore
test.nodestore > xrpld.unity
test.overlay > test.jtx
test.overlay > test.toplevel
test.overlay > test.unit_test
test.overlay > xrpl.basics
test.overlay > xrpld.app
@@ -102,6 +103,10 @@ test.protocol > test.toplevel
test.protocol > xrpl.basics
test.protocol > xrpl.json
test.protocol > xrpl.protocol
test.rdb > test.jtx
test.rdb > test.toplevel
test.rdb > xrpld.app
test.rdb > xrpld.core
test.resource > test.unit_test
test.resource > xrpl.basics
test.resource > xrpl.resource
@@ -135,6 +140,7 @@ test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
xrpl.hook > xrpl.basics
xrpl.json > xrpl.basics
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json
xrpl.resource > xrpl.basics

View File

@@ -24,15 +24,33 @@ set(Boost_NO_BOOST_CMAKE ON)
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git describe --always --abbrev=40
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if(gch)
set(GIT_COMMIT_HASH "${gch}")
message(STATUS gch: ${GIT_COMMIT_HASH})
add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
endif()
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gb)
if(gb)
set(GIT_BRANCH "${gb}")
message(STATUS gb: ${GIT_BRANCH})
add_definitions(-DGIT_BRANCH="${GIT_BRANCH}")
endif()
endif() #git
# make SOURCE_ROOT_PATH define available for logging
set(SOURCE_ROOT_PATH "${CMAKE_CURRENT_SOURCE_DIR}/src/")
add_definitions(-DSOURCE_ROOT_PATH="${SOURCE_ROOT_PATH}")
# BEAST_ENHANCED_LOGGING - adds file:line numbers and formatting to logs
# Automatically enabled for Debug builds via generator expression
# Can be explicitly controlled with -DBEAST_ENHANCED_LOGGING=ON/OFF
option(BEAST_ENHANCED_LOGGING "Include file and line numbers in log messages (auto: Debug=ON, Release=OFF)" OFF)
message(STATUS "BEAST_ENHANCED_LOGGING option: ${BEAST_ENHANCED_LOGGING}")
if(thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
@@ -50,11 +68,6 @@ if(CMAKE_TOOLCHAIN_FILE)
endif()
endif()
if (NOT USE_CONAN)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake/deps")
endif()
include (CheckCXXCompilerFlag)
include (FetchContent)
include (ExternalProject)
@@ -66,9 +79,7 @@ endif ()
include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
if (NOT USE_CONAN)
include(RippledNIH)
endif()
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)
@@ -80,79 +91,56 @@ endif ()
include(RippledCompiler)
include(RippledInterface)
###
if (NOT USE_CONAN)
set(SECP256K1_INSTALL TRUE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
include(deps/Boost)
include(deps/OpenSSL)
# include(deps/Secp256k1)
# include(deps/Ed25519-donna)
include(deps/Lz4)
include(deps/Libarchive)
include(deps/Sqlite)
include(deps/Soci)
include(deps/Rocksdb)
include(deps/Nudb)
include(deps/date)
# include(deps/Protobuf)
# include(deps/gRPC)
include(deps/cassandra)
include(deps/Postgres)
include(deps/WasmEdge)
else()
include(conan/Boost)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
find_package(gRPC REQUIRED)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(wasmedge REQUIRED)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
find_package(xxHash REQUIRED)
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
# Ripple::grpc_pbufs
# Ripple::pbufs
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
include(deps/Boost)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
add_subdirectory(external/antithesis-sdk)
find_package(gRPC REQUIRED)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
include(deps/WasmEdge)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
find_package(xxHash REQUIRED)
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
# Ripple::grpc_pbufs
# Ripple::pbufs
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
if(coverage)
include(RippledCov)
@@ -160,10 +148,6 @@ endif()
set(PROJECT_EXPORT_SET RippleExports)
include(RippledCore)
if (NOT USE_CONAN)
include(deps/Protobuf)
include(deps/gRPC)
endif()
include(RippledInstall)
include(RippledDocs)
include(RippledValidatorKeys)

View File

@@ -95,6 +95,19 @@ Refer to
["How to Write a Git Commit Message"](https://cbea.ms/git-commit/)
for general rules on writing a good commit message.
tl;dr
> 1. Separate subject from body with a blank line.
> 2. Limit the subject line to 50 characters.
> * [...] shoot for 50 characters, but consider 72 the hard limit.
> 3. Capitalize the subject line.
> 4. Do not end the subject line with a period.
> 5. Use the imperative mood in the subject line.
> * A properly formed Git commit subject line should always be able
> to complete the following sentence: "If applied, this commit will
> _your subject line here_".
> 6. Wrap the body at 72 characters.
> 7. Use the body to explain what and why vs. how.
In addition to those guidelines, please add one of the following
prefixes to the subject line if appropriate.
* `fix:` - The primary purpose is to fix an existing bug.
@@ -341,6 +354,68 @@ pip3 install pre-commit
pre-commit install
```
## Contracts and instrumentation
We are using [Antithesis](https://antithesis.com/) for continuous fuzzing,
and keep a copy of [Antithesis C++ SDK](https://github.com/antithesishq/antithesis-sdk-cpp/)
in `external/antithesis-sdk`. One of the aims of fuzzing is to identify bugs
by finding external conditions which cause contracts violations inside `rippled`.
The contracts are expressed as `XRPL_ASSERT` or `UNREACHABLE` (defined in
`include/xrpl/beast/utility/instrumentation.h`), which are effectively (outside
of Antithesis) wrappers for `assert(...)` with an added name. The purpose of the
name is to give contracts a stable identity that does not rely on line numbers.
When `rippled` is built with the Antithesis instrumentation enabled
(using the `voidstar` CMake option) and run on the Antithesis platform, the
contracts become
[test properties](https://antithesis.com/docs/using_antithesis/properties.html);
otherwise they behave just like a regular `assert`.
To learn more about Antithesis, see
[How Antithesis Works](https://antithesis.com/docs/introduction/how_antithesis_works.html)
and [C++ SDK](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html#)
We continue to use the old style `assert` or `assert(false)` in certain
locations, where the reporting of contract violations on the Antithesis
platform is either not possible or not useful.
For this reason:
* The locations where `assert` or `assert(false)` contracts should continue to be used:
* `constexpr` functions
* unit tests i.e. files under `src/test`
* unit tests-related modules (files under `beast/test` and `beast/unit_test`)
* Outside of the listed locations, do not use `assert`; use `XRPL_ASSERT` instead,
giving it a unique name with a short description of the contract.
* Outside of the listed locations, do not use `assert(false)`; use
`UNREACHABLE` instead, giving it a unique name with a description of the
condition being violated.
* The contract name should start with the full name (including scope) of the
function, optionally a named lambda, followed by a colon ` : ` and a brief
(typically at most five words) description. `UNREACHABLE` contracts
can use slightly longer descriptions. If there are multiple overloads of the
function, use common sense to balance brevity against unambiguity of the
function name. NOTE: the purpose of the name is to provide a stable means of
uniquely identifying every contract; for this reason, avoid elements
that may change in obvious refactors or when the condition is reinforced.
* The contract description (except for `UNREACHABLE`) should typically describe the
_expected_ condition, as in "I assert that _expected_ is true".
* The contract description for `UNREACHABLE` should describe the _unexpected_
situation which caused the line to be reached.
* An example of a good name for an
`UNREACHABLE` macro is `"Json::operator==(Value, Value) : invalid type"`; an example
of a good name for an `XRPL_ASSERT` macro is `"Json::Value::asCString : valid type"`
(see the sketch after this list).
* An example of a **bad** name is
`"RFC1751::insert(char* s, int x, int start, int length) : length is greater than or equal zero"`
(missing namespace, unnecessary full function signature, description too verbose).
A good name: `"ripple::RFC1751::insert : minimum length"`.
* In a **few** well-justified cases a non-standard name can be used, in which case a
comment should be placed to explain the rationale (see the example in `contract.cpp`).
* Do **not** rename a contract without a good reason (e.g. the name no longer
reflects the location or the condition being checked).
* Do not use `std::unreachable`.
* Do not put contracts where they can be violated by an external condition
(e.g. timing, data payload before mandatory validation, etc.), as this creates
bogus bug reports (and causes crashes in Debug builds).
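For illustration, here is a minimal sketch of the convention in practice. The
function and its logic are hypothetical; only the macro usage pattern and the
naming scheme follow the rules above, assuming the `XRPL_ASSERT(condition, "name")`
and `UNREACHABLE("name")` forms from `include/xrpl/beast/utility/instrumentation.h`.
```cpp
#include <xrpl/beast/utility/instrumentation.h>

#include <string>

namespace ripple {

// Hypothetical helper used only to illustrate the contract naming convention.
std::string
describeFieldType(int type)
{
    // Expected condition, named "<scope>::<function> : <short description>".
    XRPL_ASSERT(type >= 0, "ripple::describeFieldType : non-negative type");

    switch (type)
    {
        case 0:
            return "uint16";
        case 1:
            return "uint32";
        default:
            // Unexpected situation that caused this line to be reached.
            UNREACHABLE("ripple::describeFieldType : unknown field type");
            return "unknown";
    }
}

}  // namespace ripple
```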
## Unit Tests
To execute all unit tests:
@@ -413,9 +488,10 @@ existing maintainer without a vote.
## Current Maintainers
* [Richard Holland](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Denis Angell](https://github.com/dangell7) (XRPL Labs + XRP Ledger Foundation)
* [Wietse Wind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
* [Richard Holland](https://github.com/RichardAH) (XRPL Labs + INFTF)
* [Denis Angell](https://github.com/dangell7) (XRPL Labs + INFTF)
* [Wietse Wind](https://github.com/WietseWind) (XRPL Labs + INFTF)
* [tequ](https://github.com/tequdev) (Independent + INFTF)
[1]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects

View File

@@ -2,7 +2,7 @@
**Note:** Throughout this README, references to "we" or "our" pertain to the community and contributors involved in the Xahau network. It does not imply a legal entity or a specific collection of individuals.
[Xahau](https://xahau.network/) is a decentralized cryptographic ledger that builds upon the robust foundation of the XRP Ledger. It inherits the XRP Ledger's Byzantine Fault Tolerant consensus algorithm and enhances it with additional features and functionalities. Developers and users familiar with the XRP Ledger will find that most documentation and tutorials available on [xrpl.org](https://xrpl.org) are relevant and applicable to Xahau, including those related to running validators and managing validator keys. For Xahau specific documentation you can visit our [documentation](https://docs.xahau.network/)
[Xahau](https://xahau.network/) is a decentralized cryptographic ledger that builds upon the robust foundation of the XRP Ledger. It inherits the XRP Ledger's Byzantine Fault Tolerant consensus algorithm and enhances it with additional features and functionalities. Developers and users familiar with the XRP Ledger will find that most documentation and tutorials available on [xrpl.org](https://xrpl.org) are relevant and applicable to Xahau, including those related to running validators and managing validator keys. For Xahau specific documentation you can visit our [documentation](https://xahau.network/)
## XAH
XAH is the public, counterparty-free asset native to Xahau and functions primarily as network gas. Transactions submitted to the Xahau network must supply an appropriate amount of XAH, to be burnt by the network as a fee, in order to be successfully included in a validated ledger. In addition, XAH also acts as a bridge currency within the Xahau DEX. XAH is traded on the open-market and is available for anyone to access. Xahau was created in 2023 with a supply of 600 million units of XAH.
@@ -12,7 +12,7 @@ The server software that powers Xahau is called `xahaud` and is available in thi
### Build from Source
* [Read the build instructions in our documentation](https://docs.xahau.network/infrastructure/building-xahau)
* [Read the build instructions in our documentation](https://xahau.network/infrastructure/building-xahau)
* If you encounter any issues, please [open an issue](https://github.com/xahau/xahaud/issues)
## Highlights of Xahau
@@ -58,7 +58,7 @@ git-subtree. See those directories' README files for more details.
- **Documentation**: Documentation for XRPL, Xahau and Hooks.
- [Xrpl Documentation](https://xrpl.org)
- [Xahau Documentation](https://docs.xahau.network/)
- [Xahau Documentation](https://xahau.network/)
- [Hooks Technical Documentation](https://xrpl-hooks.readme.io/)
- **Explorers**: Explore the Xahau ledger using various explorers:
- [xahauexplorer.com](https://xahauexplorer.com)

View File

@@ -7,6 +7,135 @@ This document contains the release notes for `rippled`, the reference server imp
Have new ideas? Need help with setting up your node? [Please open an issue here](https://github.com/xrplf/rippled/issues/new/choose).
# Version 2.3.1
Version 2.3.1 of `rippled`, the reference server implementation of the XRP Ledger protocol, is now available.
This is a hotfix release that includes the following updates:
- Fix an erroneous high fee penalty that peers could incur for sending older transactions.
- Update to the fees charged for imposing a load on the server.
- Prevent the relaying of internal pseudo-transactions.
- Before: Pseudo-transactions received from a peer will fail the signature check, even if they were requested (using TMGetObjectByHash) because they have no signature. This causes the peer to be charged for an invalid signature.
- After: Pseudo-transactions are put into the global cache (TransactionMaster) only. If the transaction is not part of a TMTransactions batch, the peer is charged an unwanted data fee. These fees will not be a problem in the normal course of operations but should dissuade peers from behaving badly by sending a bunch of junk.
- Improved logging now specifies the reason for the fee charged to the peer.
[Sign Up for Future Release Announcements](https://groups.google.com/g/ripple-server)
<!-- BREAK -->
## Action Required
If you run an XRP Ledger validator, upgrade to version 2.3.1 as soon as possible to ensure stable and uninterrupted network behavior.
## Changelog
### Amendments and New Features
- None
### Bug Fixes and Performance Improvements
- Change the charged fee for sending older transactions from feeInvalidSignature to feeUnwantedData. [#5243](https://github.com/XRPLF/rippled/pull/5243)
### Docs and Build System
- None
### GitHub
The public source code repository for `rippled` is hosted on GitHub at <https://github.com/XRPLF/rippled>.
We welcome all contributions and invite everyone to join the community of XRP Ledger developers to help build the Internet of Value.
## Credits
The following people contributed directly to this release:
Ed Hennis <ed@ripple.com>
JoelKatz <DavidJoelSchwartz@GMail.com>
Sophia Xie <106177003+sophiax851@users.noreply.github.com>
Valentin Balaschenko <13349202+vlntb@users.noreply.github.com>
Bug Bounties and Responsible Disclosures:
We welcome reviews of the `rippled` code and urge researchers to responsibly disclose any issues they may find.
To report a bug, please send a detailed report to: <bugs@xrpl.org>
# Version 2.3.0
Version 2.3.0 of `rippled`, the reference server implementation of the XRP Ledger protocol, is now available. This release includes 8 new amendments, including Multi-Purpose Tokens, Credentials, Clawback support for AMMs, and the ability to make offers as part of minting NFTs. Additionally, this release includes important fixes for stability, so server operators are encouraged to upgrade as soon as possible.
## Action Required
If you run an XRP Ledger server, upgrade to version 2.3.0 as soon as possible to ensure service continuity.
Additionally, new amendments are now open for voting according to the XRP Ledger's [amendment process](https://xrpl.org/amendments.html), which enables protocol changes following two weeks of >80% support from trusted validators. The exact time that protocol changes take effect depends on the voting decisions of the decentralized network.
## Full Changelog
### Amendments
The following amendments are open for voting with this release:
- **XLS-70 Credentials** - Users can issue Credentials on the ledger and use Credentials to pre-approve incoming payments when using Deposit Authorization instead of individually approving payers. ([#5103](https://github.com/XRPLF/rippled/pull/5103))
- related fix: #5189 (https://github.com/XRPLF/rippled/pull/5189)
- **XLS-33 Multi-Purpose Tokens** - A new type of fungible token optimized for institutional DeFi including stablecoins. ([#5143](https://github.com/XRPLF/rippled/pull/5143))
- **XLS-37 AMM Clawback** - Allows clawback-enabled tokens to be used in AMMs, with appropriate guardrails. ([#5142](https://github.com/XRPLF/rippled/pull/5142))
- **XLS-52 NFTokenMintOffer** - Allows creating an NFT sell offer as part of minting a new NFT. ([#4845](https://github.com/XRPLF/rippled/pull/4845))
- **fixAMMv1_2** - Fixes two bugs in Automated Market Maker (AMM) transaction processing. ([#5176](https://github.com/XRPLF/rippled/pull/5176))
- **fixNFTokenPageLinks** - Fixes a bug that can cause NFT directories to have missing links, and introduces a transaction to repair corrupted ledger state. ([#4945](https://github.com/XRPLF/rippled/pull/4945))
- **fixEnforceNFTokenTrustline** - Fixes two bugs in the interaction between NFT offers and trust lines. ([#4946](https://github.com/XRPLF/rippled/pull/4946))
- **fixInnerObjTemplate2** - Standardizes the way inner objects are enforced across all transaction and ledger data. ([#5047](https://github.com/XRPLF/rippled/pull/5047))
The following amendment is partially implemented but not open for voting:
- **InvariantsV1_1** - Adds new invariants to ensure transactions process as intended, starting with an invariant to ensure that ledger entries owned by an account are deleted when the account is deleted. ([#4663](https://github.com/XRPLF/rippled/pull/4663))
### New Features
- Allow configuration of SQLite database page size. ([#5135](https://github.com/XRPLF/rippled/pull/5135), [#5140](https://github.com/XRPLF/rippled/pull/5140))
- In the `libxrpl` C++ library, provide a list of known amendments. ([#5026](https://github.com/XRPLF/rippled/pull/5026))
### Deprecations
- History Shards are removed. ([#5066](https://github.com/XRPLF/rippled/pull/5066))
- Reporting mode is removed. ([#5092](https://github.com/XRPLF/rippled/pull/5092))
For users wanting to store more ledger history, it is recommended to run a Clio server instead.
### Bug fixes
- Fix a crash in debug builds when amm_info request contains an invalid AMM account ID. ([#5188](https://github.com/XRPLF/rippled/pull/5188))
- Fix a crash caused by a race condition in peer-to-peer code. ([#5071](https://github.com/XRPLF/rippled/pull/5071))
- Fix a crash in certain situations
- Fix several bugs in the book_changes API method. ([#5096](https://github.com/XRPLF/rippled/pull/5096))
- Fix bug triggered by providing an invalid marker to the account_nfts API method. ([#5045](https://github.com/XRPLF/rippled/pull/5045))
- Accept lower-case hexadecimal in compact transaction identifier (CTID) parameters in API methods. ([#5049](https://github.com/XRPLF/rippled/pull/5049))
- Disallow filtering by types that an account can't own in the account_objects API method. ([#5056](https://github.com/XRPLF/rippled/pull/5056))
- Fix error code returned by the feature API method when providing an invalid parameter. ([#5063](https://github.com/XRPLF/rippled/pull/5063))
- (API v3) Fix error code returned by amm_info when providing invalid parameters. ([#4924](https://github.com/XRPLF/rippled/pull/4924))
### Other Improvements
- Adds a new default hub, hubs.xrpkuwait.com, to the config file and bootstrapping code. ([#5169](https://github.com/XRPLF/rippled/pull/5169))
- Improve error message when commandline interface fails with `rpcInternal` because there was no response from the server. ([#4959](https://github.com/XRPLF/rippled/pull/4959))
- Add tools for debugging specific transactions via replay. ([#5027](https://github.com/XRPLF/rippled/pull/5027), [#5087](https://github.com/XRPLF/rippled/pull/5087))
- Major reorganization of source code files. ([#4997](https://github.com/XRPLF/rippled/pull/4997))
- Add new unit tests. ([#4886](https://github.com/XRPLF/rippled/pull/4886))
- Various improvements to build tools and contributor documentation. ([#5001](https://github.com/XRPLF/rippled/pull/5001), [#5028](https://github.com/XRPLF/rippled/pull/5028), [#5052](https://github.com/XRPLF/rippled/pull/5052), [#5091](https://github.com/XRPLF/rippled/pull/5091), [#5084](https://github.com/XRPLF/rippled/pull/5084), [#5120](https://github.com/XRPLF/rippled/pull/5120), [#5010](https://github.com/XRPLF/rippled/pull/5010). [#5055](https://github.com/XRPLF/rippled/pull/5055), [#5067](https://github.com/XRPLF/rippled/pull/5067), [#5061](https://github.com/XRPLF/rippled/pull/5061), [#5072](https://github.com/XRPLF/rippled/pull/5072), [#5044](https://github.com/XRPLF/rippled/pull/5044) )
- Various code cleanup and refactoring. ([#4509](https://github.com/XRPLF/rippled/pull/4509), [#4521](https://github.com/XRPLF/rippled/pull/4521), [#4856](https://github.com/XRPLF/rippled/pull/4856), [#5190](https://github.com/XRPLF/rippled/pull/5190), [#5081](https://github.com/XRPLF/rippled/pull/5081), [#5053](https://github.com/XRPLF/rippled/pull/5053), [#5058](https://github.com/XRPLF/rippled/pull/5058), [#5122](https://github.com/XRPLF/rippled/pull/5122), [#5059](https://github.com/XRPLF/rippled/pull/5059), [#5041](https://github.com/XRPLF/rippled/pull/5041))
Bug Bounties and Responsible Disclosures:
We welcome reviews of the `rippled` code and urge researchers to responsibly disclose any issues they may find.
To report a bug, please send a detailed report to: <bugs@xrpl.org>
# Version 2.2.3
Version 2.2.3 of `rippled`, the reference server implementation of the XRP Ledger protocol, is now available. This release fixes a problem that can cause full-history servers to run out of space in their SQLite databases, depending on configuration. There are no new amendments in this release.

View File

@@ -5,8 +5,6 @@
# debugging.
set -ex
set -e
echo "START INSIDE CONTAINER - CORE"
echo "-- BUILD CORES: $3"
@@ -14,6 +12,13 @@ echo "-- GITHUB_REPOSITORY: $1"
echo "-- GITHUB_SHA: $2"
echo "-- GITHUB_RUN_NUMBER: $4"
# Use mounted filesystem for temp files to avoid container space limits
export TMPDIR=/io/tmp
export TEMP=/io/tmp
export TMP=/io/tmp
mkdir -p /io/tmp
echo "=== Using temp directory: /io/tmp ==="
umask 0000;
cd /io/ &&
@@ -27,7 +32,8 @@ if [[ "$?" -ne "0" ]]; then
exit 127
fi
perl -i -pe "s/^(\\s*)-DBUILD_SHARED_LIBS=OFF/\\1-DBUILD_SHARED_LIBS=OFF\\n\\1-DROCKSDB_BUILD_SHARED=OFF/g" cmake/deps/Rocksdb.cmake &&
BUILD_TYPE=Release
mv cmake/deps/WasmEdge.cmake cmake/deps/WasmEdge.old &&
echo "find_package(LLVM REQUIRED CONFIG)
message(STATUS \"Found LLVM \${LLVM_PACKAGE_VERSION}\")
@@ -38,18 +44,47 @@ target_link_libraries (ripple_libs INTERFACE wasmedge)
add_library (wasmedge::wasmedge ALIAS wasmedge)
message(\"WasmEdge DONE\")
" > cmake/deps/WasmEdge.cmake &&
git checkout src/ripple/protocol/impl/BuildInfo.cpp &&
sed -i s/\"0.0.0\"/\"$(date +%Y).$(date +%-m).$(date +%-d)-$(git rev-parse --abbrev-ref HEAD)+$4\"/g src/ripple/protocol/impl/BuildInfo.cpp &&
export LDFLAGS="-static-libstdc++"
export CMAKE_EXE_LINKER_FLAGS="-static-libstdc++"
export CMAKE_STATIC_LINKER_FLAGS="-static-libstdc++"
git config --global --add safe.directory /io &&
git checkout src/libxrpl/protocol/BuildInfo.cpp &&
sed -i s/\"0.0.0\"/\"$(date +%Y).$(date +%-m).$(date +%-d)-$(git rev-parse --abbrev-ref HEAD)$(if [ -n "$4" ]; then echo "+$4"; fi)\"/g src/libxrpl/protocol/BuildInfo.cpp &&
conan export external/snappy --version 1.1.10 --user xahaud --channel stable &&
conan export external/soci --version 4.0.3 --user xahaud --channel stable &&
conan export external/wasmedge --version 0.11.2 --user xahaud --channel stable &&
cd release-build &&
cmake .. -DCMAKE_BUILD_TYPE=Release -DBoost_NO_BOOST_CMAKE=ON -DLLVM_DIR=/usr/lib64/llvm13/lib/cmake/llvm/ -DLLVM_LIBRARY_DIR=/usr/lib64/llvm13/lib/ -DWasmEdge_LIB=/usr/local/lib64/libwasmedge.a &&
make -j$3 VERBOSE=1 &&
# Install dependencies - tool_requires in conanfile.py handles glibc 2.28 compatibility
# for build tools (protoc, grpc plugins, b2) in HBB environment
# The tool_requires('b2/5.3.2') in conanfile.py should force b2 to build from source
# with the correct toolchain, avoiding the GLIBCXX_3.4.29 issue
echo "=== Installing dependencies ===" &&
conan install .. --output-folder . --build missing --settings build_type=$BUILD_TYPE \
-o with_wasmedge=False -o tool_requires_b2=True &&
cmake .. -G Ninja \
-DCMAKE_BUILD_TYPE=$BUILD_TYPE \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_EXE_LINKER_FLAGS="-static-libstdc++" \
-DLLVM_DIR=$LLVM_DIR \
-DWasmEdge_LIB=$WasmEdge_LIB \
-Dxrpld=TRUE \
-Dtests=TRUE &&
ccache -z &&
ninja -j $3 && echo "=== Re-running final link with verbose output ===" && rm -f rippled && ninja -v rippled &&
ccache -s &&
strip -s rippled &&
mv rippled xahaud &&
echo "=== Full ldd output ===" &&
ldd xahaud &&
echo "=== Running libcheck ===" &&
libcheck xahaud &&
echo "Build host: `hostname`" > release.info &&
echo "Build date: `date`" >> release.info &&
echo "Build md5: `md5sum xahaud`" >> release.info &&
echo "Git remotes:" >> release.info &&
git remote -v >> release.info
git remote -v >> release.info &&
echo "Git status:" >> release.info &&
git status -v >> release.info &&
echo "Git log [last 20]:" >> release.info &&
@@ -68,9 +103,9 @@ fi
cd ..;
mv src/ripple/net/impl/RegisterSSLCerts.cpp.old src/ripple/net/impl/RegisterSSLCerts.cpp;
mv cmake/deps/Rocksdb.cmake.old cmake/deps/Rocksdb.cmake;
mv src/xrpld/net/detail/RegisterSSLCerts.cpp.old src/xrpld/net/detail/RegisterSSLCerts.cpp;
mv cmake/deps/WasmEdge.old cmake/deps/WasmEdge.cmake;
rm src/certs/certbundle.h;
git checkout src/libxrpl/protocol/BuildInfo.cpp;
echo "END INSIDE CONTAINER - CORE"

View File

@@ -3,8 +3,6 @@
# processes launched or upon any unbound variable.
# We use set -x to print commands before running them to help with
# debugging.
set -ex
set -e
echo "START INSIDE CONTAINER - FULL"
@@ -16,21 +14,14 @@ echo "-- GITHUB_RUN_NUMBER: $4"
umask 0000;
echo "Fixing CentOS 7 EOL"
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
yum clean all
yum-config-manager --disable centos-sclo-sclo
####
cd /io;
mkdir -p src/certs;
curl --silent -k https://raw.githubusercontent.com/RichardAH/rippled-release-builder/main/ca-bundle/certbundle.h -o src/certs/certbundle.h;
if [ "`grep certbundle.h src/ripple/net/impl/RegisterSSLCerts.cpp | wc -l`" -eq "0" ]
if [ "`grep certbundle.h src/xrpld/net/detail/RegisterSSLCerts.cpp | wc -l`" -eq "0" ]
then
cp src/ripple/net/impl/RegisterSSLCerts.cpp src/ripple/net/impl/RegisterSSLCerts.cpp.old
cp src/xrpld/net/detail/RegisterSSLCerts.cpp src/xrpld/net/detail/RegisterSSLCerts.cpp.old
perl -i -pe "s/^{/{
#ifdef EMBEDDED_CA_BUNDLE
BIO *cbio = BIO_new_mem_buf(ca_bundle.data(), ca_bundle.size());
@@ -70,95 +61,18 @@ then
BIO_free(cbio);
}
}
#endif/g" src/ripple/net/impl/RegisterSSLCerts.cpp &&
sed -i "s/#include <ripple\/net\/RegisterSSLCerts.h>/\0\n#include <certs\/certbundle.h>/g" src/ripple/net/impl/RegisterSSLCerts.cpp
#endif/g" src/xrpld/net/detail/RegisterSSLCerts.cpp &&
sed -i "s/#include <xrpld\/net\/RegisterSSLCerts.h>/\0\n#include <certs\/certbundle.h>/g" src/xrpld/net/detail/RegisterSSLCerts.cpp
fi
mkdir -p .nih_c;
mkdir -p .nih_toolchain;
cd .nih_toolchain &&
yum install -y wget lz4 lz4-devel git llvm13-static.x86_64 llvm13-devel.x86_64 devtoolset-10-binutils zlib-static ncurses-static -y \
devtoolset-7-gcc-c++ \
devtoolset-9-gcc-c++ \
devtoolset-10-gcc-c++ \
snappy snappy-devel \
zlib zlib-devel \
lz4-devel \
libasan &&
export PATH=`echo $PATH | sed -E "s/devtoolset-9/devtoolset-7/g"` &&
echo "-- Install ZStd 1.1.3 --" &&
yum install epel-release -y &&
ZSTD_VERSION="1.1.3" &&
( wget -nc -q -O zstd-${ZSTD_VERSION}.tar.gz https://github.com/facebook/zstd/archive/v${ZSTD_VERSION}.tar.gz; echo "" ) &&
tar xzvf zstd-${ZSTD_VERSION}.tar.gz &&
cd zstd-${ZSTD_VERSION} &&
make -j$3 install &&
cd .. &&
echo "-- Install Cmake 3.23.1 --" &&
pwd &&
( wget -nc -q https://github.com/Kitware/CMake/releases/download/v3.23.1/cmake-3.23.1-linux-x86_64.tar.gz; echo "" ) &&
tar -xzf cmake-3.23.1-linux-x86_64.tar.gz -C /hbb/ &&
echo "-- Install Boost 1.86.0 --" &&
pwd &&
( wget -nc -q https://archives.boost.io/release/1.86.0/source/boost_1_86_0.tar.gz; echo "" ) &&
tar -xzf boost_1_86_0.tar.gz &&
cd boost_1_86_0 && ./bootstrap.sh && ./b2 link=static -j$3 && ./b2 install &&
cd ../ &&
echo "-- Install Protobuf 3.20.0 --" &&
pwd &&
( wget -nc -q https://github.com/protocolbuffers/protobuf/releases/download/v3.20.0/protobuf-all-3.20.0.tar.gz; echo "" ) &&
tar -xzf protobuf-all-3.20.0.tar.gz &&
cd protobuf-3.20.0/ &&
./autogen.sh && ./configure --prefix=/usr --disable-shared link=static && make -j$3 && make install &&
cd .. &&
echo "-- Build LLD --" &&
pwd &&
ln /usr/bin/llvm-config-13 /usr/bin/llvm-config &&
mv /opt/rh/devtoolset-9/root/usr/bin/ar /opt/rh/devtoolset-9/root/usr/bin/ar-9 &&
ln /opt/rh/devtoolset-10/root/usr/bin/ar /opt/rh/devtoolset-9/root/usr/bin/ar &&
( wget -nc -q https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1/lld-13.0.1.src.tar.xz; echo "" ) &&
( wget -nc -q https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1/libunwind-13.0.1.src.tar.xz; echo "" ) &&
tar -xf lld-13.0.1.src.tar.xz &&
tar -xf libunwind-13.0.1.src.tar.xz &&
cp -r libunwind-13.0.1.src/include libunwind-13.0.1.src/src lld-13.0.1.src/ &&
cd lld-13.0.1.src &&
rm -rf build CMakeCache.txt &&
mkdir -p build &&
cd build &&
cmake .. -DLLVM_LIBRARY_DIR=/usr/lib64/llvm13/lib/ -DCMAKE_INSTALL_PREFIX=/usr/lib64/llvm13/ -DCMAKE_BUILD_TYPE=Release &&
make -j$3 install &&
ln -s /usr/lib64/llvm13/lib/include/lld /usr/include/lld &&
cp /usr/lib64/llvm13/lib/liblld*.a /usr/local/lib/ &&
cd ../../ &&
echo "-- Build WasmEdge --" &&
( wget -nc -q https://github.com/WasmEdge/WasmEdge/archive/refs/tags/0.11.2.zip; unzip -o 0.11.2.zip; ) &&
cd WasmEdge-0.11.2 &&
( mkdir -p build; echo "" ) &&
cd build &&
export BOOST_ROOT="/usr/local/src/boost_1_86_0" &&
export Boost_LIBRARY_DIRS="/usr/local/lib" &&
export BOOST_INCLUDEDIR="/usr/local/src/boost_1_86_0" &&
export PATH=`echo $PATH | sed -E "s/devtoolset-7/devtoolset-9/g"` &&
cmake .. \
-DCMAKE_BUILD_TYPE=Release \
-DWASMEDGE_BUILD_SHARED_LIB=OFF \
-DWASMEDGE_BUILD_STATIC_LIB=ON \
-DWASMEDGE_BUILD_AOT_RUNTIME=ON \
-DWASMEDGE_FORCE_DISABLE_LTO=ON \
-DCMAKE_POSITION_INDEPENDENT_CODE=ON \
-DWASMEDGE_LINK_LLVM_STATIC=ON \
-DWASMEDGE_BUILD_PLUGINS=OFF \
-DWASMEDGE_LINK_TOOLS_STATIC=ON \
-DBoost_NO_BOOST_CMAKE=ON -DLLVM_DIR=/usr/lib64/llvm13/lib/cmake/llvm/ -DLLVM_LIBRARY_DIR=/usr/lib64/llvm13/lib/ &&
make -j$3 install &&
export PATH=`echo $PATH | sed -E "s/devtoolset-9/devtoolset-10/g"` &&
cp -r include/api/wasmedge /usr/include/ &&
cd /io/ &&
# Environment setup moved to Dockerfile in release-builder.sh
source /opt/rh/gcc-toolset-11/enable
export PATH=/usr/local/bin:$PATH
export CC='/usr/lib64/ccache/gcc' &&
export CXX='/usr/lib64/ccache/g++' &&
echo "-- Build Rippled --" &&
pwd &&
cp cmake/deps/Rocksdb.cmake cmake/deps/Rocksdb.cmake.old &&
echo "MOVING TO [ build-core.sh ]"
cd /io;
echo "MOVING TO [ build-core.sh ]";
printenv > .env.temp;
cat .env.temp | grep '=' | sed s/\\\(^[^=]\\+=\\\)/\\1\\\"/g|sed s/\$/\\\"/g > .env;
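# Illustrative sketch only: the grep/sed pipeline above quotes each value from
# printenv so the resulting .env can be re-sourced safely later. A hypothetical
# input line such as
#   MY_BUILD_FLAGS=-O2 -g
# becomes
#   MY_BUILD_FLAGS="-O2 -g"
# (MY_BUILD_FLAGS is a placeholder name, not a variable used by these scripts.)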

View File

@@ -1,7 +1,7 @@
#
# Default validators.txt
#
# This file is located in the same folder as your rippled.cfg file
# This file is located in the same folder as your xahaud.cfg file
# and defines which validators your server trusts not to collude.
#
# This file is UTF-8 with DOS, UNIX, or Mac style line endings.
@@ -17,18 +17,17 @@
# See validator_list_sites and validator_list_keys below.
#
# Examples:
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt
# n9L3GdotB8a3AqtsvS7NXt4BUTQSAYyJUr9xtFj2qXJjfbZsawKY
# n9M7G6eLwQtUjfCthWUmTN8L4oEZn1sNr46yvKrpsq58K1C6LAxz
#
# [validator_list_sites]
#
# List of URIs serving lists of recommended validators.
#
# Examples:
# https://vl.ripple.com
# https://vl.xrplf.org
# https://vl.xahau.org
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
# file:///etc/opt/xahaud/vl.txt
#
# [validator_list_keys]
#
@@ -39,50 +38,66 @@
# Validator list keys should be hex-encoded.
#
# Examples:
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# ED307A760EE34F2D0CAA103377B1969117C38B8AA0AA1E2A24DAC1F32FC97087ED
# EDA46E9C39B1389894E690E58914DC1029602870370A0993E5B87C4A24EAF4A8E8
#
# [import_vl_keys]
#
# This section is used to import the public keys of trusted validator list publishers.
# The keys are used to authenticate and accept new lists of trusted validators.
# In this example, the key for the publisher "vl.xrplf.org" is imported.
# Each key is represented as a hexadecimal string.
#
# Examples:
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
# ED42AEC58B701EEBB77356FFFEC26F83C1F0407263530F068C7C73D392C7E06FD1
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# The default validator list publishers that the rippled instance
# The default validator list publishers that the xahaud instance
# trusts.
#
# WARNING: Changing these values can cause your rippled instance to see a
# validated ledger that contradicts other rippled instances'
# WARNING: Changing these values can cause your xahaud instance to see a
# validated ledger that contradicts other xahaud instances'
# validated ledgers (aka a ledger fork) if your validator list(s)
# do not sufficiently overlap with the list(s) used by others.
# See: https://arxiv.org/pdf/1802.07242.pdf
[validator_list_sites]
https://vl.ripple.com
https://vl.xrplf.org
https://vl.xahau.org
[validator_list_keys]
#vl.ripple.com
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
# vl.xahau.org
EDA46E9C39B1389894E690E58914DC1029602870370A0993E5B87C4A24EAF4A8E8
[import_vl_keys]
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
ED42AEC58B701EEBB77356FFFEC26F83C1F0407263530F068C7C73D392C7E06FD1
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# To use the test network (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# To use the test network (see https://xahau.network/docs/infrastructure/installing-xahaud),
# use the following configuration instead:
#
# [validator_list_sites]
# https://vl.altnet.rippletest.net
#
# [validator_list_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
# [validators]
# nHBoJCE3wPgkTcrNPMHyTJFQ2t77EyCAqcBRspFCpL6JhwCm94VZ
# nHUVv4g47bFMySAZFUKVaXUYEmfiUExSoY4FzwXULNwJRzju4XnQ
# nHBvr8avSFTz4TFxZvvi4rEJZZtyqE3J6KAAcVWVtifsE7edPM7q
# nHUH3Z8TRU57zetHbEPr1ynyrJhxQCwrJvNjr4j1SMjYADyW1WWe
#
# [import_vl_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
# [validator_list_threshold]
#
# Minimum number of validator lists on which a validator must be listed in
# order to be used.
#
# This can be set explicitly to any positive integer number not greater than
# the size of [validator_list_keys]. If it is not set, or set to 0, the
# value will be calculated at startup from the size of [validator_list_keys],
# where the calculation is:
#
# threshold = size(validator_list_keys) < 3
# ? 1
# : floor(size(validator_list_keys) / 2) + 1
[validator_list_threshold]
0
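# Worked example of the calculation above (illustrative only): this file lists
# three [validator_list_keys] entries, so with the setting of 0 the effective
# threshold is floor(3 / 2) + 1 = 2, i.e. a validator must appear on at least
# two of the three published lists before it is used.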

View File

@@ -9,7 +9,7 @@
#
# 2. Peer Protocol
#
# 3. Ripple Protocol
# 3. XRPL Protocol
#
# 4. HTTPS Client
#
@@ -29,18 +29,17 @@
#
# Purpose
#
# This file documents and provides examples of all rippled server process
# configuration options. When the rippled server instance is launched, it
# This file documents and provides examples of all xahaud server process
# configuration options. When the xahaud server instance is launched, it
# looks for a file with the following name:
#
# rippled.cfg
# xahaud.cfg
#
# For more information on where the rippled server instance searches for the
# file, visit:
# To run xahaud with a custom configuration file, use the "--conf {file}" flag.
# By default, xahaud will look in the local working directory or the home directory.
#
# https://xrpl.org/commandline-usage.html#generic-options
#
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# This file should be named xahaud.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
@@ -89,8 +88,8 @@
#
#
#
# rippled offers various server protocols to clients making inbound
# connections. The listening ports rippled uses are "universal" ports
# xahaud offers various server protocols to clients making inbound
# connections. The listening ports xahaud uses are "universal" ports
# which may be configured to handshake in one or more of the available
# supported protocols. These universal ports simplify administration:
# A single open port can be used for multiple protocols.
@@ -103,7 +102,7 @@
#
# A list of port names and key/value pairs. A port name must start with a
# letter and contain only letters and numbers. The name is not case-sensitive.
# For each name in this list, rippled will look for a configuration file
# For each name in this list, xahaud will look for a configuration file
# section with the same name and use it to create a listening port. The
# name is informational only; the choice of name does not affect the function
# of the listening port.
@@ -134,7 +133,7 @@
# ip = 127.0.0.1
# protocol = http
#
# When rippled is used as a command line client (for example, issuing a
# When xahaud is used as a command line client (for example, issuing a
# server stop command), the first port advertising the http or https
# protocol will be used to make the connection.
#
@@ -175,7 +174,7 @@
# same time. It is possible have both Websockets and Secure Websockets
# together in one port.
#
# NOTE If no ports support the peer protocol, rippled cannot
# NOTE If no ports support the peer protocol, xahaud cannot
# receive incoming peer connections or become a superpeer.
#
# limit = <number>
@@ -194,7 +193,7 @@
# required. IP address restrictions, if any, will be checked in addition
# to the credentials specified here.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xahaud will supply these credentials
# using HTTP's Basic Authentication headers when making outbound HTTP/S
# requests.
#
@@ -237,7 +236,7 @@
# WS, or WSS protocol interfaces. If administrative commands are
# disabled for a port, these credentials have no effect.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xahaud will supply these credentials
# in the submitted JSON for any administrative command requests when
# invoking JSON-RPC commands on remote servers.
#
@@ -258,7 +257,7 @@
# resource controls will default to those for non-administrative users.
#
# The secure_gateway IP addresses are intended to represent
# proxies. Since rippled trusts these hosts, they must be
# proxies. Since xahaud trusts these hosts, they must be
# responsible for properly authenticating the remote user.
#
# If some IP addresses are included for both "admin" and
@@ -272,7 +271,7 @@
# Use the specified files when configuring SSL on the port.
#
# NOTE If no files are specified and secure protocols are selected,
# rippled will generate an internal self-signed certificate.
# xahaud will generate an internal self-signed certificate.
#
# The files have these meanings:
#
@@ -297,12 +296,12 @@
# Control the ciphers which the server will support over SSL on the port,
# specified using the OpenSSL "cipher list format".
#
# NOTE If unspecified, rippled will automatically configure a modern
# NOTE If unspecified, xahaud will automatically configure a modern
# cipher suite. This default suite should be widely supported.
#
# You should not modify this string unless you have a specific
# reason and cryptographic expertise. Incorrect modification may
# keep rippled from connecting to other instances of rippled or
# keep xahaud from connecting to other instances of xahaud or
# prevent RPC and WebSocket clients from connecting.
#
# send_queue_limit = [1..65535]
@@ -353,7 +352,7 @@
#
# Examples:
# { "command" : "server_info" }
# { "command" : "log_level", "partition" : "ripplecalc", "severity" : "trace" }
# { "command" : "log_level", "partition" : "xahaudcalc", "severity" : "trace" }
#
#
#
@@ -382,10 +381,9 @@
#-----------------
#
# These settings control security and access attributes of the Peer to Peer
# server section of the rippled process. Peer Protocol implements the
# Ripple Payment protocol. It is over peer connections that transactions
# and validations are passed from to machine to machine, to determine the
# contents of validated ledgers.
# server section of the xahaud process. It is over peer connections that
# transactions and validations are passed from machine to machine, to
# determine the contents of validated ledgers.
#
#
#
@@ -396,7 +394,7 @@
# true - enables compression
# false - disables compression [default].
#
# The rippled server can save bandwidth by compressing its peer-to-peer communications,
# The xahaud server can save bandwidth by compressing its peer-to-peer communications,
# at a cost of greater CPU usage. If you enable link compression,
# the server automatically compresses communications with peer servers
# that also have link compression enabled.
@@ -406,33 +404,34 @@
#
# [ips]
#
# List of hostnames or ips where the Ripple protocol is served. A default
# List of hostnames or ips where the XRPL protocol is served. A default
# starter list is included in the code and used if no other hostnames are
# available.
#
# One address or domain name per line is allowed. A port may must be
# specified after adding a space to the address. The ordering of entries
# does not generally matter.
# One address or domain name per line is allowed. A port may be specified
# after adding a space to the address. If a port is not specified, the default
# port of 2459 will be used. Many servers still use the legacy port of 51235.
# To connect to such servers, you must specify the port number. The ordering
# of entries does not generally matter.
#
# The default list of entries is:
# - r.ripple.com 51235
# - zaphod.alloy.ee 51235
# - sahyadri.isrdc.in 51235
# - hubs.xahau.as16089.net 21337
# - bacab.alloy.ee 21337
#
# Examples:
#
# [ips]
# 192.168.0.1
# 192.168.0.1 2459
# r.ripple.com 51235
# 192.168.0.1 21337
# bacab.alloy.ee 21337
#
#
# [ips_fixed]
#
# List of IP addresses or hostnames to which rippled should always attempt to
# List of IP addresses or hostnames to which xahaud should always attempt to
# maintain peer connections with. This is useful for manually forming private
# networks, for example to configure a validation server that connects to the
# Ripple network through a public-facing server, or for building a set
# Xahau Network through a public-facing server, or for building a set
# of cluster peers.
#
# One address or domain name per line is allowed. A port must be specified
@@ -570,7 +569,7 @@
#
# minimum_txn_in_ledger_standalone = <number>
#
# Like minimum_txn_in_ledger when rippled is running in standalone
# Like minimum_txn_in_ledger when xahaud is running in standalone
# mode. Default: 1000.
#
# target_txn_in_ledger = <number>
@@ -707,7 +706,7 @@
#
# [validator_token]
#
# This is an alternative to [validation_seed] that allows rippled to perform
# This is an alternative to [validation_seed] that allows xahaud to perform
# validation without having to store the validator keys on the network
# connected server. The field should contain a single token in the form of a
# base64-encoded blob.
@@ -742,19 +741,18 @@
#
# Specify the file by its name or path.
# Unless an absolute path is specified, it will be considered relative to
# the folder in which the rippled.cfg file is located.
# the folder in which the xahaud.cfg file is located.
#
# Examples:
# /home/ripple/validators.txt
# C:/home/ripple/validators.txt
# /home/xahaud/validators.txt
# C:/home/xahaud/validators.txt
#
# Example content:
# [validators]
# n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
# n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
# n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C
# n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS
# n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA
# n9L3GdotB8a3AqtsvS7NXt4BUTQSAYyJUr9xtFj2qXJjfbZsawKY
# n9LQDHLWyFuAn5BXJuW2ow5J9uGqpmSjRYS2cFRpxf6uJbxwDzvM
# n9MCWyKVUkiatXVJTKUrAESB5kBFP8R3hm43jGHtg8WBnjv3iDfb
# n9KWXCLRhjpajuZtULTXsy6R5xbisA6ozGxM4zdEJFq6uHiFZDvW
#
#
#
@@ -837,7 +835,7 @@
#
# 0: Disable the ledger replay feature [default]
# 1: Enable the ledger replay feature. With this feature enabled, when
# acquiring a ledger from the network, a rippled node only downloads
# acquiring a ledger from the network, a xahaud node only downloads
# the ledger header and the transactions instead of the whole ledger.
# And the ledger is built by applying the transactions to the parent
# ledger.
@@ -848,10 +846,9 @@
#
#----------------
#
# The rippled server instance uses HTTPS GET requests in a variety of
# The xahaud server instance uses HTTPS GET requests in a variety of
# circumstances, including but not limited to contacting trusted domains to
# fetch information such as mapping an email address to a Ripple Payment
# Network address.
# fetch information such as mapping an email address to a user's r-address.
#
# [ssl_verify]
#
@@ -888,7 +885,7 @@
#
#------------
#
# rippled creates 4 SQLite database to hold bookkeeping information
# xahaud creates 4 SQLite databases to hold bookkeeping information
# about transactions, local credentials, and various other things.
# It also creates the NodeDB, which holds all the objects that
# make up the current and historical ledgers.
@@ -899,7 +896,7 @@
# the performance of the server.
#
# Partial pathnames will be considered relative to the location of
# the rippled.cfg file.
# the xahaud.cfg file.
#
# [node_db] Settings for the Node Database (required)
#
@@ -910,18 +907,18 @@
#
# Example:
# type=nudb
# path=db/nudb
# path=/opt/xahaud/db/nudb
#
# The "type" field must be present and controls the choice of backend:
#
# type = NuDB
#
# NuDB is a high-performance database written by Ripple Labs and optimized
# for rippled and solid-state drives.
# for xahaud and solid-state drives.
#
# NuDB maintains its high speed regardless of the amount of history
# stored. Online delete may be selected, but is not required. NuDB is
# available on all platforms that rippled runs on.
# available on all platforms that xahaud runs on.
#
# type = RocksDB
#
@@ -940,11 +937,7 @@
# RWDB is recommended for Validator and Peer nodes that are not required to
# store history.
#
# RWDB maintains its high speed regardless of the amount of history
# stored. Online delete should NOT be used instead RWDB will use the
# ledger_history config value to determine how many ledgers to keep in memory.
#
# Required keys for NuDB, RWDB and RocksDB:
# Required keys for NuDB and RocksDB:
#
# path Location to store the database
#
@@ -972,18 +965,56 @@
# if sufficient IOPS capacity is available.
# Default 0.
#
# Optional keys for NuDB or RocksDB:
#
# earliest_seq The default is 32570 to match the XRP ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
# online_delete for RWDB, NuDB and RocksDB:
#
# online_delete Minimum value of 256. Enable automatic purging
# of older ledger information. Maintain at least this
# number of ledger records online. Must be greater
# than or equal to ledger_history. If using RWDB
# this value is ignored.
# than or equal to ledger_history.
#
# REQUIRED for RWDB to prevent out-of-memory errors.
# Optional for NuDB and RocksDB.
#
# Optional keys for NuDB and RocksDB:
#
# earliest_seq The default is 32570 to match the XRP Ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
#
# Optional keys for NuDB only:
#
# nudb_block_size EXPERIMENTAL: Block size in bytes for NuDB storage.
# Must be a power of 2 between 4096 and 32768. Default is 4096.
#
# This parameter controls the fundamental storage unit
# size for NuDB's internal data structures. The choice
# of block size can significantly impact performance
# depending on your storage hardware and filesystem:
#
# - 4096 bytes: Optimal for most standard SSDs and
# traditional filesystems (ext4, NTFS, HFS+).
# Provides good balance of performance and storage
# efficiency. Recommended for most deployments.
#
# - 8192-16384 bytes: May improve performance on
# high-end NVMe SSDs and copy-on-write filesystems
# like ZFS or Btrfs that benefit from larger block
# alignment. Can reduce metadata overhead for large
# databases.
#
# - 32768 bytes (32K): Maximum supported block size
# for high-performance scenarios with very fast
# storage. May increase memory usage and reduce
# efficiency for smaller databases.
#
# Note: This setting cannot be changed after database
# creation without rebuilding the entire database.
# Choose carefully based on your hardware and expected
# database size.
#
# Example: nudb_block_size=4096
#
# These keys modify the behavior of online_delete, and thus are only
# relevant if online_delete is defined and non-zero:
#
@@ -1017,7 +1048,7 @@
#
# recovery_wait_seconds
# The online delete process checks periodically
# that rippled is still in sync with the network,
# that xahaud is still in sync with the network,
# and that the validated ledger is less than
# 'age_threshold_seconds' old. If not, then continue
# sleeping for this number of seconds and
@@ -1037,8 +1068,8 @@
# The server creates and maintains 4 to 5 bookkeeping SQLite databases in
# the 'database_path' location. If you omit this configuration setting,
# the server creates a directory called "db" located in the same place as
# your rippled.cfg file.
# Partial pathnames are relative to the location of the rippled executable.
# your xahaud.cfg file.
# Partial pathnames are relative to the location of the xahaud executable.
#
# [sqlite] Tuning settings for the SQLite databases (optional)
#
@@ -1088,7 +1119,7 @@
# The default is "wal", which uses a write-ahead
# log to implement database transactions.
# Alternately, "memory" saves disk I/O, but if
# rippled crashes during a transaction, the
# xahaud crashes during a transaction, the
# database is likely to be corrupted.
# See https://www.sqlite.org/pragma.html#pragma_journal_mode
# for more details about the available options.
@@ -1098,7 +1129,7 @@
# synchronous Valid values: off, normal, full, extra
# The default is "normal", which works well with
# the "wal" journal mode. Alternatively, "off"
# allows rippled to continue as soon as data is
# allows xahaud to continue as soon as data is
# passed to the OS, which can significantly
# increase speed, but risks data corruption if
# the host computer crashes before writing that
@@ -1112,7 +1143,7 @@
# The default is "file", which will use files
# for temporary database tables and indices.
# Alternatively, "memory" may save I/O, but
# rippled does not currently use many, if any,
# xahaud does not currently use many, if any,
# of these temporary objects.
# See https://www.sqlite.org/pragma.html#pragma_temp_store
# for more details about the available options.
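# As a minimal illustrative sketch (not part of the stock file), an explicit
# [sqlite] stanza that simply restates the defaults described above would be:
#
# [sqlite]
# journal_mode=wal
# synchronous=normal
# temp_store=file
#
# Omitting the section entirely keeps the same defaults.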
@@ -1141,7 +1172,7 @@
#
# These settings are designed to help server administrators diagnose
# problems, and obtain detailed information about the activities being
# performed by the rippled process.
# performed by the xahaud process.
#
#
#
@@ -1158,7 +1189,7 @@
#
# Configuration parameters for the Beast. Insight stats collection module.
#
# Insight is a module that collects information from the areas of rippled
# Insight is a module that collects information from the areas of xahaud
# that have instrumentation. The configuration parameters control where the
# collection metrics are sent. The parameters are expressed as key = value
# pairs with no white space. The main parameter is the choice of server:
@@ -1167,7 +1198,7 @@
#
# Choice of server to send metrics to. Currently the only choice is
# "statsd" which sends UDP packets to a StatsD daemon, which must be
# running while rippled is running. More information on StatsD is
# running while xahaud is running. More information on StatsD is
# available here:
# https://github.com/b/statsd_spec
#
@@ -1177,7 +1208,7 @@
# in the format, n.n.n.n:port.
#
# "prefix" A string prepended to each collected metric. This is used
# to distinguish between different running instances of rippled.
# to distinguish between different running instances of xahaud.
#
# If this section is missing, or the server type is unspecified or unknown,
# statistics are not collected or reported.
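# A hypothetical example that reports metrics to a local StatsD daemon; the
# address and prefix values below are placeholders (8125 is the common StatsD
# port, not a value mandated by this file):
#
# [insight]
# server=statsd
# address=127.0.0.1:8125
# prefix=xahaud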
@@ -1204,7 +1235,7 @@
#
# Example:
# [perf]
# perf_log=/var/log/rippled/perf.log
# perf_log=/var/log/xahaud/perf.log
# log_interval=2
#
#-------------------------------------------------------------------------------
@@ -1213,8 +1244,8 @@
#
#----------
#
# The vote settings configure settings for the entire Ripple network.
# While a single instance of rippled cannot unilaterally enforce network-wide
# The vote settings configure settings for the entire Xahau Network.
# While a single instance of xahaud cannot unilaterally enforce network-wide
# settings, these choices become part of the instance's vote during the
# consensus process for each voting ledger.
#
@@ -1226,9 +1257,9 @@
#
# The cost of the reference transaction fee, specified in drops.
# The reference transaction is the simplest form of transaction.
# It represents an XRP payment between two parties.
# It represents an XAH payment between two parties.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1237,26 +1268,26 @@
# account_reserve = <drops>
#
# The account reserve requirement is specified in drops. The portion of an
# account's XRP balance that is at or below the reserve may only be
# account's XAH balance that is at or below the reserve may only be
# spent on transaction fees, and not transferred out of the account.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# account_reserve = 10000000 # 10 XRP
# account_reserve = 10000000 # 10 XAH
#
# owner_reserve = <drops>
#
# The owner reserve is the amount of XRP reserved in the account for
# The owner reserve is the amount of XAH reserved in the account for
# each ledger item owned by the account. Ledger items an account may
# own include trust lines, open orders, and tickets.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# owner_reserve = 2000000 # 2 XRP
# owner_reserve = 2000000 # 2 XAH
#
#-------------------------------------------------------------------------------
#
@@ -1294,7 +1325,7 @@
# tool instead.
#
# This flag has no effect on the "sign" and "sign_for" command line options
# that rippled makes available.
# that xahaud makes available.
#
# The default value of this field is "false"
#
@@ -1373,7 +1404,7 @@
#--------------------
#
# Administrators can use these values as a starting point for configuring
# their instance of rippled, but each value should be checked to make sure
# their instance of xahaud, but each value should be checked to make sure
# it meets the business requirements for the organization.
#
# Server
@@ -1383,7 +1414,7 @@
# "peer"
#
# Peer protocol open to everyone. This is required to accept
# incoming rippled connections. This does not affect automatic
# incoming xahaud connections. This does not affect automatic
# or manual outgoing Peer protocol connections.
#
# "rpc"
@@ -1400,7 +1431,7 @@
#
# ETL commands for Clio. We recommend setting secure_gateway
# in this section to a comma-separated list of the addresses
# of your Clio servers, in order to bypass rippled's rate limiting.
# of your Clio servers, in order to bypass xahaud's rate limiting.
#
# This port is commented out but can be enabled by removing
# the '#' from each corresponding line including the entry under [server]
@@ -1417,8 +1448,8 @@
# NOTE
#
# To accept connections on well known ports such as 80 (HTTP) or
# 443 (HTTPS), most operating systems will require rippled to
# run with administrator privileges, or else rippled will not start.
# 443 (HTTPS), most operating systems will require xahaud to
# run with administrator privileges, or else xahaud will not start.
[server]
port_rpc_admin_local
@@ -1429,20 +1460,20 @@ port_ws_admin_local
#ssl_cert = /etc/ssl/certs/server.crt
[port_rpc_admin_local]
port = 5005
port = 5009
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = 51235
port = 21337
ip = 0.0.0.0
# alternatively, to accept connections on IPv4 + IPv6, use:
#ip = ::
protocol = peer
[port_ws_admin_local]
port = 6006
port = 6009
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
@@ -1454,16 +1485,16 @@ ip = 127.0.0.1
secure_gateway = 127.0.0.1
#[port_ws_public]
#port = 6005
#port = 6008
#ip = 127.0.0.1
#protocol = wss
#send_queue_limit = 500
#-------------------------------------------------------------------------------
# This is primary persistent datastore for rippled. This includes transaction
# This is the primary persistent datastore for xahaud. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found at https://xrpl.org/capacity-planning.html#node-db-type
# found at https://xahau.network/docs/infrastructure/system-requirements
# type=NuDB is recommended for non-validators with fast SSDs. Validators or
# slow / spinning disks should use RocksDB. Caution: Spinning disks are
# not recommended. They do not perform well enough to consistently remain
@@ -1476,30 +1507,34 @@ secure_gateway = 127.0.0.1
# deletion.
[node_db]
type=NuDB
path=/var/lib/rippled/db/nudb
path=/opt/xahaud/db/nudb
online_delete=512
advisory_delete=0
[database_path]
/var/lib/rippled/db
/opt/xahaud/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled/debug.log
/var/log/xahaud/debug.log
# To use the XRP test network
# (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# To use the Xahau test network
# (see https://xahau.network/docs/infrastructure/installing-xahaud),
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235
# 79.110.60.121 21338
# 79.110.60.122 21338
# 79.110.60.124 21338
# 79.110.60.125 21338
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
# folder in which the xahaud.cfg file is located.
[validators_file]
validators.txt
validators-xahau.txt
# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal

View File

@@ -1,4 +1,4 @@
# standalone: ./rippled -a --ledgerfile config/genesis.json --conf config/rippled-standalone.cfg
# standalone: ./xahaud -a --ledgerfile config/genesis.json --conf config/xahaud-standalone.cfg
[server]
port_rpc_admin_local
port_ws_public
@@ -21,7 +21,7 @@ ip = 0.0.0.0
protocol = ws
# [port_peer]
# port = 51235
# port = 21337
# ip = 0.0.0.0
# protocol = peer
@@ -69,7 +69,8 @@ time.nist.gov
pool.ntp.org
[ips]
r.ripple.com 51235
bacab.alloy.ee 21337
hubs.xahau.as16089.net 21337
[validators_file]
validators-example.txt
@@ -94,7 +95,7 @@ validators-example.txt
1000000
[network_id]
21338
21337
[amendments]
740352F2412A9909880C23A559FCECEDA3BE2126FED62FC7660D628A06927F11 Flow

View File

@@ -17,7 +17,9 @@ target_compile_features (common INTERFACE cxx_std_20)
target_compile_definitions (common
INTERFACE
$<$<CONFIG:Debug>:DEBUG _DEBUG>
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>)
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>
# TODO: Remove once we have migrated functions from OpenSSL 1.x to 3.x.
OPENSSL_SUPPRESS_DEPRECATED)
# ^^^^ NOTE: CMAKE release builds already have NDEBUG
# defined, so no need to add it explicitly except for
# this special case of (profile ON) and (assert OFF)
@@ -120,6 +122,7 @@ else ()
target_link_libraries (common
INTERFACE
-rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff:
# * static option set and
# * NOT APPLE (AppleClang does not support static libc/c++) and
@@ -130,6 +133,17 @@ else ()
>)
endif ()
# Antithesis instrumentation will only be built and deployed using machines running Linux.
if (voidstar)
if (NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(FATAL_ERROR "Antithesis instrumentation requires Debug build type, aborting...")
elseif (NOT is_linux)
message(FATAL_ERROR "Antithesis instrumentation requires Linux, aborting...")
elseif (NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0))
message(FATAL_ERROR "Antithesis instrumentation requires Clang version 16 or later, aborting...")
endif ()
endif ()
if (use_mold)
# use mold linker if available
execute_process (
@@ -137,6 +151,16 @@ if (use_mold)
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "mold")
target_link_libraries (common INTERFACE -fuse-ld=mold)
else ()
# Checking for mold linker (< GCC 12.1.0)
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -B/usr/libexec/mold -Wl,--version
OUTPUT_VARIABLE LD_VERSION_OUT
ERROR_VARIABLE LD_VERSION_ERR)
set(LD_VERSION "${LD_VERSION_OUT}${LD_VERSION_ERR}")
if ("${LD_VERSION}" MATCHES "mold")
target_link_libraries (common INTERFACE -B/usr/libexec/mold)
endif ()
endif ()
unset (LD_VERSION)
elseif (use_gold AND is_gcc)

View File

@@ -9,6 +9,7 @@ include(target_protobuf_sources)
# define a bunch of `static const` variables with the same names,
# so we just build them as a separate library.
add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
@@ -47,11 +48,74 @@ target_link_libraries(xrpl.libpb
gRPC::grpc++
)
# TODO: Clean up the number of library targets later.
add_library(xrpl.imports.main INTERFACE)
target_link_libraries(xrpl.imports.main
INTERFACE
LibArchive::LibArchive
OpenSSL::Crypto
Ripple::boost
wasmedge::wasmedge
Ripple::opts
Ripple::syslibs
absl::random_random
date::date
ed25519::ed25519
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
include(add_module)
include(target_link_modules)
# Level 01
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC
xrpl.imports.main
xrpl.libpb
)
# Conditionally add enhanced logging source when BEAST_ENHANCED_LOGGING is enabled
if(DEFINED BEAST_ENHANCED_LOGGING AND BEAST_ENHANCED_LOGGING)
target_sources(xrpl.libxrpl.beast PRIVATE
src/libxrpl/beast/utility/src/beast_EnhancedLogging.cpp)
endif()
# Level 02
add_module(xrpl basics)
target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)
# Level 03
add_module(xrpl json)
target_link_libraries(xrpl.libxrpl.json PUBLIC xrpl.libxrpl.basics)
add_module(xrpl crypto)
target_link_libraries(xrpl.libxrpl.crypto PUBLIC xrpl.libxrpl.basics)
add_module(xrpl hook)
target_link_libraries(xrpl.libxrpl.hook PUBLIC xrpl.libxrpl.basics)
# Level 04
add_module(xrpl protocol)
target_link_libraries(xrpl.libxrpl.protocol PUBLIC
xrpl.libxrpl.crypto
xrpl.libxrpl.hook
xrpl.libxrpl.json
)
# Level 05
add_module(xrpl resource)
target_link_libraries(xrpl.libxrpl.resource PUBLIC xrpl.libxrpl.protocol)
add_module(xrpl server)
target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol)
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
if(unity)
set_target_properties(xrpl.libxrpl PROPERTIES UNITY_BUILD ON)
endif()
# Try to find the ACL library
find_library(ACL_LIBRARY NAMES acl)
@@ -70,43 +134,28 @@ file(GLOB_RECURSE sources CONFIGURE_DEPENDS
)
target_sources(xrpl.libxrpl PRIVATE ${sources})
target_include_directories(xrpl.libxrpl
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
$<INSTALL_INTERFACE:include>)
target_compile_definitions(xrpl.libxrpl
PUBLIC
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1)
target_compile_options(xrpl.libxrpl
PUBLIC
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
target_link_modules(xrpl PUBLIC
basics
beast
crypto
hook
json
protocol
resource
server
)
target_link_libraries(xrpl.libxrpl
PUBLIC
LibArchive::LibArchive
OpenSSL::Crypto
Ripple::boost
wasmedge::wasmedge
Ripple::opts
Ripple::syslibs
absl::random_random
date::date
ed25519::ed25519
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
)
# All headers in libxrpl are in modules.
# Uncomment this stanza if you have not yet moved new headers into a module.
# target_include_directories(xrpl.libxrpl
# PRIVATE
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
# PUBLIC
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include>)
if(xrpld)
add_executable(rippled)
if(unity)
set_target_properties(rippled PROPERTIES UNITY_BUILD ON)
endif()
if(tests)
target_compile_definitions(rippled PUBLIC ENABLE_TESTS)
endif()
@@ -132,6 +181,11 @@ if(xrpld)
Ripple::opts
Ripple::libs
xrpl.libxrpl
# Workaround for a Conan 1.x bug that prevents static linking of libstdc++
# when a dependency (snappy) modifies system_libs. See the comment in
# external/snappy/conanfile.py for a full explanation.
# This is likely not strictly necessary, but listed explicitly as a good practice.
m
)
exclude_if_included(rippled)
# define a macro for tests that might need to
@@ -140,6 +194,19 @@ if(xrpld)
target_compile_definitions(rippled PRIVATE RIPPLED_RUNNING_IN_CI)
endif ()
if(voidstar)
target_compile_options(rippled
PRIVATE
-fsanitize-coverage=trace-pc-guard
)
# rippled requires access to antithesis-sdk-cpp implementation file
# antithesis_instrumentation.h, which is not exported as INTERFACE
target_include_directories(rippled
PRIVATE
${CMAKE_SOURCE_DIR}/external/antithesis-sdk
)
endif()
# any files that don't play well with unity should be added here
if(tests)
set_source_files_properties(

View File

@@ -2,14 +2,26 @@
install stuff
#]===================================================================]
include(create_symbolic_link)
install (
TARGETS
common
opts
ripple_syslibs
ripple_boost
xrpl.imports.main
xrpl.libpb
xrpl.libxrpl.basics
xrpl.libxrpl.beast
xrpl.libxrpl.hook
xrpl.libxrpl.crypto
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
xrpl.libxrpl.server
xrpl.libxrpl
antithesis-sdk-cpp
EXPORT RippleExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
@@ -21,12 +33,12 @@ install(
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)
if(NOT WIN32)
install(
CODE "file(CREATE_LINK xrpl \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}/ripple SYMBOLIC)"
)
endif()
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpl \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}/ripple)
")
install (EXPORT RippleExports
FILE RippleTargets.cmake
@@ -55,12 +67,12 @@ if (is_root_project AND TARGET rippled)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/rippled-example.cfg\" etc rippled.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
if(NOT WIN32)
install(
CODE "file(CREATE_LINK rippled${suffix} \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/xrpld${suffix} SYMBOLIC)"
)
endif()
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(rippled${suffix} \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/xrpld${suffix})
")
endif ()
install (

View File

@@ -7,6 +7,9 @@ add_library (Ripple::opts ALIAS opts)
target_compile_definitions (opts
INTERFACE
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
@@ -18,10 +21,12 @@ target_compile_definitions (opts
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>)
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
target_compile_options (opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${coverage}>>:-g --coverage -fprofile-abs-path>
$<$<AND:$<BOOL:${is_clang}>,$<BOOL:${coverage}>>:-g --coverage>

View File

@@ -1,33 +0,0 @@
#[===================================================================[
NIH prefix path..this is where we will download
and build any ExternalProjects, and they will hopefully
survive across build directory deletion (manual cleans)
#]===================================================================]
string (REGEX REPLACE "[ \\/%]+" "_" gen_for_path ${CMAKE_GENERATOR})
string (TOLOWER ${gen_for_path} gen_for_path)
# HACK: trying to shorten paths for windows CI (which hits 260 MAXPATH easily)
# @see: https://issues.jenkins-ci.org/browse/JENKINS-38706?focusedCommentId=339847
string (REPLACE "visual_studio" "vs" gen_for_path ${gen_for_path})
if (NOT DEFINED NIH_CACHE_ROOT)
if (DEFINED ENV{NIH_CACHE_ROOT})
set (NIH_CACHE_ROOT $ENV{NIH_CACHE_ROOT})
else ()
set (NIH_CACHE_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/.nih_c")
endif ()
endif ()
set (nih_cache_path
"${NIH_CACHE_ROOT}/${gen_for_path}/${CMAKE_CXX_COMPILER_ID}_${CMAKE_CXX_COMPILER_VERSION}")
if (NOT is_multiconfig)
set (nih_cache_path "${nih_cache_path}/${CMAKE_BUILD_TYPE}")
endif ()
file(TO_CMAKE_PATH "${nih_cache_path}" nih_cache_path)
message (STATUS "NIH-EP cache path: ${nih_cache_path}")
## two convenience variables:
set (ep_lib_prefix ${CMAKE_STATIC_LIBRARY_PREFIX})
set (ep_lib_suffix ${CMAKE_STATIC_LIBRARY_SUFFIX})
# this is a setting for FetchContent and needs to be
# a cache variable
# https://cmake.org/cmake/help/latest/module/FetchContent.html#populating-the-content
set (FETCHCONTENT_BASE_DIR ${nih_cache_path} CACHE STRING "" FORCE)

View File

@@ -17,6 +17,10 @@ if(unity)
if(NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif()
if(is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif()
if(is_gcc OR is_clang)
option(coverage "Generates coverage info." OFF)

cmake/add_module.cmake (new file, 37 lines)
View File

@@ -0,0 +1,37 @@
include(isolate_headers)
# Create an OBJECT library target named
#
# ${PROJECT_NAME}.lib${parent}.${name}
#
# with sources in src/lib${parent}/${name}
# and headers in include/${parent}/${name}
# that cannot include headers from other directories in include/
# unless they come through linked libraries.
#
# add_module(parent a)
# add_module(parent b)
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
function(add_module parent name)
set(target ${PROJECT_NAME}.lib${parent}.${name})
add_library(${target} OBJECT)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp"
)
target_sources(${target} PRIVATE ${sources})
target_include_directories(${target} PUBLIC
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>"
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}"
PUBLIC
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/src"
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE
)
endfunction()
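# For context, the library CMakeLists diff earlier in this comparison uses this
# helper to build the layered libxrpl modules, for example:
#
#   add_module(xrpl basics)
#   target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)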

View File

@@ -1,54 +0,0 @@
find_package(Boost 1.83 REQUIRED
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
json
program_options
regex
system
thread
)
add_library(ripple_boost INTERFACE)
add_library(Ripple::boost ALIAS ripple_boost)
if(XCODE)
target_include_directories(ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
target_compile_options(ripple_boost INTERFACE --system-header-prefix="boost/")
else()
target_include_directories(ripple_boost SYSTEM BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
endif()
target_link_libraries(ripple_boost
INTERFACE
Boost::boost
Boost::chrono
Boost::container
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::json
Boost::program_options
Boost::regex
Boost::system
Boost::iostreams
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(ripple_boost INTERFACE Boost::disable_autolinking)
endif()
if(san AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else
# for gcc ?
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts
INTERFACE
# ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif()

View File

@@ -0,0 +1,20 @@
# file(CREATE_SYMLINK) only works on Windows with administrator privileges.
# https://stackoverflow.com/a/61244115/618906
function(create_symbolic_link target link)
if(WIN32)
if(NOT IS_SYMLINK "${link}")
if(NOT IS_ABSOLUTE "${target}")
# Relative links do not work on Windows.
set(target "${link}/../${target}")
endif()
file(TO_NATIVE_PATH "${target}" target)
file(TO_NATIVE_PATH "${link}" link)
execute_process(COMMAND cmd.exe /c mklink /J "${link}" "${target}")
endif()
else()
file(CREATE_LINK "${target}" "${link}" SYMBOLIC)
endif()
if(NOT IS_SYMLINK "${link}")
message(ERROR "failed to create symlink: <${link}>")
endif()
endfunction()
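# For context, the install-rules diff earlier in this comparison calls this
# helper from install(CODE ...) scripts, e.g. to alias the installed binary:
#
#   create_symbolic_link(rippled${suffix}
#     ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/xrpld${suffix})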

View File

@@ -1,52 +1,4 @@
#[===================================================================[
NIH dep: boost
#]===================================================================]
if((NOT DEFINED BOOST_ROOT) AND(DEFINED ENV{BOOST_ROOT}))
set(BOOST_ROOT $ENV{BOOST_ROOT})
endif()
if((NOT DEFINED BOOST_LIBRARYDIR) AND(DEFINED ENV{BOOST_LIBRARYDIR}))
set(BOOST_LIBRARYDIR $ENV{BOOST_LIBRARYDIR})
endif()
file(TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
if(WIN32 OR CYGWIN)
# Workaround for MSVC having two boost versions - x86 and x64 on same PC in stage folders
if((NOT DEFINED BOOST_LIBRARYDIR) AND (DEFINED BOOST_ROOT))
if(IS_DIRECTORY ${BOOST_ROOT}/stage64/lib)
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/stage64/lib)
elseif(IS_DIRECTORY ${BOOST_ROOT}/stage/lib)
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/stage/lib)
elseif(IS_DIRECTORY ${BOOST_ROOT}/lib)
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/lib)
else()
message(WARNING "Did not find expected boost library dir. "
"Defaulting to ${BOOST_ROOT}")
set(BOOST_LIBRARYDIR ${BOOST_ROOT})
endif()
endif()
endif()
message(STATUS "BOOST_ROOT: ${BOOST_ROOT}")
message(STATUS "BOOST_LIBRARYDIR: ${BOOST_LIBRARYDIR}")
# uncomment the following as needed to debug FindBoost issues:
#set(Boost_DEBUG ON)
#[=========================================================[
boost dynamic libraries don't trivially support @rpath
linking right now (cmake's default), so just force
static linking for macos, or if requested on linux by flag
#]=========================================================]
if(static)
set(Boost_USE_STATIC_LIBS ON)
endif()
set(Boost_USE_MULTITHREADED ON)
if(static AND NOT APPLE)
set(Boost_USE_STATIC_RUNTIME ON)
else()
set(Boost_USE_STATIC_RUNTIME OFF)
endif()
# TBD:
# Boost_USE_DEBUG_RUNTIME: When ON, uses Boost libraries linked against the
find_package(Boost 1.86 REQUIRED
find_package(Boost 1.83 REQUIRED
COMPONENTS
chrono
container
@@ -58,12 +10,12 @@ find_package(Boost 1.86 REQUIRED
program_options
regex
system
iostreams
thread)
thread
)
add_library(ripple_boost INTERFACE)
add_library(Ripple::boost ALIAS ripple_boost)
if(is_xcode)
if(XCODE)
target_include_directories(ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
target_compile_options(ripple_boost INTERFACE --system-header-prefix="boost/")
else()
@@ -83,6 +35,7 @@ target_link_libraries(ripple_boost
Boost::program_options
Boost::regex
Boost::system
Boost::iostreams
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(ripple_boost INTERFACE Boost::disable_autolinking)

View File

@@ -1,28 +0,0 @@
#[===================================================================[
NIH dep: ed25519-donna
#]===================================================================]
add_library (ed25519-donna STATIC
external/ed25519-donna/ed25519.c)
target_include_directories (ed25519-donna
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/external>
$<INSTALL_INTERFACE:include>
PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/external/ed25519-donna)
#[=========================================================[
NOTE for macos:
https://github.com/floodyberry/ed25519-donna/issues/29
our source for ed25519-donna-portable.h has been
patched to workaround this.
#]=========================================================]
target_link_libraries (ed25519-donna PUBLIC OpenSSL::SSL)
add_library (NIH::ed25519-donna ALIAS ed25519-donna)
target_link_libraries (ripple_libs INTERFACE NIH::ed25519-donna)
#[===========================[
headers installation
#]===========================]
install (
FILES
external/ed25519-donna/ed25519.h
DESTINATION include/ed25519-donna)

File diff suppressed because it is too large

View File

@@ -1,47 +0,0 @@
# - Try to find jemalloc
# Once done this will define
# JEMALLOC_FOUND - System has jemalloc
# JEMALLOC_INCLUDE_DIRS - The jemalloc include directories
# JEMALLOC_LIBRARIES - The libraries needed to use jemalloc
if(NOT USE_BUNDLED_JEMALLOC)
find_package(PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_check_modules(PC_JEMALLOC QUIET jemalloc)
endif()
else()
set(PC_JEMALLOC_INCLUDEDIR)
set(PC_JEMALLOC_INCLUDE_DIRS)
set(PC_JEMALLOC_LIBDIR)
set(PC_JEMALLOC_LIBRARY_DIRS)
set(LIMIT_SEARCH NO_DEFAULT_PATH)
endif()
set(JEMALLOC_DEFINITIONS ${PC_JEMALLOC_CFLAGS_OTHER})
find_path(JEMALLOC_INCLUDE_DIR jemalloc/jemalloc.h
PATHS ${PC_JEMALLOC_INCLUDEDIR} ${PC_JEMALLOC_INCLUDE_DIRS}
${LIMIT_SEARCH})
# If we're asked to use static linkage, add libjemalloc.a as a preferred library name.
if(JEMALLOC_USE_STATIC)
list(APPEND JEMALLOC_NAMES
"${CMAKE_STATIC_LIBRARY_PREFIX}jemalloc${CMAKE_STATIC_LIBRARY_SUFFIX}")
endif()
list(APPEND JEMALLOC_NAMES jemalloc)
find_library(JEMALLOC_LIBRARY NAMES ${JEMALLOC_NAMES}
HINTS ${PC_JEMALLOC_LIBDIR} ${PC_JEMALLOC_LIBRARY_DIRS}
${LIMIT_SEARCH})
set(JEMALLOC_LIBRARIES ${JEMALLOC_LIBRARY})
set(JEMALLOC_INCLUDE_DIRS ${JEMALLOC_INCLUDE_DIR})
include(FindPackageHandleStandardArgs)
# handle the QUIETLY and REQUIRED arguments and set JEMALLOC_FOUND to TRUE
# if all listed variables are TRUE
find_package_handle_standard_args(JeMalloc DEFAULT_MSG
JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR)
mark_as_advanced(JEMALLOC_INCLUDE_DIR JEMALLOC_LIBRARY)

View File

@@ -1,22 +0,0 @@
find_package (PkgConfig REQUIRED)
pkg_search_module (libarchive_PC QUIET libarchive>=3.4.3)
if(static)
set(LIBARCHIVE_LIB libarchive.a)
else()
set(LIBARCHIVE_LIB archive)
endif()
find_library (archive
NAMES ${LIBARCHIVE_LIB}
HINTS
${libarchive_PC_LIBDIR}
${libarchive_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (LIBARCHIVE_INCLUDE_DIR
NAMES archive.h
HINTS
${libarchive_PC_INCLUDEDIR}
${libarchive_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (lz4_PC QUIET liblz4>=1.9)
endif ()
if(static)
set(LZ4_LIB liblz4.a)
else()
set(LZ4_LIB lz4.so)
endif()
find_library (lz4
NAMES ${LZ4_LIB}
HINTS
${lz4_PC_LIBDIR}
${lz4_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (LZ4_INCLUDE_DIR
NAMES lz4.h
HINTS
${lz4_PC_INCLUDEDIR}
${lz4_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (secp256k1_PC QUIET libsecp256k1)
endif ()
if(static)
set(SECP256K1_LIB libsecp256k1.a)
else()
set(SECP256K1_LIB secp256k1)
endif()
find_library(secp256k1
NAMES ${SECP256K1_LIB}
HINTS
${secp256k1_PC_LIBDIR}
${secp256k1_PC_LIBRARY_PATHS}
NO_DEFAULT_PATH)
find_path (SECP256K1_INCLUDE_DIR
NAMES secp256k1.h
HINTS
${secp256k1_PC_INCLUDEDIR}
${secp256k1_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (snappy_PC QUIET snappy>=1.1.7)
endif ()
if(static)
set(SNAPPY_LIB libsnappy.a)
else()
set(SNAPPY_LIB libsnappy.so)
endif()
find_library (snappy
NAMES ${SNAPPY_LIB}
HINTS
${snappy_PC_LIBDIR}
${snappy_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (SNAPPY_INCLUDE_DIR
NAMES snappy.h
HINTS
${snappy_PC_INCLUDEDIR}
${snappy_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,19 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
# TBD - currently no soci pkgconfig
#pkg_search_module (soci_PC QUIET libsoci_core>=3.2)
endif ()
if(static)
set(SOCI_LIB libsoci.a)
else()
set(SOCI_LIB libsoci_core.so)
endif()
find_library (soci
NAMES ${SOCI_LIB})
find_path (SOCI_INCLUDE_DIR
NAMES soci/soci.h)
message("SOCI FOUND AT: ${SOCI_LIB}")

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (sqlite_PC QUIET sqlite3>=3.26.0)
endif ()
if(static)
set(SQLITE_LIB libsqlite3.a)
else()
set(SQLITE_LIB sqlite3.so)
endif()
find_library (sqlite3
NAMES ${SQLITE_LIB}
HINTS
${sqlite_PC_LIBDIR}
${sqlite_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (SQLITE_INCLUDE_DIR
NAMES sqlite3.h
HINTS
${sqlite_PC_INCLUDEDIR}
${sqlite_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,163 +0,0 @@
#[===================================================================[
NIH dep: libarchive
#]===================================================================]
option (local_libarchive "use local build of libarchive." OFF)
add_library (archive_lib UNKNOWN IMPORTED GLOBAL)
if (NOT local_libarchive)
if (NOT WIN32)
find_package(libarchive_pc REQUIRED)
endif ()
if (archive)
message (STATUS "Found libarchive using pkg-config. Using ${archive}.")
set_target_properties (archive_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${archive}
IMPORTED_LOCATION_RELEASE
${archive}
INTERFACE_INCLUDE_DIRECTORIES
${LIBARCHIVE_INCLUDE_DIR})
# pkg-config can return extra info for static lib linking
# this is probably needed/useful generally, but apply
# to APPLE for now (mostly for homebrew)
if (APPLE AND static AND libarchive_PC_STATIC_LIBRARIES)
message(STATUS "NOTE: libarchive static libs: ${libarchive_PC_STATIC_LIBRARIES}")
# also, APPLE seems to need iconv...maybe linux does too (TBD)
target_link_libraries (archive_lib
INTERFACE iconv ${libarchive_PC_STATIC_LIBRARIES})
endif ()
else ()
## now try searching using the minimal find module that cmake provides
find_package(LibArchive 3.4.3 QUIET)
if (LibArchive_FOUND)
if (static)
# find module doesn't find static libs currently, so we re-search
get_filename_component(_loc ${LibArchive_LIBRARY} DIRECTORY)
find_library(_la_static
NAMES libarchive.a archive_static.lib archive.lib
PATHS ${_loc})
if (_la_static)
set (_la_lib ${_la_static})
else ()
message (WARNING "unable to find libarchive static lib - switching to local build")
set (local_libarchive ON CACHE BOOL "" FORCE)
endif ()
else ()
set (_la_lib ${LibArchive_LIBRARY})
endif ()
if (NOT local_libarchive)
message (STATUS "Found libarchive using module/config. Using ${_la_lib}.")
set_target_properties (archive_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${_la_lib}
IMPORTED_LOCATION_RELEASE
${_la_lib}
INTERFACE_INCLUDE_DIRECTORIES
${LibArchive_INCLUDE_DIRS})
endif ()
else ()
set (local_libarchive ON CACHE BOOL "" FORCE)
endif ()
endif ()
endif()
if (local_libarchive)
set (lib_post "")
if (MSVC)
set (lib_post "_static")
endif ()
ExternalProject_Add (libarchive
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/libarchive/libarchive.git
GIT_TAG v3.4.3
CMAKE_ARGS
# passing the compiler seems to be needed for windows CI, sadly
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DENABLE_LZ4=ON
-ULZ4_*
-DLZ4_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:lz4_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
# because we are building a static lib, this lz4 library doesn't
# actually matter since you can't generally link static libs to other static
# libs. The include files are needed, but the library itself is not (until
# we link our application, at which point we use the lz4 we built above).
# nonetheless, we need to provide a library to libarchive else it will
# NOT include lz4 support when configuring
-DLZ4_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_RELEASE>>
-DENABLE_WERROR=OFF
-DENABLE_TAR=OFF
-DENABLE_TAR_SHARED=OFF
-DENABLE_INSTALL=ON
-DENABLE_NETTLE=OFF
-DENABLE_OPENSSL=OFF
-DENABLE_LZO=OFF
-DENABLE_LZMA=OFF
-DENABLE_ZLIB=OFF
-DENABLE_BZip2=OFF
-DENABLE_LIBXML2=OFF
-DENABLE_EXPAT=OFF
-DENABLE_PCREPOSIX=OFF
-DENABLE_LibGCC=OFF
-DENABLE_CNG=OFF
-DENABLE_CPIO=OFF
-DENABLE_CPIO_SHARED=OFF
-DENABLE_CAT=OFF
-DENABLE_CAT_SHARED=OFF
-DENABLE_XATTR=OFF
-DENABLE_ACL=OFF
-DENABLE_ICONV=OFF
-DENABLE_TEST=OFF
-DENABLE_COVERAGE=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LIST_SEPARATOR ::
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--target archive_static
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/libarchive/$<CONFIG>/${ep_lib_prefix}archive${lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/libarchive
>
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS lz4_lib
BUILD_BYPRODUCTS
<BINARY_DIR>/libarchive/${ep_lib_prefix}archive${lib_post}${ep_lib_suffix}
<BINARY_DIR>/libarchive/${ep_lib_prefix}archive${lib_post}_d${ep_lib_suffix}
)
ExternalProject_Get_Property (libarchive BINARY_DIR)
ExternalProject_Get_Property (libarchive SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (libarchive)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/libarchive)
set_target_properties (archive_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/libarchive/${ep_lib_prefix}archive${lib_post}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/libarchive/${ep_lib_prefix}archive${lib_post}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/libarchive
INTERFACE_COMPILE_DEFINITIONS
LIBARCHIVE_STATIC)
endif()
add_dependencies (archive_lib libarchive)
target_link_libraries (archive_lib INTERFACE lz4_lib)
target_link_libraries (ripple_libs INTERFACE archive_lib)
exclude_if_included (libarchive)
exclude_if_included (archive_lib)
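
The same generator-expression forwarding shows up again below for rocksdb (lz4/snappy) and soci (sqlite). A minimal sketch of the pattern, using placeholder target and repository names that are not taken from this build:

# Illustrative sketch only (placeholder names): forward an imported target's
# include directories and per-config library location into a sub-build.
# The include list is JOINed with "::" and re-split via LIST_SEPARATOR,
# because literal ";" separators would otherwise be lost in CMAKE_ARGS.
ExternalProject_Add (consumer_ep
  PREFIX ${nih_cache_path}
  GIT_REPOSITORY https://example.invalid/consumer.git  # placeholder URL
  GIT_TAG v1.0.0                                       # placeholder tag
  CMAKE_ARGS
    -DDEP_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:dep_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
    -DDEP_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:dep_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:dep_lib,IMPORTED_LOCATION_RELEASE>>
  LIST_SEPARATOR ::
  DEPENDS dep_lib)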

View File

@@ -1,79 +0,0 @@
#[===================================================================[
NIH dep: lz4
#]===================================================================]
add_library (lz4_lib STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(lz4)
endif()
if(lz4)
set_target_properties (lz4_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${lz4}
IMPORTED_LOCATION_RELEASE
${lz4}
INTERFACE_INCLUDE_DIRECTORIES
${LZ4_INCLUDE_DIR})
else()
ExternalProject_Add (lz4
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/lz4/lz4.git
GIT_TAG v1.9.2
SOURCE_SUBDIR contrib/cmake_unofficial
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_STATIC_LIBS=ON
-DBUILD_SHARED_LIBS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--target lz4_static
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}lz4$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}lz4${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}lz4_d${ep_lib_suffix}
)
ExternalProject_Get_Property (lz4 BINARY_DIR)
ExternalProject_Get_Property (lz4 SOURCE_DIR)
file (MAKE_DIRECTORY ${SOURCE_DIR}/lz4)
set_target_properties (lz4_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}lz4_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}lz4${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/lib)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (lz4)
endif ()
add_dependencies (lz4_lib lz4)
target_link_libraries (ripple_libs INTERFACE lz4_lib)
exclude_if_included (lz4)
endif()
exclude_if_included (lz4_lib)

View File

@@ -1,31 +0,0 @@
#[===================================================================[
NIH dep: nudb
NuDB is header-only, thus is an INTERFACE lib in CMake.
TODO: move the library definition into NuDB repo and add
proper targets and export/install
#]===================================================================]
if (is_root_project) # NuDB not needed in the case of xrpl_core inclusion build
add_library (nudb INTERFACE)
FetchContent_Declare(
nudb_src
GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
GIT_TAG 2.0.5
)
FetchContent_GetProperties(nudb_src)
if(NOT nudb_src_POPULATED)
message (STATUS "Pausing to download NuDB...")
FetchContent_Populate(nudb_src)
endif()
file(TO_CMAKE_PATH "${nudb_src_SOURCE_DIR}" nudb_src_SOURCE_DIR)
# specify as system includes so as to avoid warnings
target_include_directories (nudb SYSTEM INTERFACE ${nudb_src_SOURCE_DIR}/include)
target_link_libraries (nudb
INTERFACE
Boost::thread
Boost::system)
add_library (NIH::nudb ALIAS nudb)
target_link_libraries (ripple_libs INTERFACE NIH::nudb)
endif ()

View File

@@ -1,48 +0,0 @@
#[===================================================================[
NIH dep: openssl
#]===================================================================]
#[===============================================[
OPENSSL_ROOT_DIR is the only variable that
FindOpenSSL honors for locating, so convert any
OPENSSL_ROOT vars to this
#]===============================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set (OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (HOMEBREW)
execute_process (COMMAND ${HOMEBREW} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static)
set (OPENSSL_USE_STATIC_LIBS ON)
endif ()
set (OPENSSL_MSVC_STATIC_RT ON)
find_package (OpenSSL 1.1.1 REQUIRED)
target_link_libraries (ripple_libs
INTERFACE
OpenSSL::SSL
OpenSSL::Crypto)
# disable SSLv2...this can also be done when building/configuring OpenSSL
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2)
#[=========================================================[
https://gitlab.kitware.com/cmake/cmake/issues/16885
depending on how openssl is built, it might depend
on zlib. In fact, the openssl find package should
figure this out for us, but it does not currently...
so let's add zlib ourselves to the lib list
TODO: investigate linking to static zlib for static
build option
#]=========================================================]
find_package (ZLIB)
set (has_zlib FALSE)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
set (has_zlib TRUE)
endif ()
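
As an illustrative aside (the install prefix is assumed, not part of this build), the same hint variables can be seeded from an initial-cache file or preset when OpenSSL lives in a non-default location:

# Hypothetical initial-cache snippet for pointing FindOpenSSL at a Homebrew OpenSSL
set(OPENSSL_ROOT_DIR "/opt/homebrew/opt/openssl@1.1" CACHE PATH "OpenSSL install prefix")
set(OPENSSL_USE_STATIC_LIBS ON CACHE BOOL "prefer OpenSSL static libraries")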

View File

@@ -1,70 +0,0 @@
if(reporting)
find_package(PostgreSQL)
if(NOT PostgreSQL_FOUND)
message("find_package did not find postgres")
find_library(postgres NAMES pq libpq libpq-dev pq-dev postgresql-devel)
find_path(libpq-fe NAMES libpq-fe.h PATH_SUFFIXES postgresql pgsql include)
if(NOT libpq-fe_FOUND OR NOT postgres_FOUND)
message("No system installed Postgres found. Will build")
add_library(postgres SHARED IMPORTED GLOBAL)
add_library(pgport SHARED IMPORTED GLOBAL)
add_library(pgcommon SHARED IMPORTED GLOBAL)
ExternalProject_Add(postgres_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/postgres/postgres.git
GIT_TAG REL_14_5
CONFIGURE_COMMAND ./configure --without-readline > /dev/null
BUILD_COMMAND ${CMAKE_COMMAND} -E env --unset=MAKELEVEL make
UPDATE_COMMAND ""
BUILD_IN_SOURCE 1
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/src/interfaces/libpq/${ep_lib_prefix}pq.a
<BINARY_DIR>/src/common/${ep_lib_prefix}pgcommon.a
<BINARY_DIR>/src/port/${ep_lib_prefix}pgport.a
LOG_BUILD TRUE
)
ExternalProject_Get_Property (postgres_src SOURCE_DIR)
ExternalProject_Get_Property (postgres_src BINARY_DIR)
set (postgres_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${postgres_src_SOURCE_DIR})
list(APPEND INCLUDE_DIRS
${SOURCE_DIR}/src/include
${SOURCE_DIR}/src/interfaces/libpq
)
set_target_properties(postgres PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/src/interfaces/libpq/${ep_lib_prefix}pq.a
INTERFACE_INCLUDE_DIRECTORIES
"${INCLUDE_DIRS}"
)
set_target_properties(pgcommon PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/src/common/${ep_lib_prefix}pgcommon.a
INTERFACE_INCLUDE_DIRECTORIES
"${INCLUDE_DIRS}"
)
set_target_properties(pgport PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/src/port/${ep_lib_prefix}pgport.a
INTERFACE_INCLUDE_DIRECTORIES
"${INCLUDE_DIRS}"
)
add_dependencies(postgres postgres_src)
add_dependencies(pgcommon postgres_src)
add_dependencies(pgport postgres_src)
file(TO_CMAKE_PATH "${postgres_src_SOURCE_DIR}" postgres_src_SOURCE_DIR)
target_link_libraries(ripple_libs INTERFACE postgres pgcommon pgport)
else()
message("Found system installed Postgres via find_libary")
target_include_directories(ripple_libs INTERFACE ${libpq-fe})
target_link_libraries(ripple_libs INTERFACE ${postgres})
endif()
else()
message("Found system installed Postgres via find_package")
target_include_directories(ripple_libs INTERFACE ${PostgreSQL_INCLUDE_DIRS})
target_link_libraries(ripple_libs INTERFACE ${PostgreSQL_LIBRARIES})
endif()
endif()

View File

@@ -1,156 +0,0 @@
#[===================================================================[
import protobuf (lib and compiler) and create a lib
from our proto message definitions. If the system protobuf
is not found, fallback on EP to download and build a version
from official source.
#]===================================================================]
if (static)
set (Protobuf_USE_STATIC_LIBS ON)
endif ()
find_package (Protobuf 3.8)
if (is_multiconfig)
set(protobuf_protoc_lib ${Protobuf_PROTOC_LIBRARIES})
else ()
string(TOUPPER ${CMAKE_BUILD_TYPE} upper_cmake_build_type)
set(protobuf_protoc_lib ${Protobuf_PROTOC_LIBRARY_${upper_cmake_build_type}})
endif ()
if (local_protobuf OR NOT (Protobuf_FOUND AND Protobuf_PROTOC_EXECUTABLE AND protobuf_protoc_lib))
include (GNUInstallDirs)
message (STATUS "using local protobuf build.")
set(protobuf_reqs Protobuf_PROTOC_EXECUTABLE protobuf_protoc_lib)
foreach(lib ${protobuf_reqs})
if(NOT ${lib})
message(STATUS "Couldn't find ${lib}")
endif()
endforeach()
if (WIN32)
# protobuf prepends lib even on windows
set (pbuf_lib_pre "lib")
else ()
set (pbuf_lib_pre ${ep_lib_prefix})
endif ()
# for the external project build of protobuf, we currently ignore the
# static option and always build static libs here. This is consistent
# with our other EP builds. Dynamic libs in an EP would add complexity
# because we'd need to get them into the runtime path, and probably
# install them.
ExternalProject_Add (protobuf_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/protocolbuffers/protobuf.git
GIT_TAG v3.8.0
SOURCE_SUBDIR cmake
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_INSTALL_PREFIX=<BINARY_DIR>/_installed_
-Dprotobuf_BUILD_TESTS=OFF
-Dprotobuf_BUILD_EXAMPLES=OFF
-Dprotobuf_BUILD_PROTOC_BINARIES=ON
-Dprotobuf_MSVC_STATIC_RUNTIME=ON
-DBUILD_SHARED_LIBS=OFF
-Dprotobuf_BUILD_SHARED_LIBS=OFF
-DCMAKE_DEBUG_POSTFIX=_d
-Dprotobuf_DEBUG_POSTFIX=_d
-Dprotobuf_WITH_ZLIB=$<IF:$<BOOL:${has_zlib}>,ON,OFF>
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON}>
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E env --unset=DESTDIR ${CMAKE_COMMAND} --build . --config $<CONFIG> --target install
BUILD_BYPRODUCTS
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf${ep_lib_suffix}
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf_d${ep_lib_suffix}
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc${ep_lib_suffix}
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc_d${ep_lib_suffix}
<BINARY_DIR>/_installed_/bin/protoc${CMAKE_EXECUTABLE_SUFFIX}
)
ExternalProject_Get_Property (protobuf_src BINARY_DIR)
ExternalProject_Get_Property (protobuf_src SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (protobuf_src)
endif ()
exclude_if_included (protobuf_src)
if (NOT TARGET protobuf::libprotobuf)
add_library (protobuf::libprotobuf STATIC IMPORTED GLOBAL)
endif ()
file (MAKE_DIRECTORY ${BINARY_DIR}/_installed_/include)
set_target_properties (protobuf::libprotobuf PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (protobuf::libprotobuf protobuf_src)
exclude_if_included (protobuf::libprotobuf)
if (NOT TARGET protobuf::libprotoc)
add_library (protobuf::libprotoc STATIC IMPORTED GLOBAL)
endif ()
set_target_properties (protobuf::libprotoc PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (protobuf::libprotoc protobuf_src)
exclude_if_included (protobuf::libprotoc)
if (NOT TARGET protobuf::protoc)
add_executable (protobuf::protoc IMPORTED)
exclude_if_included (protobuf::protoc)
endif ()
set_target_properties (protobuf::protoc PROPERTIES
IMPORTED_LOCATION "${BINARY_DIR}/_installed_/bin/protoc${CMAKE_EXECUTABLE_SUFFIX}")
add_dependencies (protobuf::protoc protobuf_src)
else ()
if (NOT TARGET protobuf::protoc)
if (EXISTS "${Protobuf_PROTOC_EXECUTABLE}")
add_executable (protobuf::protoc IMPORTED)
set_target_properties (protobuf::protoc PROPERTIES
IMPORTED_LOCATION "${Protobuf_PROTOC_EXECUTABLE}")
else ()
message (FATAL_ERROR "Protobuf import failed")
endif ()
endif ()
endif ()
set(output_dir ${CMAKE_BINARY_DIR}/proto_gen)
file(MAKE_DIRECTORY ${output_dir})
set(ccbd ${CMAKE_CURRENT_BINARY_DIR})
set(CMAKE_CURRENT_BINARY_DIR ${output_dir})
protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS src/ripple/proto/ripple.proto)
set(CMAKE_CURRENT_BINARY_DIR ${ccbd})
target_include_directories(xrpl_core SYSTEM PUBLIC
# The generated implementation imports the header relative to the output
# directory.
$<BUILD_INTERFACE:${output_dir}>
$<BUILD_INTERFACE:${output_dir}/src>
)
target_sources(xrpl_core PRIVATE ${output_dir}/src/ripple/proto/ripple.pb.cc)
install(
FILES ${output_dir}/src/ripple/proto/ripple.pb.h
DESTINATION include/ripple/proto)
target_link_libraries(xrpl_core PUBLIC protobuf::libprotobuf)
target_compile_options(xrpl_core
PUBLIC
$<$<BOOL:${is_xcode}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
)

View File

@@ -1,177 +0,0 @@
#[===================================================================[
NIH dep: rocksdb
#]===================================================================]
add_library (rocksdb_lib UNKNOWN IMPORTED GLOBAL)
set_target_properties (rocksdb_lib
PROPERTIES INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1)
option (local_rocksdb "use local build of rocksdb." OFF)
if (NOT local_rocksdb)
find_package (RocksDB 6.27 QUIET CONFIG)
if (TARGET RocksDB::rocksdb)
message (STATUS "Found RocksDB using config.")
get_target_property (_rockslib_l RocksDB::rocksdb IMPORTED_LOCATION_DEBUG)
if (_rockslib_l)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_DEBUG ${_rockslib_l})
endif ()
get_target_property (_rockslib_l RocksDB::rocksdb IMPORTED_LOCATION_RELEASE)
if (_rockslib_l)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_RELEASE ${_rockslib_l})
endif ()
get_target_property (_rockslib_l RocksDB::rocksdb IMPORTED_LOCATION)
if (_rockslib_l)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION ${_rockslib_l})
endif ()
get_target_property (_rockslib_i RocksDB::rocksdb INTERFACE_INCLUDE_DIRECTORIES)
if (_rockslib_i)
set_target_properties (rocksdb_lib PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${_rockslib_i})
endif ()
target_link_libraries (ripple_libs INTERFACE RocksDB::rocksdb)
else ()
# using a find module with rocksdb is difficult because
# you have no idea how it was configured (transitive dependencies).
# the code below will generally find rocksdb using the module, but
# will then result in linker errors for static linkage since the
# transitive dependencies are unknown. force local build here for now, but leave the code as
# a placeholder for future investigation.
if (static)
set (local_rocksdb ON CACHE BOOL "" FORCE)
# TBD if there is some way to extract transitive deps..then:
#set (RocksDB_USE_STATIC ON)
else ()
find_package (RocksDB 6.27 MODULE)
if (ROCKSDB_FOUND)
if (RocksDB_LIBRARY_DEBUG)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_DEBUG ${RocksDB_LIBRARY_DEBUG})
endif ()
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_RELEASE ${RocksDB_LIBRARIES})
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION ${RocksDB_LIBRARIES})
set_target_properties (rocksdb_lib PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${RocksDB_INCLUDE_DIRS})
else ()
set (local_rocksdb ON CACHE BOOL "" FORCE)
endif ()
endif ()
endif ()
endif ()
if (local_rocksdb)
message (STATUS "Using local build of RocksDB.")
ExternalProject_Add (rocksdb
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/facebook/rocksdb.git
GIT_TAG v6.27.3
PATCH_COMMAND
# only used by windows build
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_CURRENT_SOURCE_DIR}/cmake/rocks_thirdparty.inc
<SOURCE_DIR>/thirdparty.inc
COMMAND
# fix up their build version file so the embedded values
# don't change on every build
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_CURRENT_SOURCE_DIR}/cmake/rocksdb_build_version.cc.in
<SOURCE_DIR>/util/build_version.cc.in
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON}>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DWITH_JEMALLOC=$<IF:$<BOOL:${jemalloc}>,ON,OFF>
-DWITH_SNAPPY=ON
-DWITH_LZ4=ON
-DWITH_ZLIB=OFF
-DUSE_RTTI=ON
-DWITH_ZSTD=OFF
-DWITH_GFLAGS=OFF
-DWITH_BZ2=OFF
-ULZ4_*
-Ulz4_*
-Dlz4_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:lz4_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-Dlz4_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_RELEASE>>
-Dlz4_FOUND=ON
-USNAPPY_*
-Usnappy_*
-USnappy_*
-Dsnappy_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:snappy_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-Dsnappy_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_RELEASE>>
-Dsnappy_FOUND=ON
-DSnappy_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:snappy_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DSnappy_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_RELEASE>>
-DSnappy_FOUND=ON
-DWITH_MD_LIBRARY=OFF
-DWITH_RUNTIME_DEBUG=$<IF:$<CONFIG:Debug>,ON,OFF>
-DFAIL_ON_WARNINGS=OFF
-DWITH_ASAN=OFF
-DWITH_TSAN=OFF
-DWITH_UBSAN=OFF
-DWITH_NUMA=OFF
-DWITH_TBB=OFF
-DWITH_WINDOWS_UTF8_FILENAMES=OFF
-DWITH_XPRESS=OFF
-DPORTABLE=ON
-DFORCE_SSE42=OFF
-DDISABLE_STALL_NOTIF=OFF
-DOPTDBG=ON
-DROCKSDB_LITE=OFF
-DWITH_FALLOCATE=ON
-DWITH_LIBRADOS=OFF
-DWITH_JNI=OFF
-DROCKSDB_INSTALL_ON_WINDOWS=OFF
-DWITH_TESTS=OFF
-DWITH_TOOLS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -MP /DNDEBUG"
>
$<$<NOT:$<BOOL:${MSVC}>>:
"-DCMAKE_CXX_FLAGS=-DNDEBUG"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}rocksdb$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
LIST_SEPARATOR ::
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS snappy_lib lz4_lib
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}rocksdb${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}rocksdb_d${ep_lib_suffix}
)
ExternalProject_Get_Property (rocksdb BINARY_DIR)
ExternalProject_Get_Property (rocksdb SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (rocksdb)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
set_target_properties (rocksdb_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}rocksdb_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}rocksdb${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies (rocksdb_lib rocksdb)
exclude_if_included (rocksdb)
endif ()
target_link_libraries (rocksdb_lib
INTERFACE
snappy_lib
lz4_lib
$<$<BOOL:${MSVC}>:rpcrt4>)
exclude_if_included (rocksdb_lib)
target_link_libraries (ripple_libs INTERFACE rocksdb_lib)

View File

@@ -1,58 +0,0 @@
#[===================================================================[
NIH dep: secp256k1
#]===================================================================]
add_library (secp256k1_lib STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(secp256k1)
endif()
if(secp256k1)
set_target_properties (secp256k1_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${secp256k1}
IMPORTED_LOCATION_RELEASE
${secp256k1}
INTERFACE_INCLUDE_DIRECTORIES
${SECP256K1_INCLUDE_DIR})
add_library (secp256k1 ALIAS secp256k1_lib)
add_library (NIH::secp256k1 ALIAS secp256k1_lib)
else()
set(INSTALL_SECP256K1 true)
add_library (secp256k1 STATIC
external/secp256k1/src/secp256k1.c)
target_compile_definitions (secp256k1
PRIVATE
USE_NUM_NONE
USE_FIELD_10X26
USE_FIELD_INV_BUILTIN
USE_SCALAR_8X32
USE_SCALAR_INV_BUILTIN)
target_include_directories (secp256k1
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/external>
$<INSTALL_INTERFACE:include>
PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/external/secp256k1)
target_compile_options (secp256k1
PRIVATE
$<$<BOOL:${MSVC}>:-wd4319>
$<$<NOT:$<BOOL:${MSVC}>>:
-Wno-deprecated-declarations
-Wno-unused-function
>
$<$<BOOL:${is_gcc}>:-Wno-nonnull-compare>)
target_link_libraries (ripple_libs INTERFACE NIH::secp256k1)
#[===========================[
headers installation
#]===========================]
install (
FILES
external/secp256k1/include/secp256k1.h
DESTINATION include/secp256k1/include)
add_library (NIH::secp256k1 ALIAS secp256k1)
endif()

View File

@@ -1,77 +0,0 @@
#[===================================================================[
NIH dep: snappy
#]===================================================================]
add_library (snappy_lib STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(snappy)
endif()
if(snappy)
set_target_properties (snappy_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${snappy}
IMPORTED_LOCATION_RELEASE
${snappy}
INTERFACE_INCLUDE_DIRECTORIES
${SNAPPY_INCLUDE_DIR})
else()
ExternalProject_Add (snappy
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/google/snappy.git
GIT_TAG 1.1.7
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DSNAPPY_BUILD_TESTS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_CXX_FLAGS_DEBUG=-MTd"
"-DCMAKE_CXX_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}snappy$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E copy_if_different <BINARY_DIR>/config.h <BINARY_DIR>/snappy-stubs-public.h <SOURCE_DIR>
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}snappy${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}snappy_d${ep_lib_suffix}
)
ExternalProject_Get_Property (snappy BINARY_DIR)
ExternalProject_Get_Property (snappy SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (snappy)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/snappy)
set_target_properties (snappy_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}snappy_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}snappy${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR})
endif()
add_dependencies (snappy_lib snappy)
target_link_libraries (ripple_libs INTERFACE snappy_lib)
exclude_if_included (snappy)
exclude_if_included (snappy_lib)

View File

@@ -1,165 +0,0 @@
#[===================================================================[
NIH dep: soci
#]===================================================================]
foreach (_comp core empty sqlite3)
add_library ("soci_${_comp}" STATIC IMPORTED GLOBAL)
endforeach ()
if (NOT WIN32)
find_package(soci)
endif()
if (soci)
foreach (_comp core empty sqlite3)
set_target_properties ("soci_${_comp}" PROPERTIES
IMPORTED_LOCATION_DEBUG
${soci}
IMPORTED_LOCATION_RELEASE
${soci}
INTERFACE_INCLUDE_DIRECTORIES
${SOCI_INCLUDE_DIR})
endforeach ()
else()
set (soci_lib_pre ${ep_lib_prefix})
set (soci_lib_post "")
if (WIN32)
# for some reason soci on windows still prepends lib (non-standard)
set (soci_lib_pre lib)
# this version in the name might change if/when we change versions of soci
set (soci_lib_post "_4_0")
endif ()
get_target_property (_boost_incs Boost::date_time INTERFACE_INCLUDE_DIRECTORIES)
get_target_property (_boost_dt Boost::date_time IMPORTED_LOCATION)
if (NOT _boost_dt)
get_target_property (_boost_dt Boost::date_time IMPORTED_LOCATION_RELEASE)
endif ()
if (NOT _boost_dt)
get_target_property (_boost_dt Boost::date_time IMPORTED_LOCATION_DEBUG)
endif ()
ExternalProject_Add (soci
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/SOCI/soci.git
GIT_TAG 04e1870294918d20761736743bb6136314c42dd5
# We had an issue with soci integer range checking for boost::optional
# and needed to remove the exception that SOCI throws in this case.
# This is *probably* a bug in SOCI, but it has never been investigated further
# nor reported to the maintainers.
# This cmake script comments out the lines in question.
# This patch process is likely fragile and should be reviewed carefully
# whenever we update the GIT_TAG above.
PATCH_COMMAND
${CMAKE_COMMAND} -D RIPPLED_SOURCE=${CMAKE_CURRENT_SOURCE_DIR}
-P ${CMAKE_CURRENT_SOURCE_DIR}/cmake/soci_patch.cmake
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${CMAKE_TOOLCHAIN_FILE}>:-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}>
$<$<BOOL:${VCPKG_TARGET_TRIPLET}>:-DVCPKG_TARGET_TRIPLET=${VCPKG_TARGET_TRIPLET}>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON}>
-DCMAKE_PREFIX_PATH=${CMAKE_BINARY_DIR}/sqlite3
-DCMAKE_MODULE_PATH=${CMAKE_CURRENT_SOURCE_DIR}/cmake
-DCMAKE_INCLUDE_PATH=$<JOIN:$<TARGET_PROPERTY:sqlite,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DCMAKE_LIBRARY_PATH=${sqlite_BINARY_DIR}
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DSOCI_CXX_C11=ON
-DSOCI_STATIC=ON
-DSOCI_LIBDIR=lib
-DSOCI_SHARED=OFF
-DSOCI_TESTS=OFF
# hacks to work around the fact that soci doesn't currently use
# boost imported targets in its cmake. If they switch to
# proper imported targets, this next line can be removed
# (as well as the get_property above that sets _boost_incs)
-DBoost_INCLUDE_DIRS=$<JOIN:${_boost_incs},::>
-DBoost_INCLUDE_DIR=$<JOIN:${_boost_incs},::>
-DBOOST_ROOT=${BOOST_ROOT}
-DWITH_BOOST=ON
-DBoost_FOUND=ON
-DBoost_NO_BOOST_CMAKE=ON
-DBoost_DATE_TIME_FOUND=ON
-DSOCI_HAVE_BOOST=ON
-DSOCI_HAVE_BOOST_DATE_TIME=ON
-DBoost_DATE_TIME_LIBRARY=${_boost_dt}
-DSOCI_DB2=OFF
-DSOCI_FIREBIRD=OFF
-DSOCI_MYSQL=OFF
-DSOCI_ODBC=OFF
-DSOCI_ORACLE=OFF
-DSOCI_POSTGRESQL=OFF
-DSOCI_SQLITE3=ON
-DSQLITE3_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:sqlite,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DSQLITE3_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:sqlite,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:sqlite,IMPORTED_LOCATION_RELEASE>>
$<$<BOOL:${APPLE}>:-DCMAKE_FIND_FRAMEWORK=LAST>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_CXX_FLAGS_DEBUG=-MTd"
"-DCMAKE_CXX_FLAGS_RELEASE=-MT"
>
$<$<NOT:$<BOOL:${MSVC}>>:
"-DCMAKE_CXX_FLAGS=-Wno-deprecated-declarations"
>
# SEE: https://github.com/SOCI/soci/issues/640
$<$<AND:$<BOOL:${is_gcc}>,$<VERSION_GREATER_EQUAL:${CMAKE_CXX_COMPILER_VERSION},8>>:
"-DCMAKE_CXX_FLAGS=-Wno-deprecated-declarations -Wno-error=format-overflow -Wno-format-overflow -Wno-error=format-truncation"
>
LIST_SEPARATOR ::
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_core${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_empty${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_sqlite3${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib
>
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS sqlite
BUILD_BYPRODUCTS
<BINARY_DIR>/lib/${soci_lib_pre}soci_core${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_core${soci_lib_post}_d${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_empty${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_empty${soci_lib_post}_d${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_sqlite3${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_sqlite3${soci_lib_post}_d${ep_lib_suffix}
)
ExternalProject_Get_Property (soci BINARY_DIR)
ExternalProject_Get_Property (soci SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (soci)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
file (MAKE_DIRECTORY ${BINARY_DIR}/include)
foreach (_comp core empty sqlite3)
set_target_properties ("soci_${_comp}" PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/lib/${soci_lib_pre}soci_${_comp}${soci_lib_post}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/lib/${soci_lib_pre}soci_${_comp}${soci_lib_post}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
"${SOURCE_DIR}/include;${BINARY_DIR}/include")
add_dependencies ("soci_${_comp}" soci) # something has to depend on the ExternalProject to trigger it
target_link_libraries (ripple_libs INTERFACE "soci_${_comp}")
if (NOT _comp STREQUAL "core")
target_link_libraries ("soci_${_comp}" INTERFACE soci_core)
endif ()
endforeach ()
endif()
foreach (_comp core empty sqlite3)
exclude_if_included ("soci_${_comp}")
endforeach ()
exclude_if_included (soci)

View File

@@ -1,93 +0,0 @@
#[===================================================================[
NIH dep: sqlite
#]===================================================================]
add_library (sqlite STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(sqlite)
endif()
if(sqlite3)
set_target_properties (sqlite PROPERTIES
IMPORTED_LOCATION_DEBUG
${sqlite3}
IMPORTED_LOCATION_RELEASE
${sqlite3}
INTERFACE_INCLUDE_DIRECTORIES
${SQLITE_INCLUDE_DIR})
else()
ExternalProject_Add (sqlite3
PREFIX ${nih_cache_path}
# sqlite doesn't use git, but it provides versioned tarballs
URL https://www.sqlite.org/2018/sqlite-amalgamation-3260000.zip
http://www.sqlite.org/2018/sqlite-amalgamation-3260000.zip
https://www2.sqlite.org/2018/sqlite-amalgamation-3260000.zip
http://www2.sqlite.org/2018/sqlite-amalgamation-3260000.zip
# ^^^ version is apparent in the URL: 3260000 => 3.26.0
URL_HASH SHA256=de5dcab133aa339a4cf9e97c40aa6062570086d6085d8f9ad7bc6ddf8a52096e
# Don't need to worry about MITM attacks too much because the download
# is checked against a strong hash
TLS_VERIFY false
# we wrote a very simple CMake file to build sqlite and copy it here
# so that we can build with CMake. sqlite doesn't generally provide
# a build system for the single amalgamation source file.
PATCH_COMMAND
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_CURRENT_SOURCE_DIR}/cmake/CMake_sqlite3.txt
<SOURCE_DIR>/CMakeLists.txt
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}sqlite3$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}sqlite3${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}sqlite3_d${ep_lib_suffix}
)
ExternalProject_Get_Property (sqlite3 BINARY_DIR)
ExternalProject_Get_Property (sqlite3 SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (sqlite3)
endif ()
set_target_properties (sqlite PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}sqlite3_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}sqlite3${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR})
add_dependencies (sqlite sqlite3)
exclude_if_included (sqlite3)
endif()
target_link_libraries (sqlite INTERFACE $<$<NOT:$<BOOL:${MSVC}>>:dl>)
target_link_libraries (ripple_libs INTERFACE sqlite)
exclude_if_included (sqlite)
set(sqlite_BINARY_DIR ${BINARY_DIR})

View File

@@ -1,84 +1 @@
#[===================================================================[
NIH dep: wasmedge: web assembly runtime for hooks.
#]===================================================================]
find_package(Curses)
if(CURSES_FOUND)
include_directories(${CURSES_INCLUDE_DIR})
target_link_libraries(ripple_libs INTERFACE ${CURSES_LIBRARY})
else()
message(WARNING "CURSES library not found... (only important for mac builds)")
endif()
find_package(LLVM REQUIRED CONFIG)
message(STATUS "Found LLVM ${LLVM_PACKAGE_VERSION}")
message(STATUS "Using LLVMConfig.cmake in: ${LLVM_DIR}")
ExternalProject_Add (wasmedge_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/WasmEdge/WasmEdge.git
GIT_TAG 0.11.2
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
-DWASMEDGE_BUILD_SHARED_LIB=OFF
-DWASMEDGE_BUILD_STATIC_LIB=ON
-DWASMEDGE_BUILD_AOT_RUNTIME=ON
-DWASMEDGE_FORCE_DISABLE_LTO=ON
-DWASMEDGE_LINK_LLVM_STATIC=ON
-DWASMEDGE_LINK_TOOLS_STATIC=ON
-DWASMEDGE_BUILD_PLUGINS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DLLVM_DIR=${LLVM_DIR}
-DLLVM_LIBRARY_DIR=${LLVM_LIBRARY_DIR}
-DLLVM_ENABLE_TERMINFO=OFF
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP -march=native"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_CONFIGURE ON
LOG_BUILD ON
LOG_CONFIGURE ON
COMMAND
pwd
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/lib/api/libwasmedge.a
)
add_library (wasmedge STATIC IMPORTED GLOBAL)
ExternalProject_Get_Property (wasmedge_src BINARY_DIR)
ExternalProject_Get_Property (wasmedge_src SOURCE_DIR)
set (wasmedge_src_BINARY_DIR "${BINARY_DIR}")
add_dependencies (wasmedge wasmedge_src)
execute_process(
COMMAND
mkdir -p "${wasmedge_src_BINARY_DIR}/include/api"
)
set_target_properties (wasmedge PROPERTIES
IMPORTED_LOCATION_DEBUG
"${wasmedge_src_BINARY_DIR}/lib/api/libwasmedge.a"
IMPORTED_LOCATION_RELEASE
"${wasmedge_src_BINARY_DIR}/lib/api/libwasmedge.a"
INTERFACE_INCLUDE_DIRECTORIES
"${wasmedge_src_BINARY_DIR}/include/api/"
)
target_link_libraries (ripple_libs INTERFACE wasmedge)
#RH NOTE: some compilers / versions of some libraries need these, most don't
find_library(XAR_LIBRARY NAMES xar)
if(XAR_LIBRARY)
target_link_libraries(ripple_libs INTERFACE ${XAR_LIBRARY})
else()
message(WARNING "xar library not found... (only important for mac builds)")
endif()
add_library (wasmedge::wasmedge ALIAS wasmedge)
find_package(wasmedge REQUIRED)

View File

@@ -1,167 +0,0 @@
if(reporting)
find_library(cassandra NAMES cassandra)
if(NOT cassandra)
message("System installed Cassandra cpp driver not found. Will build")
find_library(zlib NAMES zlib1g-dev zlib-devel zlib z)
if(NOT zlib)
message("zlib not found. will build")
add_library(zlib STATIC IMPORTED GLOBAL)
ExternalProject_Add(zlib_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/madler/zlib.git
GIT_TAG v1.2.12
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}z.a
LOG_BUILD TRUE
LOG_CONFIGURE TRUE
)
ExternalProject_Get_Property (zlib_src SOURCE_DIR)
ExternalProject_Get_Property (zlib_src BINARY_DIR)
set (zlib_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${zlib_src_SOURCE_DIR}/include)
set_target_properties (zlib PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${ep_lib_prefix}z.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(zlib zlib_src)
file(TO_CMAKE_PATH "${zlib_src_SOURCE_DIR}" zlib_src_SOURCE_DIR)
endif()
find_library(krb5 NAMES krb5-dev libkrb5-dev)
if(NOT krb5)
message("krb5 not found. will build")
add_library(krb5 STATIC IMPORTED GLOBAL)
ExternalProject_Add(krb5_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/krb5/krb5.git
GIT_TAG krb5-1.20-final
UPDATE_COMMAND ""
CONFIGURE_COMMAND autoreconf src && CFLAGS=-fcommon ./src/configure --enable-static --disable-shared > /dev/null
BUILD_IN_SOURCE 1
BUILD_COMMAND make
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <SOURCE_DIR>/lib/${ep_lib_prefix}krb5.a
LOG_BUILD TRUE
)
ExternalProject_Get_Property (krb5_src SOURCE_DIR)
ExternalProject_Get_Property (krb5_src BINARY_DIR)
set (krb5_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${krb5_src_SOURCE_DIR}/include)
set_target_properties (krb5 PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/lib/${ep_lib_prefix}krb5.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(krb5 krb5_src)
file(TO_CMAKE_PATH "${krb5_src_SOURCE_DIR}" krb5_src_SOURCE_DIR)
endif()
find_library(libuv1 NAMES uv1 libuv1 liubuv1-dev libuv1:amd64)
if(NOT libuv1)
message("libuv1 not found, will build")
add_library(libuv1 STATIC IMPORTED GLOBAL)
ExternalProject_Add(libuv_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/libuv/libuv.git
GIT_TAG v1.44.2
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}uv_a.a
LOG_BUILD TRUE
LOG_CONFIGURE TRUE
)
ExternalProject_Get_Property (libuv_src SOURCE_DIR)
ExternalProject_Get_Property (libuv_src BINARY_DIR)
set (libuv_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${libuv_src_SOURCE_DIR}/include)
set_target_properties (libuv1 PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${ep_lib_prefix}uv_a.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(libuv1 libuv_src)
file(TO_CMAKE_PATH "${libuv_src_SOURCE_DIR}" libuv_src_SOURCE_DIR)
endif()
add_library (cassandra STATIC IMPORTED GLOBAL)
ExternalProject_Add(cassandra_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/datastax/cpp-driver.git
GIT_TAG 2.16.2
CMAKE_ARGS
-DLIBUV_ROOT_DIR=${BINARY_DIR}
-DLIBUV_LIBARY=${BINARY_DIR}/libuv_a.a
-DLIBUV_INCLUDE_DIR=${SOURCE_DIR}/include
-DCASS_BUILD_STATIC=ON
-DCASS_BUILD_SHARED=OFF
-DOPENSSL_ROOT_DIR=/opt/local/openssl
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}cassandra_static.a
LOG_BUILD TRUE
LOG_CONFIGURE TRUE
)
ExternalProject_Get_Property (cassandra_src SOURCE_DIR)
ExternalProject_Get_Property (cassandra_src BINARY_DIR)
set (cassandra_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${cassandra_src_SOURCE_DIR}/include)
set_target_properties (cassandra PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${ep_lib_prefix}cassandra_static.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(cassandra cassandra_src)
if(NOT libuv1)
ExternalProject_Add_StepDependencies(cassandra_src build libuv1)
target_link_libraries(cassandra INTERFACE libuv1)
else()
target_link_libraries(cassandra INTERFACE ${libuv1})
endif()
if(NOT krb5)
ExternalProject_Add_StepDependencies(cassandra_src build krb5)
target_link_libraries(cassandra INTERFACE krb5)
else()
target_link_libraries(cassandra INTERFACE ${krb5})
endif()
if(NOT zlib)
ExternalProject_Add_StepDependencies(cassandra_src build zlib)
target_link_libraries(cassandra INTERFACE zlib)
else()
target_link_libraries(cassandra INTERFACE ${zlib})
endif()
file(TO_CMAKE_PATH "${cassandra_src_SOURCE_DIR}" cassandra_src_SOURCE_DIR)
target_link_libraries(ripple_libs INTERFACE cassandra)
else()
message("Found system installed cassandra cpp driver")
find_path(cassandra_includes NAMES cassandra.h REQUIRED)
target_link_libraries (ripple_libs INTERFACE ${cassandra})
target_include_directories(ripple_libs INTERFACE ${cassandra_includes})
endif()
exclude_if_included (cassandra)
endif()

View File

@@ -1,18 +0,0 @@
#[===================================================================[
NIH dep: date
the main library is header-only, thus is an INTERFACE lib in CMake.
NOTE: this has been accepted into c++20 so can likely be replaced
when we update to that standard
#]===================================================================]
find_package (date QUIET)
if (NOT TARGET date::date)
FetchContent_Declare(
hh_date_src
GIT_REPOSITORY https://github.com/HowardHinnant/date.git
GIT_TAG fc4cf092f9674f2670fb9177edcdee870399b829
)
FetchContent_MakeAvailable(hh_date_src)
endif ()

View File

@@ -1,392 +0,0 @@
# currently linking to unsecure versions...if we switch, we'll
# need to add ssl as a link dependency to the grpc targets
option (use_secure_grpc "use TLS version of grpc libs." OFF)
if (use_secure_grpc)
set (grpc_suffix "")
else ()
set (grpc_suffix "_unsecure")
endif ()
find_package (gRPC 1.23 CONFIG QUIET)
if (TARGET gRPC::gpr AND NOT local_grpc)
get_target_property (_grpc_l gRPC::gpr IMPORTED_LOCATION_DEBUG)
if (NOT _grpc_l)
get_target_property (_grpc_l gRPC::gpr IMPORTED_LOCATION_RELEASE)
endif ()
if (NOT _grpc_l)
get_target_property (_grpc_l gRPC::gpr IMPORTED_LOCATION)
endif ()
message (STATUS "Found cmake config for gRPC. Using ${_grpc_l}.")
else ()
find_package (PkgConfig QUIET)
if (PKG_CONFIG_FOUND)
pkg_check_modules (grpc QUIET "grpc${grpc_suffix}>=1.25" "grpc++${grpc_suffix}" gpr)
endif ()
if (grpc_FOUND)
message (STATUS "Found gRPC using pkg-config. Using ${grpc_gpr_PREFIX}.")
endif ()
add_executable (gRPC::grpc_cpp_plugin IMPORTED)
exclude_if_included (gRPC::grpc_cpp_plugin)
if (grpc_FOUND AND NOT local_grpc)
# use installed grpc (via pkg-config)
macro (add_imported_grpc libname_)
if (static)
set (_search "${CMAKE_STATIC_LIBRARY_PREFIX}${libname_}${CMAKE_STATIC_LIBRARY_SUFFIX}")
else ()
set (_search "${CMAKE_SHARED_LIBRARY_PREFIX}${libname_}${CMAKE_SHARED_LIBRARY_SUFFIX}")
endif()
find_library(_found_${libname_}
NAMES ${_search}
HINTS ${grpc_LIBRARY_DIRS})
if (_found_${libname_})
message (STATUS "importing ${libname_} as ${_found_${libname_}}")
else ()
message (FATAL_ERROR "using pkg-config for grpc, can't find ${_search}")
endif ()
add_library ("gRPC::${libname_}" STATIC IMPORTED GLOBAL)
set_target_properties ("gRPC::${libname_}" PROPERTIES IMPORTED_LOCATION ${_found_${libname_}})
if (grpc_INCLUDE_DIRS)
set_target_properties ("gRPC::${libname_}" PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${grpc_INCLUDE_DIRS})
endif ()
target_link_libraries (ripple_libs INTERFACE "gRPC::${libname_}")
exclude_if_included ("gRPC::${libname_}")
endmacro ()
set_target_properties (gRPC::grpc_cpp_plugin PROPERTIES
IMPORTED_LOCATION "${grpc_gpr_PREFIX}/bin/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}")
pkg_check_modules (cares QUIET libcares)
if (cares_FOUND)
if (static)
set (_search "${CMAKE_STATIC_LIBRARY_PREFIX}cares${CMAKE_STATIC_LIBRARY_SUFFIX}")
set (_prefix cares_STATIC)
set (_static STATIC)
else ()
set (_search "${CMAKE_SHARED_LIBRARY_PREFIX}cares${CMAKE_SHARED_LIBRARY_SUFFIX}")
set (_prefix cares)
set (_static)
endif()
find_library(_location NAMES ${_search} HINTS ${cares_LIBRARY_DIRS})
if (NOT _location)
message (FATAL_ERROR "using pkg-config for grpc, can't find c-ares")
endif ()
if(${_location} MATCHES "\\.a$")
add_library(c-ares::cares STATIC IMPORTED GLOBAL)
else()
add_library(c-ares::cares SHARED IMPORTED GLOBAL)
endif()
set_target_properties (c-ares::cares PROPERTIES
IMPORTED_LOCATION ${_location}
INTERFACE_INCLUDE_DIRECTORIES "${${_prefix}_INCLUDE_DIRS}"
INTERFACE_LINK_OPTIONS "${${_prefix}_LDFLAGS}"
)
exclude_if_included (c-ares::cares)
else ()
message (FATAL_ERROR "using pkg-config for grpc, can't find c-ares")
endif ()
else ()
#[===========================[
c-ares (grpc requires)
#]===========================]
ExternalProject_Add (c-ares_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/c-ares/c-ares.git
GIT_TAG cares-1_15_0
CMAKE_ARGS
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DCMAKE_INSTALL_PREFIX=<BINARY_DIR>/_installed_
-DCARES_SHARED=OFF
-DCARES_STATIC=ON
-DCARES_STATIC_PIC=ON
-DCARES_INSTALL=ON
-DCARES_MSVC_STATIC_RUNTIME=ON
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E env --unset=DESTDIR ${CMAKE_COMMAND} --build . --config $<CONFIG> --target install
BUILD_BYPRODUCTS
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}cares${ep_lib_suffix}
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}cares_d${ep_lib_suffix}
)
exclude_if_included (c-ares_src)
ExternalProject_Get_Property (c-ares_src BINARY_DIR)
set (cares_binary_dir "${BINARY_DIR}")
add_library (c-ares::cares STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${BINARY_DIR}/_installed_/include)
set_target_properties (c-ares::cares PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}cares_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}cares${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (c-ares::cares c-ares_src)
exclude_if_included (c-ares::cares)
if (NOT has_zlib)
#[===========================[
zlib (grpc requires)
#]===========================]
if (MSVC)
set (zlib_debug_postfix "d") # zlib cmake sets this internally for MSVC, so we really don't have a choice
set (zlib_base "zlibstatic")
else ()
set (zlib_debug_postfix "_d")
set (zlib_base "z")
endif ()
ExternalProject_Add (zlib_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/madler/zlib.git
GIT_TAG v1.2.11
CMAKE_ARGS
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=${zlib_debug_postfix}
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DCMAKE_INSTALL_PREFIX=<BINARY_DIR>/_installed_
-DBUILD_SHARED_LIBS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E env --unset=DESTDIR ${CMAKE_COMMAND} --build . --config $<CONFIG> --target install
BUILD_BYPRODUCTS
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}${zlib_base}${ep_lib_suffix}
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}${zlib_base}${zlib_debug_postfix}${ep_lib_suffix}
)
exclude_if_included (zlib_src)
ExternalProject_Get_Property (zlib_src BINARY_DIR)
set (zlib_binary_dir "${BINARY_DIR}")
add_library (ZLIB::ZLIB STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${BINARY_DIR}/_installed_/include)
set_target_properties (ZLIB::ZLIB PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}${zlib_base}${zlib_debug_postfix}${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}${zlib_base}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (ZLIB::ZLIB zlib_src)
exclude_if_included (ZLIB::ZLIB)
endif ()
#[===========================[
grpc
#]===========================]
ExternalProject_Add (grpc_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/grpc/grpc.git
GIT_TAG v1.25.0
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_CXX_STANDARD=17
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${CMAKE_TOOLCHAIN_FILE}>:-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}>
$<$<BOOL:${VCPKG_TARGET_TRIPLET}>:-DVCPKG_TARGET_TRIPLET=${VCPKG_TARGET_TRIPLET}>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON}>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DgRPC_BUILD_TESTS=OFF
-DgRPC_BENCHMARK_PROVIDER=""
-DgRPC_BUILD_CSHARP_EXT=OFF
-DgRPC_MSVC_STATIC_RUNTIME=ON
-DgRPC_INSTALL=OFF
-DgRPC_CARES_PROVIDER=package
-Dc-ares_DIR=${cares_binary_dir}/_installed_/lib/cmake/c-ares
-DgRPC_SSL_PROVIDER=package
-DOPENSSL_ROOT_DIR=${OPENSSL_ROOT_DIR}
-DgRPC_PROTOBUF_PROVIDER=package
-DProtobuf_USE_STATIC_LIBS=$<IF:$<AND:$<BOOL:${Protobuf_FOUND}>,$<NOT:$<BOOL:${static}>>>,OFF,ON>
-DProtobuf_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:protobuf::libprotobuf,INTERFACE_INCLUDE_DIRECTORIES>,:_:>
-DProtobuf_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:protobuf::libprotobuf,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:protobuf::libprotobuf,IMPORTED_LOCATION_RELEASE>>
-DProtobuf_PROTOC_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:protobuf::libprotoc,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:protobuf::libprotoc,IMPORTED_LOCATION_RELEASE>>
-DProtobuf_PROTOC_EXECUTABLE=$<TARGET_PROPERTY:protobuf::protoc,IMPORTED_LOCATION>
-DgRPC_ZLIB_PROVIDER=package
$<$<NOT:$<BOOL:${has_zlib}>>:-DZLIB_ROOT=${zlib_binary_dir}/_installed_>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}grpc${grpc_suffix}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}grpc++${grpc_suffix}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}address_sorting$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}gpr$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}
<BINARY_DIR>
>
LIST_SEPARATOR :_:
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS c-ares_src
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}grpc${grpc_suffix}${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}grpc${grpc_suffix}_d${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}grpc++${grpc_suffix}${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}grpc++${grpc_suffix}_d${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}address_sorting${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}address_sorting_d${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}gpr${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}gpr_d${ep_lib_suffix}
<BINARY_DIR>/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}
)
if (TARGET protobuf_src)
ExternalProject_Add_StepDependencies(grpc_src build protobuf_src)
endif ()
exclude_if_included (grpc_src)
ExternalProject_Get_Property (grpc_src BINARY_DIR)
ExternalProject_Get_Property (grpc_src SOURCE_DIR)
set (grpc_binary_dir "${BINARY_DIR}")
set (grpc_source_dir "${SOURCE_DIR}")
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (grpc_src)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
macro (add_imported_grpc libname_)
add_library ("gRPC::${libname_}" STATIC IMPORTED GLOBAL)
set_target_properties ("gRPC::${libname_}" PROPERTIES
IMPORTED_LOCATION_DEBUG
${grpc_binary_dir}/${ep_lib_prefix}${libname_}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${grpc_binary_dir}/${ep_lib_prefix}${libname_}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${grpc_source_dir}/include)
add_dependencies ("gRPC::${libname_}" grpc_src)
target_link_libraries (ripple_libs INTERFACE "gRPC::${libname_}")
exclude_if_included ("gRPC::${libname_}")
endmacro ()
set_target_properties (gRPC::grpc_cpp_plugin PROPERTIES
IMPORTED_LOCATION "${grpc_binary_dir}/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}")
add_dependencies (gRPC::grpc_cpp_plugin grpc_src)
endif ()
add_imported_grpc (gpr)
add_imported_grpc ("grpc${grpc_suffix}")
add_imported_grpc ("grpc++${grpc_suffix}")
add_imported_grpc (address_sorting)
target_link_libraries ("gRPC::grpc${grpc_suffix}" INTERFACE c-ares::cares gRPC::gpr gRPC::address_sorting ZLIB::ZLIB)
target_link_libraries ("gRPC::grpc++${grpc_suffix}" INTERFACE "gRPC::grpc${grpc_suffix}" gRPC::gpr)
endif ()
#[=================================[
generate protobuf sources for
grpc defs and bundle into a
static lib
#]=================================]
set(output_dir "${CMAKE_BINARY_DIR}/proto_gen_grpc")
set(GRPC_GEN_DIR "${output_dir}/ripple/proto")
file(MAKE_DIRECTORY ${GRPC_GEN_DIR})
set(GRPC_PROTO_SRCS)
set(GRPC_PROTO_HDRS)
set(GRPC_PROTO_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/proto/org")
file(GLOB_RECURSE GRPC_DEFINITION_FILES "${GRPC_PROTO_ROOT}/*.proto")
foreach(file ${GRPC_DEFINITION_FILES})
# /home/user/rippled/src/ripple/proto/org/.../v1/get_ledger.proto
get_filename_component(_abs_file ${file} ABSOLUTE)
# /home/user/rippled/src/ripple/proto/org/.../v1
get_filename_component(_abs_dir ${_abs_file} DIRECTORY)
# get_ledger
get_filename_component(_basename ${file} NAME_WE)
# /home/user/rippled/src/ripple/proto
get_filename_component(_proto_inc ${GRPC_PROTO_ROOT} DIRECTORY) # updir one level
# org/.../v1/get_ledger.proto
file(RELATIVE_PATH _rel_root_file ${_proto_inc} ${_abs_file})
# org/.../v1
get_filename_component(_rel_root_dir ${_rel_root_file} DIRECTORY)
# src/ripple/proto/org/.../v1
file(RELATIVE_PATH _rel_dir ${CMAKE_CURRENT_SOURCE_DIR} ${_abs_dir})
set(src_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.cc")
set(src_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.cc")
set(hdr_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.h")
set(hdr_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.h")
add_custom_command(
OUTPUT ${src_1} ${src_2} ${hdr_1} ${hdr_2}
COMMAND protobuf::protoc
ARGS --grpc_out=${GRPC_GEN_DIR}
--cpp_out=${GRPC_GEN_DIR}
--plugin=protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
-I ${_proto_inc} -I ${_rel_dir}
${_abs_file}
DEPENDS ${_abs_file} protobuf::protoc gRPC::grpc_cpp_plugin
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
COMMENT "Running gRPC C++ protocol buffer compiler on ${file}"
VERBATIM)
set_source_files_properties(${src_1} ${src_2} ${hdr_1} ${hdr_2} PROPERTIES
GENERATED TRUE
SKIP_UNITY_BUILD_INCLUSION ON
)
list(APPEND GRPC_PROTO_SRCS ${src_1} ${src_2})
list(APPEND GRPC_PROTO_HDRS ${hdr_1} ${hdr_2})
endforeach()
target_include_directories(xrpl_core SYSTEM PUBLIC
$<BUILD_INTERFACE:${output_dir}>
$<BUILD_INTERFACE:${output_dir}/ripple/proto>
# The generated sources include headers relative to this path. Fix it later.
$<INSTALL_INTERFACE:include/ripple/proto>
)
target_sources(xrpl_core PRIVATE ${GRPC_PROTO_SRCS})
install(
DIRECTORY ${output_dir}/ripple
DESTINATION include/
FILES_MATCHING PATTERN "*.h"
)
target_link_libraries(xrpl_core PUBLIC
"gRPC::grpc++"
# libgrpc is missing references.
absl::random_random
)
target_compile_options(xrpl_core
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${is_xcode}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>)
# target_link_libraries (ripple_libs INTERFACE Ripple::grpc_pbufs)
# exclude_if_included (grpc_pbufs)

View File

@@ -0,0 +1,48 @@
include(create_symbolic_link)
# Consider include directory B nested under prefix A:
#
# /path/to/A/then/to/B/...
#
# Call C the relative path from A to B.
# C is what we want to write in `#include` directives:
#
# #include <then/to/B/...>
#
# Examples, all from the `jobqueue` module:
#
# - Library public headers:
# B = /include/xrpl/jobqueue
# A = /include/
# C = xrpl/jobqueue
#
# - Library private headers:
# B = /src/libxrpl/jobqueue
# A = /src/
# C = libxrpl/jobqueue
#
# - Test private headers:
# B = /tests/jobqueue
# A = /
# C = tests/jobqueue
#
# To isolate headers from each other,
# we want to create a symlink Y that points to B,
# within a subdirectory X of the `CMAKE_BINARY_DIR`,
# that has the same relative path C between X and Y,
# and then add X as an include directory of the target,
# sometimes `PUBLIC` and sometimes `PRIVATE`.
# The Cs are all guaranteed to be unique.
# We can guarantee a unique X per target by using
# `${CMAKE_CURRENT_BINARY_DIR}/include/${target}`.
#
# isolate_headers(target A B scope)
function(isolate_headers target A B scope)
file(RELATIVE_PATH C "${A}" "${B}")
set(X "${CMAKE_CURRENT_BINARY_DIR}/modules/${target}")
set(Y "${X}/${C}")
cmake_path(GET Y PARENT_PATH parent)
file(MAKE_DIRECTORY "${parent}")
create_symbolic_link("${B}" "${Y}")
target_include_directories(${target} ${scope} "$<BUILD_INTERFACE:${X}>")
endfunction()
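A hedged usage sketch of the function above (the target name and paths are assumptions built from the `jobqueue` example in the comment, not calls taken from this changeset): publish the module's public headers under `<xrpl/jobqueue/...>` and keep its private headers visible only to the module itself.
```cmake
# Hypothetical module target; in the real tree such targets are created elsewhere.
add_library(xrpl.libxrpl.jobqueue STATIC src/libxrpl/jobqueue/jobqueue.cpp)
# Public headers: A = <repo>/include, B = <repo>/include/xrpl/jobqueue,
# so consumers write #include <xrpl/jobqueue/...>.
isolate_headers(xrpl.libxrpl.jobqueue
  "${CMAKE_SOURCE_DIR}/include"
  "${CMAKE_SOURCE_DIR}/include/xrpl/jobqueue"
  PUBLIC)
# Private headers: A = <repo>/src, B = <repo>/src/libxrpl/jobqueue,
# so only the module itself writes #include <libxrpl/jobqueue/...>.
isolate_headers(xrpl.libxrpl.jobqueue
  "${CMAKE_SOURCE_DIR}/src"
  "${CMAKE_SOURCE_DIR}/src/libxrpl/jobqueue"
  PRIVATE)
```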


@@ -0,0 +1,24 @@
# Link a library to its modules (see: `add_module`)
# and remove the module sources from the library's sources.
#
# add_module(parent a)
# add_module(parent b)
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
# add_library(project.libparent)
# target_link_modules(parent PUBLIC a b)
function(target_link_modules parent scope)
set(library ${PROJECT_NAME}.lib${parent})
foreach(name ${ARGN})
set(module ${library}.${name})
get_target_property(sources ${library} SOURCES)
list(LENGTH sources before)
get_target_property(dupes ${module} SOURCES)
list(LENGTH dupes expected)
list(REMOVE_ITEM sources ${dupes})
list(LENGTH sources after)
math(EXPR actual "${before} - ${after}")
message(STATUS "${module} with ${expected} sources took ${actual} sources from ${library}")
set_target_properties(${library} PROPERTIES SOURCES "${sources}")
target_link_libraries(${library} ${scope} ${module})
endforeach()
endfunction()
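To make the outline in the comment concrete, a self-contained, hedged sketch follows; the `demo` project, file paths, and module names are hypothetical, and plain `add_library` calls stand in for the `add_module` helper referenced above.
```cmake
project(demo LANGUAGES CXX)
# Stand-ins for what add_module would normally create.
add_library(demo.libcore.a STATIC src/libcore/a.cpp)
add_library(demo.libcore.b STATIC src/libcore/b.cpp)
target_link_libraries(demo.libcore.b PUBLIC demo.libcore.a)
# The umbrella library initially lists every source, including the modules' sources.
add_library(demo.libcore STATIC
  src/libcore/core.cpp src/libcore/a.cpp src/libcore/b.cpp)
# Removes a.cpp and b.cpp from demo.libcore and links the module targets instead,
# leaving demo.libcore with only core.cpp.
target_link_modules(core PUBLIC a b)
```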


@@ -21,22 +21,23 @@ class Xrpl(ConanFile):
'tests': [True, False],
'unity': [True, False],
'xrpld': [True, False],
'with_wasmedge': [True, False],
'tool_requires_b2': [True, False],
}
requires = [
'date/3.0.1',
'date/3.0.3',
'grpc/1.50.1',
'libarchive/3.6.2',
'libarchive/3.7.6',
'nudb/2.0.8',
'openssl/1.1.1u',
'soci/4.0.3',
'openssl/3.6.0',
'soci/4.0.3@xahaud/stable',
'xxhash/0.8.2',
'wasmedge/0.11.2',
'zlib/1.2.13',
'zlib/1.3.1',
]
tool_requires = [
'protobuf/3.21.9',
'protobuf/3.21.12',
]
default_options = {
@@ -49,9 +50,11 @@ class Xrpl(ConanFile):
'static': True,
'tests': False,
'unity': False,
'with_wasmedge': True,
'tool_requires_b2': False,
'xrpld': False,
'date/*:header_only': True,
'date/*:header_only': False,
'grpc/*:shared': False,
'grpc/*:secure': True,
'libarchive/*:shared': False,
@@ -95,15 +98,28 @@ class Xrpl(ConanFile):
match = next(m for m in matches if m)
self.version = match.group(1)
def build_requirements(self):
self.tool_requires('grpc/1.50.1')
# Explicitly require b2 (e.g. for building from source for glibc compatibility)
if self.options.tool_requires_b2:
self.tool_requires('b2/5.3.2')
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
self.options['boost/*'].visibility = 'global'
def requirements(self):
self.requires('boost/1.86.0', force=True)
self.requires('lz4/1.9.3', force=True)
# Force boost version for all dependencies to avoid conflicts
self.requires('boost/1.86.0', override=True)
self.requires('lz4/1.10.0', force=True)
self.requires('protobuf/3.21.9', force=True)
self.requires('sqlite3/3.42.0', force=True)
# Force sqlite3 version to avoid conflicts with soci
self.requires('sqlite3/3.47.0', override=True)
# Force our custom snappy build for all dependencies
self.requires('snappy/1.1.10@xahaud/stable', override=True)
if self.options.with_wasmedge:
self.requires('wasmedge/0.11.2@xahaud/stable')
if self.options.jemalloc:
self.requires('jemalloc/5.3.0')
if self.options.rocksdb:


@@ -8,4 +8,4 @@ if [[ "$GITHUB_REPOSITORY" == "" ]]; then
fi
echo "Mounting $(pwd)/io in ubuntu and running unit tests"
docker run --rm -i -v $(pwd):/io -e BUILD_CORES=$BUILD_CORES ubuntu sh -c '/io/release-build/xahaud --unittest-jobs $BUILD_CORES -u'
docker run --rm -i -v $(pwd):/io --platform=linux/amd64 -e BUILD_CORES=$BUILD_CORES ubuntu sh -c '/io/release-build/xahaud --unittest-jobs $BUILD_CORES -u'

1
external/README.md vendored

@@ -4,6 +4,7 @@ The Conan recipes include patches we have not yet pushed upstream.
| Folder | Upstream | Description |
|:----------------|:---------------------------------------------|:------------|
| `antithesis-sdk`| [Project](https://github.com/antithesishq/antithesis-sdk-cpp/) | [Antithesis](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html) SDK for C++ |
| `ed25519-donna` | [Project](https://github.com/floodyberry/ed25519-donna) | [Ed25519](http://ed25519.cr.yp.to/) digital signatures |
| `rocksdb` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/rocksdb) | Fast key/value database. (Supports rotational disks better than NuDB.) |
| `secp256k1` | [Project](https://github.com/bitcoin-core/secp256k1) | ECDSA digital signatures using the **secp256k1** curve |

3
external/antithesis-sdk/.clang-format vendored Normal file

@@ -0,0 +1,3 @@
---
DisableFormat: true
SortIncludes: false

17
external/antithesis-sdk/CMakeLists.txt vendored Normal file

@@ -0,0 +1,17 @@
cmake_minimum_required(VERSION 3.25)
# Note, version set explicitly by rippled project
project(antithesis-sdk-cpp VERSION 0.4.4 LANGUAGES CXX)
add_library(antithesis-sdk-cpp INTERFACE antithesis_sdk.h)
# Note, both sections below created by rippled project
target_include_directories(antithesis-sdk-cpp INTERFACE
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
)
install(
FILES antithesis_sdk.h
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)
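A hedged consumer-side sketch (the `my_app` target and the `add_subdirectory` path are assumptions): because `antithesis-sdk-cpp` is a header-only INTERFACE target, linking it only propagates the include directory declared above.
```cmake
add_subdirectory(external/antithesis-sdk)
add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE antithesis-sdk-cpp)
```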

BIN
external/antithesis-sdk/LICENSE vendored Normal file

Binary file not shown.

8
external/antithesis-sdk/README.md vendored Normal file

@@ -0,0 +1,8 @@
# Antithesis C++ SDK
This library provides methods for C++ programs to configure the [Antithesis](https://antithesis.com) platform. It contains three kinds of functionality:
* Assertion macros that allow you to define test properties about your software or workload.
* Randomness functions for requesting both structured and unstructured randomness from the Antithesis platform.
* Lifecycle functions that inform the Antithesis environment that particular test phases or milestones have been reached.
For general usage guidance see the [Antithesis C++ SDK Documentation](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview/)


@@ -0,0 +1,113 @@
#pragma once
/*
This header file enables code coverage instrumentation. It is distributed with the Antithesis C++ SDK.
This header file can be used in both C and C++ programs. (The rest of the SDK works only for C++ programs.)
You should include it in a single .cpp or .c file.
The instructions (such as required compiler flags) and usage guidance are found at https://antithesis.com/docs/using_antithesis/sdk/cpp/overview/.
*/
#include <unistd.h>
#include <string.h>
#include <dlfcn.h>
#include <stdint.h>
#include <stdio.h>
#ifndef __cplusplus
#include <stdbool.h>
#include <stddef.h>
#endif
// If the libvoidstar(determ) library is present,
// pass thru trace_pc_guard related callbacks to it
typedef void (*trace_pc_guard_init_fn)(uint32_t *start, uint32_t *stop);
typedef void (*trace_pc_guard_fn)(uint32_t *guard, uint64_t edge);
static trace_pc_guard_init_fn trace_pc_guard_init = NULL;
static trace_pc_guard_fn trace_pc_guard = NULL;
static bool did_check_libvoidstar = false;
static bool has_libvoidstar = false;
static __attribute__((no_sanitize("coverage"))) void debug_message_out(const char *msg) {
(void)printf("%s\n", msg);
return;
}
extern
#ifdef __cplusplus
"C"
#endif
__attribute__((no_sanitize("coverage"))) void antithesis_load_libvoidstar() {
#ifdef __cplusplus
constexpr
#endif
const char* LIB_PATH = "/usr/lib/libvoidstar.so";
if (did_check_libvoidstar) {
return;
}
debug_message_out("TRYING TO LOAD libvoidstar");
did_check_libvoidstar = true;
void* shared_lib = dlopen(LIB_PATH, RTLD_NOW);
if (!shared_lib) {
debug_message_out("Can not load the Antithesis native library");
return;
}
void* trace_pc_guard_init_sym = dlsym(shared_lib, "__sanitizer_cov_trace_pc_guard_init");
if (!trace_pc_guard_init_sym) {
debug_message_out("Can not forward calls to libvoidstar for __sanitizer_cov_trace_pc_guard_init");
return;
}
void* trace_pc_guard_sym = dlsym(shared_lib, "__sanitizer_cov_trace_pc_guard_internal");
if (!trace_pc_guard_sym) {
debug_message_out("Can not forward calls to libvoidstar for __sanitizer_cov_trace_pc_guard");
return;
}
trace_pc_guard_init = (trace_pc_guard_init_fn)(trace_pc_guard_init_sym);
trace_pc_guard = (trace_pc_guard_fn)(trace_pc_guard_sym);
has_libvoidstar = true;
debug_message_out("LOADED libvoidstar");
}
// The following symbols are indeed reserved identifiers, since we're implementing functions defined
// in the compiler runtime. Not clear how to get Clang on board with that besides narrowly suppressing
// the warning in this case. The sample code on the CoverageSanitizer documentation page fails this
// warning!
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wreserved-identifier"
extern
#ifdef __cplusplus
"C"
#endif
void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop) {
debug_message_out("SDK forwarding to libvoidstar for __sanitizer_cov_trace_pc_guard_init()");
if (!did_check_libvoidstar) {
antithesis_load_libvoidstar();
}
if (has_libvoidstar) {
trace_pc_guard_init(start, stop);
}
return;
}
extern
#ifdef __cplusplus
"C"
#endif
void __sanitizer_cov_trace_pc_guard( uint32_t *guard ) {
if (has_libvoidstar) {
uint64_t edge = (uint64_t)(__builtin_return_address(0));
trace_pc_guard(guard, edge);
} else {
if (guard) {
*guard = 0;
}
}
return;
}
#pragma clang diagnostic pop
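As a hedged sketch of how this header might be wired into a build (target and file names are assumptions, not taken from this changeset): the callbacks above only fire when the instrumented code is compiled with Clang's SanitizerCoverage `trace-pc-guard` mode, and the header should be included from exactly one translation unit.
```cmake
# my_app is hypothetical; instrumentation.cpp is assumed to contain
#   #include "antithesis_instrumentation.h"
add_executable(my_app main.cpp instrumentation.cpp)
# Emit the __sanitizer_cov_trace_pc_guard* calls that the header forwards to libvoidstar.
target_compile_options(my_app PRIVATE -fsanitize-coverage=trace-pc-guard)
# The header locates libvoidstar at run time via dlopen()/dlsym().
target_link_libraries(my_app PRIVATE ${CMAKE_DL_LIBS})
```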

1105
external/antithesis-sdk/antithesis_sdk.h vendored Normal file

File diff suppressed because it is too large.


@@ -89,13 +89,13 @@ class RocksDBConan(ConanFile):
if self.options.with_snappy:
self.requires("snappy/1.1.10")
if self.options.with_lz4:
self.requires("lz4/1.9.4")
self.requires("lz4/1.10.0")
if self.options.with_zlib:
self.requires("zlib/[>=1.2.11 <2]")
if self.options.with_zstd:
self.requires("zstd/1.5.5")
self.requires("zstd/1.5.6")
if self.options.get_safe("with_tbb"):
self.requires("onetbb/2021.10.0")
self.requires("onetbb/2021.12.0")
if self.options.with_jemalloc:
self.requires("jemalloc/5.3.0")


@@ -10,8 +10,8 @@ env:
MAKEFLAGS: -j4
BUILD: check
### secp256k1 config
ECMULTWINDOW: auto
ECMULTGENPRECISION: auto
ECMULTWINDOW: 15
ECMULTGENKB: 22
ASM: no
WIDEMUL: auto
WITH_VALGRIND: yes
@@ -20,20 +20,18 @@ env:
EXPERIMENTAL: no
ECDH: no
RECOVERY: no
EXTRAKEYS: no
SCHNORRSIG: no
MUSIG: no
ELLSWIFT: no
### test options
SECP256K1_TEST_ITERS:
SECP256K1_TEST_ITERS: 64
BENCH: yes
SECP256K1_BENCH_ITERS: 2
CTIMETESTS: yes
# Compile and run the examples
EXAMPLES: yes
# https://cirrus-ci.org/pricing/#compute-credits
credits_snippet: &CREDITS
# Don't use any credits for now.
use_compute_credits: false
cat_logs_snippet: &CAT_LOGS
always:
cat_tests_log_script:
@@ -53,357 +51,51 @@ cat_logs_snippet: &CAT_LOGS
cat_ci_env_script:
- env
merge_base_script_snippet: &MERGE_BASE
merge_base_script:
- if [ "$CIRRUS_PR" = "" ]; then exit 0; fi
- git fetch --depth=1 $CIRRUS_REPO_CLONE_URL "pull/${CIRRUS_PR}/merge"
- git checkout FETCH_HEAD # Use merged changes to detect silent merge conflicts
linux_container_snippet: &LINUX_CONTAINER
container:
dockerfile: ci/linux-debian.Dockerfile
# Reduce number of CPUs to be able to do more builds in parallel.
cpu: 1
# Gives us more CPUs for free if they're available.
greedy: true
# More than enough for our scripts.
memory: 1G
task:
name: "x86_64: Linux (Debian stable)"
<< : *LINUX_CONTAINER
matrix: &ENV_MATRIX
- env: {WIDEMUL: int64, RECOVERY: yes}
- env: {WIDEMUL: int64, ECDH: yes, SCHNORRSIG: yes}
- env: {WIDEMUL: int128}
- env: {WIDEMUL: int128_struct}
- env: {WIDEMUL: int128, RECOVERY: yes, SCHNORRSIG: yes}
- env: {WIDEMUL: int128, ECDH: yes, SCHNORRSIG: yes}
- env: {WIDEMUL: int128, ASM: x86_64}
- env: { RECOVERY: yes, SCHNORRSIG: yes}
- env: {CTIMETESTS: no, RECOVERY: yes, ECDH: yes, SCHNORRSIG: yes, CPPFLAGS: -DVERIFY}
- env: {BUILD: distcheck, WITH_VALGRIND: no, CTIMETESTS: no, BENCH: no}
- env: {CPPFLAGS: -DDETERMINISTIC}
- env: {CFLAGS: -O0, CTIMETESTS: no}
- env: { ECMULTGENPRECISION: 2, ECMULTWINDOW: 2 }
- env: { ECMULTGENPRECISION: 8, ECMULTWINDOW: 4 }
matrix:
- env:
CC: gcc
- env:
CC: clang
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "i686: Linux (Debian stable)"
<< : *LINUX_CONTAINER
env:
HOST: i686-linux-gnu
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
matrix:
- env:
CC: i686-linux-gnu-gcc
- env:
CC: clang --target=i686-pc-linux-gnu -isystem /usr/i686-linux-gnu/include
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "arm64: macOS Ventura"
macos_instance:
image: ghcr.io/cirruslabs/macos-ventura-base:latest
env:
HOMEBREW_NO_AUTO_UPDATE: 1
HOMEBREW_NO_INSTALL_CLEANUP: 1
# Cirrus gives us a fixed number of 4 virtual CPUs. Not that we even have that many jobs at the moment...
MAKEFLAGS: -j5
matrix:
<< : *ENV_MATRIX
env:
ASM: no
WITH_VALGRIND: no
CTIMETESTS: no
matrix:
- env:
CC: gcc
- env:
CC: clang
brew_script:
- brew install automake libtool gcc
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
<< : *CREDITS
task:
name: "s390x (big-endian): Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-s390x
SECP256K1_TEST_ITERS: 16
HOST: s390x-linux-gnu
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
<< : *MERGE_BASE
test_script:
# https://sourceware.org/bugzilla/show_bug.cgi?id=27008
- rm /etc/ld.so.cache
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "ARM32: Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-arm
SECP256K1_TEST_ITERS: 16
HOST: arm-linux-gnueabihf
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
matrix:
- env: {}
- env: {EXPERIMENTAL: yes, ASM: arm32}
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "ARM64: Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-aarch64
SECP256K1_TEST_ITERS: 16
HOST: aarch64-linux-gnu
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "ppc64le: Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-ppc64le
SECP256K1_TEST_ITERS: 16
HOST: powerpc64le-linux-gnu
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: wine
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
matrix:
- name: "x86_64 (mingw32-w64): Windows (Debian stable, Wine)"
env:
HOST: x86_64-w64-mingw32
- name: "i686 (mingw32-w64): Windows (Debian stable, Wine)"
env:
HOST: i686-w64-mingw32
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: wine
WERROR_CFLAGS: -WX
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
EXPERIMENTAL: yes
SCHNORRSIG: yes
CTIMETESTS: no
# Use a MinGW-w64 host to tell ./configure we're building for Windows.
# This will detect some MinGW-w64 tools but then make will need only
# the MSVC tools CC, AR and NM as specified below.
HOST: x86_64-w64-mingw32
CC: /opt/msvc/bin/x64/cl
AR: /opt/msvc/bin/x64/lib
NM: /opt/msvc/bin/x64/dumpbin -symbols -headers
# Set non-essential options that affect the CLI messages here.
# (They depend on the user's taste, so we don't want to set them automatically in configure.ac.)
CFLAGS: -nologo -diagnostics:caret
LDFLAGS: -Xlinker -Xlinker -Xlinker -nologo
matrix:
- name: "x86_64 (MSVC): Windows (Debian stable, Wine)"
- name: "x86_64 (MSVC): Windows (Debian stable, Wine, int128_struct)"
env:
WIDEMUL: int128_struct
- name: "x86_64 (MSVC): Windows (Debian stable, Wine, int128_struct with __(u)mulh)"
env:
WIDEMUL: int128_struct
CPPFLAGS: -DSECP256K1_MSVC_MULH_TEST_OVERRIDE
- name: "i686 (MSVC): Windows (Debian stable, Wine)"
env:
HOST: i686-w64-mingw32
CC: /opt/msvc/bin/x86/cl
AR: /opt/msvc/bin/x86/lib
NM: /opt/msvc/bin/x86/dumpbin -symbols -headers
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
# Sanitizers
task:
<< : *LINUX_CONTAINER
env:
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
matrix:
- name: "Valgrind (memcheck)"
container:
cpu: 2
env:
# The `--error-exitcode` is required to make the test fail if valgrind found errors, otherwise it'll return 0 (https://www.valgrind.org/docs/manual/manual-core.html)
WRAPPER_CMD: "valgrind --error-exitcode=42"
SECP256K1_TEST_ITERS: 2
- name: "UBSan, ASan, LSan"
container:
memory: 2G
env:
CFLAGS: "-fsanitize=undefined,address -g"
UBSAN_OPTIONS: "print_stacktrace=1:halt_on_error=1"
ASAN_OPTIONS: "strict_string_checks=1:detect_stack_use_after_return=1:detect_leaks=1"
LSAN_OPTIONS: "use_unaligned=1"
SECP256K1_TEST_ITERS: 32
# Try to cover many configurations with just a tiny matrix.
matrix:
- env:
ASM: auto
- env:
ASM: no
ECMULTGENPRECISION: 2
ECMULTWINDOW: 2
matrix:
- env:
CC: clang
- env:
HOST: i686-linux-gnu
CC: i686-linux-gnu-gcc
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
# Memory sanitizers
task:
<< : *LINUX_CONTAINER
name: "MSan"
env:
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: yes
CC: clang
SECP256K1_TEST_ITERS: 32
ASM: no
WITH_VALGRIND: no
container:
memory: 2G
matrix:
- env:
CFLAGS: "-fsanitize=memory -g"
- env:
ECMULTGENPRECISION: 2
ECMULTWINDOW: 2
CFLAGS: "-fsanitize=memory -g -O3"
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "C++ -fpermissive (entire project)"
<< : *LINUX_CONTAINER
env:
CC: g++
CFLAGS: -fpermissive -g
CPPFLAGS: -DSECP256K1_CPLUSPLUS_TEST_OVERRIDE
WERROR_CFLAGS:
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "C++ (public headers)"
<< : *LINUX_CONTAINER
test_script:
- g++ -Werror include/*.h
- clang -Werror -x c++-header include/*.h
- /opt/msvc/bin/x64/cl.exe -c -WX -TP include/*.h
task:
name: "sage prover"
<< : *LINUX_CONTAINER
test_script:
- cd sage
- sage prove_group_implementations.sage
task:
name: "x86_64: Windows (VS 2022)"
windows_container:
image: cirrusci/windowsservercore:visualstudio2022
cpu: 4
memory: 3840MB
env:
PATH: '%CIRRUS_WORKING_DIR%\build\src\RelWithDebInfo;%PATH%'
x64_NATIVE_TOOLS: '"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvars64.bat"'
# Ignore MSBuild warning MSB8029.
# See: https://learn.microsoft.com/en-us/visualstudio/msbuild/errors/msb8029?view=vs-2022
IgnoreWarnIntDirInTempDetected: 'true'
merge_script:
- PowerShell -NoLogo -Command if ($env:CIRRUS_PR -ne $null) { git fetch $env:CIRRUS_REPO_CLONE_URL pull/$env:CIRRUS_PR/merge; git reset --hard FETCH_HEAD; }
configure_script:
- '%x64_NATIVE_TOOLS%'
- cmake -E env CFLAGS="/WX" cmake -G "Visual Studio 17 2022" -A x64 -S . -B build -DSECP256K1_ENABLE_MODULE_RECOVERY=ON -DSECP256K1_BUILD_EXAMPLES=ON
linux_arm64_container_snippet: &LINUX_ARM64_CONTAINER
env_script:
- env | tee /tmp/env
build_script:
- '%x64_NATIVE_TOOLS%'
- cmake --build build --config RelWithDebInfo -- -property:UseMultiToolTask=true;CL_MPcount=5
check_script:
- '%x64_NATIVE_TOOLS%'
- ctest -C RelWithDebInfo --test-dir build -j 5
- build\src\RelWithDebInfo\bench_ecmult.exe
- build\src\RelWithDebInfo\bench_internal.exe
- build\src\RelWithDebInfo\bench.exe
- DOCKER_BUILDKIT=1 docker build --file "ci/linux-debian.Dockerfile" --tag="ci_secp256k1_arm"
- docker image prune --force # Cleanup stale layers
test_script:
- docker run --rm --mount "type=bind,src=./,dst=/ci_secp256k1" --env-file /tmp/env --replace --name "ci_secp256k1_arm" "ci_secp256k1_arm" bash -c "cd /ci_secp256k1/ && ./ci/ci.sh"
task:
name: "ARM64: Linux (Debian stable)"
persistent_worker:
labels:
type: arm64
env:
ECDH: yes
RECOVERY: yes
EXTRAKEYS: yes
SCHNORRSIG: yes
MUSIG: yes
ELLSWIFT: yes
matrix:
# Currently only gcc-snapshot, the other compilers are tested on GHA with QEMU
- env: { CC: 'gcc-snapshot' }
<< : *LINUX_ARM64_CONTAINER
<< : *CAT_LOGS
task:
name: "ARM64: Linux (Debian stable), Valgrind"
persistent_worker:
labels:
type: arm64
env:
ECDH: yes
RECOVERY: yes
EXTRAKEYS: yes
SCHNORRSIG: yes
MUSIG: yes
ELLSWIFT: yes
WRAPPER_CMD: 'valgrind --error-exitcode=42'
SECP256K1_TEST_ITERS: 2
matrix:
- env: { CC: 'gcc' }
- env: { CC: 'clang' }
- env: { CC: 'gcc-snapshot' }
- env: { CC: 'clang-snapshot' }
<< : *LINUX_ARM64_CONTAINER
<< : *CAT_LOGS


@@ -10,6 +10,8 @@ ctime_tests
ecdh_example
ecdsa_example
schnorr_example
ellswift_example
musig_example
*.exe
*.so
*.a


@@ -5,6 +5,83 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.0] - 2024-11-04
#### Added
- New module `musig` implements the MuSig2 multisignature scheme according to the [BIP 327 specification](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki). See:
- Header file `include/secp256k1_musig.h` which defines the new API.
- Document `doc/musig.md` for further notes on API usage.
- Usage example `examples/musig.c`.
- New CMake variable `SECP256K1_APPEND_LDFLAGS` for appending linker flags to the build command.
#### Changed
- API functions now use a significantly more robust method to clear secrets from the stack before returning. However, secret clearing remains a best-effort security measure and cannot guarantee complete removal.
- Any type `secp256k1_foo` can now be forward-declared using `typedef struct secp256k1_foo secp256k1_foo;` (or also `struct secp256k1_foo;` in C++).
- Organized CMake build artifacts into dedicated directories (`bin/` for executables, `lib/` for libraries) to improve build output structure and Windows shared library compatibility.
#### Removed
- Removed the `secp256k1_scratch_space` struct and its associated functions `secp256k1_scratch_space_create` and `secp256k1_scratch_space_destroy` because the scratch space was unused in the API.
#### ABI Compatibility
The symbols `secp256k1_scratch_space_create` and `secp256k1_scratch_space_destroy` were removed.
Otherwise, the library maintains backward compatibility with versions 0.3.x through 0.5.x.
## [0.5.1] - 2024-08-01
#### Added
- Added usage example for an ElligatorSwift key exchange.
#### Changed
- The default size of the precomputed table for signing was changed from 22 KiB to 86 KiB. The size can be changed with the configure option `--ecmult-gen-kb` (`SECP256K1_ECMULT_GEN_KB` for CMake).
- "auto" is no longer an accepted value for the `--with-ecmult-window` and `--with-ecmult-gen-kb` configure options (this also applies to `SECP256K1_ECMULT_WINDOW_SIZE` and `SECP256K1_ECMULT_GEN_KB` in CMake). To achieve the same configuration as previously provided by the "auto" value, omit setting the configure option explicitly.
#### Fixed
- Fixed compilation when the extrakeys module is disabled.
#### ABI Compatibility
The ABI is backward compatible with versions 0.5.0, 0.4.x and 0.3.x.
## [0.5.0] - 2024-05-06
#### Added
- New function `secp256k1_ec_pubkey_sort` that sorts public keys using lexicographic (of compressed serialization) order.
#### Changed
- The implementation of the point multiplication algorithm used for signing and public key generation was changed, resulting in improved performance for those operations.
- The related configure option `--ecmult-gen-precision` was replaced with `--ecmult-gen-kb` (`SECP256K1_ECMULT_GEN_KB` for CMake).
- This changes the supported precomputed table sizes for these operations. The new supported sizes are 2 KiB, 22 KiB, or 86 KiB (while the old supported sizes were 32 KiB, 64 KiB, or 512 KiB).
#### ABI Compatibility
The ABI is backward compatible with versions 0.4.x and 0.3.x.
## [0.4.1] - 2023-12-21
#### Changed
- The point multiplication algorithm used for ECDH operations (module `ecdh`) was replaced with a slightly faster one.
- Optional handwritten x86_64 assembly for field operations was removed because modern C compilers are able to output more efficient assembly. This change results in a significant speedup of some library functions when handwritten x86_64 assembly is enabled (`--with-asm=x86_64` in GNU Autotools, `-DSECP256K1_ASM=x86_64` in CMake), which is the default on x86_64. Benchmarks with GCC 10.5.0 show a 10% speedup for `secp256k1_ecdsa_verify` and `secp256k1_schnorrsig_verify`.
#### ABI Compatibility
The ABI is backward compatible with versions 0.4.0 and 0.3.x.
## [0.4.0] - 2023-09-04
#### Added
- New module `ellswift` implements ElligatorSwift encoding for public keys and x-only Diffie-Hellman key exchange for them.
ElligatorSwift permits representing secp256k1 public keys as 64-byte arrays which cannot be distinguished from uniformly random. See:
- Header file `include/secp256k1_ellswift.h` which defines the new API.
- Document `doc/ellswift.md` which explains the mathematical background of the scheme.
- The [paper](https://eprint.iacr.org/2022/759) on which the scheme is based.
- We now test the library with unreleased development snapshots of GCC and Clang. This gives us an early chance to catch miscompilations and constant-time issues introduced by the compiler (such as those that led to the previous two releases).
#### Fixed
- Fixed symbol visibility in Windows DLL builds, where three internal library symbols were wrongly exported.
#### Changed
- When consuming libsecp256k1 as a static library on Windows, the user must now define the `SECP256K1_STATIC` macro before including `secp256k1.h`.
#### ABI Compatibility
This release is backward compatible with the ABI of 0.3.0, 0.3.1, and 0.3.2. Symbol visibility is now believed to be handled properly on supported platforms and is now considered to be part of the ABI. Please report any improperly exported symbols as a bug.
## [0.3.2] - 2023-05-13
We strongly recommend updating to 0.3.2 if you use or plan to use GCC >=13 to compile libsecp256k1. When in doubt, check the GCC version using `gcc -v`.
@@ -85,7 +162,11 @@ This version was in fact never released.
The number was given by the build system since the introduction of autotools in Jan 2014 (ea0fe5a5bf0c04f9cc955b2966b614f5f378c6f6).
Therefore, this version number does not uniquely identify a set of source files.
[unreleased]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.2...HEAD
[0.6.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.5.1...v0.6.0
[0.5.1]: https://github.com/bitcoin-core/secp256k1/compare/v0.5.0...v0.5.1
[0.5.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.4.1...v0.5.0
[0.4.1]: https://github.com/bitcoin-core/secp256k1/compare/v0.4.0...v0.4.1
[0.4.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.2...v0.4.0
[0.3.2]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.1...v0.3.2
[0.3.1]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.0...v0.3.1
[0.3.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.2.0...v0.3.0


@@ -1,32 +1,29 @@
cmake_minimum_required(VERSION 3.13)
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.15)
# MSVC runtime library flags are selected by the CMAKE_MSVC_RUNTIME_LIBRARY abstraction.
cmake_policy(SET CMP0091 NEW)
# MSVC warning flags are not in CMAKE_<LANG>_FLAGS by default.
cmake_policy(SET CMP0092 NEW)
endif()
cmake_minimum_required(VERSION 3.16)
#=============================
# Project / Package metadata
#=============================
project(libsecp256k1
# The package (a.k.a. release) version is based on semantic versioning 2.0.0 of
# the API. All changes in experimental modules are treated as
# backwards-compatible and therefore at most increase the minor version.
VERSION 0.3.2
VERSION 0.6.0
DESCRIPTION "Optimized C library for ECDSA signatures and secret/public key operations on curve secp256k1."
HOMEPAGE_URL "https://github.com/bitcoin-core/secp256k1"
LANGUAGES C
)
enable_testing()
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake)
if(CMAKE_VERSION VERSION_LESS 3.21)
get_directory_property(parent_directory PARENT_DIRECTORY)
if(parent_directory)
set(PROJECT_IS_TOP_LEVEL OFF CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
set(${PROJECT_NAME}_IS_TOP_LEVEL OFF CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
# Emulates CMake 3.21+ behavior.
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
set(PROJECT_IS_TOP_LEVEL ON)
set(${PROJECT_NAME}_IS_TOP_LEVEL ON)
else()
set(PROJECT_IS_TOP_LEVEL ON CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
set(${PROJECT_NAME}_IS_TOP_LEVEL ON CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
set(PROJECT_IS_TOP_LEVEL OFF)
set(${PROJECT_NAME}_IS_TOP_LEVEL OFF)
endif()
unset(parent_directory)
endif()
# The library version is based on libtool versioning of the ABI. The set of
@@ -34,15 +31,19 @@ endif()
# https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
# All changes in experimental modules are treated as if they don't affect the
# interface and therefore only increase the revision.
set(${PROJECT_NAME}_LIB_VERSION_CURRENT 2)
set(${PROJECT_NAME}_LIB_VERSION_REVISION 2)
set(${PROJECT_NAME}_LIB_VERSION_CURRENT 5)
set(${PROJECT_NAME}_LIB_VERSION_REVISION 0)
set(${PROJECT_NAME}_LIB_VERSION_AGE 0)
#=============================
# Language setup
#=============================
set(CMAKE_C_STANDARD 90)
set(CMAKE_C_EXTENSIONS OFF)
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake)
#=============================
# Configurable options
#=============================
option(BUILD_SHARED_LIBS "Build shared libraries." ON)
option(SECP256K1_DISABLE_SHARED "Disable shared library. Overrides BUILD_SHARED_LIBS." OFF)
if(SECP256K1_DISABLE_SHARED)
@@ -51,24 +52,49 @@ endif()
option(SECP256K1_INSTALL "Enable installation." ${PROJECT_IS_TOP_LEVEL})
## Modules
# We declare all options before processing them, to make sure we can express
# dependencies while processing.
option(SECP256K1_ENABLE_MODULE_ECDH "Enable ECDH module." ON)
if(SECP256K1_ENABLE_MODULE_ECDH)
add_compile_definitions(ENABLE_MODULE_ECDH=1)
option(SECP256K1_ENABLE_MODULE_RECOVERY "Enable ECDSA pubkey recovery module." OFF)
option(SECP256K1_ENABLE_MODULE_EXTRAKEYS "Enable extrakeys module." ON)
option(SECP256K1_ENABLE_MODULE_SCHNORRSIG "Enable schnorrsig module." ON)
option(SECP256K1_ENABLE_MODULE_MUSIG "Enable musig module." ON)
option(SECP256K1_ENABLE_MODULE_ELLSWIFT "Enable ElligatorSwift module." ON)
# Processing must be done in a topological sorting of the dependency graph
# (dependent module first).
if(SECP256K1_ENABLE_MODULE_ELLSWIFT)
add_compile_definitions(ENABLE_MODULE_ELLSWIFT=1)
endif()
if(SECP256K1_ENABLE_MODULE_MUSIG)
if(DEFINED SECP256K1_ENABLE_MODULE_SCHNORRSIG AND NOT SECP256K1_ENABLE_MODULE_SCHNORRSIG)
message(FATAL_ERROR "Module dependency error: You have disabled the schnorrsig module explicitly, but it is required by the musig module.")
endif()
set(SECP256K1_ENABLE_MODULE_SCHNORRSIG ON)
add_compile_definitions(ENABLE_MODULE_MUSIG=1)
endif()
if(SECP256K1_ENABLE_MODULE_SCHNORRSIG)
if(DEFINED SECP256K1_ENABLE_MODULE_EXTRAKEYS AND NOT SECP256K1_ENABLE_MODULE_EXTRAKEYS)
message(FATAL_ERROR "Module dependency error: You have disabled the extrakeys module explicitly, but it is required by the schnorrsig module.")
endif()
set(SECP256K1_ENABLE_MODULE_EXTRAKEYS ON)
add_compile_definitions(ENABLE_MODULE_SCHNORRSIG=1)
endif()
if(SECP256K1_ENABLE_MODULE_EXTRAKEYS)
add_compile_definitions(ENABLE_MODULE_EXTRAKEYS=1)
endif()
option(SECP256K1_ENABLE_MODULE_RECOVERY "Enable ECDSA pubkey recovery module." OFF)
if(SECP256K1_ENABLE_MODULE_RECOVERY)
add_compile_definitions(ENABLE_MODULE_RECOVERY=1)
endif()
option(SECP256K1_ENABLE_MODULE_EXTRAKEYS "Enable extrakeys module." ON)
option(SECP256K1_ENABLE_MODULE_SCHNORRSIG "Enable schnorrsig module." ON)
if(SECP256K1_ENABLE_MODULE_SCHNORRSIG)
set(SECP256K1_ENABLE_MODULE_EXTRAKEYS ON)
add_compile_definitions(ENABLE_MODULE_SCHNORRSIG=1)
endif()
if(SECP256K1_ENABLE_MODULE_EXTRAKEYS)
add_compile_definitions(ENABLE_MODULE_EXTRAKEYS=1)
if(SECP256K1_ENABLE_MODULE_ECDH)
add_compile_definitions(ENABLE_MODULE_ECDH=1)
endif()
option(SECP256K1_USE_EXTERNAL_DEFAULT_CALLBACKS "Enable external default callback functions." OFF)
@@ -76,22 +102,25 @@ if(SECP256K1_USE_EXTERNAL_DEFAULT_CALLBACKS)
add_compile_definitions(USE_EXTERNAL_DEFAULT_CALLBACKS=1)
endif()
set(SECP256K1_ECMULT_WINDOW_SIZE "AUTO" CACHE STRING "Window size for ecmult precomputation for verification, specified as integer in range [2..24]. \"AUTO\" is a reasonable setting for desktop machines (currently 15). [default=AUTO]")
set_property(CACHE SECP256K1_ECMULT_WINDOW_SIZE PROPERTY STRINGS "AUTO" 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24)
set(SECP256K1_ECMULT_WINDOW_SIZE 15 CACHE STRING "Window size for ecmult precomputation for verification, specified as integer in range [2..24]. The default value is a reasonable setting for desktop machines (currently 15). [default=15]")
set_property(CACHE SECP256K1_ECMULT_WINDOW_SIZE PROPERTY STRINGS 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24)
include(CheckStringOptionValue)
check_string_option_value(SECP256K1_ECMULT_WINDOW_SIZE)
if(SECP256K1_ECMULT_WINDOW_SIZE STREQUAL "AUTO")
set(SECP256K1_ECMULT_WINDOW_SIZE 15)
endif()
add_compile_definitions(ECMULT_WINDOW_SIZE=${SECP256K1_ECMULT_WINDOW_SIZE})
set(SECP256K1_ECMULT_GEN_PREC_BITS "AUTO" CACHE STRING "Precision bits to tune the precomputed table size for signing, specified as integer 2, 4 or 8. \"AUTO\" is a reasonable setting for desktop machines (currently 4). [default=AUTO]")
set_property(CACHE SECP256K1_ECMULT_GEN_PREC_BITS PROPERTY STRINGS "AUTO" 2 4 8)
check_string_option_value(SECP256K1_ECMULT_GEN_PREC_BITS)
if(SECP256K1_ECMULT_GEN_PREC_BITS STREQUAL "AUTO")
set(SECP256K1_ECMULT_GEN_PREC_BITS 4)
set(SECP256K1_ECMULT_GEN_KB 86 CACHE STRING "The size of the precomputed table for signing in multiples of 1024 bytes (on typical platforms). Larger values result in possibly better signing or key generation performance at the cost of a larger table. Valid choices are 2, 22, 86. The default value is a reasonable setting for desktop machines (currently 86). [default=86]")
set_property(CACHE SECP256K1_ECMULT_GEN_KB PROPERTY STRINGS 2 22 86)
check_string_option_value(SECP256K1_ECMULT_GEN_KB)
if(SECP256K1_ECMULT_GEN_KB EQUAL 2)
add_compile_definitions(COMB_BLOCKS=2)
add_compile_definitions(COMB_TEETH=5)
elseif(SECP256K1_ECMULT_GEN_KB EQUAL 22)
add_compile_definitions(COMB_BLOCKS=11)
add_compile_definitions(COMB_TEETH=6)
elseif(SECP256K1_ECMULT_GEN_KB EQUAL 86)
add_compile_definitions(COMB_BLOCKS=43)
add_compile_definitions(COMB_TEETH=6)
endif()
add_compile_definitions(ECMULT_GEN_PREC_BITS=${SECP256K1_ECMULT_GEN_PREC_BITS})
set(SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY "OFF" CACHE STRING "Test-only override of the (autodetected by the C code) \"widemul\" setting. Legal values are: \"OFF\", \"int128_struct\", \"int128\" or \"int64\". [default=OFF]")
set_property(CACHE SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY PROPERTY STRINGS "OFF" "int128_struct" "int128" "int64")
@@ -102,7 +131,7 @@ if(SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY)
endif()
mark_as_advanced(FORCE SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY)
set(SECP256K1_ASM "AUTO" CACHE STRING "Assembly optimizations to use: \"AUTO\", \"OFF\", \"x86_64\" or \"arm32\" (experimental). [default=AUTO]")
set(SECP256K1_ASM "AUTO" CACHE STRING "Assembly to use: \"AUTO\", \"OFF\", \"x86_64\" or \"arm32\" (experimental). [default=AUTO]")
set_property(CACHE SECP256K1_ASM PROPERTY STRINGS "AUTO" "OFF" "x86_64" "arm32")
check_string_option_value(SECP256K1_ASM)
if(SECP256K1_ASM STREQUAL "arm32")
@@ -112,7 +141,7 @@ if(SECP256K1_ASM STREQUAL "arm32")
if(HAVE_ARM32_ASM)
add_compile_definitions(USE_EXTERNAL_ASM=1)
else()
message(FATAL_ERROR "ARM32 assembly optimization requested but not available.")
message(FATAL_ERROR "ARM32 assembly requested but not available.")
endif()
elseif(SECP256K1_ASM)
include(CheckX86_64Assembly)
@@ -123,14 +152,14 @@ elseif(SECP256K1_ASM)
elseif(SECP256K1_ASM STREQUAL "AUTO")
set(SECP256K1_ASM "OFF")
else()
message(FATAL_ERROR "x86_64 assembly optimization requested but not available.")
message(FATAL_ERROR "x86_64 assembly requested but not available.")
endif()
endif()
option(SECP256K1_EXPERIMENTAL "Allow experimental configuration options." OFF)
if(NOT SECP256K1_EXPERIMENTAL)
if(SECP256K1_ASM STREQUAL "arm32")
message(FATAL_ERROR "ARM32 assembly optimization is experimental. Use -DSECP256K1_EXPERIMENTAL=ON to allow.")
message(FATAL_ERROR "ARM32 assembly is experimental. Use -DSECP256K1_EXPERIMENTAL=ON to allow.")
endif()
endif()
@@ -167,7 +196,7 @@ else()
string(REGEX REPLACE "-DNDEBUG[ \t\r\n]*" "" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
string(REGEX REPLACE "-DNDEBUG[ \t\r\n]*" "" CMAKE_C_FLAGS_MINSIZEREL "${CMAKE_C_FLAGS_MINSIZEREL}")
# Prefer -O2 optimization level. (-O3 is CMake's default for Release for many compilers.)
string(REGEX REPLACE "-O3[ \t\r\n]*" "-O2" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
string(REGEX REPLACE "-O3( |$)" "-O2\\1" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
endif()
# Define custom "Coverage" build type.
@@ -189,31 +218,37 @@ mark_as_advanced(
CMAKE_SHARED_LINKER_FLAGS_COVERAGE
)
get_property(is_multi_config GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(default_build_type "RelWithDebInfo")
if(is_multi_config)
set(CMAKE_CONFIGURATION_TYPES "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage" CACHE STRING
"Supported configuration types."
FORCE
)
else()
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY
STRINGS "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage"
)
if(NOT CMAKE_BUILD_TYPE)
message(STATUS "Setting build type to \"${default_build_type}\" as none was specified")
set(CMAKE_BUILD_TYPE "${default_build_type}" CACHE STRING
"Choose the type of build."
if(PROJECT_IS_TOP_LEVEL)
get_property(is_multi_config GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(default_build_type "RelWithDebInfo")
if(is_multi_config)
set(CMAKE_CONFIGURATION_TYPES "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage" CACHE STRING
"Supported configuration types."
FORCE
)
else()
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY
STRINGS "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage"
)
if(NOT CMAKE_BUILD_TYPE)
message(STATUS "Setting build type to \"${default_build_type}\" as none was specified")
set(CMAKE_BUILD_TYPE "${default_build_type}" CACHE STRING
"Choose the type of build."
FORCE
)
endif()
endif()
endif()
include(TryAppendCFlags)
if(MSVC)
# Keep the following commands ordered lexicographically.
try_append_c_flags(/W2) # Moderate warning level.
try_append_c_flags(/W3) # Production quality warning level.
try_append_c_flags(/wd4146) # Disable warning C4146 "unary minus operator applied to unsigned type, result still unsigned".
try_append_c_flags(/wd4244) # Disable warning C4244 "'conversion' conversion from 'type1' to 'type2', possible loss of data".
try_append_c_flags(/wd4267) # Disable warning C4267 "'var' : conversion from 'size_t' to 'type', possible loss of data".
# Eliminate deprecation warnings for the older, less secure functions.
add_compile_definitions(_CRT_SECURE_NO_WARNINGS)
else()
# Keep the following commands ordered lexicographically.
try_append_c_flags(-pedantic)
@@ -234,17 +269,41 @@ endif()
set(CMAKE_C_VISIBILITY_PRESET hidden)
# Ask CTest to create a "check" target (e.g., make check) as alias for the "test" target.
# CTEST_TEST_TARGET_ALIAS is not documented but supposed to be user-facing.
# See: https://gitlab.kitware.com/cmake/cmake/-/commit/816c9d1aa1f2b42d40c81a991b68c96eb12b6d2
set(CTEST_TEST_TARGET_ALIAS check)
include(CTest)
# We do not use CTest's BUILD_TESTING because a single toggle for all tests is too coarse for our needs.
mark_as_advanced(BUILD_TESTING)
if(SECP256K1_BUILD_BENCHMARK OR SECP256K1_BUILD_TESTS OR SECP256K1_BUILD_EXHAUSTIVE_TESTS OR SECP256K1_BUILD_CTIME_TESTS OR SECP256K1_BUILD_EXAMPLES)
enable_testing()
set(print_msan_notice)
if(SECP256K1_BUILD_CTIME_TESTS)
include(CheckMemorySanitizer)
check_memory_sanitizer(msan_enabled)
if(msan_enabled)
try_append_c_flags(-fno-sanitize-memory-param-retval)
set(print_msan_notice YES)
endif()
unset(msan_enabled)
endif()
set(SECP256K1_APPEND_CFLAGS "" CACHE STRING "Compiler flags that are appended to the command line after all other flags added by the build system. This variable is intended for debugging and special builds.")
if(SECP256K1_APPEND_CFLAGS)
# Appending to this low-level rule variable is the only way to
# guarantee that the flags appear at the end of the command line.
string(APPEND CMAKE_C_COMPILE_OBJECT " ${SECP256K1_APPEND_CFLAGS}")
endif()
set(SECP256K1_APPEND_LDFLAGS "" CACHE STRING "Linker flags that are appended to the command line after all other flags added by the build system. This variable is intended for debugging and special builds.")
if(SECP256K1_APPEND_LDFLAGS)
# Appending to this low-level rule variable is the only way to
# guarantee that the flags appear at the end of the command line.
string(APPEND CMAKE_C_CREATE_SHARED_LIBRARY " ${SECP256K1_APPEND_LDFLAGS}")
string(APPEND CMAKE_C_LINK_EXECUTABLE " ${SECP256K1_APPEND_LDFLAGS}")
endif()
if(NOT CMAKE_RUNTIME_OUTPUT_DIRECTORY)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/bin)
endif()
if(NOT CMAKE_LIBRARY_OUTPUT_DIRECTORY)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib)
endif()
if(NOT CMAKE_ARCHIVE_OUTPUT_DIRECTORY)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib)
endif()
add_subdirectory(src)
if(SECP256K1_BUILD_EXAMPLES)
add_subdirectory(examples)
@@ -266,11 +325,13 @@ message(" ECDH ................................ ${SECP256K1_ENABLE_MODULE_ECDH}
message(" ECDSA pubkey recovery ............... ${SECP256K1_ENABLE_MODULE_RECOVERY}")
message(" extrakeys ........................... ${SECP256K1_ENABLE_MODULE_EXTRAKEYS}")
message(" schnorrsig .......................... ${SECP256K1_ENABLE_MODULE_SCHNORRSIG}")
message(" musig ............................... ${SECP256K1_ENABLE_MODULE_MUSIG}")
message(" ElligatorSwift ...................... ${SECP256K1_ENABLE_MODULE_ELLSWIFT}")
message("Parameters:")
message(" ecmult window size .................. ${SECP256K1_ECMULT_WINDOW_SIZE}")
message(" ecmult gen precision bits ........... ${SECP256K1_ECMULT_GEN_PREC_BITS}")
message(" ecmult gen table size ............... ${SECP256K1_ECMULT_GEN_KB} KiB")
message("Optional features:")
message(" assembly optimization ............... ${SECP256K1_ASM}")
message(" assembly ............................ ${SECP256K1_ASM}")
message(" external callbacks .................. ${SECP256K1_USE_EXTERNAL_DEFAULT_CALLBACKS}")
if(SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY)
message(" wide multiplication (test-only) ..... ${SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY}")
@@ -297,7 +358,7 @@ message("Valgrind .............................. ${SECP256K1_VALGRIND}")
get_directory_property(definitions COMPILE_DEFINITIONS)
string(REPLACE ";" " " definitions "${definitions}")
message("Preprocessor defined macros ........... ${definitions}")
message("C compiler ............................ ${CMAKE_C_COMPILER}")
message("C compiler ............................ ${CMAKE_C_COMPILER_ID} ${CMAKE_C_COMPILER_VERSION}, ${CMAKE_C_COMPILER}")
message("CFLAGS ................................ ${CMAKE_C_FLAGS}")
get_directory_property(compile_options COMPILE_OPTIONS)
string(REPLACE ";" " " compile_options "${compile_options}")
@@ -320,7 +381,20 @@ else()
message(" - LDFLAGS for executables ............ ${CMAKE_EXE_LINKER_FLAGS_DEBUG}")
message(" - LDFLAGS for shared libraries ....... ${CMAKE_SHARED_LINKER_FLAGS_DEBUG}")
endif()
message("\n")
if(SECP256K1_APPEND_CFLAGS)
message("SECP256K1_APPEND_CFLAGS ............... ${SECP256K1_APPEND_CFLAGS}")
endif()
if(SECP256K1_APPEND_LDFLAGS)
message("SECP256K1_APPEND_LDFLAGS .............. ${SECP256K1_APPEND_LDFLAGS}")
endif()
message("")
if(print_msan_notice)
message(
"Note:\n"
" MemorySanitizer detected, tried to add -fno-sanitize-memory-param-retval to compile options\n"
" to avoid false positives in ctime_tests. Pass -DSECP256K1_BUILD_CTIME_TESTS=OFF to avoid this.\n"
)
endif()
if(SECP256K1_EXPERIMENTAL)
message(
" ******\n"

108
external/secp256k1/CONTRIBUTING.md vendored Normal file

@@ -0,0 +1,108 @@
# Contributing to libsecp256k1
## Scope
libsecp256k1 is a library for elliptic curve cryptography on the curve secp256k1, not a general-purpose cryptography library.
The library primarily serves the needs of the Bitcoin Core project but provides additional functionality for the benefit of the wider Bitcoin ecosystem.
## Adding new functionality or modules
The libsecp256k1 project welcomes contributions in the form of new functionality or modules, provided they are within the project's scope.
It is the responsibility of the contributors to convince the maintainers that the proposed functionality is within the project's scope, high-quality and maintainable.
Contributors are recommended to provide the following in addition to the new code:
* **Specification:**
A specification can help significantly in reviewing the new code as it provides documentation and context.
It may justify various design decisions, give a motivation and outline security goals.
If the specification contains pseudocode, a reference implementation or test vectors, these can be used to compare with the proposed libsecp256k1 code.
* **Security Arguments:**
In addition to defining the security goals, it should be argued that the new functionality meets these goals.
Depending on the nature of the new functionality, a wide range of security arguments are acceptable, ranging from being "obviously secure" to rigorous proofs of security.
* **Relevance Arguments:**
The relevance of the new functionality for the Bitcoin ecosystem should be argued by outlining clear use cases.
These are not the only factors taken into account when considering whether to add new functionality.
The proposed new libsecp256k1 code must be of high quality, including API documentation and tests, as well as featuring a misuse-resistant API design.
We recommend reaching out to other contributors (see [Communication Channels](#communication-channels)) and getting feedback before implementing new functionality.
## Communication channels
Most communication about libsecp256k1 occurs on the GitHub repository: in issues, pull requests, or on the discussion board.
Additionally, there is an IRC channel dedicated to libsecp256k1, with biweekly meetings (see channel topic).
The channel is `#secp256k1` on Libera Chat.
The easiest way to participate on IRC is with the web client, [web.libera.chat](https://web.libera.chat/#secp256k1).
Chat history logs can be found at https://gnusha.org/secp256k1/.
## Contributor workflow & peer review
The Contributor Workflow & Peer Review in libsecp256k1 are similar to Bitcoin Core's workflow and review processes, as described in its [CONTRIBUTING.md](https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md).
### Coding conventions
In addition, libsecp256k1 tries to maintain the following coding conventions:
* No runtime heap allocation (e.g., no `malloc`) unless explicitly requested by the caller (via `secp256k1_context_create` or `secp256k1_scratch_space_create`, for example). Moreover, it should be possible to use the library without any heap allocations.
* The tests should cover all lines and branches of the library (see [Test coverage](#coverage)).
* Operations involving secret data should be tested for being constant time with respect to the secrets (see [src/ctime_tests.c](src/ctime_tests.c)).
* Local variables containing secret data should be cleared explicitly to try to delete secrets from memory.
* Use `secp256k1_memcmp_var` instead of `memcmp` (see [#823](https://github.com/bitcoin-core/secp256k1/issues/823)).
* As a rule of thumb, the default values for configuration options should target standard desktop machines and align with Bitcoin Core's defaults, and the tests should mostly exercise the default configuration (see [#1549](https://github.com/bitcoin-core/secp256k1/issues/1549#issuecomment-2200559257)).
#### Style conventions
* Commits should be atomic and diffs should be easy to read. For this reason, do not mix any formatting fixes or code moves with actual code changes. Make sure each individual commit is hygienic: that it builds successfully on its own without warnings, errors, regressions, or test failures.
* New code should adhere to the style of existing, in particular surrounding, code. Other than that, we do not enforce strict rules for code formatting.
* The code conforms to C89. Most notably, that means that only `/* ... */` comments are allowed (no `//` line comments). Moreover, any declarations in a `{ ... }` block (e.g., a function) must appear at the beginning of the block before any statements. When you would like to declare a variable in the middle of a block, you can open a new block:
```C
void secp256k_foo(void) {
unsigned int x; /* declaration */
int y = 2*x; /* declaration */
x = 17; /* statement */
{
int a, b; /* declaration */
a = x + y; /* statement */
secp256k_bar(x, &b); /* statement */
}
}
```
* Use `unsigned int` instead of just `unsigned`.
* Use `void *ptr` instead of `void* ptr`.
* Arguments of the publicly-facing API must have a specific order defined in [include/secp256k1.h](include/secp256k1.h).
* User-facing comment lines in headers should be limited to 80 chars if possible.
* All identifiers in file scope should start with `secp256k1_`.
* Avoid trailing whitespace.
### Tests
#### Coverage
This library aims to have full coverage of reachable lines and branches.
To create a test coverage report, configure with `--enable-coverage` (use of GCC is necessary):
$ ./configure --enable-coverage
Run the tests:
$ make check
To create a report, `gcovr` is recommended, as it includes branch coverage reporting:
$ gcovr --exclude 'src/bench*' --print-summary
To create an HTML report with coloured and annotated source code:
$ mkdir -p coverage
$ gcovr --exclude 'src/bench*' --html --html-details -o coverage/coverage.html
#### Exhaustive tests
There are tests of several functions in which a small group replaces secp256k1.
These tests are *exhaustive* since they provide all elements and scalars of the small group as input arguments (see [src/tests_exhaustive.c](src/tests_exhaustive.c)).
### Benchmarks
See `src/bench*.c` for examples of benchmarks.


@@ -37,7 +37,6 @@ noinst_HEADERS += src/field_10x26_impl.h
noinst_HEADERS += src/field_5x52.h
noinst_HEADERS += src/field_5x52_impl.h
noinst_HEADERS += src/field_5x52_int128_impl.h
noinst_HEADERS += src/field_5x52_asm_impl.h
noinst_HEADERS += src/modinv32.h
noinst_HEADERS += src/modinv32_impl.h
noinst_HEADERS += src/modinv64.h
@@ -46,6 +45,7 @@ noinst_HEADERS += src/precomputed_ecmult.h
noinst_HEADERS += src/precomputed_ecmult_gen.h
noinst_HEADERS += src/assumptions.h
noinst_HEADERS += src/checkmem.h
noinst_HEADERS += src/testutil.h
noinst_HEADERS += src/util.h
noinst_HEADERS += src/int128.h
noinst_HEADERS += src/int128_impl.h
@@ -64,6 +64,8 @@ noinst_HEADERS += src/field.h
noinst_HEADERS += src/field_impl.h
noinst_HEADERS += src/bench.h
noinst_HEADERS += src/wycheproof/ecdsa_secp256k1_sha256_bitcoin_test.h
noinst_HEADERS += src/hsort.h
noinst_HEADERS += src/hsort_impl.h
noinst_HEADERS += contrib/lax_der_parsing.h
noinst_HEADERS += contrib/lax_der_parsing.c
noinst_HEADERS += contrib/lax_der_privatekey_parsing.h
@@ -153,7 +155,7 @@ endif
if USE_EXAMPLES
noinst_PROGRAMS += ecdsa_example
ecdsa_example_SOURCES = examples/ecdsa.c
ecdsa_example_CPPFLAGS = -I$(top_srcdir)/include
ecdsa_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
ecdsa_example_LDADD = libsecp256k1.la
ecdsa_example_LDFLAGS = -static
if BUILD_WINDOWS
@@ -163,7 +165,7 @@ TESTS += ecdsa_example
if ENABLE_MODULE_ECDH
noinst_PROGRAMS += ecdh_example
ecdh_example_SOURCES = examples/ecdh.c
ecdh_example_CPPFLAGS = -I$(top_srcdir)/include
ecdh_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
ecdh_example_LDADD = libsecp256k1.la
ecdh_example_LDFLAGS = -static
if BUILD_WINDOWS
@@ -174,7 +176,7 @@ endif
if ENABLE_MODULE_SCHNORRSIG
noinst_PROGRAMS += schnorr_example
schnorr_example_SOURCES = examples/schnorr.c
schnorr_example_CPPFLAGS = -I$(top_srcdir)/include
schnorr_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
schnorr_example_LDADD = libsecp256k1.la
schnorr_example_LDFLAGS = -static
if BUILD_WINDOWS
@@ -182,6 +184,28 @@ schnorr_example_LDFLAGS += -lbcrypt
endif
TESTS += schnorr_example
endif
if ENABLE_MODULE_ELLSWIFT
noinst_PROGRAMS += ellswift_example
ellswift_example_SOURCES = examples/ellswift.c
ellswift_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
ellswift_example_LDADD = libsecp256k1.la
ellswift_example_LDFLAGS = -static
if BUILD_WINDOWS
ellswift_example_LDFLAGS += -lbcrypt
endif
TESTS += ellswift_example
endif
if ENABLE_MODULE_MUSIG
noinst_PROGRAMS += musig_example
musig_example_SOURCES = examples/musig.c
musig_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
musig_example_LDADD = libsecp256k1.la
musig_example_LDFLAGS = -static
if BUILD_WINDOWS
musig_example_LDFLAGS += -lbcrypt
endif
TESTS += musig_example
endif
endif
### Precomputed tables
@@ -189,11 +213,11 @@ EXTRA_PROGRAMS = precompute_ecmult precompute_ecmult_gen
CLEANFILES = $(EXTRA_PROGRAMS)
precompute_ecmult_SOURCES = src/precompute_ecmult.c
precompute_ecmult_CPPFLAGS = $(SECP_CONFIG_DEFINES)
precompute_ecmult_CPPFLAGS = $(SECP_CONFIG_DEFINES) -DVERIFY
precompute_ecmult_LDADD = $(COMMON_LIB)
precompute_ecmult_gen_SOURCES = src/precompute_ecmult_gen.c
precompute_ecmult_gen_CPPFLAGS = $(SECP_CONFIG_DEFINES)
precompute_ecmult_gen_CPPFLAGS = $(SECP_CONFIG_DEFINES) -DVERIFY
precompute_ecmult_gen_LDADD = $(COMMON_LIB)
# See Automake manual, Section "Errors with distclean".
@@ -241,6 +265,7 @@ maintainer-clean-local: clean-testvectors
### Additional files to distribute
EXTRA_DIST = autogen.sh CHANGELOG.md SECURITY.md
EXTRA_DIST += doc/release-process.md doc/safegcd_implementation.md
EXTRA_DIST += doc/ellswift.md doc/musig.md
EXTRA_DIST += examples/EXAMPLES_COPYING
EXTRA_DIST += sage/gen_exhaustive_groups.sage
EXTRA_DIST += sage/gen_split_lambda_constants.sage
@@ -267,3 +292,11 @@ endif
if ENABLE_MODULE_SCHNORRSIG
include src/modules/schnorrsig/Makefile.am.include
endif
if ENABLE_MODULE_MUSIG
include src/modules/musig/Makefile.am.include
endif
if ENABLE_MODULE_ELLSWIFT
include src/modules/ellswift/Makefile.am.include
endif


@@ -1,11 +1,10 @@
libsecp256k1
============
[![Build Status](https://api.cirrus-ci.com/github/bitcoin-core/secp256k1.svg?branch=master)](https://cirrus-ci.com/github/bitcoin-core/secp256k1)
![Dependencies: None](https://img.shields.io/badge/dependencies-none-success)
[![irc.libera.chat #secp256k1](https://img.shields.io/badge/irc.libera.chat-%23secp256k1-success)](https://web.libera.chat/#secp256k1)
Optimized C library for ECDSA signatures and secret/public key operations on curve secp256k1.
High-performance high-assurance C library for digital signatures and other cryptographic primitives on the secp256k1 elliptic curve.
This library is intended to be the highest quality publicly available library for cryptography on the secp256k1 curve. However, the primary focus of its development has been for usage in the Bitcoin system and usage unlike Bitcoin's may be less well tested, verified, or suffer from a less well thought out interface. Correct usage requires some care and consideration that the library is fit for your application's purpose.
@@ -21,6 +20,8 @@ Features:
* Optional module for public key recovery.
* Optional module for ECDH key exchange.
* Optional module for Schnorr signatures according to [BIP-340](https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki).
* Optional module for ElligatorSwift key exchange according to [BIP-324](https://github.com/bitcoin/bips/blob/master/bip-0324.mediawiki).
* Optional module for MuSig2 Schnorr multi-signatures according to [BIP-327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki).
Implementation details
----------------------
@@ -34,7 +35,7 @@ Implementation details
* Expose only higher level interfaces to minimize the API surface and improve application security. ("Be difficult to use insecurely.")
* Field operations
* Optimized implementation of arithmetic modulo the curve's field size (2^256 - 0x1000003D1).
* Using 5 52-bit limbs (including hand-optimized assembly for x86_64, by Diederik Huys).
* Using 5 52-bit limbs
* Using 10 26-bit limbs (including hand-optimized assembly for 32-bit ARM, by Wladimir J. van der Laan).
* This is an experimental feature that has not received enough scrutiny to satisfy the standard of quality of this library but is made available for testing and review by the community.
* Scalar operations
@@ -80,9 +81,9 @@ To maintain a pristine source tree, CMake encourages to perform an out-of-source
$ mkdir build && cd build
$ cmake ..
$ make
$ make check # run the test suite
$ sudo make install # optional
$ cmake --build .
$ ctest # run the test suite
$ sudo cmake --install . # optional
To compile optional modules (such as Schnorr signatures), you need to run `cmake` with additional flags (such as `-DSECP256K1_ENABLE_MODULE_SCHNORRSIG=ON`). Run `cmake .. -LH` to see the full list of available flags.
@@ -114,31 +115,10 @@ Usage examples can be found in the [examples](examples) directory. To compile th
* [ECDSA example](examples/ecdsa.c)
* [Schnorr signatures example](examples/schnorr.c)
* [Deriving a shared secret (ECDH) example](examples/ecdh.c)
* [ElligatorSwift key exchange example](examples/ellswift.c)
To compile the Schnorr signature and ECDH examples, you also need to configure with `--enable-module-schnorrsig` and `--enable-module-ecdh`.
Test coverage
-----------
This library aims to have full coverage of the reachable lines and branches.
To create a test coverage report, configure with `--enable-coverage` (use of GCC is necessary):
$ ./configure --enable-coverage
Run the tests:
$ make check
To create a report, `gcovr` is recommended, as it includes branch coverage reporting:
$ gcovr --exclude 'src/bench*' --print-summary
To create a HTML report with coloured and annotated source code:
$ mkdir -p coverage
$ gcovr --exclude 'src/bench*' --html --html-details -o coverage/coverage.html
Benchmark
------------
If configured with `--enable-benchmark` (which is the default), binaries for benchmarking the libsecp256k1 functions will be present in the root directory after the build.
@@ -155,3 +135,8 @@ Reporting a vulnerability
------------
See [SECURITY.md](SECURITY.md)
Contributing to libsecp256k1
------------
See [CONTRIBUTING.md](CONTRIBUTING.md)

View File

@@ -45,6 +45,22 @@ fi
AC_MSG_RESULT($has_valgrind)
])
AC_DEFUN([SECP_MSAN_CHECK], [
AC_MSG_CHECKING(whether MemorySanitizer is enabled)
AC_COMPILE_IFELSE([AC_LANG_SOURCE([[
#if defined(__has_feature)
# if __has_feature(memory_sanitizer)
/* MemorySanitizer is enabled. */
# elif
# error "MemorySanitizer is disabled."
# endif
#else
# error "__has_feature is not defined."
#endif
]])], [msan_enabled=yes], [msan_enabled=no])
AC_MSG_RESULT([$msan_enabled])
])
dnl SECP_TRY_APPEND_CFLAGS(flags, VAR)
dnl Append flags to VAR if CC accepts them.
AC_DEFUN([SECP_TRY_APPEND_CFLAGS], [

View File

@@ -4,19 +4,21 @@ set -eux
export LC_ALL=C
# Print relevant CI environment to allow reproducing the job outside of CI.
# Print commit and relevant CI environment to allow reproducing the job outside of CI.
git show --no-patch
print_environment() {
# Turn off -x because it messes up the output
set +x
# There are many ways to print variable names and their content. This one
# does not rely on bash.
for var in WERROR_CFLAGS MAKEFLAGS BUILD \
ECMULTWINDOW ECMULTGENPRECISION ASM WIDEMUL WITH_VALGRIND EXTRAFLAGS \
EXPERIMENTAL ECDH RECOVERY SCHNORRSIG \
ECMULTWINDOW ECMULTGENKB ASM WIDEMUL WITH_VALGRIND EXTRAFLAGS \
EXPERIMENTAL ECDH RECOVERY EXTRAKEYS MUSIG SCHNORRSIG ELLSWIFT \
SECP256K1_TEST_ITERS BENCH SECP256K1_BENCH_ITERS CTIMETESTS\
EXAMPLES \
HOST WRAPPER_CMD \
CC CFLAGS CPPFLAGS AR NM
CC CFLAGS CPPFLAGS AR NM \
UBSAN_OPTIONS ASAN_OPTIONS LSAN_OPTIONS
do
eval "isset=\${$var+x}"
if [ -n "$isset" ]; then
@@ -30,19 +32,15 @@ print_environment() {
}
print_environment
# Start persistent wineserver if necessary.
# This speeds up jobs with many invocations of wine (e.g., ./configure with MSVC) tremendously.
case "$WRAPPER_CMD" in
*wine*)
# Make sure to shutdown wineserver whenever we exit.
trap "wineserver -k || true" EXIT INT HUP
# This is apparently only reliable when we run a dummy command such as "hh.exe" afterwards.
wineserver -p && wine hh.exe
env >> test_env.log
# If gcc is requested, assert that it's in fact gcc (and not some symlinked Apple clang).
case "${CC:-undefined}" in
*gcc*)
$CC -v 2>&1 | grep -q "gcc version" || exit 1;
;;
esac
env >> test_env.log
if [ -n "${CC+x}" ]; then
# The MSVC compiler "cl" doesn't understand "-v"
$CC -v || true
@@ -54,22 +52,55 @@ if [ -n "$WRAPPER_CMD" ]; then
$WRAPPER_CMD --version
fi
# Workaround for https://bugs.kde.org/show_bug.cgi?id=452758 (fixed in valgrind 3.20.0).
case "${CC:-undefined}" in
clang*)
if [ "$CTIMETESTS" = "yes" ] && [ "$WITH_VALGRIND" = "yes" ]
then
export CFLAGS="${CFLAGS:+$CFLAGS }-gdwarf-4"
else
case "$WRAPPER_CMD" in
valgrind*)
export CFLAGS="${CFLAGS:+$CFLAGS }-gdwarf-4"
;;
esac
fi
;;
esac
./autogen.sh
./configure \
--enable-experimental="$EXPERIMENTAL" \
--with-test-override-wide-multiply="$WIDEMUL" --with-asm="$ASM" \
--with-ecmult-window="$ECMULTWINDOW" \
--with-ecmult-gen-precision="$ECMULTGENPRECISION" \
--with-ecmult-gen-kb="$ECMULTGENKB" \
--enable-module-ecdh="$ECDH" --enable-module-recovery="$RECOVERY" \
--enable-module-ellswift="$ELLSWIFT" \
--enable-module-extrakeys="$EXTRAKEYS" \
--enable-module-schnorrsig="$SCHNORRSIG" \
--enable-module-musig="$MUSIG" \
--enable-examples="$EXAMPLES" \
--enable-ctime-tests="$CTIMETESTS" \
--with-valgrind="$WITH_VALGRIND" \
--host="$HOST" $EXTRAFLAGS
# We have set "-j<n>" in MAKEFLAGS.
make
build_exit_code=0
make > make.log 2>&1 || build_exit_code=$?
cat make.log
if [ $build_exit_code -ne 0 ]; then
case "${CC:-undefined}" in
*snapshot*)
# Ignore internal compiler errors in gcc-snapshot and clang-snapshot
grep -e "internal compiler error:" -e "PLEASE submit a bug report" make.log
return $?;
;;
*)
return 1;
;;
esac
fi
# Print information about binaries so that we can see that the architecture is correct
file *tests* || true

View File

@@ -1,4 +1,17 @@
FROM debian:stable
FROM debian:stable-slim
SHELL ["/bin/bash", "-c"]
WORKDIR /root
# A too high maximum number of file descriptors (with the default value
# inherited from the docker host) can cause issues with some of our tools:
# - sanitizers hanging: https://github.com/google/sanitizers/issues/1662
# - valgrind crashing: https://stackoverflow.com/a/75293014
# This is not a problem on our CI hosts, but developers who run the image
# on their machines may run into this (e.g., on Arch Linux), so warn them.
# (Note that .bashrc is only executed in interactive bash shells.)
RUN echo 'if [[ $(ulimit -n) -gt 200000 ]]; then echo "WARNING: Very high value reported by \"ulimit -n\". Consider passing \"--ulimit nofile=32768\" to \"docker run\"."; fi' >> /root/.bashrc
RUN dpkg --add-architecture i386 && \
dpkg --add-architecture s390x && \
@@ -11,27 +24,56 @@ RUN dpkg --add-architecture i386 && \
RUN apt-get update && apt-get install --no-install-recommends -y \
git ca-certificates \
make automake libtool pkg-config dpkg-dev valgrind qemu-user \
gcc clang llvm libc6-dbg \
gcc clang llvm libclang-rt-dev libc6-dbg \
g++ \
gcc-i686-linux-gnu libc6-dev-i386-cross libc6-dbg:i386 libubsan1:i386 libasan6:i386 \
gcc-i686-linux-gnu libc6-dev-i386-cross libc6-dbg:i386 libubsan1:i386 libasan8:i386 \
gcc-s390x-linux-gnu libc6-dev-s390x-cross libc6-dbg:s390x \
gcc-arm-linux-gnueabihf libc6-dev-armhf-cross libc6-dbg:armhf \
gcc-aarch64-linux-gnu libc6-dev-arm64-cross libc6-dbg:arm64 \
gcc-powerpc64le-linux-gnu libc6-dev-ppc64el-cross libc6-dbg:ppc64el \
gcc-mingw-w64-x86-64-win32 wine64 wine \
gcc-mingw-w64-i686-win32 wine32 \
sagemath
python3 && \
if ! ( dpkg --print-architecture | grep --quiet "arm64" ) ; then \
apt-get install --no-install-recommends -y \
gcc-aarch64-linux-gnu libc6-dev-arm64-cross libc6-dbg:arm64 ;\
fi && \
apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /root
# The "wine" package provides a convience wrapper that we need
RUN apt-get update && apt-get install --no-install-recommends -y \
git ca-certificates wine64 wine python3-simplejson python3-six msitools winbind procps && \
git clone https://github.com/mstorsjo/msvc-wine && \
mkdir /opt/msvc && \
python3 msvc-wine/vsdownload.py --accept-license --dest /opt/msvc Microsoft.VisualStudio.Workload.VCTools && \
msvc-wine/install.sh /opt/msvc
# Build and install gcc snapshot
ARG GCC_SNAPSHOT_MAJOR=15
RUN apt-get update && apt-get install --no-install-recommends -y wget libgmp-dev libmpfr-dev libmpc-dev flex && \
mkdir gcc && cd gcc && \
wget --progress=dot:giga --https-only --recursive --accept '*.tar.xz' --level 1 --no-directories "https://gcc.gnu.org/pub/gcc/snapshots/LATEST-${GCC_SNAPSHOT_MAJOR}" && \
wget "https://gcc.gnu.org/pub/gcc/snapshots/LATEST-${GCC_SNAPSHOT_MAJOR}/sha512.sum" && \
sha512sum --check --ignore-missing sha512.sum && \
# We should have downloaded exactly one tar.xz file
ls && \
[ $(ls *.tar.xz | wc -l) -eq "1" ] && \
tar xf *.tar.xz && \
mkdir gcc-build && cd gcc-build && \
../*/configure --prefix=/opt/gcc-snapshot --enable-languages=c --disable-bootstrap --disable-multilib --without-isl && \
make -j $(nproc) && \
make install && \
cd ../.. && rm -rf gcc && \
ln -s /opt/gcc-snapshot/bin/gcc /usr/bin/gcc-snapshot && \
apt-get autoremove -y wget libgmp-dev libmpfr-dev libmpc-dev flex && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Install clang snapshot, see https://apt.llvm.org/
RUN \
# Setup GPG keys of LLVM repository
apt-get update && apt-get install --no-install-recommends -y wget && \
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc && \
# Add repository for this Debian release
. /etc/os-release && echo "deb http://apt.llvm.org/${VERSION_CODENAME} llvm-toolchain-${VERSION_CODENAME} main" >> /etc/apt/sources.list && \
apt-get update && \
# Determine the version number of the LLVM development branch
LLVM_VERSION=$(apt-cache search --names-only '^clang-[0-9]+$' | sort -V | tail -1 | cut -f1 -d" " | cut -f2 -d"-" ) && \
# Install
apt-get install --no-install-recommends -y "clang-${LLVM_VERSION}" && \
# Create symlink
ln -s "/usr/bin/clang-${LLVM_VERSION}" /usr/bin/clang-snapshot && \
# Clean up
apt-get autoremove -y wget && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Initialize the wine environment. Wait until the wineserver process has
# exited before closing the session, to avoid corrupting the wine prefix.
RUN wine64 wineboot --init && \
while (ps -A | grep wineserver) > /dev/null; do sleep 1; done

View File

@@ -1,6 +1,6 @@
function(check_arm32_assembly)
try_compile(HAVE_ARM32_ASM
${CMAKE_BINARY_DIR}/check_arm32_assembly
SOURCES ${CMAKE_SOURCE_DIR}/cmake/source_arm32.s
${PROJECT_BINARY_DIR}/check_arm32_assembly
SOURCES ${PROJECT_SOURCE_DIR}/cmake/source_arm32.s
)
endfunction()

View File

@@ -0,0 +1,18 @@
include_guard(GLOBAL)
include(CheckCSourceCompiles)
function(check_memory_sanitizer output)
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)
check_c_source_compiles("
#if defined(__has_feature)
# if __has_feature(memory_sanitizer)
/* MemorySanitizer is enabled. */
# elif
# error \"MemorySanitizer is disabled.\"
# endif
#else
# error \"__has_feature is not defined.\"
#endif
" HAVE_MSAN)
set(${output} ${HAVE_MSAN} PARENT_SCOPE)
endfunction()

View File

@@ -0,0 +1,8 @@
function(generate_pkg_config_file in_file)
set(prefix ${CMAKE_INSTALL_PREFIX})
set(exec_prefix \${prefix})
set(libdir \${exec_prefix}/${CMAKE_INSTALL_LIBDIR})
set(includedir \${prefix}/${CMAKE_INSTALL_INCLUDEDIR})
set(PACKAGE_VERSION ${PROJECT_VERSION})
configure_file(${in_file} ${PROJECT_NAME}.pc @ONLY)
endfunction()

View File

@@ -4,8 +4,8 @@ AC_PREREQ([2.60])
# the API. All changes in experimental modules are treated as
# backwards-compatible and therefore at most increase the minor version.
define(_PKG_VERSION_MAJOR, 0)
define(_PKG_VERSION_MINOR, 3)
define(_PKG_VERSION_PATCH, 2)
define(_PKG_VERSION_MINOR, 6)
define(_PKG_VERSION_PATCH, 0)
define(_PKG_VERSION_IS_RELEASE, true)
# The library version is based on libtool versioning of the ABI. The set of
@@ -13,8 +13,8 @@ define(_PKG_VERSION_IS_RELEASE, true)
# https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
# All changes in experimental modules are treated as if they don't affect the
# interface and therefore only increase the revision.
define(_LIB_VERSION_CURRENT, 2)
define(_LIB_VERSION_REVISION, 2)
define(_LIB_VERSION_CURRENT, 5)
define(_LIB_VERSION_REVISION, 0)
define(_LIB_VERSION_AGE, 0)
AC_INIT([libsecp256k1],m4_join([.], _PKG_VERSION_MAJOR, _PKG_VERSION_MINOR, _PKG_VERSION_PATCH)m4_if(_PKG_VERSION_IS_RELEASE, [true], [], [-dev]),[https://github.com/bitcoin-core/secp256k1/issues],[libsecp256k1],[https://github.com/bitcoin-core/secp256k1])
@@ -121,13 +121,12 @@ AC_DEFUN([SECP_TRY_APPEND_DEFAULT_CFLAGS], [
# libtool makes the same assumption internally.
# Note that "/opt" and "-opt" are equivalent for MSVC; we use "-opt" because "/opt" looks like a path.
if test x"$GCC" != x"yes" && test x"$build_windows" = x"yes"; then
SECP_TRY_APPEND_CFLAGS([-W2 -wd4146], $1) # Moderate warning level, disable warning C4146 "unary minus operator applied to unsigned type, result still unsigned"
# We pass -ignore:4217 to the MSVC linker to suppress warning 4217 when
# importing variables from a statically linked secp256k1.
# (See the libtool manual, section "Windows DLLs" for background.)
# Unfortunately, libtool tries to be too clever and strips "-Xlinker arg"
# into "arg", so this will be " -Xlinker -ignore:4217" after stripping.
LDFLAGS="-Xlinker -Xlinker -Xlinker -ignore:4217 $LDFLAGS"
SECP_TRY_APPEND_CFLAGS([-W3], $1) # Production quality warning level.
SECP_TRY_APPEND_CFLAGS([-wd4146], $1) # Disable warning C4146 "unary minus operator applied to unsigned type, result still unsigned".
SECP_TRY_APPEND_CFLAGS([-wd4244], $1) # Disable warning C4244 "'conversion' conversion from 'type1' to 'type2', possible loss of data".
SECP_TRY_APPEND_CFLAGS([-wd4267], $1) # Disable warning C4267 "'var' : conversion from 'size_t' to 'type', possible loss of data".
# Eliminate deprecation warnings for the older, less secure functions.
CPPFLAGS="-D_CRT_SECURE_NO_WARNINGS $CPPFLAGS"
fi
])
SECP_TRY_APPEND_DEFAULT_CFLAGS(SECP_CFLAGS)
@@ -185,6 +184,14 @@ AC_ARG_ENABLE(module_schnorrsig,
AS_HELP_STRING([--enable-module-schnorrsig],[enable schnorrsig module [default=yes]]), [],
[SECP_SET_DEFAULT([enable_module_schnorrsig], [yes], [yes])])
AC_ARG_ENABLE(module_musig,
AS_HELP_STRING([--enable-module-musig],[enable MuSig2 module [default=yes]]), [],
[SECP_SET_DEFAULT([enable_module_musig], [yes], [yes])])
AC_ARG_ENABLE(module_ellswift,
AS_HELP_STRING([--enable-module-ellswift],[enable ElligatorSwift module [default=yes]]), [],
[SECP_SET_DEFAULT([enable_module_ellswift], [yes], [yes])])
AC_ARG_ENABLE(external_default_callbacks,
AS_HELP_STRING([--enable-external-default-callbacks],[enable external default callback functions [default=no]]), [],
[SECP_SET_DEFAULT([enable_external_default_callbacks], [no], [no])])
@@ -198,25 +205,24 @@ AC_ARG_ENABLE(external_default_callbacks,
AC_ARG_WITH([test-override-wide-multiply], [] ,[set_widemul=$withval], [set_widemul=auto])
AC_ARG_WITH([asm], [AS_HELP_STRING([--with-asm=x86_64|arm32|no|auto],
[assembly optimizations to use (experimental: arm32) [default=auto]])],[req_asm=$withval], [req_asm=auto])
[assembly to use (experimental: arm32) [default=auto]])],[req_asm=$withval], [req_asm=auto])
AC_ARG_WITH([ecmult-window], [AS_HELP_STRING([--with-ecmult-window=SIZE|auto],
AC_ARG_WITH([ecmult-window], [AS_HELP_STRING([--with-ecmult-window=SIZE],
[window size for ecmult precomputation for verification, specified as integer in range [2..24].]
[Larger values result in possibly better performance at the cost of an exponentially larger precomputed table.]
[The table will store 2^(SIZE-1) * 64 bytes of data but can be larger in memory due to platform-specific padding and alignment.]
[A window size larger than 15 will require you to delete the prebuilt precomputed_ecmult.c file so that it can be rebuilt.]
[For very large window sizes, use "make -j 1" to reduce memory use during compilation.]
["auto" is a reasonable setting for desktop machines (currently 15). [default=auto]]
[The default value is a reasonable setting for desktop machines (currently 15). [default=15]]
)],
[req_ecmult_window=$withval], [req_ecmult_window=auto])
[set_ecmult_window=$withval], [set_ecmult_window=15])
AC_ARG_WITH([ecmult-gen-precision], [AS_HELP_STRING([--with-ecmult-gen-precision=2|4|8|auto],
[Precision bits to tune the precomputed table size for signing.]
[The size of the table is 32kB for 2 bits, 64kB for 4 bits, 512kB for 8 bits of precision.]
[A larger table size usually results in possible faster signing.]
["auto" is a reasonable setting for desktop machines (currently 4). [default=auto]]
AC_ARG_WITH([ecmult-gen-kb], [AS_HELP_STRING([--with-ecmult-gen-kb=2|22|86],
[The size of the precomputed table for signing in multiples of 1024 bytes (on typical platforms).]
[Larger values result in possibly better signing/keygeneration performance at the cost of a larger table.]
[The default value is a reasonable setting for desktop machines (currently 86). [default=86]]
)],
[req_ecmult_gen_precision=$withval], [req_ecmult_gen_precision=auto])
[set_ecmult_gen_kb=$withval], [set_ecmult_gen_kb=86])
AC_ARG_WITH([valgrind], [AS_HELP_STRING([--with-valgrind=yes|no|auto],
[Build with extra checks for running inside Valgrind [default=auto]]
@@ -245,6 +251,20 @@ if test x"$enable_ctime_tests" = x"auto"; then
enable_ctime_tests=$enable_valgrind
fi
print_msan_notice=no
if test x"$enable_ctime_tests" = x"yes"; then
SECP_MSAN_CHECK
# MSan on Clang >=16 reports uninitialized memory in function parameters and return values, even if
# the uninitialized variable is never actually "used". This is called "eager" checking, and it
# sounds like a good idea for normal use of MSan. However, it yields many false positives in the
# ctime_tests because many return values depend on secret (i.e., "uninitialized") values, and
# we're only interested in detecting branches (which count as "uses") on secret data.
if test x"$msan_enabled" = x"yes"; then
SECP_TRY_APPEND_CFLAGS([-fno-sanitize-memory-param-retval], SECP_CFLAGS)
print_msan_notice=yes
fi
fi
if test x"$enable_coverage" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DCOVERAGE=1"
SECP_CFLAGS="-O0 --coverage $SECP_CFLAGS"
@@ -276,24 +296,24 @@ else
x86_64)
SECP_X86_64_ASM_CHECK
if test x"$has_x86_64_asm" != x"yes"; then
AC_MSG_ERROR([x86_64 assembly optimization requested but not available])
AC_MSG_ERROR([x86_64 assembly requested but not available])
fi
;;
arm32)
SECP_ARM32_ASM_CHECK
if test x"$has_arm32_asm" != x"yes"; then
AC_MSG_ERROR([ARM32 assembly optimization requested but not available])
AC_MSG_ERROR([ARM32 assembly requested but not available])
fi
;;
no)
;;
*)
AC_MSG_ERROR([invalid assembly optimization selection])
AC_MSG_ERROR([invalid assembly selection])
;;
esac
fi
# Select assembly optimization
# Select assembly
enable_external_asm=no
case $set_asm in
@@ -306,7 +326,7 @@ arm32)
no)
;;
*)
AC_MSG_ERROR([invalid assembly optimizations])
AC_MSG_ERROR([invalid assembly selection])
;;
esac
@@ -333,14 +353,7 @@ auto)
;;
esac
# Set ecmult window size
if test x"$req_ecmult_window" = x"auto"; then
set_ecmult_window=15
else
set_ecmult_window=$req_ecmult_window
fi
error_window_size=['window size for ecmult precomputation not an integer in range [2..24] or "auto"']
error_window_size=['window size for ecmult precomputation not an integer in range [2..24]']
case $set_ecmult_window in
''|*[[!0-9]]*)
# no valid integer
@@ -355,19 +368,18 @@ case $set_ecmult_window in
;;
esac
# Set ecmult gen precision
if test x"$req_ecmult_gen_precision" = x"auto"; then
set_ecmult_gen_precision=4
else
set_ecmult_gen_precision=$req_ecmult_gen_precision
fi
case $set_ecmult_gen_precision in
2|4|8)
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DECMULT_GEN_PREC_BITS=$set_ecmult_gen_precision"
case $set_ecmult_gen_kb in
2)
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DCOMB_BLOCKS=2 -DCOMB_TEETH=5"
;;
22)
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DCOMB_BLOCKS=11 -DCOMB_TEETH=6"
;;
86)
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DCOMB_BLOCKS=43 -DCOMB_TEETH=6"
;;
*)
AC_MSG_ERROR(['ecmult gen precision not 2, 4, 8 or "auto"'])
AC_MSG_ERROR(['ecmult gen table size not 2, 22 or 86'])
;;
esac
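The mapping above pairs each `--with-ecmult-gen-kb` value with a `COMB_BLOCKS`/`COMB_TEETH` combination. If one assumes that, like the verification table described earlier (2^(SIZE-1) * 64 bytes), each entry of the signing table is a 64-byte point, the advertised sizes of 2, 22 and 86 KiB fall out exactly; a quick back-of-the-envelope check, with the 64-byte entry size being our assumption rather than something stated here:

```python
# Assumed layout: COMB_BLOCKS * 2**(COMB_TEETH - 1) entries of 64 bytes each.
# (The 64-byte entry size is an assumption for illustration only.)
for kb, (blocks, teeth) in {2: (2, 5), 22: (11, 6), 86: (43, 6)}.items():
    size = blocks * 2 ** (teeth - 1) * 64
    assert size == kb * 1024
    print(f"--with-ecmult-gen-kb={kb}: {size} bytes")
```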
@@ -384,23 +396,38 @@ SECP_CFLAGS="$SECP_CFLAGS $WERROR_CFLAGS"
### Handle module options
###
if test x"$enable_module_ecdh" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_ECDH=1"
# Processing must be done in a reverse topological sorting of the dependency graph
# (dependent module first).
if test x"$enable_module_ellswift" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_ELLSWIFT=1"
fi
if test x"$enable_module_musig" = x"yes"; then
if test x"$enable_module_schnorrsig" = x"no"; then
AC_MSG_ERROR([Module dependency error: You have disabled the schnorrsig module explicitly, but it is required by the musig module.])
fi
enable_module_schnorrsig=yes
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_MUSIG=1"
fi
if test x"$enable_module_schnorrsig" = x"yes"; then
if test x"$enable_module_extrakeys" = x"no"; then
AC_MSG_ERROR([Module dependency error: You have disabled the extrakeys module explicitly, but it is required by the schnorrsig module.])
fi
enable_module_extrakeys=yes
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_SCHNORRSIG=1"
fi
if test x"$enable_module_extrakeys" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_EXTRAKEYS=1"
fi
if test x"$enable_module_recovery" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_RECOVERY=1"
fi
if test x"$enable_module_schnorrsig" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_SCHNORRSIG=1"
enable_module_extrakeys=yes
fi
# Test if extrakeys is set after the schnorrsig module to allow the schnorrsig
# module to set enable_module_extrakeys=yes
if test x"$enable_module_extrakeys" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_EXTRAKEYS=1"
if test x"$enable_module_ecdh" = x"yes"; then
SECP_CONFIG_DEFINES="$SECP_CONFIG_DEFINES -DENABLE_MODULE_ECDH=1"
fi
if test x"$enable_external_default_callbacks" = x"yes"; then
@@ -411,14 +438,9 @@ fi
### Check for --enable-experimental if necessary
###
if test x"$enable_experimental" = x"yes"; then
AC_MSG_NOTICE([******])
AC_MSG_NOTICE([WARNING: experimental build])
AC_MSG_NOTICE([Experimental features do not have stable APIs or properties, and may not be safe for production use.])
AC_MSG_NOTICE([******])
else
if test x"$enable_experimental" = x"no"; then
if test x"$set_asm" = x"arm32"; then
AC_MSG_ERROR([ARM32 assembly optimization is experimental. Use --enable-experimental to allow.])
AC_MSG_ERROR([ARM32 assembly is experimental. Use --enable-experimental to allow.])
fi
fi
@@ -439,6 +461,8 @@ AM_CONDITIONAL([ENABLE_MODULE_ECDH], [test x"$enable_module_ecdh" = x"yes"])
AM_CONDITIONAL([ENABLE_MODULE_RECOVERY], [test x"$enable_module_recovery" = x"yes"])
AM_CONDITIONAL([ENABLE_MODULE_EXTRAKEYS], [test x"$enable_module_extrakeys" = x"yes"])
AM_CONDITIONAL([ENABLE_MODULE_SCHNORRSIG], [test x"$enable_module_schnorrsig" = x"yes"])
AM_CONDITIONAL([ENABLE_MODULE_MUSIG], [test x"$enable_module_musig" = x"yes"])
AM_CONDITIONAL([ENABLE_MODULE_ELLSWIFT], [test x"$enable_module_ellswift" = x"yes"])
AM_CONDITIONAL([USE_EXTERNAL_ASM], [test x"$enable_external_asm" = x"yes"])
AM_CONDITIONAL([USE_ASM_ARM], [test x"$set_asm" = x"arm32"])
AM_CONDITIONAL([BUILD_WINDOWS], [test "$build_windows" = "yes"])
@@ -460,10 +484,12 @@ echo " module ecdh = $enable_module_ecdh"
echo " module recovery = $enable_module_recovery"
echo " module extrakeys = $enable_module_extrakeys"
echo " module schnorrsig = $enable_module_schnorrsig"
echo " module musig = $enable_module_musig"
echo " module ellswift = $enable_module_ellswift"
echo
echo " asm = $set_asm"
echo " ecmult window size = $set_ecmult_window"
echo " ecmult gen prec. bits = $set_ecmult_gen_precision"
echo " ecmult gen table size = $set_ecmult_gen_kb KiB"
# Hide test-only options unless they're used.
if test x"$set_widemul" != xauto; then
echo " wide multiplication = $set_widemul"
@@ -475,3 +501,17 @@ echo " CPPFLAGS = $CPPFLAGS"
echo " SECP_CFLAGS = $SECP_CFLAGS"
echo " CFLAGS = $CFLAGS"
echo " LDFLAGS = $LDFLAGS"
if test x"$print_msan_notice" = x"yes"; then
echo
echo "Note:"
echo " MemorySanitizer detected, tried to add -fno-sanitize-memory-param-retval to SECP_CFLAGS"
echo " to avoid false positives in ctime_tests. Pass --disable-ctime-tests to avoid this."
fi
if test x"$enable_experimental" = x"yes"; then
echo
echo "WARNING: Experimental build"
echo " Experimental features do not have stable APIs or properties, and may not be safe for"
echo " production use."
fi

View File

@@ -67,8 +67,8 @@ extern "C" {
*
* Returns: 1 when the signature could be parsed, 0 otherwise.
* Args: ctx: a secp256k1 context object
* Out: sig: a pointer to a signature object
* In: input: a pointer to the signature to be parsed
* Out: sig: pointer to a signature object
* In: input: pointer to the signature to be parsed
* inputlen: the length of the array pointed to by input
*
* This function will accept any valid DER encoded signature, even if the

external/secp256k1/doc/ellswift.md vendored Normal file
View File

@@ -0,0 +1,483 @@
# ElligatorSwift for secp256k1 explained
In this document we explain how the `ellswift` module implementation is related to the
construction in the
["SwiftEC: Shalluevan de Woestijne Indifferentiable Function To Elliptic Curves"](https://eprint.iacr.org/2022/759)
paper by Jorge Chávez-Saab, Francisco Rodríguez-Henríquez, and Mehdi Tibouchi.
* [1. Introduction](#1-introduction)
* [2. The decoding function](#2-the-decoding-function)
+ [2.1 Decoding for `secp256k1`](#21-decoding-for-secp256k1)
* [3. The encoding function](#3-the-encoding-function)
+ [3.1 Switching to *v, w* coordinates](#31-switching-to-v-w-coordinates)
+ [3.2 Avoiding computing all inverses](#32-avoiding-computing-all-inverses)
+ [3.3 Finding the inverse](#33-finding-the-inverse)
+ [3.4 Dealing with special cases](#34-dealing-with-special-cases)
+ [3.5 Encoding for `secp256k1`](#35-encoding-for-secp256k1)
* [4. Encoding and decoding full *(x, y)* coordinates](#4-encoding-and-decoding-full-x-y-coordinates)
+ [4.1 Full *(x, y)* coordinates for `secp256k1`](#41-full-x-y-coordinates-for-secp256k1)
## 1. Introduction
The `ellswift` module effectively introduces a new 64-byte public key format, with the property
that (uniformly random) public keys can be encoded as 64-byte arrays which are computationally
indistinguishable from uniform byte arrays. The module provides functions to convert public keys
from and to this format, as well as convenience functions for key generation and ECDH that operate
directly on ellswift-encoded keys.
The encoding consists of the concatenation of two (32-byte big endian) encoded field elements $u$
and $t.$ Together they encode an x-coordinate on the curve $x$, or (see further) a full point $(x, y)$ on
the curve.
**Decoding** consists of decoding the field elements $u$ and $t$ (values above the field size $p$
are taken modulo $p$), and then evaluating $F_u(t)$, which for every $u$ and $t$ results in a valid
x-coordinate on the curve. The functions $F_u$ will be defined in [Section 2](#2-the-decoding-function).
**Encoding** a given $x$ coordinate is conceptually done as follows:
* Loop:
* Pick a uniformly random field element $u.$
* Compute the set $L = F_u^{-1}(x)$ of $t$ values for which $F_u(t) = x$, which may have up to *8* elements.
* With probability $1 - \dfrac{\\#L}{8}$, restart the loop.
* Select a uniformly random $t \in L$ and return $(u, t).$
This is the *ElligatorSwift* algorithm, here given for just x-coordinates. An extension to full
$(x, y)$ points will be given in [Section 4](#4-encoding-and-decoding-full-x-y-coordinates).
The algorithm finds a uniformly random $(u, t)$ among (almost all) those
for which $F_u(t) = x.$ Section 3.2 in the paper proves that the number of such encodings for
almost all x-coordinates on the curve (all but at most 39) is close to two times the field size
(specifically, it lies in the range $2q \pm (22\sqrt{q} + O(1))$, where $q$ is the size of the field).
## 2. The decoding function
First some definitions:
* $\mathbb{F}$ is the finite field of size $q$, of characteristic 5 or more, and $q \equiv 1 \mod 3.$
* For `secp256k1`, $q = 2^{256} - 2^{32} - 977$, which satisfies that requirement.
* Let $E$ be the elliptic curve of points $(x, y) \in \mathbb{F}^2$ for which $y^2 = x^3 + ax + b$, with $a$ and $b$
public constants, for which $\Delta_E = -16(4a^3 + 27b^2)$ is a square, and at least one of $(-b \pm \sqrt{-3 \Delta_E} / 36)/2$ is a square.
This implies that the order of $E$ is either odd, or a multiple of *4*.
If $a=0$, this condition is always fulfilled.
* For `secp256k1`, $a=0$ and $b=7.$
* Let the function $g(x) = x^3 + ax + b$, so the $E$ curve equation is also $y^2 = g(x).$
* Let the function $h(x) = 3x^3 + 4a.$
* Define $V$ as the set of solutions $(x_1, x_2, x_3, z)$ to $z^2 = g(x_1)g(x_2)g(x_3).$
* Define $S_u$ as the set of solutions $(X, Y)$ to $X^2 + h(u)Y^2 = -g(u)$ and $Y \neq 0.$
* $P_u$ is a function from $\mathbb{F}$ to $S_u$ that will be defined below.
* $\psi_u$ is a function from $S_u$ to $V$ that will be defined below.
**Note**: In the paper:
* $F_u$ corresponds to $F_{0,u}$ there.
* $P_u(t)$ is called $P$ there.
* All $S_u$ sets together correspond to $S$ there.
* All $\psi_u$ functions together (operating on elements of $S$) correspond to $\psi$ there.
Note that for $V$, the left hand side of the equation $z^2$ is square, and thus the right
hand must also be square. As multiplying non-squares results in a square in $\mathbb{F}$,
out of the three right-hand side factors an even number must be non-squares.
This implies that exactly *1* or exactly *3* out of
$\\{g(x_1), g(x_2), g(x_3)\\}$ must be square, and thus that for any $(x_1,x_2,x_3,z) \in V$,
at least one of $\\{x_1, x_2, x_3\\}$ must be a valid x-coordinate on $E.$ There is one exception
to this, namely when $z=0$, but even then one of the three values is a valid x-coordinate.
**Define** the decoding function $F_u(t)$ as:
* Let $(x_1, x_2, x_3, z) = \psi_u(P_u(t)).$
* Return the first element $x$ of $(x_3, x_2, x_1)$ which is a valid x-coordinate on $E$ (i.e., $g(x)$ is square).
$P_u(t) = (X(u, t), Y(u, t))$, where:
$$
\begin{array}{lcl}
X(u, t) & = & \left\\{\begin{array}{ll}
\dfrac{g(u) - t^2}{2t} & a = 0 \\
\dfrac{g(u) + h(u)(Y_0(u) - X_0(u)t)^2}{X_0(u)(1 + h(u)t^2)} & a \neq 0
\end{array}\right. \\
Y(u, t) & = & \left\\{\begin{array}{ll}
\dfrac{X(u, t) + t}{u \sqrt{-3}} = \dfrac{g(u) + t^2}{2tu\sqrt{-3}} & a = 0 \\
Y_0(u) + t(X(u, t) - X_0(u)) & a \neq 0
\end{array}\right.
\end{array}
$$
$P_u(t)$ is defined:
* For $a=0$, unless:
* $u = 0$ or $t = 0$ (division by zero)
* $g(u) = -t^2$ (would give $Y=0$).
* For $a \neq 0$, unless:
* $X_0(u) = 0$ or $h(u)t^2 = -1$ (division by zero)
* $Y_0(u) (1 - h(u)t^2) = 2X_0(u)t$ (would give $Y=0$).
The functions $X_0(u)$ and $Y_0(u)$ are defined in Appendix A of the paper, and depend on various properties of $E.$
The function $\psi_u$ is the same for all curves: $\psi_u(X, Y) = (x_1, x_2, x_3, z)$, where:
$$
\begin{array}{lcl}
x_1 & = & \dfrac{X}{2Y} - \dfrac{u}{2} && \\
x_2 & = & -\dfrac{X}{2Y} - \dfrac{u}{2} && \\
x_3 & = & u + 4Y^2 && \\
z & = & \dfrac{g(x_3)}{2Y}(u^2 + ux_1 + x_1^2 + a) = \dfrac{-g(u)g(x_3)}{8Y^3}
\end{array}
$$
### 2.1 Decoding for `secp256k1`
Put together and specialized for $a=0$ curves, decoding $(u, t)$ to an x-coordinate is:
**Define** $F_u(t)$ as:
* Let $X = \dfrac{u^3 + b - t^2}{2t}.$
* Let $Y = \dfrac{X + t}{u\sqrt{-3}}.$
* Return the first $x$ in $(u + 4Y^2, \dfrac{-X}{2Y} - \dfrac{u}{2}, \dfrac{X}{2Y} - \dfrac{u}{2})$ for which $g(x)$ is square.
To make sure that every input decodes to a valid x-coordinate, we remap the inputs in case
$P_u$ is not defined (when $u=0$, $t=0$, or $g(u) = -t^2$):
**Define** $F_u(t)$ as:
* Let $u'=u$ if $u \neq 0$; $1$ otherwise (guaranteeing $u' \neq 0$).
* Let $t'=t$ if $t \neq 0$; $1$ otherwise (guaranteeing $t' \neq 0$).
* Let $t''=t'$ if $g(u') \neq -t'^2$; $2t'$ otherwise (guaranteeing $t'' \neq 0$ and $g(u') \neq -t''^2$).
* Let $X = \dfrac{u'^3 + b - t''^2}{2t''}.$
* Let $Y = \dfrac{X + t''}{u'\sqrt{-3}}.$
* Return the first $x$ in $(u' + 4Y^2, \dfrac{-X}{2Y} - \dfrac{u'}{2}, \dfrac{X}{2Y} - \dfrac{u'}{2})$ for which $x^3 + b$ is square.
The choices here are not strictly necessary. Just returning a fixed constant in any of the undefined cases would suffice,
but the approach here is simple enough and gives fairly uniform output even in these cases.
**Note**: in the paper these conditions result in $\infty$ as output, due to the use of projective coordinates there.
We wish to avoid the need for callers to deal with this special case.
This is implemented in `secp256k1_ellswift_xswiftec_frac_var` (which decodes to an x-coordinate represented as a fraction), and
in `secp256k1_ellswift_xswiftec_var` (which outputs the actual x-coordinate).
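As a concrete illustration of the decoder above, here is a minimal Python sketch (ours, not the library's implementation; the names `decode_xonly`, `sqrt_mod`, `is_square` and `SQRT_M3` are made up for this example). It evaluates $F_u(t)$ for the real secp256k1 field using Python's big integers, relying on $q \equiv 3 \pmod 4$ for square roots:

```python
# Requires Python 3.8+ for pow(x, -1, q) modular inverses.
q = 2**256 - 2**32 - 977   # the secp256k1 field size (q = 3 mod 4, q = 1 mod 3)
b = 7                      # curve equation: y^2 = x^3 + 7

def sqrt_mod(a):
    """A square root of a mod q, or None if a is not a square (valid since q = 3 mod 4)."""
    a %= q
    r = pow(a, (q + 1) // 4, q)
    return r if r * r % q == a else None

def is_square(a):
    a %= q
    return a == 0 or pow(a, (q - 1) // 2, q) == 1

def g(x):
    return (pow(x, 3, q) + b) % q

SQRT_M3 = sqrt_mod(-3)     # some square root of -3; it exists because q = 1 mod 3

def decode_xonly(u, t):
    """F_u(t) from this section, including the remapping of the undefined cases."""
    u, t = u % q, t % q
    if u == 0: u = 1                              # u' != 0
    if t == 0: t = 1                              # t' != 0
    if (g(u) + t * t) % q == 0: t = 2 * t % q     # ensure g(u') != -t''^2
    X = (g(u) - t * t) * pow(2 * t, -1, q) % q
    Y = (X + t) * pow(u * SQRT_M3, -1, q) % q
    half_u = u * pow(2, -1, q) % q
    for x in ((u + 4 * Y * Y) % q,                        # x_3 (checked first)
              (-X * pow(2 * Y, -1, q) - half_u) % q,      # x_2
              (X * pow(2 * Y, -1, q) - half_u) % q):      # x_1
        if is_square(g(x)):
            return x

# Any 64 random bytes, split into (u, t), decode to a valid x-coordinate:
import os
u, t = (int.from_bytes(os.urandom(32), "big") for _ in range(2))
assert is_square(g(decode_xonly(u, t)))
```

Because the undefined cases are remapped, the final assertion holds for every input, which is exactly the property the 64-byte key format relies on.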
## 3. The encoding function
To implement $F_u^{-1}(x)$, the function to find the set of inverses $t$ for which $F_u(t) = x$, we have to reverse the process:
* Find all the $(X, Y) \in S_u$ that could have given rise to $x$, through the $x_1$, $x_2$, or $x_3$ formulas in $\psi_u.$
* Map those $(X, Y)$ solutions to $t$ values using $P_u^{-1}(X, Y).$
* For each of the found $t$ values, verify that $F_u(t) = x.$
* Return the remaining $t$ values.
The function $P_u^{-1}$, which finds $t$ given $(X, Y) \in S_u$, is significantly simpler than $P_u:$
$$
P_u^{-1}(X, Y) = \left\\{\begin{array}{ll}
Yu\sqrt{-3} - X & a = 0 \\
\dfrac{Y-Y_0(u)}{X-X_0(u)} & a \neq 0 \land X \neq X_0(u) \\
\dfrac{-X_0(u)}{h(u)Y_0(u)} & a \neq 0 \land X = X_0(u) \land Y = Y_0(u)
\end{array}\right.
$$
The third step above, verifying that $F_u(t) = x$, is necessary because for the $(X, Y)$ values found through the $x_1$ and $x_2$ expressions,
it is possible that decoding through $\psi_u(X, Y)$ yields a valid $x_3$ on the curve, which would take precedence over the
$x_1$ or $x_2$ decoding. These $(X, Y)$ solutions must be rejected.
Since we know that exactly one or exactly three out of $\\{x_1, x_2, x_3\\}$ are valid x-coordinates for any $t$,
the case where either $x_1$ or $x_2$ is valid and in addition also $x_3$ is valid must mean that all three are valid.
This means that instead of checking whether $x_3$ is on the curve, it is also possible to check whether the other one out of
$x_1$ and $x_2$ is on the curve. This is significantly simpler, as it turns out.
Observe that $\psi_u$ guarantees that $x_1 + x_2 = -u.$ So given either $x = x_1$ or $x = x_2$, the other one of the two can be computed as
$-u - x.$ Thus, when encoding $x$ through the $x_1$ or $x_2$ expressions, one can simply check whether $g(-u-x)$ is a square,
and if so, not include the corresponding $t$ values in the returned set. As this does not need $X$, $Y$, or $t$, this condition can be determined
before those values are computed.
It is not possible that an encoding found through the $x_1$ expression decodes to a different valid x-coordinate using $x_2$ (which would
take precedence), for the same reason: if both $x_1$ and $x_2$ decodings were valid, $x_3$ would be valid as well, and thus take
precedence over both. Because of this, the $g(-u-x)$ being square test for $x_1$ and $x_2$ is the only test necessary to guarantee the found $t$
values round-trip back to the input $x$ correctly. This is the reason for choosing the $(x_3, x_2, x_1)$ precedence order in the decoder;
any order which does not place $x_3$ first requires more complicated round-trip checks in the encoder.
### 3.1 Switching to *v, w* coordinates
Before working out the formulas for all this, we switch to different variables for $S_u.$ Let $v = (X/Y - u)/2$, and
$w = 2Y.$ Or in the other direction, $X = w(u/2 + v)$ and $Y = w/2:$
* $S_u'$ becomes the set of $(v, w)$ for which $w^2 (u^2 + uv + v^2 + a) = -g(u)$ and $w \neq 0.$
* For $a=0$ curves, $P_u^{-1}$ can be stated for $(v,w)$ as $P_u^{'-1}(v, w) = w\left(\frac{\sqrt{-3}-1}{2}u - v\right).$
* $\psi_u$ can be stated for $(v, w)$ as $\psi_u'(v, w) = (x_1, x_2, x_3, z)$, where
$$
\begin{array}{lcl}
x_1 & = & v \\
x_2 & = & -u - v \\
x_3 & = & u + w^2 \\
z & = & \dfrac{g(x_3)}{w}(u^2 + uv + v^2 + a) = \dfrac{-g(u)g(x_3)}{w^3}
\end{array}
$$
We can now write the expressions for finding $(v, w)$ given $x$ explicitly, by solving each of the $\\{x_1, x_2, x_3\\}$
expressions for $v$ or $w$, and using the $S_u'$ equation to find the other variable:
* Assuming $x = x_1$, we find $v = x$ and $w = \pm\sqrt{-g(u)/(u^2 + uv + v^2 + a)}$ (two solutions).
* Assuming $x = x_2$, we find $v = -u-x$ and $w = \pm\sqrt{-g(u)/(u^2 + uv + v^2 + a)}$ (two solutions).
* Assuming $x = x_3$, we find $w = \pm\sqrt{x-u}$ and $v = -u/2 \pm \sqrt{-w^2(4g(u) + w^2h(u))}/(2w^2)$ (four solutions).
### 3.2 Avoiding computing all inverses
The *ElligatorSwift* algorithm as stated in Section 1 requires the computation of $L = F_u^{-1}(x)$ (the
set of all $t$ such that $(u, t)$ decode to $x$) in full. This is unnecessary.
Observe that the procedure of restarting with probability $(1 - \frac{\\#L}{8})$ and otherwise returning a
uniformly random element from $L$ is actually equivalent to always padding $L$ with $\bot$ values up to length 8,
picking a uniformly random element from that, restarting whenever $\bot$ is picked:
**Define** *ElligatorSwift(x)* as:
* Loop:
* Pick a uniformly random field element $u.$
* Compute the set $L = F_u^{-1}(x).$
* Let $T$ be the 8-element vector consisting of the elements of $L$, plus $8 - \\#L$ times $\\{\bot\\}.$
* Select a uniformly random $t \in T.$
* If $t \neq \bot$, return $(u, t)$; restart loop otherwise.
Now notice that the order of elements in $T$ does not matter, as all we do is pick a uniformly
random element in it, so we do not need to have all $\bot$ values at the end.
As we have 8 distinct formulas for finding $(v, w)$ (taking the variants due to $\pm$ into account),
we can associate every index in $T$ with exactly one of those formulas, making sure that:
* Formulas that yield no solutions (due to division by zero or non-existing square roots) or invalid solutions are made to return $\bot.$
* For the $x_1$ and $x_2$ cases, if $g(-u-x)$ is a square, $\bot$ is returned instead (the round-trip check).
* In case multiple formulas would return the same non- $\bot$ result, all but one of those must be turned into $\bot$ to avoid biasing those.
The last condition above only occurs with negligible probability for cryptographically-sized curves, but is interesting
to take into account as it allows exhaustive testing in small groups. See [Section 3.4](#34-dealing-with-special-cases)
for an analysis of all the negligible cases.
If we define $T = (G_{0,u}(x), G_{1,u}(x), \ldots, G_{7,u}(x))$, with each $G_{i,u}$ matching one of the formulas,
the loop can be simplified to only compute one of the inverses instead of all of them:
**Define** *ElligatorSwift(x)* as:
* Loop:
* Pick a uniformly random field element $u.$
* Pick a uniformly random integer $c$ in $[0,8).$
* Let $t = G_{c,u}(x).$
* If $t \neq \bot$, return $(u, t)$; restart loop otherwise.
This is implemented in `secp256k1_ellswift_xelligatorswift_var`.
### 3.3 Finding the inverse
To implement $G_{c,u}$, we map $c=0$ to the $x_1$ formula, $c=1$ to the $x_2$ formula, and $c=2$ and $c=3$ to the $x_3$ formula.
Those are then repeated as $c=4$ through $c=7$ for the other sign of $w$ (noting that in each formula, $w$ is a square root of some expression).
Ignoring the negligible cases, we get:
**Define** $G_{c,u}(x)$ as:
* If $c \in \\{0, 1, 4, 5\\}$ (for $x_1$ and $x_2$ formulas):
* If $g(-u-x)$ is square, return $\bot$ (as $x_3$ would be valid and take precedence).
* If $c \in \\{0, 4\\}$ (the $x_1$ formula) let $v = x$, otherwise let $v = -u-x$ (the $x_2$ formula)
* Let $s = -g(u)/(u^2 + uv + v^2 + a)$ (using $s = w^2$ in what follows).
* Otherwise, when $c \in \\{2, 3, 6, 7\\}$ (for $x_3$ formulas):
* Let $s = x-u.$
* Let $r = \sqrt{-s(4g(u) + sh(u))}.$
* Let $v = (r/s - u)/2$ if $c \in \\{3, 7\\}$; $(-r/s - u)/2$ otherwise.
* Let $w = \sqrt{s}.$
* Depending on $c:$
* If $c \in \\{0, 1, 2, 3\\}:$ return $P_u^{'-1}(v, w).$
* If $c \in \\{4, 5, 6, 7\\}:$ return $P_u^{'-1}(v, -w).$
Whenever a square root of a non-square is taken, $\bot$ is returned; for both square roots this happens with roughly
50% on random inputs. Similarly, when a division by 0 would occur, $\bot$ is returned as well; this will only happen
with negligible probability. A division by 0 in the first branch in fact cannot occur at all, because $u^2 + uv + v^2 + a = 0$
implies $g(-u-x) = g(x)$ which would mean the $g(-u-x)$ is square condition has triggered
and $\bot$ would have been returned already.
**Note**: In the paper, the $case$ variable corresponds roughly to the $c$ above, but only takes on 4 possible values (1 to 4).
The conditional negation of $w$ at the end is done randomly, which is equivalent, but makes testing harder. We choose to
have the $G_{c,u}$ be deterministic, and capture all choices in $c.$
Now observe that the $c \in \\{1, 5\\}$ and $c \in \\{3, 7\\}$ conditions effectively perform the same $v \rightarrow -u-v$
transformation. Furthermore, that transformation has no effect on $s$ in the first branch
as $u^2 + ux + x^2 + a = u^2 + u(-u-x) + (-u-x)^2 + a.$ Thus we can extract it out and move it down:
**Define** $G_{c,u}(x)$ as:
* If $c \in \\{0, 1, 4, 5\\}:$
* If $g(-u-x)$ is square, return $\bot.$
* Let $s = -g(u)/(u^2 + ux + x^2 + a).$
* Let $v = x.$
* Otherwise, when $c \in \\{2, 3, 6, 7\\}:$
* Let $s = x-u.$
* Let $r = \sqrt{-s(4g(u) + sh(u))}.$
* Let $v = (r/s - u)/2.$
* Let $w = \sqrt{s}.$
* Depending on $c:$
* If $c \in \\{0, 2\\}:$ return $P_u^{'-1}(v, w).$
* If $c \in \\{1, 3\\}:$ return $P_u^{'-1}(-u-v, w).$
* If $c \in \\{4, 6\\}:$ return $P_u^{'-1}(v, -w).$
* If $c \in \\{5, 7\\}:$ return $P_u^{'-1}(-u-v, -w).$
This shows there will always be exactly 0, 4, or 8 $t$ values for a given $(u, x)$ input.
There can be 0, 1, or 2 $(v, w)$ pairs before invoking $P_u^{'-1}$, and each results in 4 distinct $t$ values.
### 3.4 Dealing with special cases
As mentioned before there are a few cases to deal with which only happen in a negligibly small subset of inputs.
For cryptographically sized fields, if only random inputs are going to be considered, it is unnecessary to deal with these. Still, for completeness
we analyse them here. They generally fall into two categories: cases in which the encoder would produce $t$ values that
do not decode back to $x$ (or at least cannot guarantee that they do), and cases in which the encoder might produce the same
$t$ value for multiple $c$ inputs (thereby biasing that encoding):
* In the branch for $x_1$ and $x_2$ (where $c \in \\{0, 1, 4, 5\\}$):
* When $g(u) = 0$, we would have $s=w=Y=0$, which is not on $S_u.$ This is only possible on even-ordered curves.
Excluding this also removes the one condition under which the simplified check for $x_3$ on the curve
fails (namely when $g(x_1)=g(x_2)=0$ but $g(x_3)$ is not square).
This does exclude some valid encodings: when both $g(u)=0$ and $u^2+ux+x^2+a=0$ (also implying $g(x)=0$),
the $S_u'$ equation degenerates to $0 = 0$, and many valid $t$ values may exist. Yet, these cannot be targeted uniformly by the
encoder anyway as there will generally be more than 8.
* When $g(x) = 0$, the same $t$ would be produced as in the $x_3$ branch (where $c \in \\{2, 3, 6, 7\\}$) which we give precedence
as it can deal with $g(u)=0$.
This is again only possible on even-ordered curves.
* In the branch for $x_3$ (where $c \in \\{2, 3, 6, 7\\}$):
* When $s=0$, a division by zero would occur.
* When $v = -u-v$ and $c \in \\{3, 7\\}$, the same $t$ would be returned as in the $c \in \\{2, 6\\}$ cases.
It is equivalent to checking whether $r=0$.
This cannot occur in the $x_1$ or $x_2$ branches, as it would trigger the $g(-u-x)$ is square condition.
A similar concern for $w = -w$ does not exist, as $w=0$ is already impossible in both branches: in the first
it requires $g(u)=0$ which is already outlawed on even-ordered curves and impossible on others; in the second it would trigger division by zero.
* Curve-specific special cases also exist that need to be rejected, because they result in $(u,t)$ which is invalid to the decoder, or because of division by zero in the encoder:
* For $a=0$ curves, when $u=0$ or when $t=0$. The latter can only be reached by the encoder when $g(u)=0$, which requires an even-ordered curve.
* For $a \neq 0$ curves, when $X_0(u)=0$, when $h(u)t^2 = -1$, or when $w(u + 2v) = 2X_0(u)$ while also either $w \neq 2Y_0(u)$ or $h(u)=0$.
**Define** a version of $G_{c,u}(x)$ which deals with all these cases:
* If $a=0$ and $u=0$, return $\bot.$
* If $a \neq 0$ and $X_0(u)=0$, return $\bot.$
* If $c \in \\{0, 1, 4, 5\\}:$
* If $g(u) = 0$ or $g(x) = 0$, return $\bot$ (even curves only).
* If $g(-u-x)$ is square, return $\bot.$
* Let $s = -g(u)/(u^2 + ux + x^2 + a)$ (cannot cause division by zero).
* Let $v = x.$
* Otherwise, when $c \in \\{2, 3, 6, 7\\}:$
* Let $s = x-u.$
* Let $r = \sqrt{-s(4g(u) + sh(u))}$; return $\bot$ if not square.
* If $c \in \\{3, 7\\}$ and $r=0$, return $\bot.$
* If $s = 0$, return $\bot.$
* Let $v = (r/s - u)/2.$
* Let $w = \sqrt{s}$; return $\bot$ if not square.
* If $a \neq 0$ and $w(u+2v) = 2X_0(u)$ and either $w \neq 2Y_0(u)$ or $h(u) = 0$, return $\bot.$
* Depending on $c:$
* If $c \in \\{0, 2\\}$, let $t = P_u^{'-1}(v, w).$
* If $c \in \\{1, 3\\}$, let $t = P_u^{'-1}(-u-v, w).$
* If $c \in \\{4, 6\\}$, let $t = P_u^{'-1}(v, -w).$
* If $c \in \\{5, 7\\}$, let $t = P_u^{'-1}(-u-v, -w).$
* If $a=0$ and $t=0$, return $\bot$ (even curves only).
* If $a \neq 0$ and $h(u)t^2 = -1$, return $\bot.$
* Return $t.$
Given any $u$, using this algorithm over all $x$ and $c$ values, every $t$ value will be reached exactly once,
for an $x$ for which $F_u(t) = x$ holds, except for these cases that will not be reached:
* All cases where $P_u(t)$ is not defined:
* For $a=0$ curves, when $u=0$, $t=0$, or $g(u) = -t^2.$
* For $a \neq 0$ curves, when $h(u)t^2 = -1$, $X_0(u) = 0$, or $Y_0(u) (1 - h(u) t^2) = 2X_0(u)t.$
* When $g(u)=0$, the potentially many $t$ values that decode to an $x$ satisfying $g(x)=0$ using the $x_2$ formula. These were excluded by the $g(u)=0$ condition in the $c \in \\{0, 1, 4, 5\\}$ branch.
These cases form a negligible subset of all $(u, t)$ for cryptographically sized curves.
### 3.5 Encoding for `secp256k1`
Specialized for odd-ordered $a=0$ curves:
**Define** $G_{c,u}(x)$ as:
* If $u=0$, return $\bot.$
* If $c \in \\{0, 1, 4, 5\\}:$
* If $(-u-x)^3 + b$ is square, return $\bot$
* Let $s = -(u^3 + b)/(u^2 + ux + x^2)$ (cannot cause division by 0).
* Let $v = x.$
* Otherwise, when $c \in \\{2, 3, 6, 7\\}:$
* Let $s = x-u.$
* Let $r = \sqrt{-s(4(u^3 + b) + 3su^2)}$; return $\bot$ if not square.
* If $c \in \\{3, 7\\}$ and $r=0$, return $\bot.$
* If $s = 0$, return $\bot.$
* Let $v = (r/s - u)/2.$
* Let $w = \sqrt{s}$; return $\bot$ if not square.
* Depending on $c:$
* If $c \in \\{0, 2\\}:$ return $w(\frac{\sqrt{-3}-1}{2}u - v).$
* If $c \in \\{1, 3\\}:$ return $w(\frac{\sqrt{-3}+1}{2}u + v).$
* If $c \in \\{4, 6\\}:$ return $w(\frac{-\sqrt{-3}+1}{2}u + v).$
* If $c \in \\{5, 7\\}:$ return $w(\frac{-\sqrt{-3}-1}{2}u - v).$
This is implemented in `secp256k1_ellswift_xswiftec_inv_var`.
And the x-only ElligatorSwift encoding algorithm is still:
**Define** *ElligatorSwift(x)* as:
* Loop:
* Pick a uniformly random field element $u.$
* Pick a uniformly random integer $c$ in $[0,8).$
* Let $t = G_{c,u}(x).$
* If $t \neq \bot$, return $(u, t)$; restart loop otherwise.
Note that this logic does not take the remapped $u=0$, $t=0$, and $g(u) = -t^2$ cases into account; it just avoids them.
While it is not impossible to make the encoder target them, this would increase the maximum number of $t$ values for a given $(u, x)$
combination beyond 8, and thereby slow down the ElligatorSwift loop proportionally, for a negligible gain in uniformity.
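Continuing the decoder sketch from [Section 2.1](#21-decoding-for-secp256k1) (and reusing its `q`, `b`, `g`, `is_square`, `sqrt_mod`, `SQRT_M3` and `decode_xonly`; again this is our illustrative Python, not the library code), the specialized $G_{c,u}(x)$ and the rejection loop look like this, with a round-trip check through $F_u$:

```python
import secrets

def inv(a):
    return pow(a, -1, q)

def G(c, u, x):
    """One candidate preimage t with F_u(t) = x, or None; x must be a valid x-coordinate."""
    u, x = u % q, x % q
    if u == 0:
        return None
    if c in (0, 1, 4, 5):                         # x_1 / x_2 branches
        if is_square(g(-u - x)):                  # x_3 would be valid and take precedence
            return None
        s = -g(u) * inv(u * u + u * x + x * x) % q
        v = x
    else:                                         # x_3 branches (c in 2, 3, 6, 7)
        s = (x - u) % q
        r = sqrt_mod(-s * (4 * g(u) + 3 * s * u * u))
        if r is None or (c in (3, 7) and r == 0) or s == 0:
            return None
        v = (r * inv(s) - u) * inv(2) % q
    w = sqrt_mod(s)
    if w is None:
        return None
    if c in (1, 3, 5, 7):                         # the v -> -u-v variants
        v = (-u - v) % q
    if c in (4, 5, 6, 7):                         # the w -> -w variants
        w = (q - w) % q
    return w * ((SQRT_M3 - 1) * inv(2) * u - v) % q   # P_u'^{-1}(v, w)

def elligatorswift(x):
    """Pick random (u, c) until G yields a preimage; returns (u, t) with F_u(t) = x."""
    while True:
        u = secrets.randbelow(q - 1) + 1
        t = G(secrets.randbelow(8), u, x)
        if t is not None:
            return u, t

x0 = 1
while not is_square(g(x0)):                       # find some valid x-coordinate
    x0 += 1
u, t = elligatorswift(x0)
assert decode_xonly(u, t) == x0                   # round trip through F_u
```

Since a given $(u, x)$ has on average about two preimages, roughly a quarter of the $(u, c)$ draws succeed, so the loop typically finishes after a handful of iterations.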
## 4. Encoding and decoding full *(x, y)* coordinates
So far we have only addressed encoding and decoding x-coordinates, but in some cases an encoding
for full points with $(x, y)$ coordinates is desirable. It is possible to encode this information
in $t$ as well.
Note that for any $(X, Y) \in S_u$, $(\pm X, \pm Y)$ are all on $S_u.$ Moreover, all of these are
mapped to the same x-coordinate. Negating $X$ or negating $Y$ just results in $x_1$ and $x_2$
being swapped, and does not affect $x_3.$ This will not change the outcome x-coordinate as the order
of $x_1$ and $x_2$ only matters if both were to be valid, and in that case $x_3$ would be used instead.
Still, these four $(X, Y)$ combinations all correspond to distinct $t$ values, so we can encode
the sign of the y-coordinate in the sign of $X$ or the sign of $Y.$ They correspond to the
four distinct $P_u^{'-1}$ calls in the definition of $G_{u,c}.$
**Note**: In the paper, the sign of the y coordinate is encoded in a separately-coded bit.
To encode the sign of $y$ in the sign of $Y:$
**Define** *Decode(u, t)* for full $(x, y)$ as:
* Let $(X, Y) = P_u(t).$
* Let $x$ be the first value in $(u + 4Y^2, \frac{-X}{2Y} - \frac{u}{2}, \frac{X}{2Y} - \frac{u}{2})$ for which $g(x)$ is square.
* Let $y = \sqrt{g(x)}.$
* If $sign(y) = sign(Y)$, return $(x, y)$; otherwise return $(x, -y).$
And encoding would be done using a $G_{c,u}(x, y)$ function defined as:
**Define** $G_{c,u}(x, y)$ as:
* If $c \in \\{0, 1\\}:$
* If $g(u) = 0$ or $g(x) = 0$, return $\bot$ (even curves only).
* If $g(-u-x)$ is square, return $\bot.$
* Let $s = -g(u)/(u^2 + ux + x^2 + a)$ (cannot cause division by zero).
* Let $v = x.$
* Otherwise, when $c \in \\{2, 3\\}:$
* Let $s = x-u.$
* Let $r = \sqrt{-s(4g(u) + sh(u))}$; return $\bot$ if not square.
* If $c = 3$ and $r = 0$, return $\bot.$
* Let $v = (r/s - u)/2.$
* Let $w = \sqrt{s}$; return $\bot$ if not square.
* Let $w' = w$ if $sign(w/2) = sign(y)$; $-w$ otherwise.
* Depending on $c:$
* If $c \in \\{0, 2\\}:$ return $P_u^{'-1}(v, w').$
* If $c \in \\{1, 3\\}:$ return $P_u^{'-1}(-u-v, w').$
Note that $c$ now only ranges $[0,4)$, as the sign of $w'$ is decided based on that of $y$, rather than on $c.$
This change makes some valid encodings unreachable: when $y = 0$ and $sign(Y) \neq sign(0)$.
In the above logic, $sign$ can be implemented in several ways, such as parity of the integer representation
of the input field element (for prime-sized fields) or the quadratic residuosity (for fields where
$-1$ is not square). The choice does not matter, as long as it only takes on two possible values, and for $x \neq 0$ it holds that $sign(x) \neq sign(-x)$.
### 4.1 Full *(x, y)* coordinates for `secp256k1`
For $a=0$ curves, there is another option. Note that for those,
the $P_u(t)$ function translates negations of $t$ to negations of (both) $X$ and $Y.$ Thus, we can use $sign(t)$ to
encode the y-coordinate directly. Combined with the earlier remapping to guarantee all inputs land on the curve, we get
as decoder:
**Define** *Decode(u, t)* as:
* Let $u'=u$ if $u \neq 0$; $1$ otherwise.
* Let $t'=t$ if $t \neq 0$; $1$ otherwise.
* Let $t''=t'$ if $u'^3 + b + t'^2 \neq 0$; $2t'$ otherwise.
* Let $X = \dfrac{u'^3 + b - t''^2}{2t''}.$
* Let $Y = \dfrac{X + t''}{u'\sqrt{-3}}.$
* Let $x$ be the first element of $(u' + 4Y^2, \frac{-X}{2Y} - \frac{u'}{2}, \frac{X}{2Y} - \frac{u'}{2})$ for which $g(x)$ is square.
* Let $y = \sqrt{g(x)}.$
* Return $(x, y)$ if $sign(y) = sign(t)$; $(x, -y)$ otherwise.
This is implemented in `secp256k1_ellswift_swiftec_var`. The $sign(x)$ function used is the parity of $x$ when represented as an integer in $[0,q).$
The corresponding encoder would invoke the x-only one, but negating the output $t$ if $sign(t) \neq sign(y).$
This is implemented in `secp256k1_ellswift_elligatorswift_var`.
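For completeness, the full-coordinate decoder of this section is a thin layer over the x-only sketch above (again our illustrative Python, reusing its helpers), with $sign$ taken as the parity of the integer representation:

```python
def decode_full(u, t):
    """Decode (u, t) to a full (x, y) point; sign(y) is forced to match sign(t)."""
    x = decode_xonly(u, t)
    y = sqrt_mod(g(x))              # g(x) is square by construction of decode_xonly
    if y % 2 != (t % q) % 2:        # sign() here is parity of the representative in [0, q)
        y = (q - y) % q
    return x, y

import os
u, t = (int.from_bytes(os.urandom(32), "big") for _ in range(2))
x, y = decode_full(u, t)
assert (y * y - g(x)) % q == 0      # (x, y) is on the curve
```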
Note that this is only intended for encoding points where both the x-coordinate and y-coordinate are unpredictable. When encoding x-only points
where the y-coordinate is implicitly even (or implicitly square, or implicitly in $[0,q/2]$), the encoder in
[Section 3.5](#35-encoding-for-secp256k1) must be used, or a bias is reintroduced that undoes all the benefit of using ElligatorSwift
in the first place.

external/secp256k1/doc/musig.md vendored Normal file
View File

@@ -0,0 +1,54 @@
Notes on the musig module API
===========================
The following sections contain additional notes on the API of the musig module (`include/secp256k1_musig.h`).
A usage example can be found in `examples/musig.c`.
## API misuse
The musig API is designed with a focus on misuse resistance.
However, due to the interactive nature of the MuSig protocol, there are additional failure modes that are not present in regular (single-party) Schnorr signature creation.
While the results can be catastrophic (e.g. leaking of the secret key), it is unfortunately not possible for the musig implementation to prevent all such failure modes.
Therefore, users of the musig module must take great care to make sure of the following:
1. A unique nonce per signing session is generated in `secp256k1_musig_nonce_gen`.
See the corresponding comment in `include/secp256k1_musig.h` for how to ensure that.
2. The `secp256k1_musig_secnonce` structure is never copied or serialized.
See also the comment on `secp256k1_musig_secnonce` in `include/secp256k1_musig.h`.
3. Opaque data structures are never written to or read from directly.
Instead, only the provided accessor functions are used.
## Key Aggregation and (Taproot) Tweaking
Given a set of public keys, the aggregate public key is computed with `secp256k1_musig_pubkey_agg`.
A plain tweak can be added to the resulting public key with `secp256k1_ec_pubkey_tweak_add` by setting the `tweak32` argument to the hash defined in BIP 32. Similarly, a Taproot tweak can be added with `secp256k1_xonly_pubkey_tweak_add` by setting the `tweak32` argument to the TapTweak hash defined in BIP 341.
Both types of tweaking can be combined and invoked multiple times if the specific application requires it.
## Signing
This is covered by `examples/musig.c`.
Essentially, the protocol proceeds in the following steps:
1. Generate a keypair with `secp256k1_keypair_create` and obtain the public key with `secp256k1_keypair_pub`.
2. Call `secp256k1_musig_pubkey_agg` with the pubkeys of all participants.
3. Optionally add a (Taproot) tweak with `secp256k1_musig_pubkey_xonly_tweak_add` and a plain tweak with `secp256k1_musig_pubkey_ec_tweak_add`.
4. Generate a pair of secret and public nonce with `secp256k1_musig_nonce_gen` and send the public nonce to the other signers.
5. Someone (not necessarily the signer) aggregates the public nonces with `secp256k1_musig_nonce_agg` and sends it to the signers.
6. Process the aggregate nonce with `secp256k1_musig_nonce_process`.
7. Create a partial signature with `secp256k1_musig_partial_sign`.
8. Verify the partial signatures (optional in some scenarios) with `secp256k1_musig_partial_sig_verify`.
9. Someone (not necessarily the signer) obtains all partial signatures and aggregates them into the final Schnorr signature using `secp256k1_musig_partial_sig_agg`.
The aggregate signature can be verified with `secp256k1_schnorrsig_verify`.
Steps 1 through 5 above can occur before or after the signers are aware of the message to be signed.
Whenever possible, it is recommended to generate the nonces only after the message is known.
This provides enhanced defense-in-depth measures, protecting against potential API misuse in certain scenarios.
However, it does require two rounds of communication during the signing process.
The alternative, generating the nonces in a pre-processing step before the message is known, eliminates these additional protective measures but allows for non-interactive signing.
Similarly, the API supports an alternative protocol flow where generating the aggregate key (steps 1 to 3) is allowed to happen after exchanging nonces (steps 4 to 5).
## Verification
A participant who wants to verify the partial signatures, but does not sign itself, may do so using the above instructions, except that the verifier skips steps 1, 4 and 7.

View File

@@ -1,4 +1,4 @@
# Release Process
# Release process
This document outlines the process for releasing versions of the form `$MAJOR.$MINOR.$PATCH`.
@@ -12,50 +12,83 @@ It is best if the maintainers are present during the release, so they can help e
This process also assumes that there will be no minor releases for old major releases.
We aim to cut a regular release every 3-4 months, approximately twice as often as major Bitcoin Core releases. Every second release should be published one month before the feature freeze of the next major Bitcoin Core release, allowing sufficient time to update the library in Core.
## Sanity checks
Perform these checks when reviewing the release PR (see below):
1. Ensure `make distcheck` doesn't fail.
```shell
./autogen.sh && ./configure --enable-dev-mode && make distcheck
```
2. Check installation with autotools:
```shell
dir=$(mktemp -d)
./autogen.sh && ./configure --prefix=$dir && make clean && make install && ls -RlAh $dir
gcc -o ecdsa examples/ecdsa.c $(PKG_CONFIG_PATH=$dir/lib/pkgconfig pkg-config --cflags --libs libsecp256k1) -Wl,-rpath,"$dir/lib" && ./ecdsa
```
3. Check installation with CMake:
```shell
dir=$(mktemp -d)
build=$(mktemp -d)
cmake -B $build -DCMAKE_INSTALL_PREFIX=$dir && cmake --build $build && cmake --install $build && ls -RlAh $dir
gcc -o ecdsa examples/ecdsa.c -I $dir/include -L $dir/lib*/ -l secp256k1 -Wl,-rpath,"$dir/lib",-rpath,"$dir/lib64" && ./ecdsa
```
4. Use the [`check-abi.sh`](/tools/check-abi.sh) tool to verify that there are no unexpected ABI incompatibilities and that the version number and the release notes accurately reflect all potential ABI changes. To run this tool, the `abi-dumper` and `abi-compliance-checker` packages are required.
```shell
tools/check-abi.sh
```
## Regular release
1. Open a PR to the master branch with a commit (using message `"release: prepare for $MAJOR.$MINOR.$PATCH"`, for example) that
* finalizes the release notes in [CHANGELOG.md](../CHANGELOG.md) (make sure to include an entry for `### ABI Compatibility`),
* sets `_PKG_VERSION_IS_RELEASE` to `true` in `configure.ac`, and
* if this is not a patch release
* updates `_PKG_VERSION_*` and `_LIB_VERSION_*` in `configure.ac` and
* finalizes the release notes in [CHANGELOG.md](../CHANGELOG.md) by
* adding a section for the release (make sure that the version number is a link to a diff between the previous and new version),
* removing the `[Unreleased]` section header,
* ensuring that the release notes are not missing entries (check the `needs-changelog` label on github), and
* including an entry for `### ABI Compatibility` if it doesn't exist,
* sets `_PKG_VERSION_IS_RELEASE` to `true` in `configure.ac`, and,
* if this is not a patch release,
* updates `_PKG_VERSION_*` and `_LIB_VERSION_*` in `configure.ac`, and
* updates `project(libsecp256k1 VERSION ...)` and `${PROJECT_NAME}_LIB_VERSION_*` in `CMakeLists.txt`.
2. After the PR is merged, tag the commit and push it:
2. Perform the [sanity checks](#sanity-checks) on the PR branch.
3. After the PR is merged, tag the commit, and push the tag:
```
RELEASE_COMMIT=<merge commit of step 1>
git tag -s v$MAJOR.$MINOR.$PATCH -m "libsecp256k1 $MAJOR.$MINOR.$PATCH" $RELEASE_COMMIT
git push git@github.com:bitcoin-core/secp256k1.git v$MAJOR.$MINOR.$PATCH
```
3. Open a PR to the master branch with a commit (using message `"release cleanup: bump version after $MAJOR.$MINOR.$PATCH"`, for example) that
* sets `_PKG_VERSION_IS_RELEASE` to `false` and increments `_PKG_VERSION_PATCH` and `_LIB_VERSION_REVISION` in `configure.ac`, and
* increments the `$PATCH` component of `project(libsecp256k1 VERSION ...)` and `${PROJECT_NAME}_LIB_VERSION_REVISION` in `CMakeLists.txt`.
4. Open a PR to the master branch with a commit (using message `"release cleanup: bump version after $MAJOR.$MINOR.$PATCH"`, for example) that
* sets `_PKG_VERSION_IS_RELEASE` to `false` and increments `_PKG_VERSION_PATCH` and `_LIB_VERSION_REVISION` in `configure.ac`,
* increments the `$PATCH` component of `project(libsecp256k1 VERSION ...)` and `${PROJECT_NAME}_LIB_VERSION_REVISION` in `CMakeLists.txt`, and
* adds an `[Unreleased]` section header to the [CHANGELOG.md](../CHANGELOG.md).
If other maintainers are not present to approve the PR, it can be merged without ACKs.
4. Create a new GitHub release with a link to the corresponding entry in [CHANGELOG.md](../CHANGELOG.md).
5. Create a new GitHub release with a link to the corresponding entry in [CHANGELOG.md](../CHANGELOG.md).
6. Send an announcement email to the bitcoin-dev mailing list.
## Maintenance release
Note that bugfixes only need to be backported to releases for which no compatible release without the bug exists.
Note that bug fixes need to be backported only to releases for which no compatible release without the bug exists.
1. If `$PATCH = 1`, create maintenance branch `$MAJOR.$MINOR`:
1. If there's no maintenance branch `$MAJOR.$MINOR`, create one:
```
git checkout -b $MAJOR.$MINOR v$MAJOR.$MINOR.0
git checkout -b $MAJOR.$MINOR v$MAJOR.$MINOR.$((PATCH - 1))
git push git@github.com:bitcoin-core/secp256k1.git $MAJOR.$MINOR
```
2. Open a pull request to the `$MAJOR.$MINOR` branch that
* includes the bugfixes,
* finalizes the release notes,
* includes the bug fixes,
* finalizes the release notes similar to a regular release,
* increments `_PKG_VERSION_PATCH` and `_LIB_VERSION_REVISION` in `configure.ac`
and the `$PATCH` component of `project(libsecp256k1 VERSION ...)` and `${PROJECT_NAME}_LIB_VERSION_REVISION` in `CMakeLists.txt`
(with commit message `"release: bump versions for $MAJOR.$MINOR.$PATCH"`, for example).
3. After the PRs are merged, update the release branch and tag the commit:
3. Perform the [sanity checks](#sanity-checks) on the PR branch.
4. After the PRs are merged, update the release branch, tag the commit, and push the tag:
```
git checkout $MAJOR.$MINOR && git pull
git tag -s v$MAJOR.$MINOR.$PATCH -m "libsecp256k1 $MAJOR.$MINOR.$PATCH"
```
4. Push tag:
```
git push git@github.com:bitcoin-core/secp256k1.git v$MAJOR.$MINOR.$PATCH
```
5. Create a new GitHub release with a link to the corresponding entry in [CHANGELOG.md](../CHANGELOG.md).
6. Open PR to the master branch that includes a commit (with commit message `"release notes: add $MAJOR.$MINOR.$PATCH"`, for example) that adds release notes to [CHANGELOG.md](../CHANGELOG.md).
6. Create a new GitHub release with a link to the corresponding entry in [CHANGELOG.md](../CHANGELOG.md).
7. Send an announcement email to the bitcoin-dev mailing list.
8. Open PR to the master branch that includes a commit (with commit message `"release notes: add $MAJOR.$MINOR.$PATCH"`, for example) that adds release notes to [CHANGELOG.md](../CHANGELOG.md).

View File

@@ -1,27 +1,31 @@
add_library(example INTERFACE)
target_include_directories(example INTERFACE
${PROJECT_SOURCE_DIR}/include
)
target_link_libraries(example INTERFACE
secp256k1
$<$<PLATFORM_ID:Windows>:bcrypt>
)
if(NOT BUILD_SHARED_LIBS AND MSVC)
target_link_options(example INTERFACE /IGNORE:4217)
endif()
function(add_example name)
set(target_name ${name}_example)
add_executable(${target_name} ${name}.c)
target_include_directories(${target_name} PRIVATE
${PROJECT_SOURCE_DIR}/include
)
target_link_libraries(${target_name}
secp256k1
$<$<PLATFORM_ID:Windows>:bcrypt>
)
set(test_name ${name}_example)
add_test(NAME secp256k1_${test_name} COMMAND ${target_name})
endfunction()
add_executable(ecdsa_example ecdsa.c)
target_link_libraries(ecdsa_example example)
add_test(NAME ecdsa_example COMMAND ecdsa_example)
add_example(ecdsa)
if(SECP256K1_ENABLE_MODULE_ECDH)
add_executable(ecdh_example ecdh.c)
target_link_libraries(ecdh_example example)
add_test(NAME ecdh_example COMMAND ecdh_example)
add_example(ecdh)
endif()
if(SECP256K1_ENABLE_MODULE_SCHNORRSIG)
add_executable(schnorr_example schnorr.c)
target_link_libraries(schnorr_example example)
add_test(NAME schnorr_example COMMAND schnorr_example)
add_example(schnorr)
endif()
if(SECP256K1_ENABLE_MODULE_ELLSWIFT)
add_example(ellswift)
endif()
if(SECP256K1_ENABLE_MODULE_MUSIG)
add_example(musig)
endif()

View File

@@ -42,18 +42,16 @@ int main(void) {
assert(return_val);
/*** Key Generation ***/
/* If the secret key is zero or out of range (bigger than secp256k1's
* order), we try to sample a new key. Note that the probability of this
* happening is negligible. */
while (1) {
if (!fill_random(seckey1, sizeof(seckey1)) || !fill_random(seckey2, sizeof(seckey2))) {
printf("Failed to generate randomness\n");
return 1;
}
if (secp256k1_ec_seckey_verify(ctx, seckey1) && secp256k1_ec_seckey_verify(ctx, seckey2)) {
break;
}
if (!fill_random(seckey1, sizeof(seckey1)) || !fill_random(seckey2, sizeof(seckey2))) {
printf("Failed to generate randomness\n");
return 1;
}
/* If the secret key is zero or out of range (greater than secp256k1's
* order), we fail. Note that the probability of this occurring is negligible
* with a properly functioning random number generator. */
if (!secp256k1_ec_seckey_verify(ctx, seckey1) || !secp256k1_ec_seckey_verify(ctx, seckey2)) {
printf("Generated secret key is invalid. This indicates an issue with the random number generator.\n");
return 1;
}
/* Public key creation using a valid context with a verified secret key should never fail */
@@ -108,7 +106,7 @@ int main(void) {
/* It's best practice to try to clear secrets from memory after using them.
* This is done because some bugs can allow an attacker to leak memory, for
* example through "out of bounds" array access (see Heartbleed), Or the OS
* example through "out of bounds" array access (see Heartbleed), or the OS
* swapping them to disk. Hence, we overwrite the secret key buffer with zeros.
*
* Here we are preventing these writes from being optimized out, as any good compiler

View File

@@ -49,18 +49,16 @@ int main(void) {
assert(return_val);
/*** Key Generation ***/
/* If the secret key is zero or out of range (bigger than secp256k1's
* order), we try to sample a new key. Note that the probability of this
* happening is negligible. */
while (1) {
if (!fill_random(seckey, sizeof(seckey))) {
printf("Failed to generate randomness\n");
return 1;
}
if (secp256k1_ec_seckey_verify(ctx, seckey)) {
break;
}
if (!fill_random(seckey, sizeof(seckey))) {
printf("Failed to generate randomness\n");
return 1;
}
/* If the secret key is zero or out of range (greater than secp256k1's
* order), we fail. Note that the probability of this occurring is negligible
* with a properly functioning random number generator. */
if (!secp256k1_ec_seckey_verify(ctx, seckey)) {
printf("Generated secret key is invalid. This indicates an issue with the random number generator.\n");
return 1;
}
/* Public key creation using a valid context with a verified secret key should never fail */
@@ -128,7 +126,7 @@ int main(void) {
/* It's best practice to try to clear secrets from memory after using them.
* This is done because some bugs can allow an attacker to leak memory, for
* example through "out of bounds" array access (see Heartbleed), Or the OS
* example through "out of bounds" array access (see Heartbleed), or the OS
* swapping them to disk. Hence, we overwrite the secret key buffer with zeros.
*
* Here we are preventing these writes from being optimized out, as any good compiler

121
external/secp256k1/examples/ellswift.c vendored Normal file
View File

@@ -0,0 +1,121 @@
/*************************************************************************
* Written in 2024 by Sebastian Falbesoner *
* To the extent possible under law, the author(s) have dedicated all *
* copyright and related and neighboring rights to the software in this *
* file to the public domain worldwide. This software is distributed *
* without any warranty. For the CC0 Public Domain Dedication, see *
* EXAMPLES_COPYING or https://creativecommons.org/publicdomain/zero/1.0 *
*************************************************************************/
/** This file demonstrates how to use the ElligatorSwift module to perform
* a key exchange according to BIP 324. Additionally, see the documentation
* in include/secp256k1_ellswift.h and doc/ellswift.md.
*/
#include <stdio.h>
#include <assert.h>
#include <string.h>
#include <secp256k1.h>
#include <secp256k1_ellswift.h>
#include "examples_util.h"
int main(void) {
secp256k1_context* ctx;
unsigned char randomize[32];
unsigned char auxrand1[32];
unsigned char auxrand2[32];
unsigned char seckey1[32];
unsigned char seckey2[32];
unsigned char ellswift_pubkey1[64];
unsigned char ellswift_pubkey2[64];
unsigned char shared_secret1[32];
unsigned char shared_secret2[32];
int return_val;
/* Create a secp256k1 context */
ctx = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
if (!fill_random(randomize, sizeof(randomize))) {
printf("Failed to generate randomness\n");
return 1;
}
/* Randomizing the context is recommended to protect against side-channel
* leakage. See `secp256k1_context_randomize` in secp256k1.h for more
* information about it. This should never fail. */
return_val = secp256k1_context_randomize(ctx, randomize);
assert(return_val);
/*** Generate secret keys ***/
if (!fill_random(seckey1, sizeof(seckey1)) || !fill_random(seckey2, sizeof(seckey2))) {
printf("Failed to generate randomness\n");
return 1;
}
/* If the secret key is zero or out of range (greater than secp256k1's
* order), we fail. Note that the probability of this occurring is negligible
* with a properly functioning random number generator. */
if (!secp256k1_ec_seckey_verify(ctx, seckey1) || !secp256k1_ec_seckey_verify(ctx, seckey2)) {
printf("Generated secret key is invalid. This indicates an issue with the random number generator.\n");
return 1;
}
/* Generate ElligatorSwift public keys. This should never fail with valid context and
verified secret keys. Note that providing additional randomness (fourth parameter) is
optional, but recommended. */
if (!fill_random(auxrand1, sizeof(auxrand1)) || !fill_random(auxrand2, sizeof(auxrand2))) {
printf("Failed to generate randomness\n");
return 1;
}
return_val = secp256k1_ellswift_create(ctx, ellswift_pubkey1, seckey1, auxrand1);
assert(return_val);
return_val = secp256k1_ellswift_create(ctx, ellswift_pubkey2, seckey2, auxrand2);
assert(return_val);
/*** Create the shared secret on each side ***/
/* Perform x-only ECDH with seckey1 and ellswift_pubkey2. Should never fail
* with a verified seckey and valid pubkey. Note that both parties pass both
* EllSwift pubkeys in the same order; the pubkey of the calling party is
* determined by the "party" boolean (sixth parameter). */
return_val = secp256k1_ellswift_xdh(ctx, shared_secret1, ellswift_pubkey1, ellswift_pubkey2,
seckey1, 0, secp256k1_ellswift_xdh_hash_function_bip324, NULL);
assert(return_val);
/* Perform x-only ECDH with seckey2 and ellswift_pubkey1. Should never fail
* with a verified seckey and valid pubkey. */
return_val = secp256k1_ellswift_xdh(ctx, shared_secret2, ellswift_pubkey1, ellswift_pubkey2,
seckey2, 1, secp256k1_ellswift_xdh_hash_function_bip324, NULL);
assert(return_val);
/* Both parties should end up with the same shared secret */
return_val = memcmp(shared_secret1, shared_secret2, sizeof(shared_secret1));
assert(return_val == 0);
printf( " Secret Key1: ");
print_hex(seckey1, sizeof(seckey1));
printf( "EllSwift Pubkey1: ");
print_hex(ellswift_pubkey1, sizeof(ellswift_pubkey1));
printf("\n Secret Key2: ");
print_hex(seckey2, sizeof(seckey2));
printf( "EllSwift Pubkey2: ");
print_hex(ellswift_pubkey2, sizeof(ellswift_pubkey2));
printf("\n Shared Secret: ");
print_hex(shared_secret1, sizeof(shared_secret1));
/* This will clear everything from the context and free the memory */
secp256k1_context_destroy(ctx);
/* It's best practice to try to clear secrets from memory after using them.
* This is done because some bugs can allow an attacker to leak memory, for
* example through "out of bounds" array access (see Heartbleed), or the OS
* swapping them to disk. Hence, we overwrite the secret key buffer with zeros.
*
* Here we are preventing these writes from being optimized out, as any good compiler
* will remove any writes that aren't used. */
secure_erase(seckey1, sizeof(seckey1));
secure_erase(seckey2, sizeof(seckey2));
secure_erase(shared_secret1, sizeof(shared_secret1));
secure_erase(shared_secret2, sizeof(shared_secret2));
return 0;
}

View File

@@ -95,7 +95,7 @@ static void secure_erase(void *ptr, size_t len) {
* As best as we can tell, this is sufficient to break any optimisations that
* might try to eliminate "superfluous" memsets.
* This method used in memzero_explicit() the Linux kernel, too. Its advantage is that it is
* pretty efficient, because the compiler can still implement the memset() efficently,
* pretty efficient, because the compiler can still implement the memset() efficiently,
* just not remove it entirely. See "Dead Store Elimination (Still) Considered Harmful" by
* Yang et al. (USENIX Security 2017) for more background.
*/
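For readers unfamiliar with the trick being referred to above, here is a hedged sketch of the general pattern (an illustration under the assumption that the mechanism is the same one used by `memzero_explicit()` in the Linux kernel, not a copy of the vendored code): after the `memset`, an empty inline-asm statement that takes the pointer as an input and clobbers memory forces the compiler to assume the zeroed bytes are observed, so it may still emit an efficient `memset` but cannot delete it.
```c
#include <string.h>

/* Illustrative sketch of the dead-store-elimination countermeasure described
 * above (GCC/Clang syntax); not the vendored implementation. */
static void secure_erase_sketch(void *ptr, size_t len) {
    memset(ptr, 0, len);
    /* The empty asm claims to read ptr and clobber memory, so the compiler
     * must keep the preceding writes. */
    __asm__ __volatile__("" : : "r"(ptr) : "memory");
}
```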

260
external/secp256k1/examples/musig.c vendored Normal file
View File

@@ -0,0 +1,260 @@
/*************************************************************************
* To the extent possible under law, the author(s) have dedicated all *
* copyright and related and neighboring rights to the software in this *
* file to the public domain worldwide. This software is distributed *
* without any warranty. For the CC0 Public Domain Dedication, see *
* EXAMPLES_COPYING or https://creativecommons.org/publicdomain/zero/1.0 *
*************************************************************************/
/** This file demonstrates how to use the MuSig module to create a
* 3-of-3 multisignature. Additionally, see the documentation in
* include/secp256k1_musig.h and doc/musig.md.
*/
#include <stdio.h>
#include <assert.h>
#include <string.h>
#include <secp256k1.h>
#include <secp256k1_extrakeys.h>
#include <secp256k1_musig.h>
#include <secp256k1_schnorrsig.h>
#include "examples_util.h"
struct signer_secrets {
secp256k1_keypair keypair;
secp256k1_musig_secnonce secnonce;
};
struct signer {
secp256k1_pubkey pubkey;
secp256k1_musig_pubnonce pubnonce;
secp256k1_musig_partial_sig partial_sig;
};
/* Number of public keys involved in creating the aggregate signature */
#define N_SIGNERS 3
/* Create a key pair, store it in signer_secrets->keypair and signer->pubkey */
static int create_keypair(const secp256k1_context* ctx, struct signer_secrets *signer_secrets, struct signer *signer) {
unsigned char seckey[32];
if (!fill_random(seckey, sizeof(seckey))) {
printf("Failed to generate randomness\n");
return 0;
}
/* Try to create a keypair with a valid context. This only fails if the
* secret key is zero or out of range (greater than secp256k1's order). Note
* that the probability of this occurring is negligible with a properly
* functioning random number generator. */
if (!secp256k1_keypair_create(ctx, &signer_secrets->keypair, seckey)) {
return 0;
}
if (!secp256k1_keypair_pub(ctx, &signer->pubkey, &signer_secrets->keypair)) {
return 0;
}
secure_erase(seckey, sizeof(seckey));
return 1;
}
/* Tweak the pubkey corresponding to the provided keyagg cache, update the cache
* and return the tweaked aggregate pk. */
static int tweak(const secp256k1_context* ctx, secp256k1_xonly_pubkey *agg_pk, secp256k1_musig_keyagg_cache *cache) {
secp256k1_pubkey output_pk;
/* For BIP 32 tweaking the plain_tweak is set to a hash as defined in BIP
* 32. */
unsigned char plain_tweak[32] = "this could be a BIP32 tweak....";
/* For Taproot tweaking the xonly_tweak is set to the TapTweak hash as
* defined in BIP 341 */
unsigned char xonly_tweak[32] = "this could be a Taproot tweak..";
/* Plain tweaking which, for example, allows deriving multiple child
* public keys from a single aggregate key using BIP32 */
if (!secp256k1_musig_pubkey_ec_tweak_add(ctx, NULL, cache, plain_tweak)) {
return 0;
}
/* Note that we did not provide an output_pk argument, because the
* resulting pk is also saved in the cache and so if one is just interested
* in signing, the output_pk argument is unnecessary. On the other hand, if
* one is not interested in signing, the same output_pk can be obtained by
* calling `secp256k1_musig_pubkey_get` right after key aggregation to get
* the full pubkey and then call `secp256k1_ec_pubkey_tweak_add`. */
/* Xonly tweaking which, for example, allows creating Taproot commitments */
if (!secp256k1_musig_pubkey_xonly_tweak_add(ctx, &output_pk, cache, xonly_tweak)) {
return 0;
}
/* Note that if we wouldn't care about signing, we can arrive at the same
* output_pk by providing the untweaked public key to
* `secp256k1_xonly_pubkey_tweak_add` (after converting it to an xonly pubkey
* if necessary with `secp256k1_xonly_pubkey_from_pubkey`). */
/* Now we convert the output_pk to an xonly pubkey to allow to later verify
* the Schnorr signature against it. For this purpose we can ignore the
* `pk_parity` output argument; we would need it if we would have to open
* the Taproot commitment. */
if (!secp256k1_xonly_pubkey_from_pubkey(ctx, agg_pk, NULL, &output_pk)) {
return 0;
}
return 1;
}
/* Sign a message hash with the given key pairs and store the result in sig */
static int sign(const secp256k1_context* ctx, struct signer_secrets *signer_secrets, struct signer *signer, const secp256k1_musig_keyagg_cache *cache, const unsigned char *msg32, unsigned char *sig64) {
int i;
const secp256k1_musig_pubnonce *pubnonces[N_SIGNERS];
const secp256k1_musig_partial_sig *partial_sigs[N_SIGNERS];
/* The same for all signers */
secp256k1_musig_session session;
secp256k1_musig_aggnonce agg_pubnonce;
for (i = 0; i < N_SIGNERS; i++) {
unsigned char seckey[32];
unsigned char session_secrand[32];
/* Create random session ID. It is absolutely necessary that the session ID
* is unique for every call of secp256k1_musig_nonce_gen. Otherwise
* it's trivial for an attacker to extract the secret key! */
if (!fill_random(session_secrand, sizeof(session_secrand))) {
return 0;
}
if (!secp256k1_keypair_sec(ctx, seckey, &signer_secrets[i].keypair)) {
return 0;
}
/* Initialize session and create secret nonce for signing and public
* nonce to send to the other signers. */
if (!secp256k1_musig_nonce_gen(ctx, &signer_secrets[i].secnonce, &signer[i].pubnonce, session_secrand, seckey, &signer[i].pubkey, msg32, NULL, NULL)) {
return 0;
}
pubnonces[i] = &signer[i].pubnonce;
secure_erase(seckey, sizeof(seckey));
}
/* Communication round 1: Every signer sends their pubnonce to the
* coordinator. The coordinator runs secp256k1_musig_nonce_agg and sends
* agg_pubnonce to each signer */
if (!secp256k1_musig_nonce_agg(ctx, &agg_pubnonce, pubnonces, N_SIGNERS)) {
return 0;
}
/* Every signer creates a partial signature */
for (i = 0; i < N_SIGNERS; i++) {
/* Initialize the signing session by processing the aggregate nonce */
if (!secp256k1_musig_nonce_process(ctx, &session, &agg_pubnonce, msg32, cache)) {
return 0;
}
/* partial_sign will clear the secnonce by setting it to 0. That's because
* you must _never_ reuse the secnonce (or use the same session_secrand to
* create a secnonce). If you do, you effectively reuse the nonce and
* leak the secret key. */
if (!secp256k1_musig_partial_sign(ctx, &signer[i].partial_sig, &signer_secrets[i].secnonce, &signer_secrets[i].keypair, cache, &session)) {
return 0;
}
partial_sigs[i] = &signer[i].partial_sig;
}
/* Communication round 2: Every signer sends their partial signature to the
* coordinator, who verifies the partial signatures and aggregates them. */
for (i = 0; i < N_SIGNERS; i++) {
/* To check whether signing was successful, it suffices to either verify
* the aggregate signature with the aggregate public key using
* secp256k1_schnorrsig_verify, or verify all partial signatures of all
* signers individually. Verifying the aggregate signature is cheaper but
* verifying the individual partial signatures has the advantage that it
* can be used to determine which of the partial signatures are invalid
* (if any), i.e., which of the partial signatures cause the aggregate
* signature to be invalid and thus the protocol run to fail. It's also
* fine to first verify the aggregate sig, and only verify the individual
* sigs if it does not work.
*/
if (!secp256k1_musig_partial_sig_verify(ctx, &signer[i].partial_sig, &signer[i].pubnonce, &signer[i].pubkey, cache, &session)) {
return 0;
}
}
return secp256k1_musig_partial_sig_agg(ctx, sig64, &session, partial_sigs, N_SIGNERS);
}
int main(void) {
secp256k1_context* ctx;
int i;
struct signer_secrets signer_secrets[N_SIGNERS];
struct signer signers[N_SIGNERS];
const secp256k1_pubkey *pubkeys_ptr[N_SIGNERS];
secp256k1_xonly_pubkey agg_pk;
secp256k1_musig_keyagg_cache cache;
unsigned char msg[32] = "this_could_be_the_hash_of_a_msg";
unsigned char sig[64];
/* Create a secp256k1 context */
ctx = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
printf("Creating key pairs......");
fflush(stdout);
for (i = 0; i < N_SIGNERS; i++) {
if (!create_keypair(ctx, &signer_secrets[i], &signers[i])) {
printf("FAILED\n");
return 1;
}
pubkeys_ptr[i] = &signers[i].pubkey;
}
printf("ok\n");
/* The aggregate public key produced by secp256k1_musig_pubkey_agg depends
* on the order of the provided public keys. If there is no canonical order
* of the signers, the individual public keys can optionally be sorted with
* secp256k1_ec_pubkey_sort to ensure that the aggregate public key is
* independent of the order of signers. */
printf("Sorting public keys.....");
fflush(stdout);
if (!secp256k1_ec_pubkey_sort(ctx, pubkeys_ptr, N_SIGNERS)) {
printf("FAILED\n");
return 1;
}
printf("ok\n");
printf("Combining public keys...");
fflush(stdout);
/* If you just want to aggregate and not sign, you can call
* secp256k1_musig_pubkey_agg with the keyagg_cache argument set to NULL
* while providing a non-NULL agg_pk argument. */
if (!secp256k1_musig_pubkey_agg(ctx, NULL, &cache, pubkeys_ptr, N_SIGNERS)) {
printf("FAILED\n");
return 1;
}
printf("ok\n");
printf("Tweaking................");
fflush(stdout);
/* Optionally tweak the aggregate key */
if (!tweak(ctx, &agg_pk, &cache)) {
printf("FAILED\n");
return 1;
}
printf("ok\n");
printf("Signing message.........");
fflush(stdout);
if (!sign(ctx, signer_secrets, signers, &cache, msg, sig)) {
printf("FAILED\n");
return 1;
}
printf("ok\n");
printf("Verifying signature.....");
fflush(stdout);
if (!secp256k1_schnorrsig_verify(ctx, sig, msg, 32, &agg_pk)) {
printf("FAILED\n");
return 1;
}
printf("ok\n");
/* It's best practice to try to clear secrets from memory after using them.
* This is done because some bugs can allow an attacker to leak memory, for
* example through "out of bounds" array access (see Heartbleed), or the OS
* swapping them to disk. Hence, we overwrite secret key material with zeros.
*
* Here we are preventing these writes from being optimized out, as any good compiler
* will remove any writes that aren't used. */
for (i = 0; i < N_SIGNERS; i++) {
secure_erase(&signer_secrets[i], sizeof(signer_secrets[i]));
}
secp256k1_context_destroy(ctx);
return 0;
}

View File

@@ -18,9 +18,9 @@
#include "examples_util.h"
int main(void) {
unsigned char msg[12] = "Hello World!";
unsigned char msg[] = {'H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd', '!'};
unsigned char msg_hash[32];
unsigned char tag[17] = "my_fancy_protocol";
unsigned char tag[] = {'m', 'y', '_', 'f', 'a', 'n', 'c', 'y', '_', 'p', 'r', 'o', 't', 'o', 'c', 'o', 'l'};
unsigned char seckey[32];
unsigned char randomize[32];
unsigned char auxiliary_rand[32];
@@ -43,20 +43,17 @@ int main(void) {
assert(return_val);
/*** Key Generation ***/
/* If the secret key is zero or out of range (bigger than secp256k1's
* order), we try to sample a new key. Note that the probability of this
* happening is negligible. */
while (1) {
if (!fill_random(seckey, sizeof(seckey))) {
printf("Failed to generate randomness\n");
return 1;
}
/* Try to create a keypair with a valid context, it should only fail if
* the secret key is zero or out of range. */
if (secp256k1_keypair_create(ctx, &keypair, seckey)) {
break;
}
if (!fill_random(seckey, sizeof(seckey))) {
printf("Failed to generate randomness\n");
return 1;
}
/* Try to create a keypair with a valid context. This only fails if the
* secret key is zero or out of range (greater than secp256k1's order). Note
* that the probability of this occurring is negligible with a properly
* functioning random number generator. */
if (!secp256k1_keypair_create(ctx, &keypair, seckey)) {
printf("Generated secret key is invalid. This indicates an issue with the random number generator.\n");
return 1;
}
/* Extract the X-only public key from the keypair. We pass NULL for
@@ -146,7 +143,7 @@ int main(void) {
/* It's best practice to try to clear secrets from memory after using them.
* This is done because some bugs can allow an attacker to leak memory, for
* example through "out of bounds" array access (see Heartbleed), Or the OS
* example through "out of bounds" array access (see Heartbleed), or the OS
* swapping them to disk. Hence, we overwrite the secret key buffer with zeros.
*
* Here we are preventing these writes from being optimized out, as any good compiler

View File

@@ -49,19 +49,6 @@ extern "C" {
*/
typedef struct secp256k1_context_struct secp256k1_context;
/** Opaque data structure that holds rewritable "scratch space"
*
* The purpose of this structure is to replace dynamic memory allocations,
* because we target architectures where this may not be available. It is
* essentially a resizable (within specified parameters) block of bytes,
* which is initially created either by memory allocation or TODO as a pointer
* into some fixed rewritable space.
*
* Unlike the context object, this cannot safely be shared between threads
* without additional synchronization logic.
*/
typedef struct secp256k1_scratch_space_struct secp256k1_scratch_space;
/** Opaque data structure that holds a parsed and valid public key.
*
* The exact representation of data inside is implementation defined and not
@@ -71,11 +58,11 @@ typedef struct secp256k1_scratch_space_struct secp256k1_scratch_space;
* use secp256k1_ec_pubkey_serialize and secp256k1_ec_pubkey_parse. To
* compare keys, use secp256k1_ec_pubkey_cmp.
*/
typedef struct {
typedef struct secp256k1_pubkey {
unsigned char data[64];
} secp256k1_pubkey;
/** Opaque data structured that holds a parsed ECDSA signature.
/** Opaque data structure that holds a parsed ECDSA signature.
*
* The exact representation of data inside is implementation defined and not
* guaranteed to be portable between different platforms or versions. It is
@@ -84,7 +71,7 @@ typedef struct {
* comparison, use the secp256k1_ecdsa_signature_serialize_* and
* secp256k1_ecdsa_signature_parse_* functions.
*/
typedef struct {
typedef struct secp256k1_ecdsa_signature {
unsigned char data[64];
} secp256k1_ecdsa_signature;
@@ -133,28 +120,45 @@ typedef int (*secp256k1_nonce_function)(
# define SECP256K1_NO_BUILD
#endif
/* Symbol visibility. See libtool manual, section "Windows DLLs". */
#if defined(_WIN32) && !defined(__GNUC__)
# ifdef SECP256K1_BUILD
# ifdef DLL_EXPORT
# define SECP256K1_API __declspec (dllexport)
# define SECP256K1_API_VAR extern __declspec (dllexport)
/* Symbol visibility. */
#if defined(_WIN32)
/* GCC for Windows (e.g., MinGW) accepts the __declspec syntax
* for MSVC compatibility. A __declspec declaration implies (but is not
* exactly equivalent to) __attribute__ ((visibility("default"))), and so we
* actually want __declspec even on GCC, see "Microsoft Windows Function
* Attributes" in the GCC manual and the recommendations in
* https://gcc.gnu.org/wiki/Visibility. */
# if defined(SECP256K1_BUILD)
# if defined(DLL_EXPORT) || defined(SECP256K1_DLL_EXPORT)
/* Building libsecp256k1 as a DLL.
* 1. If using Libtool, it defines DLL_EXPORT automatically.
* 2. In other cases, SECP256K1_DLL_EXPORT must be defined. */
# define SECP256K1_API extern __declspec (dllexport)
# else
/* Building libsecp256k1 as a static library on Windows.
* No declspec is needed, and so we would want the non-Windows-specific
* logic below take care of this case. However, this may result in setting
* __attribute__ ((visibility("default"))), which is supposed to be a noop
* on Windows but may trigger warnings when compiling with -flto due to a
* bug in GCC, see
* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116478 . */
# define SECP256K1_API extern
# endif
# elif defined _MSC_VER
# define SECP256K1_API
# define SECP256K1_API_VAR extern __declspec (dllimport)
# elif defined DLL_EXPORT
# define SECP256K1_API __declspec (dllimport)
# define SECP256K1_API_VAR extern __declspec (dllimport)
/* The user must define SECP256K1_STATIC when consuming libsecp256k1 as a static
* library on Windows. */
# elif !defined(SECP256K1_STATIC)
/* Consuming libsecp256k1 as a DLL. */
# define SECP256K1_API extern __declspec (dllimport)
# endif
#endif
#ifndef SECP256K1_API
/* All cases not captured by the Windows-specific logic. */
# if defined(__GNUC__) && (__GNUC__ >= 4) && defined(SECP256K1_BUILD)
# define SECP256K1_API __attribute__ ((visibility ("default")))
# define SECP256K1_API_VAR extern __attribute__ ((visibility ("default")))
/* Building libsecp256k1 using GCC or compatible. */
# define SECP256K1_API extern __attribute__ ((visibility ("default")))
# else
# define SECP256K1_API
# define SECP256K1_API_VAR extern
/* Fall back to standard C's extern. */
# define SECP256K1_API extern
# endif
#endif
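A practical note for consumers, illustrated with a minimal (made-up) snippet: on Windows, when linking libsecp256k1 as a static library, `SECP256K1_STATIC` must be defined before the header is included so that `__declspec(dllimport)` is not applied; it is usually passed on the compiler command line (e.g. `-DSECP256K1_STATIC` or `/DSECP256K1_STATIC`) rather than written in the source as shown here.
```c
/* Illustrative consumer translation unit for a static Windows build. */
#define SECP256K1_STATIC
#include <secp256k1.h>

int main(void) {
    secp256k1_context *ctx = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
    /* ... use the library ... */
    secp256k1_context_destroy(ctx);
    return 0;
}
```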
@@ -226,10 +230,10 @@ typedef int (*secp256k1_nonce_function)(
*
* It is highly recommended to call secp256k1_selftest before using this context.
*/
SECP256K1_API_VAR const secp256k1_context *secp256k1_context_static;
SECP256K1_API const secp256k1_context *secp256k1_context_static;
/** Deprecated alias for secp256k1_context_static. */
SECP256K1_API_VAR const secp256k1_context *secp256k1_context_no_precomp
SECP256K1_API const secp256k1_context *secp256k1_context_no_precomp
SECP256K1_DEPRECATED("Use secp256k1_context_static instead");
/** Perform basic self tests (to be used in conjunction with secp256k1_context_static)
@@ -258,7 +262,7 @@ SECP256K1_API void secp256k1_selftest(void);
* memory allocation entirely, see secp256k1_context_static and the functions in
* secp256k1_preallocated.h.
*
* Returns: a newly created context object.
* Returns: pointer to a newly created context object.
* In: flags: Always set to SECP256K1_CONTEXT_NONE (see below).
*
* The only valid non-deprecated flag in recent library versions is
@@ -289,8 +293,8 @@ SECP256K1_API secp256k1_context *secp256k1_context_create(
* Cloning secp256k1_context_static is not possible, and should not be emulated by
* the caller (e.g., using memcpy). Create a new context instead.
*
* Returns: a newly created context object.
* Args: ctx: an existing context to copy (not secp256k1_context_static)
* Returns: pointer to a newly created context object.
* Args: ctx: pointer to a context to copy (not secp256k1_context_static).
*/
SECP256K1_API secp256k1_context *secp256k1_context_clone(
const secp256k1_context *ctx
@@ -306,7 +310,7 @@ SECP256K1_API secp256k1_context *secp256k1_context_clone(
* behaviour is undefined. In that case, secp256k1_context_preallocated_destroy must
* be used instead.
*
* Args: ctx: an existing context to destroy, constructed using
* Args: ctx: pointer to a context to destroy, constructed using
* secp256k1_context_create or secp256k1_context_clone
* (i.e., not secp256k1_context_static).
*/
@@ -343,8 +347,8 @@ SECP256K1_API void secp256k1_context_destroy(
* fails. In this case, the corresponding default handler will be called with
* the data pointer argument set to NULL.
*
* Args: ctx: an existing context object.
* In: fun: a pointer to a function to call when an illegal argument is
* Args: ctx: pointer to a context object.
* In: fun: pointer to a function to call when an illegal argument is
* passed to the API, taking a message and an opaque pointer.
* (NULL restores the default handler.)
* data: the opaque pointer to pass to fun above, must be NULL for the default handler.
@@ -370,8 +374,8 @@ SECP256K1_API void secp256k1_context_set_illegal_callback(
* for that). After this callback returns, anything may happen, including
* crashing.
*
* Args: ctx: an existing context object.
* In: fun: a pointer to a function to call when an internal error occurs,
* Args: ctx: pointer to a context object.
* In: fun: pointer to a function to call when an internal error occurs,
* taking a message and an opaque pointer (NULL restores the
* default handler, see secp256k1_context_set_illegal_callback
* for details).
@@ -385,34 +389,11 @@ SECP256K1_API void secp256k1_context_set_error_callback(
const void *data
) SECP256K1_ARG_NONNULL(1);
/** Create a secp256k1 scratch space object.
*
* Returns: a newly created scratch space.
* Args: ctx: an existing context object.
* In: size: amount of memory to be available as scratch space. Some extra
* (<100 bytes) will be allocated for extra accounting.
*/
SECP256K1_API SECP256K1_WARN_UNUSED_RESULT secp256k1_scratch_space *secp256k1_scratch_space_create(
const secp256k1_context *ctx,
size_t size
) SECP256K1_ARG_NONNULL(1);
/** Destroy a secp256k1 scratch space.
*
* The pointer may not be used afterwards.
* Args: ctx: a secp256k1 context object.
* scratch: space to destroy
*/
SECP256K1_API void secp256k1_scratch_space_destroy(
const secp256k1_context *ctx,
secp256k1_scratch_space *scratch
) SECP256K1_ARG_NONNULL(1);
/** Parse a variable-length public key into the pubkey object.
*
* Returns: 1 if the public key was fully valid.
* 0 if the public key could not be parsed or is invalid.
* Args: ctx: a secp256k1 context object.
* Args: ctx: pointer to a context object.
* Out: pubkey: pointer to a pubkey object. If 1 is returned, it is set to a
* parsed version of input. If not, its value is undefined.
* In: input: pointer to a serialized public key
@@ -432,14 +413,14 @@ SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ec_pubkey_parse(
/** Serialize a pubkey object into a serialized byte sequence.
*
* Returns: 1 always.
* Args: ctx: a secp256k1 context object.
* Out: output: a pointer to a 65-byte (if compressed==0) or 33-byte (if
* Args: ctx: pointer to a context object.
* Out: output: pointer to a 65-byte (if compressed==0) or 33-byte (if
* compressed==1) byte array to place the serialized key
* in.
* In/Out: outputlen: a pointer to an integer which is initially set to the
* In/Out: outputlen: pointer to an integer which is initially set to the
* size of output, and is overwritten with the written
* size.
* In: pubkey: a pointer to a secp256k1_pubkey containing an
* In: pubkey: pointer to a secp256k1_pubkey containing an
* initialized public key.
* flags: SECP256K1_EC_COMPRESSED if serialization should be in
* compressed format, otherwise SECP256K1_EC_UNCOMPRESSED.
@@ -457,7 +438,7 @@ SECP256K1_API int secp256k1_ec_pubkey_serialize(
* Returns: <0 if the first public key is less than the second
* >0 if the first public key is greater than the second
* 0 if the two public keys are equal
* Args: ctx: a secp256k1 context object.
* Args: ctx: pointer to a context object
* In: pubkey1: first public key to compare
* pubkey2: second public key to compare
*/
@@ -467,12 +448,26 @@ SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ec_pubkey_cmp(
const secp256k1_pubkey *pubkey2
) SECP256K1_ARG_NONNULL(1) SECP256K1_ARG_NONNULL(2) SECP256K1_ARG_NONNULL(3);
/** Sort public keys using lexicographic (of compressed serialization) order
*
* Returns: 0 if the arguments are invalid. 1 otherwise.
*
* Args: ctx: pointer to a context object
* In: pubkeys: array of pointers to pubkeys to sort
* n_pubkeys: number of elements in the pubkeys array
*/
SECP256K1_API int secp256k1_ec_pubkey_sort(
const secp256k1_context *ctx,
const secp256k1_pubkey **pubkeys,
size_t n_pubkeys
) SECP256K1_ARG_NONNULL(1) SECP256K1_ARG_NONNULL(2);
/** Parse an ECDSA signature in compact (64 bytes) format.
*
* Returns: 1 when the signature could be parsed, 0 otherwise.
* Args: ctx: a secp256k1 context object
* Out: sig: a pointer to a signature object
* In: input64: a pointer to the 64-byte array to parse
* Args: ctx: pointer to a context object
* Out: sig: pointer to a signature object
* In: input64: pointer to the 64-byte array to parse
*
* The signature must consist of a 32-byte big endian R value, followed by a
* 32-byte big endian S value. If R or S fall outside of [0..order-1], the
@@ -491,9 +486,9 @@ SECP256K1_API int secp256k1_ecdsa_signature_parse_compact(
/** Parse a DER ECDSA signature.
*
* Returns: 1 when the signature could be parsed, 0 otherwise.
* Args: ctx: a secp256k1 context object
* Out: sig: a pointer to a signature object
* In: input: a pointer to the signature to be parsed
* Args: ctx: pointer to a context object
* Out: sig: pointer to a signature object
* In: input: pointer to the signature to be parsed
* inputlen: the length of the array pointed to be input
*
* This function will accept any valid DER encoded signature, even if the
@@ -513,13 +508,13 @@ SECP256K1_API int secp256k1_ecdsa_signature_parse_der(
/** Serialize an ECDSA signature in DER format.
*
* Returns: 1 if enough space was available to serialize, 0 otherwise
* Args: ctx: a secp256k1 context object
* Out: output: a pointer to an array to store the DER serialization
* In/Out: outputlen: a pointer to a length integer. Initially, this integer
* Args: ctx: pointer to a context object
* Out: output: pointer to an array to store the DER serialization
* In/Out: outputlen: pointer to a length integer. Initially, this integer
* should be set to the length of output. After the call
* it will be set to the length of the serialization (even
* if 0 was returned).
* In: sig: a pointer to an initialized signature object
* In: sig: pointer to an initialized signature object
*/
SECP256K1_API int secp256k1_ecdsa_signature_serialize_der(
const secp256k1_context *ctx,
@@ -531,9 +526,9 @@ SECP256K1_API int secp256k1_ecdsa_signature_serialize_der(
/** Serialize an ECDSA signature in compact (64 byte) format.
*
* Returns: 1
* Args: ctx: a secp256k1 context object
* Out: output64: a pointer to a 64-byte array to store the compact serialization
* In: sig: a pointer to an initialized signature object
* Args: ctx: pointer to a context object
* Out: output64: pointer to a 64-byte array to store the compact serialization
* In: sig: pointer to an initialized signature object
*
* See secp256k1_ecdsa_signature_parse_compact for details about the encoding.
*/
@@ -547,7 +542,7 @@ SECP256K1_API int secp256k1_ecdsa_signature_serialize_compact(
*
* Returns: 1: correct signature
* 0: incorrect or unparseable signature
* Args: ctx: a secp256k1 context object.
* Args: ctx: pointer to a context object
* In: sig: the signature being verified.
* msghash32: the 32-byte message hash being verified.
* The verifier must make sure to apply a cryptographic
@@ -578,12 +573,12 @@ SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ecdsa_verify(
/** Convert a signature to a normalized lower-S form.
*
* Returns: 1 if sigin was not normalized, 0 if it already was.
* Args: ctx: a secp256k1 context object
* Out: sigout: a pointer to a signature to fill with the normalized form,
* Args: ctx: pointer to a context object
* Out: sigout: pointer to a signature to fill with the normalized form,
* or copy if the input was already normalized. (can be NULL if
* you're only interested in whether the input was already
* normalized).
* In: sigin: a pointer to a signature to check/normalize (can be identical to sigout)
* In: sigin: pointer to a signature to check/normalize (can be identical to sigout)
*
* With ECDSA a third-party can forge a second distinct signature of the same
* message, given a single initial signature, but without knowing the key. This
@@ -626,10 +621,10 @@ SECP256K1_API int secp256k1_ecdsa_signature_normalize(
* If a data pointer is passed, it is assumed to be a pointer to 32 bytes of
* extra entropy.
*/
SECP256K1_API_VAR const secp256k1_nonce_function secp256k1_nonce_function_rfc6979;
SECP256K1_API const secp256k1_nonce_function secp256k1_nonce_function_rfc6979;
/** A default safe nonce generation function (currently equal to secp256k1_nonce_function_rfc6979). */
SECP256K1_API_VAR const secp256k1_nonce_function secp256k1_nonce_function_default;
SECP256K1_API const secp256k1_nonce_function secp256k1_nonce_function_default;
/** Create an ECDSA signature.
*
@@ -658,12 +653,14 @@ SECP256K1_API int secp256k1_ecdsa_sign(
const void *ndata
) SECP256K1_ARG_NONNULL(1) SECP256K1_ARG_NONNULL(2) SECP256K1_ARG_NONNULL(3) SECP256K1_ARG_NONNULL(4);
/** Verify an ECDSA secret key.
/** Verify an elliptic curve secret key.
*
* A secret key is valid if it is not 0 and less than the secp256k1 curve order
* when interpreted as an integer (most significant byte first). The
* probability of choosing a 32-byte string uniformly at random which is an
* invalid secret key is negligible.
* invalid secret key is negligible. However, if it does happen it should
* be assumed that the randomness source is severely broken and there should
* be no retry.
*
* Returns: 1: secret key is valid
* 0: secret key is invalid
@@ -733,10 +730,10 @@ SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ec_pubkey_negate(
* invalid according to secp256k1_ec_seckey_verify, this
* function returns 0. seckey will be set to some unspecified
* value if this function returns 0.
* In: tweak32: pointer to a 32-byte tweak. If the tweak is invalid according to
* secp256k1_ec_seckey_verify, this function returns 0. For
* uniformly random 32-byte arrays the chance of being invalid
* is negligible (around 1 in 2^128).
* In: tweak32: pointer to a 32-byte tweak, which must be valid according to
* secp256k1_ec_seckey_verify or 32 zero bytes. For uniformly
* random 32-byte tweaks, the chance of being invalid is
* negligible (around 1 in 2^128).
*/
SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ec_seckey_tweak_add(
const secp256k1_context *ctx,
@@ -761,10 +758,10 @@ SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ec_privkey_tweak_add(
* Args: ctx: pointer to a context object.
* In/Out: pubkey: pointer to a public key object. pubkey will be set to an
* invalid value if this function returns 0.
* In: tweak32: pointer to a 32-byte tweak. If the tweak is invalid according to
* secp256k1_ec_seckey_verify, this function returns 0. For
* uniformly random 32-byte arrays the chance of being invalid
* is negligible (around 1 in 2^128).
* In: tweak32: pointer to a 32-byte tweak, which must be valid according to
* secp256k1_ec_seckey_verify or 32 zero bytes. For uniformly
* random 32-byte tweaks, the chance of being invalid is
* negligible (around 1 in 2^128).
*/
SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ec_pubkey_tweak_add(
const secp256k1_context *ctx,

View File

@@ -27,11 +27,11 @@ typedef int (*secp256k1_ecdh_hash_function)(
/** An implementation of SHA256 hash function that applies to compressed public key.
* Populates the output parameter with 32 bytes. */
SECP256K1_API_VAR const secp256k1_ecdh_hash_function secp256k1_ecdh_hash_function_sha256;
SECP256K1_API const secp256k1_ecdh_hash_function secp256k1_ecdh_hash_function_sha256;
/** A default ECDH hash function (currently equal to secp256k1_ecdh_hash_function_sha256).
* Populates the output parameter with 32 bytes. */
SECP256K1_API_VAR const secp256k1_ecdh_hash_function secp256k1_ecdh_hash_function_default;
SECP256K1_API const secp256k1_ecdh_hash_function secp256k1_ecdh_hash_function_default;
/** Compute an EC Diffie-Hellman secret in constant time
*
@@ -39,7 +39,7 @@ SECP256K1_API_VAR const secp256k1_ecdh_hash_function secp256k1_ecdh_hash_functio
* 0: scalar was invalid (zero or overflow) or hashfp returned 0
* Args: ctx: pointer to a context object.
* Out: output: pointer to an array to be filled by hashfp.
* In: pubkey: a pointer to a secp256k1_pubkey containing an initialized public key.
* In: pubkey: pointer to a secp256k1_pubkey containing an initialized public key.
* seckey: a 32-byte scalar with which to multiply the point.
* hashfp: pointer to a hash function. If NULL,
* secp256k1_ecdh_hash_function_sha256 is used

Some files were not shown because too many files have changed in this diff.