Compare commits

..

503 Commits

Author SHA1 Message Date
tequ
f43aafde16 clang-format 2025-08-02 23:44:58 +09:00
Denis Angell
a33748ab86 Add Batch feature (XLS-56) (#5060)
- Specification: [XRPLF/XRPL-Standards 56](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-0056d-batch/README.md)
- Amendment: `Batch`
- Implements execution of multiple transactions within a single batch transaction with four execution modes: `tfAllOrNothing`, `tfOnlyOne`, `tfUntilFailure`, and `tfIndependent`.
- Enables atomic multi-party transactions where multiple accounts can participate in a single batch, with up to 8 inner transactions and 8 batch signers per batch transaction.
- Inner transactions use `tfInnerBatchTxn` flag with zero fees, no signature, and empty signing public key.
- Inner transactions are applied after the outer batch succeeds via the `applyBatchTransactions` function in apply.cpp.
- Network layer prevents relay of transactions with `tfInnerBatchTxn` flag - each peer applies inner transactions locally from the batch.
- Batch transactions are excluded from AccountDelegate permissions but inner transactions retain full delegation support.
- Metadata includes `ParentBatchID` linking inner transactions to their containing batch for traceability and auditing.
- Extended STTx with batch-specific signature verification methods and added protocol structures (`sfRawTransactions`, `sfBatchSigners`).
2025-08-02 23:38:12 +09:00
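A minimal sketch of what an outer Batch transaction might look like, based only on the summary in the entry above. The field names follow that summary; the flag values, the inner Payment contents, and the empty BatchSigners array are illustrative assumptions, not taken from the spec.

```cpp
#include <iostream>

int main()
{
    // Hypothetical outer Batch transaction: one inner transaction carrying
    // the tfInnerBatchTxn flag, zero fee, and an empty signing public key.
    // The numeric flag values (65536, 1073741824) are assumptions.
    std::cout << R"({
  "TransactionType": "Batch",
  "Flags": 65536,
  "RawTransactions": [
    {
      "RawTransaction": {
        "TransactionType": "Payment",
        "Flags": 1073741824,
        "Fee": "0",
        "SigningPubKey": ""
      }
    }
  ],
  "BatchSigners": []
})" << '\n';
}
```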
tequ
43a4a3a3e2 Add Remit test to AMM Account 2025-07-19 19:18:03 +09:00
tequ
117bdb1c42 Optimize AccountDelete and Credentials tests, Update tests priority 2025-07-19 04:49:25 +09:00
tequ
87e41a7888 Update Hook headers 2025-07-16 19:38:51 +09:00
tequ
df3bf8a958 VoteBehavior::DefaultYes for new fix Amendments
- NFToken-related fix Amendments remain as `DefaultNo`.
2025-07-14 19:27:09 +09:00
tequ
c5fa112e16 Add TSH processing for AMM, AMMClawback, Clawback, Oracle (#532)
* Add TSH processing for `AMM`, `AMMClawback`, `Oracle`, `Clawback`

* Add empty TSH processing for other transaction types

* Add AMMTsh tests
2025-07-10 23:26:32 +09:00
tequ
d2e21da7a3 Merge branch 'dev' into sync-2.4.0 2025-07-09 13:38:41 +09:00
Niq Dudfield
849d447a20 docs(freeze): canceling escrows with deep frozen assets is allowed (#540) 2025-07-09 13:48:59 +10:00
tequ
ee27049687 IOUIssuerWeakTSH (#388) 2025-07-09 13:48:26 +10:00
tequ
60dec74baf Add DeepFreeze test for URIToken (#539) 2025-07-09 12:49:47 +10:00
Denis Angell
9abea13649 Feature Clawback (#534) 2025-07-09 12:48:46 +10:00
Denis Angell
810e15319c Feature DeepFreeze (#536)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-09 10:33:08 +10:00
tequ
1f0bbdb288 Merge branch 'dev' into sync-2.4.0 2025-07-08 18:12:35 +09:00
Niq Dudfield
d593f3bef5 fix: provisional PreviousTxn{Id,LedgerSeq} double threading (#515)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-08 18:04:39 +10:00
tequ
5cc56d5f15 Support Hook execution in simulate RPC (#531) 2025-07-05 14:32:42 +09:00
tequ
0a9d3d3d75 Merge branch 'dev' into sync-2.4.0 2025-07-03 10:14:42 +09:00
Niq Dudfield
1233694b6c chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter (#525)
* chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter

* rm: kill annoying checkpatterns job

* chore: cleanup

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-07-01 20:58:06 +10:00
tequ
0d192d48ce add tests for DeepFreeze 2025-07-01 19:35:35 +09:00
tequ
574dc20641 Supported::No for featurePermissionedDomains 2025-07-01 16:08:01 +09:00
tequ
3367f40ef5 Supported::No for featureDynamicNFT 2025-07-01 16:01:27 +09:00
tequ
7080d292e6 Supported::No for featureCredentials 2025-07-01 15:54:07 +09:00
tequ
85a1eb5dba Supported::No for featureMPTokensV1 2025-07-01 15:22:52 +09:00
tequ
534ed875a2 Supported::No for featureNFTokenMintOffer 2025-07-01 15:14:08 +09:00
tequ
72b85d75c9 Supported::No for featureDID 2025-07-01 15:09:00 +09:00
tequ
9229ed779f Supported::No for featureXChainBridge 2025-07-01 14:52:24 +09:00
tequ
e9f671043d Combine AMM Amendments (#521)
* fixAMMv1_2
* fixAMMv1_1
* fixAMMOverflowOffer
* fixLPTokenTransfer
* suppress AMM test logs
* exclude `ltAMM` from `fixPreviousTxnID` Amendment
    - make `sfPreviousTxnID` and `sfPreviousTxnLgrSeq` required for ltAMM
2025-07-01 13:51:32 +09:00
tequ
37669452f6 Combine XChainBridge Amendments (#523) 2025-06-30 19:12:33 +09:00
tequ
d4fd40c471 Combine fixInnerObjTemplate Amendments (#524) 2025-06-30 18:14:04 +09:00
tequ
51aae2ce36 fix to DefaultNo for featureDeletableAccounts 2025-06-30 17:15:46 +09:00
tequ
c4106a2752 Disable instrumentation-build workflow (#530) 2025-06-30 16:54:11 +09:00
tequ
e955909a40 Merge branch 'dev' into sync-2.4.0 2025-06-30 13:33:12 +09:00
Niq Dudfield
396587c160 fix: prevent SOCI from linking ALL boost libraries (#529)
SOCI's vendored conanfile was using boost::boost which links against
every single Boost library (40+ libraries) when only boost::headers
is needed for SOCI's template specializations (boost::optional and
boost::gregorian::date support).

This was causing excessive linking and potential symbol conflicts,
particularly on Linux CI where boost_stacktrace_from_exception was
causing multiple definition errors with libstdc++.

Changed SOCI's boost dependency from boost::boost to boost::headers
since SOCI only needs Boost headers for its template specializations,
not the compiled libraries. The project already provides all necessary
Boost libraries through the ripple_boost target.

This reduces the linked libraries from 40+ down to just the ~14 that
the project actually uses, fixing the Linux CI build failures and
reducing binary size.

Note: The SOCI Conan recipe for Conan 2.0 already implements this
fix correctly.
2025-06-30 13:16:10 +09:00
tequ
a1d42b7380 Improve unittests (#494)
* Match unit tests on start of test name (#4634)

* For example, without this change, to run the TxQ tests you must specify
  `--unittest=TxQ1,TxQ2` on the command line. With this change, you can use
  `--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
  name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
  might be useful. If not, the shorter-named test(s) can be renamed. For
  example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
  up parallel tests by a factor of 5.

* SetHook_test, SetHookTSH_test, XahauGenesis_test

---------

Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-30 10:03:02 +10:00
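A minimal sketch of the matching rule described in the entry above (prefix match, with an exact match suppressing further partial matches); the function and names here are stand-ins, not the actual test-runner code.

```cpp
#include <string>
#include <string_view>
#include <vector>

// Select every test whose name starts with the request; if any test name
// matches exactly, only the exact matches are returned.
std::vector<std::string>
selectTests(std::vector<std::string> const& all, std::string_view request)
{
    std::vector<std::string> exact, partial;
    for (auto const& name : all)
    {
        if (name == request)
            exact.push_back(name);
        else if (name.compare(0, request.size(), request) == 0)
            partial.push_back(name);
    }
    return exact.empty() ? partial : exact;
}
```

With this rule, requesting `TxQ` selects both `TxQ1` and `TxQ2` (no exact match exists), while requesting `NFToken` selects only the suite named exactly `NFToken` even though `NFTokenBurn` also starts with that prefix.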
tequ
c065bc4938 Reduce numFeatures for DID Amendments combine 2025-06-28 21:36:25 +09:00
Niq Dudfield
2470926a1d fix: remove vestigial -DBOOST_ASIO_DISABLE_CONCEPTS usage (#526) 2025-06-27 16:41:30 +09:00
Denis Angell
c35890d5f8 fix cmake & xrpl_core 2025-06-25 10:30:30 +02:00
Denis Angell
248d485aed Update build-full.sh 2025-06-25 10:03:13 +02:00
Denis Angell
846965e77c fix cmake 2025-06-25 09:52:53 +02:00
Denis Angell
bf1f4e1a6f Update build-full.sh 2025-06-25 09:26:48 +02:00
Denis Angell
f8c4639ff4 add DeepFreeze to trustTransferAllowed 2025-06-25 09:15:05 +02:00
tequ
2451d78ae0 fix release-builder, workflow building 2025-06-24 21:27:20 +09:00
tequ
092f907724 remove checkpatterns workflow 2025-06-24 19:57:41 +09:00
tequ
33d4a989a2 Merge branch 'dev' into sync-2.4.0 2025-06-24 19:33:30 +09:00
tequ
348dab7491 Combine DID Amendments (#522)
fixEmptyDID -> featureDID
2025-06-23 21:13:52 +09:00
tequ
6728221831 Additional support for HookDefinition, HookState, ImportVLSequence at fixPreviousTxnID Amendment 2025-06-23 17:59:40 +09:00
Mark Travis
65f4945f22 Log detailed correlated consensus data together (#5302)
Combine multiple related debug log data points into a single
message. Allows quick correlation of events that
previously were either not logged or, if logged, strewn
across multiple lines, making correlation difficult.
The Heartbeat Timer and consensus ledger accept processing
each have this capability.

Also guarantees that log entries will be written if the
node is a validator, regardless of log severity level.
Otherwise, the level of these messages is at INFO severity.
2025-06-20 15:30:36 +09:00
Mark Travis
aff89c3457 fix: Acquire previously failed transaction set from network as new proposal arrives (#5318)
Reset the failure variable.
2025-06-20 15:12:58 +09:00
Bronek Kozicki
52e1766fb3 Fix Replace assert with XRPL_ASSERT (#5312) 2025-06-20 15:12:20 +09:00
Bronek Kozicki
3166ddc460 fix: Remove 'new parent hash' assert (#5313)
This assert is known to occasionally trigger, without causing errors
downstream. It is replaced with a log message.
2025-06-20 14:58:59 +09:00
Ed Hennis
db1591950d Add logging and improve counting of amendment votes from UNL (#5173)
* Add logging for amendment voting decision process
* When counting "received validations" to determine quorum, count the number of validators actually voting, not the total number of possible votes.
2025-06-20 14:58:48 +09:00
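A minimal sketch of the counting rule described in the entry above: the quorum is computed from the number of trusted validators actually seen voting, not from the full UNL size. The 80% threshold here is an assumption for illustration only.

```cpp
#include <cstddef>

// yesVotes: validators voting in favor of the amendment.
// validatorsVoting: validators actually submitting votes (not the UNL size).
bool
hasAmendmentMajority(std::size_t yesVotes, std::size_t validatorsVoting)
{
    return validatorsVoting != 0 && yesVotes * 100 >= validatorsVoting * 80;
}
```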
Bart
0d8c997867 docs: Revert peer port to 51235 (#5299)
Reverts the [port_peer] back to the legacy port 51235 rather than to the default port 2459, to avoid potentially inconveniencing existing operators.
2025-06-20 14:58:32 +09:00
Olek
41405706b0 fix: Switch Permissioned Domain to Supported::yes (#5287)
Switch Permissioned Domain feature's supported flag from Supported::no to Supported::yes for it to be votable.
2025-06-20 14:58:20 +09:00
Bart
d7480c6474 docs: Clarifies default port of hosts (#5290)
The current comment in the example cfg file incorrectly mentions both "may" and "must". This change fixes this comment to clarify that the default port of hosts is 2459 and that specifying it is therefore optional. It further sets the default port to 2459 instead of the legacy 51235.
2025-06-20 14:58:07 +09:00
Mark Travis
7b46e26d78 Log proposals and validations (#5291)
Adds detailed log messages for each validation and proposal received from the network.
2025-06-20 14:57:59 +09:00
Bart
5f5a73acbc Support canonical ledger entry names (#5271)
This change enhances the filtering in the ledger, ledger_data, and account_objects methods by also supporting filtering by the canonical name of the LedgerEntryType using case-insensitive matching.
2025-06-20 14:18:08 +09:00
Ed Hennis
c24f5b10b8 refactor: Change recursive_mutex to mutex in DatabaseRotatingImp (#5276)
Rewrites the code so that the lock is not held during the callback. Instead it locks twice, once before, and once after. This is safe due to the structure of the code, but is checked after the second lock. This allows mutex_ to be changed back to a regular mutex.
2025-06-20 14:17:06 +09:00
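A minimal sketch of the locking pattern described in the entry above: instead of holding the mutex across the caller-supplied callback, lock to snapshot the state, unlock for the callback, then lock again and re-check before committing. The type and members are stand-ins, not the DatabaseRotatingImp interface.

```cpp
#include <mutex>

struct Rotator
{
    std::mutex mutex_;    // a plain mutex is now sufficient
    int generation_ = 0;  // stand-in for the rotating-database state

    template <class Callback>
    void rotateWith(Callback&& cb)
    {
        int seen;
        {
            std::lock_guard lock(mutex_);  // first lock: snapshot state
            seen = generation_;
        }
        cb(seen);                          // callback runs unlocked
        {
            std::lock_guard lock(mutex_);  // second lock: verify and commit
            if (seen == generation_)       // structure guarantees this holds
                ++generation_;
        }
    }
};
```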
Bart
601bb7ed0f fix: Replace charge() by fee_.update() in OnMessage functions (#5269)
In PeerImp.cpp, if the function is a message handler (onMessage) or called directly from a message handler, then it should use fee_, since the charge function is called when the handler returns (onMessageEnd). If the function is not a message handler, such as a job queue item, it should continue to use charge.
2025-06-20 13:44:30 +09:00
Elliot Lee
0d3dd400f0 docs: ensure build_type and CMAKE_BUILD_TYPE match (#5274) 2025-06-20 13:43:59 +09:00
code0xff
4d763b7340 chore: Fix small typos in protocol files (#5279) 2025-06-20 13:43:42 +09:00
Ed Hennis
63665a6673 docs: Add a summary of the git commit message rules (#5283) 2025-06-20 13:43:03 +09:00
Olek
cbd7d5dc3a fix: Amendment to add transaction flag checking functionality for Credentials (#5250)
CredentialCreate / CredentialAccept / CredentialDelete transactions will check sfFlags field in preflight() when the amendment is enabled.
2025-06-20 13:42:18 +09:00
Donovan Hide
3e49ee604e fix: Omit superfluous setCurrentThreadName call in GRPCServer.cpp (#5280) 2025-06-20 11:21:55 +09:00
Bronek Kozicki
5e542f5215 fix: Do not allow creating Permissioned Domains if credentials are not enabled (#5275)
If the permissioned domains amendment XLS-80 is enabled before credentials XLS-70, then the permissioned domain users will not be able to match any credentials. The changes here prevent the creation of any permissioned domain objects if credentials are not enabled.
2025-06-20 11:21:45 +09:00
Mayukha Vadari
bdc404837c fix: issues in simulate RPC (#5265)
Make `simulate` RPC easier to use:
* Prevent the use of `seed`, `secret`, `seed_hex`, and `passphrase` fields (to avoid confusing with the signing methods).
* Add autofilling of the `NetworkID` field.
2025-06-20 11:21:33 +09:00
Bart
a62919a9cc Updates Conan dependencies (#5256)
This PR updates several Conan dependencies:
* boost
* date
* libarchive
* libmysqlclient
* libpq
* lz4
* onetbb
* openssl
* sqlite3
* zlib
* zstd
2025-06-20 11:21:21 +09:00
Shawn Xie
41dcc0fb23 Amendment fixFrozenLPTokenTransfer (#5227)
Prohibits LPToken holders from sending LPToken to others if they have been frozen by one of the assets in AMM pool.
2025-06-20 11:04:00 +09:00
Ed Hennis
b109dbf10f Improve git commit hash lookup (#5225)
- Also get the branch name.
- Use rev-parse instead of describe to get a clean hash.
- Return the git hash and branch name in server_info for admin
  connections.
- Include git hash and branch name on separate lines in --version.
2025-06-20 10:58:40 +09:00
Vlad
01372a67a8 Add deep freeze feature (XLS-77d) (#5187)
- spec: XRPLF/XRPL-Standards#220
- amendment: "DeepFreeze"
- implemented deep freeze spec to allow token issuers to prevent currency holders from being able to acquire more of these tokens.
- in combination with normal freeze, deep freeze effectively prevents any trust line balance change of a currency holder (except direct issuer <-> holder payments).
- added 2 new invariant checks to verify that deep freeze cannot be enacted without normal freeze and transfer is not frozen.
- made some fixes to existing freeze handling.

Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
2025-06-20 10:52:31 +09:00
Mayukha Vadari
2b59176cfd Add RPC "simulate" to execute a dry run of a transaction (#5069)
- Spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate
- Also update signing methods to autofill fees better and properly handle transactions that require a non-standard fee.
2025-06-20 10:03:49 +09:00
Olek
a0505ce47d Fix CI unit tests (#5196)
- Add retries for rpc client
- Add dynamic port allocation for rpc servers
2025-06-20 02:28:17 +09:00
Michael Legleux
91aabaa4aa Update secp256k1 library to 0.6.0 (#5254) 2025-06-20 01:35:15 +09:00
Bronek Kozicki
a63008b1be Add [validator_list_threshold] to validators.txt to improve UNL security (#5112) 2025-06-20 01:34:38 +09:00
Bronek Kozicki
33b5ed931c Switch from assert to XRPL_ASSERT (#5245) 2025-06-20 01:29:51 +09:00
tequ
ce5c3c98c9 Add missing space character to a log message (#5251) 2025-06-20 01:29:43 +09:00
Bronek Kozicki
aeadad26cb Cleanup API-CHANGELOG.md (#5207) 2025-06-20 01:29:24 +09:00
Ed Hennis
74c50ebdab test: Unit tests to recreate invalid index logic error (#5242)
* One hits the global cache, one does not.
* Also some extra checking.

Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
2025-06-20 01:29:02 +09:00
Sergey Kuznetsov
1e2c92290d fix: Error consistency in LedgerEntry::parsePermissionedDomains() (#5252)
Update errors for parsing permissioned domains in the LedgerEntry handler to make them consistent with other parsers.
2025-06-20 00:56:42 +09:00
Ed Hennis
0617dc221d fix: Use consistent CMake settings for all modules (#5228)
* Resolves an issue introduced in #5111, which inadvertently removed the
  -Wno-maybe-uninitialized compiler option from some xrpl.libxrpl
  modules. This resulted in new "may be used uninitialized" build
  warnings, first noticed in the "protocol" module. When compiling with
  werr=TRUE, those warnings became errors, which made the build fail.
* Github CI actions will build with the assert and werr options turned
  on. This will cause CI jobs to fail if a developer introduces a new
  compiler warning, or causes an assert to fail in release builds.
* Includes the OS and compiler version in the linux dependencies jobs in
  the "check environment" step.
* Translates the `unity` build option into `CMAKE_UNITY_BUILD` setting.
2025-06-20 00:56:18 +09:00
Valentin Balaschenko
a4a8295567 Fix levelization script to ignore commented includes (#5194)
Check to ignore single-line comments during dependency analysis.
2025-06-20 00:45:59 +09:00
tequ
2a836cbbb8 Fix the flag processing of NFTokenModify (#5246)
Adds checks for invalid flags.
2025-06-20 00:45:51 +09:00
Mayukha Vadari
79935d4db8 Fix failing assert in connect RPC (#5235) 2025-06-20 00:45:43 +09:00
Olek
7088c64427 Permissioned Domains (XLS-80d) (#5161) 2025-06-20 00:45:28 +09:00
tequ
27ddfae5e1 XLS-46: DynamicNFT (#5048)
This Amendment adds functionality to update the URI of NFToken objects as described in the XLS-46d: Dynamic Non Fungible Tokens (dNFTs) spec.
2025-06-20 00:27:50 +09:00
Shawn Xie
cf957db8da prefix Uint384 and Uint512 with Hash in server_definitions (#5231) 2025-06-20 00:10:23 +09:00
Mayukha Vadari
ac532d9d16 refactor: add rpcName to LEDGER_ENTRY macro (#5202)
The LEDGER_ENTRY macro now takes an additional parameter, which makes it harder to forget to add the new field to jss.h and to the list of account_objects/ledger_data filters.
2025-06-20 00:08:47 +09:00
Michael Legleux
37614773bb fix: Add header for set_difference (#5197)
Fix `error C2039: 'set_difference': is not a member of 'std'`
2025-06-19 23:42:21 +09:00
Mayukha Vadari
c329d71717 fix: allow overlapping types in Expected (#5218)
For example, `Expected<std::uint32_t, Json::Value>` will now build even though there is an implicit conversion from `unsigned int` to `Json::Value`.
2025-06-19 23:42:12 +09:00
Gregory Tsipenyuk
7de6a70221 Add MPTIssue to STIssue (#5200)
Replace Issue in STIssue with Asset. STIssue with MPTIssue is only used in MPT tests.
Will be used in Vault and in transactions with STIssue fields once MPT is integrated into DEX.
2025-06-19 23:41:59 +09:00
Bronek Kozicki
0fa542f672 Antithesis instrumentation improvements (#5213)
* Rename ASSERT to XRPL_ASSERT
* Upgrade to Antithesis SDK 0.4.4, and use new 0.4.4 features
  * automatic cast to bool, like assert
* Add instrumentation workflow to verify build with instrumentation enabled
2025-06-19 23:27:49 +09:00
John Freeman
68705eee2c Enforce levelization in libxrpl with CMake (#5111)
Adds two CMake functions:

* add_module(library subdirectory): Declares an OBJECT "library" (a CMake abstraction for a collection of object files) with sources from the given subdirectory of the given library, representing a module. Isolates the module's headers by creating a subdirectory in the build directory, e.g. .build/tmp123, that contains just a symlink, e.g. .build/tmp123/basics, to the module's header directory, e.g. include/xrpl/basics, in the source directory, and putting .build/tmp123 (but not include/xrpl) on the include path of the module sources. This prevents the module sources from including headers not explicitly linked to the module in CMake with target_link_libraries.
* target_link_modules(library scope modules...): Links the library target to each of the module targets, and removes their sources from its source list (so they are not compiled and linked twice).

Uses these functions to separate and explicitly link modules in libxrpl:

    Level 01: beast
    Level 02: basics
    Level 03: json, crypto
    Level 04: protocol
    Level 05: resource, server
2025-06-19 23:06:46 +09:00
Mayukha Vadari
fdbb24d898 refactor: clean up LedgerEntry.cpp (#5199)
Refactors LedgerEntry to make it easier to read and understand.
2025-06-19 21:49:47 +09:00
Ed Hennis
60a8f3c05b test: Add more test cases for Base58 parser (#5174)
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
2025-06-19 19:57:12 +09:00
Ed Hennis
5a3a71ecb8 test: Check for some unlikely null dereferences in tests (#5004) 2025-06-19 19:57:04 +09:00
Bronek Kozicki
16b3221f80 Add Antithesis instrumentation (#5042)
* Copy Antithesis SDK version 0.4.0 to directory external/
* Add build option `voidstar` to enable instrumentation with Antithesis SDK
* Define instrumentation macros ASSERT and UNREACHABLE in terms of regular C assert
* Replace asserts with named ASSERT or UNREACHABLE
* Add UNREACHABLE to LogicError
* Document instrumentation macros in CONTRIBUTING.md
2025-06-19 19:56:21 +09:00
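A rough sketch of how instrumentation macros can reduce to plain C assert when instrumentation is disabled, as the entry above describes. The macro names follow the commit text; the guard macro and exact definitions in the tree are assumptions.

```cpp
#include <cassert>

// Fallback definitions when Antithesis instrumentation (the `voidstar`
// build option) is off; ENABLE_VOIDSTAR is an assumed guard name.
#ifndef ENABLE_VOIDSTAR
#define ASSERT(cond, name) assert((cond) && (name))
#define UNREACHABLE(name) assert(false && (name))
#endif

// Usage: the name documents the assertion at the call site, e.g.
// ASSERT(amount >= 0, "ripple::doPayment : non-negative amount");
// UNREACHABLE("ripple::doPayment : unknown transaction type");
```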
Valentin Balaschenko
dd4b060f09 Reduce the peer charges for well-behaved peers:
- Fix an erroneous high fee penalty that peers could incur for sending
  older transactions.
- Update the fees charged for imposing a load on the server.
- Prevent the relaying of internal pseudo-transactions.
  - Before: Pseudo-transactions received from a peer will fail the signature
    check, even if they were requested (using TMGetObjectByHash), because
    they have no signature. This causes the peer to be charged for an
    invalid signature.
  - After: Pseudo-transactions are put into the global cache
    (TransactionMaster) only. If the transaction is not part of
    a TMTransactions batch, the peer is charged an unwanted data fee.
    These fees will not be a problem in the normal course of operations,
    but should dissuade peers from behaving badly by sending a bunch of
    junk.
- Improve logging: include the reason for fees charged to a peer.

Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-19 17:05:23 +09:00
tequ
ee78f8d566 update actions/upload-artifact to v4
https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/
2025-06-19 16:11:34 +09:00
tequ
952ce55223 levelization 2025-06-19 16:05:02 +09:00
tequ
e6893a9422 clang-format, ignore magic_enum.h 2025-06-19 15:56:33 +09:00
tequ
0ba16ef3d6 fix ltDID type ID 2025-06-19 15:50:17 +09:00
tequ
479dd8b57b Update ServerDefinition 2025-06-19 15:05:42 +09:00
Elliot Lee
e626b096a3 refactor(AMMClawback): move tfClawTwoAssets check (#5201)
Move tfClawTwoAssets check to preflight and return
error temINVALID_FLAG

---------

Co-authored-by: yinyiqian1 <yqian@ripple.com>
2025-06-19 10:21:18 +09:00
Elliot Lee
329c0ab1e1 Add a new serialized type: STNumber (#5121)
`STNumber` lets objects and transactions contain multiple fields for
quantities of XRP, IOU, or MPT without duplicating information about the
"issue" (represented by `STIssue`). It is a straightforward serialization of
the `Number` type that uniformly represents those quantities.

---------

Co-authored-by: John Freeman <jfreeman08@gmail.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
2025-06-19 10:21:07 +09:00
Olek
997836906c fix: check for valid ammID field in amm_info RPC (#5188) 2025-06-19 10:20:59 +09:00
Mayukha Vadari
71554dce3a fix: include index in server_definitions RPC (#5190) 2025-06-19 10:20:44 +09:00
Bronek Kozicki
2395d17a1f Fix ledger_entry crash on invalid credentials request (#5189) 2025-06-19 10:19:13 +09:00
Shawn Xie
e862f40636 Replace Uint192 with Hash192 in server_definitions response (#5177) 2025-06-19 10:17:50 +09:00
Bronek Kozicki
4f8096f378 Fix potential deadlock (#5124)
* 2.2.2 changed functions acquireAsync and NetworkOPsImp::recvValidation to add an item to a collection under lock, unlock, do some work, then lock again to remove the item. It will deadlock if an exception is thrown while adding the item - before unlocking.
* Replace ScopedUnlock with scope_unlock.
2025-06-19 10:14:43 +09:00
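A minimal sketch of the scope_unlock idea referenced above: given a held std::unique_lock, unlock it for the enclosing scope and re-lock on exit, even if an exception is thrown. This mirrors the intent described in the entry, not the exact libxrpl type.

```cpp
#include <mutex>

class scope_unlock_sketch
{
    std::unique_lock<std::mutex>& lock_;

public:
    explicit scope_unlock_sketch(std::unique_lock<std::mutex>& lock)
        : lock_(lock)
    {
        lock_.unlock();
    }
    ~scope_unlock_sketch()
    {
        lock_.lock();  // always re-acquired, exception or not
    }
};

// Usage sketch: add an item under the lock, do slow work unlocked, remove it.
// std::unique_lock lock(mtx);
// pending.insert(item);
// {
//     scope_unlock_sketch unlocked(lock);
//     doSlowWork(item);              // runs without the lock held
// }                                  // lock re-acquired here
// pending.erase(item);
```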
Olek
d8a3e65d78 Introduce Credentials support (XLS-70d): (#5103)
Amendment:
    - Credentials

    New Transactions:
    - CredentialCreate
    - CredentialAccept
    - CredentialDelete

    Modified Transactions:
    - DepositPreauth
    - Payment
    - EscrowFinish
    - PaymentChannelClaim
    - AccountDelete

    New Object:
    - Credential

    Modified Object:
    - DepositPreauth

    API updates:
    - ledger_entry
    - account_objects
    - ledger_data
    - deposit_authorized

    Read full spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0070d-credentials
2025-06-19 10:14:24 +09:00
Gregory Tsipenyuk
c3cc6494dd Fix token comparison in Payment (#5172)
* Checks only Currency or MPT Issuance ID part of the Asset object.
* Resolves temREDUNDANT regression detected in testing.
2025-06-19 00:21:21 +09:00
Gregory Tsipenyuk
76397fea5c Add fixAMMv1_2 amendment (#5176)
* Add reserve check on AMM Withdraw
* Try AMM max offer if changeSpotPriceQuality() fails
2025-06-19 00:21:00 +09:00
Gregory Tsipenyuk
291fb21d45 Fix unity build (#5179) 2025-06-19 00:00:06 +09:00
yinyiqian1
727fc8e084 Add AMMClawback Transaction (XLS-0073d) (#5142)
Amendment:
- AMMClawback

New Transactions:
- AMMClawback

Modified Transactions:
- AMMCreate
- AMMDeposit
2025-06-18 23:59:44 +09:00
Valentin Balaschenko
acc95ecc56 docs: Add protobuf dependencies to linux setup instructions (#5156) 2025-06-18 23:48:28 +09:00
yinyiqian1
f7592641d1 fix: reject invalid markers in account_objects RPC calls (#5046) 2025-06-18 23:48:09 +09:00
Bob Conan
1338b67964 Update RELEASENOTES.md (#5154)
fix the typo "concensus" -> "consensus"
2025-06-18 23:41:27 +09:00
Gregory Tsipenyuk
9ee638fe7f Introduce MPT support (XLS-33d): (#5143)
Amendment:
- MPTokensV1

New Transactions:
- MPTokenIssuanceCreate
- MPTokenIssuanceDestroy
- MPTokenIssuanceSet
- MPTokenAuthorize

Modified Transactions:
- Payment
- Clawback

New Objects:
- MPTokenIssuance
- MPToken

API updates:
- ledger_entry
- account_objects
- ledger_data

Other:
- Add += and -= operators to ValueProxy

Read full spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0033d-multi-purpose-tokens

---------
Co-authored-by: Shawn Xie <shawnxie920@gmail.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: John Freeman <jfreeman08@gmail.com>
2025-06-18 23:38:51 +09:00
John Freeman
9c1ed41879 Consolidate definitions of fields, objects, transactions, and features (#5122) 2025-06-18 16:26:50 +09:00
John Freeman
ab1c217e8d Reformat code with clang-format-18 2025-06-18 14:13:10 +09:00
John Freeman
e140a0fd0b Update pre-commit hook 2025-06-18 13:30:15 +09:00
John Freeman
7f3281ff54 Update clang-format settings 2025-06-18 13:30:01 +09:00
John Freeman
17d0e23720 Update clang-format workflow 2025-06-18 13:29:29 +09:00
Chenna Keshava B S
5675408c51 Expand Error Message for rpcInternal (#4959)
Validator operators have been confused by the rpcInternal error, which can occur if the server is not running in another process.
2025-06-18 13:27:24 +09:00
Elliot Lee
48bb555f92 docs: clean up API-CHANGELOG.md (#5064)
Move the newest information to the top, i.e., use reverse chronological order within each of the two sections ("API Versions" and "XRP Ledger server versions")
2025-06-18 13:27:14 +09:00
Denis Angell
2227a382d6 feat(SQLite): allow configurable database pragma values (#5135)
Make page_size and journal_size_limit configurable values in rippled.cfg
2025-06-18 13:27:04 +09:00
Vlad
088c1deaf5 refactor: re-order PRAGMA statements (#5140)
The page_size will soon be made configurable with #5135, making this
re-ordering necessary.

When opening SQLite connection, there are specific pragmas set with
commonPragmas.

In particular, PRAGMA journal_mode creates the journal file and locks the
page_size; as of this commit, this sets the page size to the default
value of 4096. Coincidentally, the hardcoded page_size was also 4096, so
no issue was noticed.
2025-06-18 13:26:55 +09:00
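A minimal sketch of the ordering constraint described above, using the SQLite C API directly: page_size must be issued before journal_mode, because setting the journal mode locks in the page size. The pragma values are illustrative, not the rippled.cfg defaults, and error handling is omitted.

```cpp
#include <sqlite3.h>

void applyPragmas(sqlite3* db)
{
    // page_size first: once journal_mode is set (and the journal/WAL file
    // is created), the page size can no longer be changed.
    sqlite3_exec(db, "PRAGMA page_size=4096;", nullptr, nullptr, nullptr);
    sqlite3_exec(db, "PRAGMA journal_size_limit=1582080;", nullptr, nullptr, nullptr);
    sqlite3_exec(db, "PRAGMA journal_mode=WAL;", nullptr, nullptr, nullptr);
}
```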
Chenna Keshava B S
632f94a8e7 fix(book_changes): add "validated" field and reduce RPC latency (#5096)
Update book_changes RPC to reduce latency, add "validated" field, and accept shortcut strings (current, closed, validated) for ledger_index.

`"validated": true` indicates that the transaction has been included in a validated ledger so the result of the transaction is immutable.

Fix #5033

Fix #5034

Fix #5035

Fix #5036

---------

Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
2025-06-18 13:26:45 +09:00
luozexuan
6bf4adf42f chore: fix typos in comments (#5094)
Signed-off-by: luozexuan <fetchcode@139.com>
2025-06-18 13:26:37 +09:00
Ed Hennis
db9af3a8c9 test: Retry RPC commands to try to fix MacOS CI jobs (#5120)
* Retry some failed RPC connections / commands in unit tests
* Remove orphaned `getAccounts` function

Co-authored-by: John Freeman <jfreeman08@gmail.com>
2025-06-18 13:26:21 +09:00
John Freeman
ef7e743a0e docs: Update options documentation (#5083)
Co-authored-by: Elliot Lee <github.public@intelliot.com>
2025-06-18 13:25:11 +09:00
John Freeman
c97f32cbcc refactor: Remove dead headers (#5081) 2025-06-18 13:24:50 +09:00
John Freeman
187634272d refactor: Remove reporting mode (#5092) 2025-06-18 13:24:40 +09:00
Scott Schurr
be49b22c2f Address rare corruption of NFTokenPage linked list (#4945)
* Add fixNFTokenPageLinks amendment:

It was discovered that under rare circumstances the links between
NFTokenPages could be removed.  If this happens, then the
account_objects and account_nfts RPC commands under-report the
NFTokens owned by an account.

The fixNFTokenPageLinks amendment does the following to address
the problem:

- It fixes the underlying problem so no further broken links
  should be created.
- It adds Invariants so, if such damage were introduced in the
  future, an invariant would stop it.
- It adds a new FixLedgerState transaction that repairs
  directories that were damaged in this fashion.
- It adds unit tests for all of it.
2025-06-18 12:32:59 +09:00
Bronek Kozicki
fd1908f5b6 Factor out Transactor::trapTransaction (#5087) 2025-06-18 12:22:04 +09:00
John Freeman
d27bc94249 Remove shards (#5066) 2025-06-18 12:17:28 +09:00
Bronek Kozicki
16b4550d93 Update gcovr EXCLUDE (#5084) 2025-06-17 23:58:07 +09:00
Bronek Kozicki
b51411f728 Fix crash inside OverlayImpl loops over ids_ (#5071) 2025-06-17 23:57:58 +09:00
Ed Hennis
6a17c6be3f docs: Document the process for merging pull requests (#5010) 2025-06-17 23:57:41 +09:00
Scott Schurr
881c5c8b96 Remove unused constants from resource/Fees.h (#4856) 2025-06-17 23:57:02 +09:00
Mayukha Vadari
eaf63accbe fix: change error for invalid feature param in feature RPC (#5063)
* Returns an "Invalid parameters" error if the `feature` parameter is provided and is not a string.
2025-06-17 23:56:54 +09:00
Ed Hennis
a26bcf1328 Ensure levelization sorting is ASCII-order across platforms (#5072) 2025-06-17 23:56:45 +09:00
Ed Hennis
b5e309347a fix: Fix NuDB build error via Conan patch (#5061)
* Includes updated instructions in BUILD.md.
2025-06-17 23:56:34 +09:00
yinyiqian1
f04b4e066f Disallow filtering account_objects by unsupported types (#5056)
* `account_objects` returns an invalid field error if `type` is not supported.
  This includes objects an account can't own, or which are unsupported by `account_objects`
* Includes:
  * Amendments
  * Directory Node
  * Fee Settings
  * Ledger Hashes
  * Negative UNL
2025-06-17 23:56:15 +09:00
Scott Schurr
b0c8296dc0 chore: Add comments to SignerEntries.h (#5059) 2025-06-17 23:49:42 +09:00
Scott Schurr
d8d55c2397 chore: Rename two files from Directory* to Dir*: (#5058)
The names of the files should reflect the name of the Dir class.

Co-authored-by: Zack Brunson <Zshooter@gmail.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-17 23:46:50 +09:00
Denis Angell
d62a3ec724 Update BUILD.md after PR #5052 (#5067)
* Document the need to specify "xrpld" and "tests" to build and test rippled.
2025-06-17 22:55:08 +09:00
John Freeman
f1687f0e1b Add xrpld build option and Conan package test (#5052)
* Make xrpld target optional

* Add job to test Conan recipe

* [fold] address review comments

* [fold] Enable tests in workflows

* [fold] Rename with_xrpld option

* [fold] Fix grep expression
2025-06-17 22:54:49 +09:00
dashangcun
a5787f78a5 chore: remove repeat words (#5053)
Signed-off-by: dashangcun <jchaodaohang@foxmail.com>
Co-authored-by: dashangcun <jchaodaohang@foxmail.com>
Co-authored-by: Zack Brunson <Zshooter@gmail.com>
2025-06-17 22:42:57 +09:00
yinyiqian1
62c17828d7 fix CTID in tx command returns invalidParams on lowercase hex (#5049)
* fix CTID in tx command returns invalidParams on lowercase hex

* test mixed case and change auto to explicit type

* add header cctype because std::tolower is called

* remove unused local variable

* change test case comment from 'lowercase' to 'mixed case'

---------

Co-authored-by: Zack Brunson <Zshooter@gmail.com>
2025-06-17 22:42:43 +09:00
Ed Hennis
1dee7d6c4d Invariant: prevent a deleted account from leaving (most) artifacts on the ledger. (#4663)
* Add feature / amendment "InvariantsV1_1"

* Adds invariant AccountRootsDeletedClean:

* Checks that a deleted account doesn't leave any directly
  accessible artifacts behind.
* Always tests, but only changes the transaction result if
  featureInvariantsV1_1 is enabled.
* Unit tests.

* Resolves #4638

* [FOLD] Review feedback from @gregtatcam:

* Fix unused variable warning
* Improve Invariant test const correctness

* [FOLD] Review feedback from @mvadari:

* Centralize the account keylet function list, and some optimization

* [FOLD] Some structured binding doesn't work in clang

* [FOLD] Review feedback 2 from @mvadari:

* Clean up and clarify some comments.

* [FOLD] Change InvariantsV1_1 to unsupported

* Will allow multiple PRs to be merged over time using the same amendment.

* fixup! [FOLD] Change InvariantsV1_1 to unsupported

* [FOLD] Update and clarify some comments. No code changes.

* Move CMake directory

* Rearrange sources

* Rewrite includes

* Recompute loops

* Fix merge issue and formatting

---------

Co-authored-by: Pretty Printer <cpp@ripple.com>
2025-06-17 22:42:28 +09:00
yinyiqian1
3e95f07a25 fix "account_nfts" with unassociated marker returning issue (#5045)
* fix "account_nfts" with unassociated marker returning issue

* create unit test for fixing nft page invalid marker not returning error

add more test

change test name

create unit test

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* [FOLD] accumulated review suggestions

* move BEAST check out of lambda function

---------

Authored-by: Scott Schurr <scott@ripple.com>
2025-06-17 22:38:25 +09:00
Scott Schurr
463dd92c9e fixInnerObjTemplate2 amendment (#5047)
* fixInnerObjTemplate2 amendment:

Apply inner object templates to all remaining (non-AMM)
inner objects.

Adds a unit test for applying the template to sfMajorities.
Other remaining inner objects showed no problems having
templates applied.

* Move CMake directory

* Rearrange sources

* Rewrite includes

* Recompute loops

---------

Co-authored-by: Pretty Printer <cpp@ripple.com>
2025-06-17 22:33:07 +09:00
tequ
95e16b0eed fix for current codebase 2025-06-17 21:45:05 +09:00
Pretty Printer
cb641e4733 Recompute loops 2025-06-17 20:32:54 +09:00
Pretty Printer
3cb60afde6 Rewrite includes 2025-06-17 20:32:41 +09:00
Pretty Printer
a6a71bcc3f Rearrange sources 2025-06-17 10:42:41 +00:00
Pretty Printer
6c1bc9052d Rearrange sources 2025-06-17 19:16:40 +09:00
Pretty Printer
6b5a7ec905 Move CMake directory 2025-06-17 18:22:13 +09:00
John Freeman
7e639a1a9d Add bin/physical.sh (#4997) 2025-06-17 17:44:53 +09:00
John Freeman
cd0141d781 Prepare to rearrange sources: (#4997)
- Remove CMake module "MultiConfig".
- Update clang-format configuration, CodeCov configuration,
  levelization script.
- Replace source lists in CMake with globs.
2025-06-17 17:44:40 +09:00
Bronek Kozicki
34be0ce4fe Change order of checks in amm_info: (#4924)
* Change order of checks in amm_info

* Change amm_info error message in API version 3

* Change amm_info error tests
2025-06-17 13:31:33 +09:00
Scott Schurr
01971ab1b9 Add the fixEnforceNFTokenTrustline amendment: (#4946)
Fix interactions between NFTokenOffers and trust lines.

Because NFTokenAcceptOffer does not check the trust line on which
the issuer receives the transfer fee, if the issuer deletes that
trust line after NFTokenCreateOffer, the trust line is recreated
for the issuer by the NFTokenAcceptOffer.  That's fixed.

Resolves #4925.
2025-06-17 13:31:15 +09:00
Chenna Keshava B S
7996d08d0c Replaces the usage of boost::string_view with std::string_view (#4509) 2025-06-17 13:27:08 +09:00
Elliot Lee
11ff672df8 docs: explain how to find a clang-format patch generated by CI (#4521) 2025-06-17 13:20:21 +09:00
tequ
00fc12faa9 XLS-52d: NFTokenMintOffer (#4845) 2025-06-17 13:20:05 +09:00
todaymoon
f97bf81b16 chore: remove repeat words (#5041) 2025-06-17 12:54:28 +09:00
Alex Kremer
29abe2ae46 Expose all amendments known by libxrpl (#5026) 2025-06-17 12:54:18 +09:00
Scott Schurr
bb271020df fixReducedOffersV2: prevent offers from blocking order books: (#5032)
Fixes issue #4937.

The fixReducedOffersV1 amendment fixed certain forms of offer
modification that could lead to blocked order books.  Reduced
offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer
(from the perspective of the taker). It turns out that, for
small values, the quality of the reduced offer can be
significantly affected by the rounding mode used during
scaling computations.

Issue #4937 identified an additional code path that modified
offers in a way that could lead to blocked order books.  This
commit changes the rounding in that newly located code path so
the quality of the modified offer is never worse than the
quality of the offer as it was originally placed.

It is possible that additional ways of producing blocking
offers will come to light.  Therefore there may be a future
need for a V3 amendment.
2025-06-17 12:54:03 +09:00
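A worked integer sketch of the rounding concern in the entry above. Quality is treated here simply as the amount the taker gets per unit paid (higher is better); the server's internal Quality representation differs, but the effect of the rounding direction is the same. Numbers are illustrative.

```cpp
#include <cstdint>

int main()
{
    // Original offer: 100 gets / 30 pays, i.e. ~3.33 gets per unit paid.
    // Scale it to 1/7 of its size with unfavourable rounding:
    std::uint64_t gets = 100 / 7;       // 14, rounded down
    std::uint64_t pays = (30 + 6) / 7;  //  5, rounded up
    // 14 / 5 = 2.8 < 3.33: the reduced offer looks worse to the taker and
    // can block the book. Rounding pays down instead (30 / 7 = 4) gives
    // 14 / 4 = 3.5 >= 3.33, so the reduced offer is never worse.
    return (gets == 14 && pays == 5) ? 0 : 1;
}
```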
Chenna Keshava B S
a5fb634d53 Additional unit tests for testing deletion of trust lines (#4886) 2025-06-17 12:48:31 +09:00
Olek
8896ea7220 Fix conan typo: (#5044)
Add missing comma in 'exports_sources'
2025-06-17 12:48:06 +09:00
Bronek Kozicki
837dd8c4b9 Add new command line option to make replaying transactions easier: (#5027)
* Add trap_tx_hash command line option

This new option can be used only if replay is also enabled. It takes a transaction hash from the ledger loaded for replay, and will cause a specific line to be hit in Transactor.cpp, right before the selected transaction is applied.
2025-06-17 12:47:45 +09:00
John Freeman
323fba5c17 Fix compatibility with Conan 2.x: (#5001)
Closes #4926, #4990
2025-06-17 12:42:56 +09:00
J. Scott Branson
723a51921d Update SQLite3 max_page_count to match current defaults (#5114)
When rippled initiates a connection to SQLite3, rippled sends a "PRAGMA"
statement defining the maximum number of pages allowed in the database.
Update the max_page_count so it is consistent with the default for newer
versions of SQLite3. Increasing max_page_count is critical for keeping
full history servers online.

Fix #5102
2025-06-17 12:32:25 +09:00
Valentin Balaschenko
8b83693235 Track latencies of certain code blocks, and log if they take too long 2025-06-17 12:32:25 +09:00
John Freeman
beaf794938 Use error codes throughout fast Base58 implementation 2025-06-17 12:32:25 +09:00
Mayukha Vadari
21a383eeaf Improve error handling in some RPC commands 2025-06-17 12:32:24 +09:00
Alex Kremer
06394e9d17 Add xrpl.libpp as an exported lib in conan (#5022) 2025-06-17 12:32:24 +09:00
Gregory Tsipenyuk
ea2e503ef8 Fix Oracle's token pair deterministic order: (#5021)
Price Oracle data-series logic uses `unordered_map` to update the Oracle object.
This results in different servers disagreeing on the order of that hash table.
Consequently, the generated ledgers will have different hashes.
The fix uses `map` instead to guarantee the order of the token pairs
in the data-series.
2025-06-17 12:32:23 +09:00
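A minimal sketch of the determinism point above: std::unordered_map iteration order depends on the implementation and insertion history, while std::map iterates in key order on every server, so the serialized object (and hence the ledger hash) agrees. The key and value types are stand-ins for the Oracle data-series entries.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

using TokenPair = std::pair<std::string, std::string>;  // (base, quote)

std::map<TokenPair, std::uint64_t>
buildSeries()
{
    std::map<TokenPair, std::uint64_t> series;
    series[{"XRP", "USD"}] = 740000;      // scaled prices, illustrative
    series[{"BTC", "USD"}] = 97000000000;
    return series;  // always iterates BTC/USD first, then XRP/USD
}
```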
Gregory Tsipenyuk
447e6c6c1e Fix last Liquidity Provider withdrawal:
Due to the rounding, LPTokenBalance of the last
Liquidity Provider (LP) might not match this LP's
trustline balance. This fix sets LPTokenBalance on
last LP withdrawal to this LP's LPToken trustline
balance.
2025-06-17 12:32:23 +09:00
Gregory Tsipenyuk
e5e4925a39 Fix offer crossing via single path AMM with transfer fee:
Single path AMM offer has to factor in the transfer in rate
when calculating the upper bound quality and the quality function
because single path AMM's offer quality is not constant.
This fix factors in the transfer fee in
BookStep::adjustQualityWithFees().
2025-06-17 12:32:22 +09:00
Gregory Tsipenyuk
3ff7f34d7c Fix adjustAmountsByLPTokens():
The fix is to return the actual adjusted lp tokens and amounts
by the function.
2025-06-17 12:32:22 +09:00
Gregory Tsipenyuk
e1f2e62c08 Add the fixAMMOfferRounding amendment: (#4983)
* Fix AMM offer rounding and low quality LOB offer blocking AMM:

A single-path AMM offer, when crossing an account offer on the DEX, is always generated
starting with the takerPays side first, which is rounded up, and then
the takerGets side, which is rounded down. This rounding ensures that the pool's
product invariant is maintained. However, when one of the offer's sides
is XRP, this rounding can result in the AMM offer having a lower
quality, potentially causing offer generation to fail if the quality
is lower than the account's offer quality.

To address this issue, the proposed fix adjusts the offer generation process
to start with the XRP side first and always rounds it down. This results
in a smaller offer size, improving the offer's quality. Regardless of whether the offer
has XRP or not, the rounding is done so that the offer size is minimized.
This change still ensures the product invariant, as the other generated
side is the exact result of the swap-in or swap-out equations.

If liquidity can be provided by both the AMM and a LOB offer on offer crossing,
then the AMM offer is generated so that it matches the LOB offer quality. If the LOB
offer quality is less than the limit quality, then the generated AMM offer quality
is also less than the limit quality and the offer doesn't cross. To address
this issue, if the LOB quality is better than the limit quality, then the LOB
quality is used to generate the AMM offer. Otherwise, the quality is not used to generate
the AMM offer. In this case, the limitOut() function in StrandFlow limits
the out amount to match the strand's quality to the limit quality and consume
maximum AMM liquidity.
2025-06-17 12:32:21 +09:00
Gregory Tsipenyuk
10dcdd87d4 Price Oracle: validate input parameters and extend test coverage: (#5013)
* Price Oracle: validate input parameters and extend test coverage:

Validate trim, time_threshold, document_id are valid
Int, UInt, or string convertible to UInt. Validate base_asset
and quote_asset are valid currency. Update error codes.
Extend Oracle and GetAggregatePrice unit-tests.
Denote unreachable coverage code.

* Set one-line LCOV_EXCL_LINE

* Move ledger_entry tests to LedgerRPC_test.cpp

* Add constants for "None"

* Fix LedgerRPC test

---------

Co-authored-by: Scott Determan <scott.determan@yahoo.com>
2025-06-17 12:32:21 +09:00
Michael Legleux
6419eaae42 Add external directory to Conan recipe's exports (#5006) 2025-06-17 12:32:20 +09:00
John Freeman
1f28001aae Add missing includes (#5011) 2025-06-17 12:32:20 +09:00
seelabs
9d0b94029a Remove flow assert: (#5009)
Rounding in the payment engine is causing an assert to sometimes fire
with "dust" amounts. This is causing issues when running debug builds of
rippled. This issue will be addressed, but the assert is no longer
serving its purpose.
2025-06-17 12:32:20 +09:00
seelabs
c3d51f85af fix amendment: AMM swap should honor invariants: (#5002)
The AMM has an invariant for swaps where:
new_balance_1*new_balance_2 >= old_balance_1*old_balance_2

Due to rounding, this invariant could sometimes be violated (although by
very small amounts).

This patch introduces an amendment `fixAMMRounding` that changes the
rounding to always favor the AMM. Doing this should maintain the
invariant.

Co-authored-by: Bronek Kozicki
Co-authored-by: thejohnfreeman
2025-06-17 12:32:19 +09:00
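A minimal sketch of the swap invariant quoted above, using integer balances and a 128-bit product (a gcc/clang extension); the real code works with Number/STAmount and rounds in the AMM's favour so that this check always holds.

```cpp
#include <cstdint>

bool
swapHonorsInvariant(
    std::uint64_t oldIn, std::uint64_t oldOut,
    std::uint64_t newIn, std::uint64_t newOut)
{
    // new_balance_1 * new_balance_2 >= old_balance_1 * old_balance_2
    return static_cast<unsigned __int128>(newIn) * newOut >=
           static_cast<unsigned __int128>(oldIn) * oldOut;
}
```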
seelabs
f15412acb5 Add global access to the current ledger rules:
It can be difficult to make transaction breaking changes to low level
code because the low level code does not have access to a ledger and the
current activated amendments in that ledger (the "rules"). This patch
adds global access to the current ledger rules as a `std::optional`. If
the optional is not seated, then there is no active transaction.
2025-06-17 12:32:19 +09:00
Snoppy
eea44ad6cb chore: fix typos (#4958) 2025-06-17 12:32:18 +09:00
Ed Hennis
2380633d9a test: Add RPC error checking support to unit tests (#4987) 2025-06-17 12:32:18 +09:00
John Freeman
4400a6eef6 Ignore more commits 2025-06-17 12:32:17 +09:00
John Freeman
17c9e967fd Address compiler warnings 2025-06-17 12:32:17 +09:00
John Freeman
3b96cac31c Add markers around source lists 2025-06-17 12:32:16 +09:00
John Freeman
fda0b67d9d Fix source lists 2025-06-17 12:32:16 +09:00
Pretty Printer
58a24ac1a2 Rewrite includes
$ find src/ripple/ src/test/ -type f -exec sed -i 's:include\s*["<]ripple/\(.*\)\.h\(pp\)\?[">]:include <ripple/\1.h>:' {} +
2025-06-17 12:32:16 +09:00
Pretty Printer
0213db8a08 Format formerly .hpp files 2025-06-17 12:32:15 +09:00
Pretty Printer
fcd0e23326 Rename .hpp to .h 2025-06-17 12:32:15 +09:00
John Freeman
2827748bcf Simplify protobuf generation 2025-06-17 12:32:14 +09:00
Pretty Printer
4319b1a097 Consolidate external libraries 2025-06-17 12:32:14 +09:00
John Freeman
6af0cb9bb4 Remove unused files 2025-06-17 12:32:13 +09:00
Ed Hennis
2abb48a618 fix: Remove redundant STAmount conversion in test (#4996) 2025-06-17 12:32:13 +09:00
Scott Determan
8eead5c99c fix: resolve database deadlock: (#4989)
The `rotateWithLock` function holds a lock while it calls a callback
function that's passed in by the caller. This is a problematic design
that needs to be used very carefully. In this case, at least one caller
passed in a callback that eventually relocks the mutex on the same
thread, causing UB (a deadlock was observed). The caller was from
SHAMapStoreImpl, and it called `clearCaches`. This `clearCaches` can
potentially call `fetchNodeObject`, which tried to relock the mutex.

This patch resolves the issue by changing the mutex type to a
`recursive_mutex`. Ideally, the code should be rewritten so it doesn't
hold the mutex during the callback and the mutex should be changed back
to a regular mutex.

Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-17 12:32:12 +09:00
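A minimal sketch of the hazard and the stop-gap described above: holding a std::mutex while a caller-supplied callback re-enters code that locks the same mutex on the same thread deadlocks, whereas a std::recursive_mutex tolerates the re-entrant lock. The type and function names are stand-ins, not the SHAMapStore interface.

```cpp
#include <functional>
#include <mutex>

struct StoreSketch
{
    std::recursive_mutex mutex_;  // was std::mutex before the fix

    void rotateWithLock(std::function<void()> const& cb)
    {
        std::lock_guard lock(mutex_);
        cb();  // may call back into fetchNodeObject(), which locks again
    }

    void fetchNodeObject()
    {
        std::lock_guard lock(mutex_);  // re-entrant lock: OK for recursive_mutex
    }
};
```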
Michael Legleux
3055029ded fix Conan component reference typo 2025-06-17 12:32:12 +09:00
Bronek Kozicki
ec23db00e7 Remove unused lambdas from MultiApiJson_test 2025-06-17 12:32:12 +09:00
Chenna Keshava B S
39b84e073b test: verify the rounding behavior of equal-asset AMM deposits (#4982)
* Specifically, test using tfLPToken flag
2025-06-17 12:32:11 +09:00
John Freeman
a9afc6c690 test: Add tests to raise coverage of AMM (#4971)
---------

Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Co-authored-by: Mark Travis <mtravis@ripple.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Mayukha Vadari <mvadari@gmail.com>
Co-authored-by: Chenna Keshava <ckeshavabs@gmail.com>
2025-06-17 12:32:11 +09:00
John Freeman
f9d544caef test: Add tests to raise coverage of AMM (#4971)
---------

Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Co-authored-by: Mark Travis <mtravis@ripple.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Mayukha Vadari <mvadari@gmail.com>
Co-authored-by: Chenna Keshava <ckeshavabs@gmail.com>
2025-06-17 12:32:10 +09:00
Bronek Kozicki
6cf6b42c57 test: Unit test for AMM offer overflow (#4986) 2025-06-17 12:32:10 +09:00
Mayukha Vadari
534e9989a8 fix amendment to add PreviousTxnID/PreviousTxnLgrSequence (#4751)
This amendment, `fixPreviousTxnID`, adds `PreviousTxnID` and
`PreviousTxnLgrSequence` as fields to all ledger objects that did
not already have them included (`DirectoryNode`, `Amendments`,
`FeeSettings`, `NegativeUNL`, and `AMM`). This makes it much easier
to go through the history of these ledger objects.
2025-06-17 12:32:09 +09:00
Ed Hennis
7fc312b271 chore: Default validator-keys-tool to master branch: (#4943)
* master is the default branch for that project. There's no point in
  using develop.
2025-06-17 12:32:09 +09:00
Scott Determan
8cfea5a9d1 fixXChainRewardRounding: round reward shares down: (#4933)
When calculating reward shares, the amount should always be rounded
down. If the `fixUniversalNumber` amendment is not active, this works
correctly. If it is active, then the amount is incorrectly rounded
up. This patch introduces an amendment so it will be rounded down.
2025-06-17 12:32:08 +09:00
Mark Travis
c1cb2765ee Don't reach consensus as quickly if no other proposals seen: (#4763)
This fixes a case where a peer can desync under a certain timing
circumstance--if it reaches a certain point in consensus before it receives
proposals. 

This was noticed under high transaction volumes. Namely, when we arrive at the
point of deciding whether consensus is reached after minimum establish phase
duration but before having received any proposals. This could be caused by
finishing the previous round slightly faster and/or having some delay in
receiving proposals. Existing behavior arrives at consensus immediately after
the minimum establish duration with no proposals. This causes us to desync
because we then close a non-validated ledger. The change in this PR causes us to
wait for a configured threshold before making the decision to arrive at
consensus with no proposals. This allows validators to catch up and for brief
delays in receiving proposals to be absorbed. There should be no drawback since,
with no proposals coming in, we needn't be in a huge rush to jump ahead.
2025-06-17 12:32:08 +09:00
Bronek Kozicki
fc305f974b Write improved forAllApiVersions used in NetworkOPs (#4833) 2025-06-17 12:32:08 +09:00
Bronek Kozicki
04f36c8d63 Enforce no duplicate slots from incoming connections: (#4944)
We do not currently enforce that an incoming peer connection does not have a
remote_endpoint which is already in use (either by an incoming or outgoing
connection), and hence already stored in slots_. If we happen to receive a
connection from such a duplicate remote_endpoint, it will eventually result in a
crash (when disconnecting) or weird behavior (when updating slot state), as a
result of an apparently matching remote_endpoint in slots_ being used by a
different connection.
2025-06-17 12:32:07 +09:00
Mayukha Vadari
70a3be5ebe fixEmptyDID: fix amendment to handle empty DID edge case: (#4950)
This amendment fixes an edge case where an empty DID object can be
created. It adds an additional check to ensure that DIDs are
non-empty when created, and returns a `tecEMPTY_DID` error if the DID
would be empty.
2025-06-17 12:32:07 +09:00
Ed Hennis
842f8b0ede test: Env unit test RPC errors return a unique result: (#4877)
* telENV_RPC_FAILED is a new code, reserved exclusively
  for unit tests when RPC fails. This will
  make those types of errors distinct and easier to test
  for when expected and/or diagnose when not.
* Output RPC command result when result is not expected.
2025-06-17 12:32:06 +09:00
Bronek Kozicki
87368f7f0e Upgrade to xxhash 0.8.2 as a Conan requirement, enable SIMD hashing (#4893)
We are currently using old version 0.6.2 of `xxhash`, as a verbatim copy and paste of its header file `xxhash.h`. Switch to the more recent version 0.8.2. Since this version is in Conan Center (and properly protects its ABI by keeping the state object incomplete), add it as a Conan requirement. Switch to the SIMD instructions (in the new `XXH3` family) supported by the new version.
2025-06-17 12:32:06 +09:00
Michael Legleux
69e3cdce53 Install more public headers (#4940)
Fixes some mistakes in #4885
2025-06-17 12:32:05 +09:00
Scott Determan
37cc0709c7 fix: order book update variable swap: (#4890)
This is likely the result of a typo when the code was simplified.
2025-06-17 12:32:05 +09:00
John Freeman
7e64d49bd0 Embed patched recipe for RocksDB 6.29.5 (#4947) 2025-06-17 12:32:04 +09:00
Gregory Tsipenyuk
bb463bc62f build: add STCurrency.h to xrpl_core to fix clio build (#4939) 2025-06-17 12:32:04 +09:00
Mayukha Vadari
a0accf3d6a feat: add user version of feature RPC (#4781)
* uses same formatting as admin RPC
* hides potentially sensitive data
2025-06-17 12:32:04 +09:00
Scott Determan
9a1888cc2d Fast base58 codec: (#4327)
This algorithm is about an order of magnitude faster than the existing
algorithm (about 10x faster for encoding and about 15x faster for
decoding - including the double hash for the checksum). The algorithms
use gcc's int128 (fast MS version will have to wait, in the meantime MS
falls back to the slow code).
2025-06-17 12:32:03 +09:00
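A minimal sketch of the limb-by-limb division that a 128-bit integer type enables (a gcc/clang extension), as mentioned in the entry above. The real codec divides by a large power of 58 per pass and also handles the checksum; this shows only the core division step that yields one base-58 digit.

```cpp
#include <cstdint>
#include <vector>

// Divide a big-endian sequence of 64-bit limbs by 58 in place and return
// the remainder (the next base-58 digit).
std::uint64_t
divmod58(std::vector<std::uint64_t>& limbs)
{
    unsigned __int128 rem = 0;
    for (auto& limb : limbs)
    {
        unsigned __int128 cur = (rem << 64) | limb;
        limb = static_cast<std::uint64_t>(cur / 58);
        rem = cur % 58;
    }
    return static_cast<std::uint64_t>(rem);
}
```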
Chenna Keshava B S
a342b510e7 Remove default ctors from SecretKey and PublicKey: (#4607)
* It is now an invariant that all constructed Public Keys are valid,
  non-empty and contain 33 bytes of data.
* Additionally, the memory footprint of the PublicKey class is reduced.
  The size_ data member is declared as static.
* Distinguish and identify the PublisherList retrieved from the local
  config file, versus the ones obtained from other validators.
* Fixes #2942
2025-06-17 12:32:03 +09:00
Gregory Tsipenyuk
b588f1a06e fix compile error on gcc 13: (#4932)
The compilation fails due to an issue in the initializer list
of an optional argument, which holds a vector of pairs.
The code compiles correctly on earlier gcc versions, but fails on gcc 13.
2025-06-17 12:32:02 +09:00
Gregory Tsipenyuk
833a75f57a Price Oracle (XLS-47d): (#4789)
Implement native support for Price Oracles.

 A Price Oracle is used to bring real-world data, such as market prices,
 onto the blockchain, enabling dApps to access and utilize information
 that resides outside the blockchain.

 Add Price Oracle functionality:
 - OracleSet: create or update the Oracle object
 - OracleDelete: delete the Oracle object

 To support this functionality add:
 - New RPC method, `get_aggregate_price`, to calculate aggregate price for a token pair of the specified oracles
 - `ltOracle` object

 The `ltOracle` object maintains:
 - Oracle Owner's account
 - Oracle's metadata
 - Up to ten token pairs with the scaled price
 - The last update time the token pairs were updated

 Add Oracle unit-tests
2025-06-17 12:32:02 +09:00
Mayukha Vadari
20812b4a2c feat(rpc): add server_definitions method (#4703)
Add a new RPC / WS call for `server_definitions`, which returns an
SDK-compatible `definitions.json` (binary enum definitions) generated by
the server. This enables clients/libraries to dynamically work with new
fields and features, such as ones that may become available on side
chains. Clients query `server_definitions` on a node from the network
they want to work with, and immediately know how to speak that node's
binary "language", even if new features are added to it in the future
(as long as there are no new serialized types that the software doesn't
know how to serialize/deserialize).

Example:

```js
> {"command": "server_definitions"}
< {
    "result": {
        "FIELDS": [
            [
                "Generic",
                {
                    "isSerialized": false,
                    "isSigningField": false,
                    "isVLEncoded": false,
                    "nth": 0,
                    "type": "Unknown"
                }
            ],
            [
                "Invalid",
                {
                    "isSerialized": false,
                    "isSigningField": false,
                    "isVLEncoded": false,
                    "nth": -1,
                    "type": "Unknown"
                }
            ],
            [
                "ObjectEndMarker",
                {
                    "isSerialized": false,
                    "isSigningField": true,
                    "isVLEncoded": false,
                    "nth": 1,
                    "type": "STObject"
                }
            ],
        ...
```

Close #3657

---------

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2025-06-17 12:32:01 +09:00
Gregory Tsipenyuk
5109b1a117 fix: improper handling of large synthetic AMM offers:
A large synthetic offer was not handled correctly in the payment engine.
This patch fixes that issue and introduces a new invariant check while
processing synthetic offers.
2025-06-17 12:18:08 +09:00
Ed Hennis
08abc9490d test: guarantee proper lifetime for temporary Rules object: (#4917)
* Commit 01c37fe introduced a change to the STTx unit test where a local
  "defaultRules" object was created with a temporary inline "presets"
  value provided to the ctor. Rules::Impl stores a const ref to the
  presets provided to the ctor.  This particular call provided an inline
  temp variable, which goes out of scope as soon as the object is
  created. On Windows, attempting to use the presets (e.g. via the
  enabled() function) causes an access violation, which crashes the test
  run.
* An audit of the code indicates that all other instances of Rules use
  the Application's config.features list, which will have a sufficient
  lifetime.
2025-06-17 12:17:12 +09:00
Gregory Tsipenyuk
71ffc69819 fixInnerObjTemplate: set inner object template (#4906)
Add `STObject` constructor to explicitly set the inner object template.
This allows certain AMM transactions to apply in the same ledger:

There is no issue if the trading fee is greater than or equal to 0.01%.
If the trading fee is less than 0.01%, then:
- After AMM create, AMM transactions must wait for one ledger to close
  (3-5 seconds).
- After one ledger is validated, all AMM transactions succeed, as
  appropriate, except for AMMVote.
- The first AMMVote which votes for a 0 trading fee in a ledger will
  succeed. Subsequent AMMVote transactions which vote for a 0 trading
  fee will wait for the next ledger (3-5 seconds). This behavior repeats
  for each ledger.

This has no effect on the ultimate correctness of AMM. This amendment
will allow the transactions described above to succeed as expected, even
if the trading fee is 0 and the transactions are applied within one
ledger (block).
2025-06-17 12:17:12 +09:00
Chenna Keshava B S
c29a632d0c feat: allow port_grpc to be specified in [server] stanza (#4728)
Prior to this commit, `port_grpc` could not be added to the [server]
stanza. Instead of validating gRPC IP/Port/Protocol information in
ServerHandler, validate grpc port info in GRPCServer constructor. This
should not break backwards compatibility.

gRPC-related config info must be in a section (stanza) called
[port_grpc].

* Close #4015 - That was an alternate solution. It was decided that with
  relaxed validation, it is not necessary to rename port_grpc.
* Fix #4557
2025-06-17 12:17:11 +09:00
Michael Legleux
a8fe0f62e2 build: add headers needed in Conan package for libxrpl (#4885)
These headers are required in the xrpl Conan package in order for
xbridge witness server (xbwd) to build. This change to libxrpl may help
any dependents of libxrpl. This addition does not change any C++ code.
2025-06-17 12:17:11 +09:00
Shawn Xie
59070b4f3e fixNFTokenReserve: ensure NFT tx fails when reserve is not met (#4767)
Without this amendment, an NFTokenAcceptOffer transaction can succeed
even when the NFToken recipient does not have sufficient reserves for
the new NFTokenPage. This allowed accounts to accept NFT sell offers
without having a sufficient reserve. (However, there was no issue in
brokered mode or when a buy offer is involved.)

Instead, the transaction should fail with `tecINSUFFICIENT_RESERVE` as
appropriate. The `fixNFTokenReserve` amendment adds checks in the
NFTokenAcceptOffer transactor to check if the OwnerCount changed. If it
did, then it checks the new reserve requirement.

Fix #4679
2025-06-17 12:17:10 +09:00
tequ
880d8a7be8 bad merge: RPCCall_test, Transaction_test 2025-06-17 12:17:10 +09:00
Ed Hennis
b8854c7437 Fix cache bug introduced in 2.0.1
Partially cherry-picked from f419c18056
2025-06-17 12:17:09 +09:00
John Freeman
5ab9e2fed5 fix(libxrpl): change library names in Conan recipe (#4831)
Use consistent platform-agnostic library names on all platforms.

Fix an issue that prevents dependents like validator-keys-tool from
linking to libxrpl on Windows.

It is bad practice to change the binary base name depending on the
platform. CMake already manipulates the base name into a final name that
fits the conventions of the platform. Linkers accept base names on the
command line and then look for conventional names on disk.
2025-06-17 12:17:09 +09:00
Bronek Kozicki
eb7e17e4f8 test: add unit test for redundant payment (#4860)
If the payee and payer are the same account, then the transaction fails
in preflight with temREDUNDANT.
2025-06-17 12:17:09 +09:00
Bronek Kozicki
e9287a3d3d test: improve code coverage reporting (#4849)
* Speed up the generation of coverage reports by using multiple cores.

* Add codecov step to coverage workflow.
2025-06-17 12:17:05 +09:00
Chenna Keshava B S
2572b3204c docs: update help message about unit test-suite pattern matching (#4846)
Update the "rippled --help" message for the "-u" parameter. This
documents the unit test name pattern matching rule implemented by #4634.

Fix #4800
2025-06-17 12:16:14 +09:00
Elliot Lee
b3de0b6329 docs: add Performance type to PR template (#4875) 2025-06-17 12:16:14 +09:00
Bronek Kozicki
fe5bf9c12d test: add DeliverMax to more JSONRPC tests (#4826)
Minor change in unit tests to improve testing scope.
2025-06-17 12:16:13 +09:00
John Freeman
bb2712dd20 fix: change default send_queue_limit to 500 (#4867)
Clients subscribed to `transactions` over WebSocket are being
disconnected because the traffic exceeds the default `send_queue_limit`
of 100.

This commit changes the default configuration, not the default in code.

Fix #4866
2025-06-17 12:16:13 +09:00
Ed Hennis
0a97b9f471 Improve lifetime management of ledger objects (SLEs) to prevent runaway memory usage: (#4822)
* Add logging for Application.cpp sweep()
* Improve lifetime management of ledger objects (`SLE`s)
* Only store SLE digest in CachedView; get SLEs from CachedSLEs
* Also force release of last ledger used for path finding if there are
  no path finding requests to process
* Count more ST objects (derive from `CountedObject`)
* Track CachedView stats in CountedObjects
* Rename the CachedView counters
* Fix the scope of the digest lookup lock

Before this patch, if you asked "is it caching?", the answer was always yes.
2025-06-17 12:16:13 +09:00
Ed Hennis
469d4e81e4 WebSocket should only call async_close once (#4848)
Prevent WebSocket connections from trying to close twice.

The issue only occurs in debug builds (assertions are disabled in
release builds, including published packages), and when the WebSocket
connections are unprivileged. The assert (and WRN log) occurs when a
client drives up the resource balance enough to be forcibly disconnected
while there are still messages pending to be sent.

Thanks to @lathanbritz for discovering this issue in #4822.
2025-06-17 12:16:12 +09:00
Hussein Badakhchani
4aa8259353 fix typo: 'of' instead of 'on' (#4821)
Co-authored-by: Hussein Badakhchani <hoos@alsoug.com>
2025-06-17 12:16:12 +09:00
Bronek Kozicki
614382cb7e Workarounds for gcc-13 compatibility (#4817)
Workaround for compilation errors with gcc-13 and other compilers
relying on `libstdc++` version 13. This is temporary until the actual
fix is available for us to use: https://github.com/boostorg/beast/pull/2682

Some boost.beast files (which we do use) rely on an old gcc-12 behaviour
where `#include <cstdint>` was not needed even though types from this
header were used. This was broken by a change in libstdc++ version 13:
https://gcc.gnu.org/gcc-13/porting_to.html#header-dep-changes

The necessary fix was implemented in boost.beast, however it is not yet
available. Until it is available, we can use this workaround to enable
compilation of `rippled` with gcc-13, clang-16, etc.
2025-06-17 12:16:11 +09:00
Bronek Kozicki
2c6dff314b APIv2: show DeliverMax in submit, submit_multisigned (#4827)
Show `DeliverMax` instead of `Amount` in output from `submit`,
`submit_multisigned`, `sign`, and `sign_for`.

Fix #4829
2025-06-17 12:16:11 +09:00
Bronek Kozicki
b83e66882c APIv2: consistently return ledger_index as integer (#4820)
For api_version 2, always return ledger_index as integer in JSON output.

api_version 1 retains prior behavior.
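
As a rough illustration of the difference (values are placeholders,
responses abbreviated):

```js
// api_version 1: prior behavior, ledger_index may be a quoted string
< {"result": {"ledger_index": "81820410", ...

// api_version 2: ledger_index is always a JSON integer
< {"result": {"ledger_index": 81820410, ...
```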
2025-06-17 12:16:10 +09:00
Bronek Kozicki
c569651f83 Fix 2.0 regression in tx method with binary output (#4812)
* Fix binary output from tx method

* Formatting fix

* Minor test improvement

* Minor test improvements
2025-06-17 12:16:10 +09:00
Bronek Kozicki
999fc61230 Promote API version 2 to supported (#4803)
* Promote API version 2 to supported

* Switch command line to API version 1

* Fix LedgerRequestRPC test

* Remove obsolete tx_account method

This method is not implemented; the only parts removed are related to command-line parsing

* Fix RPCCall test

* Reduce diff size, small test improvements

* Minor fixes

* Support for the mold linker

* [fold] handle case where both mold and gold are installed

* [fold] Use first non-default linker

* Fix TransactionEntry_test

* Fix AccountTx_test

---------

Co-authored-by: seelabs <scott.determan@yahoo.com>
2025-06-17 12:16:10 +09:00
Scott Determan
9899eda7c2 Support for the mold linker (#4807) 2025-06-17 12:16:09 +09:00
Bronek Kozicki
41daa9f64c Unify JSON serialization format of transactions (#4775)
* Remove include <ranges>

* Formatting fix

* Output for subscriptions

* Output from sign, submit etc.

* Output from ledger

* Output from account_tx

* Output from transaction_entry

* Output from tx

* Store close_time_iso in API v2 output

* Add small APIv2 unit test for subscribe

* Add unit test for transaction_entry

* Add unit test for tx

* Remove inLedger from API version 2

* Set ledger_hash and ledger_index

* Move isValidated from RPCHelpers to LedgerMaster

* Store closeTime in LedgerFill

* Time formatting fix

* additional tests for Subscribe unit tests

* Improved comments

* Rename mInLedger to mLedgerIndex

* Minor fixes

* Set ledger_hash on closed ledger, even if not validated

* Update API-CHANGELOG.md

* Add ledger_hash, ledger_index to transaction_entry

* Fix validated and close_time_iso in account_tx

* Fix typos

* Improve getJson for Transaction and STTx

* Minor improvements

* Replace class enum JsonOptions with struct

We may consider turning this into a general-purpose template and using it elsewhere

* simplify the extraction of transactionID from Transaction object

* Remove obsolete comments

* Unconditionally set validated in account_tx output

* Minor improvements

* Minor fixes

---------

Co-authored-by: Chenna Keshava <ckeshavabs@gmail.com>
2025-06-17 12:16:09 +09:00
Scott Determan
d75a6edc58 fix: check for valid public key in attestations (#4798) 2025-06-17 12:16:08 +09:00
pwang200
cc60747344 Fix unit test api_version to enable api_version 2 (#4785)
The command line API still uses `apiMaximumSupportedVersion`.
The unit test RPCs use `apiMinimumSupportedVersion` if unspecified.

Context:
- #4568
- #4552
2025-06-17 12:16:08 +09:00
Gregory Tsipenyuk
65c7f6f7d0 fixFillOrKill: fix offer crossing with tfFillOrKill (#4694)
Introduce the `fixFillOrKill` amendment.

Fix an edge case occurring when an offer with `tfFillOrKill` set (but
without `tfSell` set) fails to cross an offer with a better rate. If
`tfFillOrKill` is set, then the owner must receive the full TakerPays.
Without this amendment, an offer fails if the entire `TakerGets` is not
spent. With this amendment, when `tfSell` is not set, the entire
`TakerGets` does not have to be spent.

For details about OfferCreate, see: https://xrpl.org/offercreate.html

Fix #4684
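
A sketch of an OfferCreate with `tfFillOrKill` set and `tfSell` clear;
the account, amounts, and currency are placeholders, and `262144` is the
standard `tfFillOrKill` flag value (0x00040000):

```js
> {
    "TransactionType": "OfferCreate",
    "Account": "rPlaceholderAccount",
    "TakerPays": "1000000",
    "TakerGets": {"currency": "USD", "issuer": "rPlaceholderIssuer", "value": "1"},
    "Flags": 262144
}
```

With the amendment active, such an offer succeeds as long as the full
`TakerPays` is received, even if the entire `TakerGets` is not spent.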

---------

Co-authored-by: Scott Schurr <scott@ripple.com>
2025-06-17 12:16:07 +09:00
Bronek Kozicki
c3a36ad748 fix: remove include <ranges> (#4788)
Remove dependency on `<ranges>` header, since it is not implemented by
all compilers which we want to support.

This code change only affects unit tests.

Resolve https://github.com/XRPLF/rippled/issues/4787
2025-06-17 12:16:07 +09:00
Bronek Kozicki
0a3ce6cf36 APIv2: remove tx_history and ledger_header (#4759)
Remove `tx_history` and `ledger_header` methods from API version 2.

Update `RPC::Handler` to allow for methods (or method implementations)
to be API version specific. This partially resolves #4727. We can now
store multiple handlers with the same name, as long as they belong to
different (non-overlapping) API versions. This necessarily impacts the
handler lookup algorithm and its complexity; however, there is no
performance loss on x86_64 architecture, and only minimal performance
loss on arm64 (around 10ns). This design change gives us extra
flexibility in evolving the API in the future.

In API version 2, `tx_history` and `ledger_header` are no longer
recognised; if they are called, `rippled` returns the error
`unknownCmd`.

Resolve #3638

Resolve #3539
2025-06-17 12:16:06 +09:00
Mark Travis
bedafe5cff docs: clarify definition of network health (#4729)
Update the documentation to describe network health with more nuance as
well as context about related factors.
2025-06-17 12:16:06 +09:00
tequ
d7b7bf7a10 fix temCode: bad merge 2025-06-17 12:16:06 +09:00
Bronek Kozicki
446a1fdaac APIv2(DeliverMax): add alias for Amount in Payment transactions (#4733)
Using the "Amount" field in Payment transactions can cause incorrect
interpretation. There continue to be problems from the use of this
field. "Amount" is rarely the correct field to use; instead,
"delivered_amount" (or "DeliveredAmount") should be used.

Rename the "Amount" field to "DeliverMax", a less misleading name. With
api_version: 2, remove the "Amount" field from Payment transactions.

- Input: "DeliverMax" in `tx_json` is an alias for "Amount"
  - sign
  - submit (in sign-and-submit mode)
  - submit_multisigned
  - sign_for
- Output: Add "DeliverMax" where transactions are provided by the API
  - ledger
  - tx
  - tx_history
  - account_tx
  - transaction_entry
  - subscribe (transactions stream)
- Output: Remove "Amount" from API version 2

Fix #3484

Fix #3902
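
A minimal sketch of the input alias in sign-and-submit mode; the
account, destination, and secret are placeholders:

```js
> {
    "command": "submit",
    "secret": "s_placeholder_secret",
    "tx_json": {
        "TransactionType": "Payment",
        "Account": "rPlaceholderSender",
        "Destination": "rPlaceholderReceiver",
        "DeliverMax": "1000000"
    }
}
```

On input, `DeliverMax` inside `tx_json` is treated as an alias for
`Amount`; in api_version 2 output, `Amount` is no longer shown for
Payment transactions.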
2025-06-17 12:16:05 +09:00
Mayukha Vadari
18ccbf4a53 DID: Decentralized identifiers (DIDs) (XLS-40): (#4636)
Implement native support for W3C DIDs.

Add a new ledger object: `DID`.

Add two new transactions:
1. `DIDSet`: create or update the `DID` object.
2. `DIDDelete`: delete the `DID` object.

This meets the requirements specified in the DID v1.0 specification
currently recommended by the W3C Credentials Community Group.

The DID format for the XRP Ledger conforms to W3C DID standards.
The objects can be created and owned by any XRPL account holder.
The transactions can be integrated by any service, wallet, or application.
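
A sketch of what a `DIDSet` transaction might look like; the field
selection and hex values are illustrative assumptions, not taken from
the commit:

```js
> {
    "TransactionType": "DIDSet",
    "Account": "rPlaceholderAccount",
    "URI": "697066733A2F2F706C616365686F6C646572",
    "Data": "A1B2C3"
}
```

A corresponding `DIDDelete` from the same account removes the object.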
2025-06-17 12:16:05 +09:00
Scott Schurr
9f2fd23575 refactor(peerfinder): use LogicError in PeerFinder::Logic (#4562)
It might be possible for the server code to indirect through certain
`end()` iterators. While a debug build would catch this problem with
`assert()`s, a release build would crash. If there are problems in this
area in the future, it is best to get a definitive indication of the
nature of the error regardless of whether it's a debug or release build.
To accomplish this, these `assert`s are converted into `LogicError`s
that will produce a reasonable error message when they fire.
2025-06-17 12:16:04 +09:00
Ed Hennis
20a422076d fix(PathRequest): remove incorrect assert (#4743)
The assert is saying that the only reason `pathFinder` would be null is
if the request was aborted (connection dropped, etc.). That's what
`continueCallback()` checks. But that is very clearly not true if you
look at `getPathFinder`, which calls `findPaths`, which can return false
for many reasons.

Fix #4744
2025-06-17 12:16:04 +09:00
Ed Hennis
7fded60cc9 docs(API-CHANGELOG): add XRPFees change (#4741)
* Add a new API Changelog section for release 1.10.
* Mark `jss::fee_ref` as deprecated.
* Fix a copy-paste error in one of the unit tests.
2025-06-17 12:16:03 +09:00
Florent
136508e56c docs(rippled-example.cfg): add P2P link compression (#4753)
P2P link compression is a feature added in 1.6.0 by #3287.

https://xrpl.org/enable-link-compression.html

If the default changes in the future - for example, as currently
proposed by #4387 - the comment will be updated at that time.

Fix #4656
2025-06-17 12:16:03 +09:00
Denis Angell
ef1c26f9f5 fixDisallowIncomingV1: allow issuers to authorize trust lines (#4721)
Context: The `DisallowIncoming` amendment provides an option to block
incoming trust lines from reaching your account. The
asfDisallowIncomingTrustline AccountSet Flag, when enabled, prevents any
incoming trust line from being created. However, it was too restrictive:
it would block an issuer from authorizing a trust line, even if the
trust line already exists. Consider:

1. Issuer sets asfRequireAuth on their account.
2. User sets asfDisallowIncomingTrustline on their account.
3. User submits tx to SetTrust to Issuer.

At this point, without `fixDisallowIncomingV1` active, the issuer would
not be able to authorize the trust line.

The `fixDisallowIncomingV1` amendment, once activated, allows an issuer
to authorize a trust line even after the user sets the
asfDisallowIncomingTrustline flag, as long as the trust line already
exists.
2025-06-17 12:16:03 +09:00
Scott Determan
a8e8d50cb8 refactor: reduce boilerplate in applySteps: (#4710)
When a new transactor is added, there are several places in applySteps
that need to be modified. This patch refactors the code so only one
function needs to be modified.
2025-06-17 12:16:02 +09:00
Rome Reginelli
73933a366b refactor: reunify transaction common fields: (#4715)
Make transactions and pseudo-transactions share the same commonFields
again. This regularizes the code in a nice way.

While this technically allows pseudo-transactions to have a
TicketSequence field, pseudo-transactions are only ever constructed by
code paths that don't add such a field, so this is not a transaction
processing change. It may be possible to add a separate check to ensure
TicketSequence (and other fields that don't make sense on
pseudo-transactions) are never added to pseudo-transactions, but that
should not be necessary. (TicketSequence is not the only common field
that cannot and does not appear in pseudo-transactions.) Note:
TicketSequence is already documented as a common field.

Related: #4637

Fix #4714
2025-06-17 12:16:02 +09:00
Chenna Keshava B S
92f0efb064 docs(BUILD.md): require GCC 11 or higher (#4700)
Update minimum compiler requirement for building the codebase. The
feature "using enum" is required. This feature was introduced in C++20.

Updating the C++ compiler to version 11 or later fixes this error:

```
Building CXX object CMakeFiles/xrpl_core.dir/src/ripple/protocol/impl/STAmount.cpp.o
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp: In lambda function:
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp:1577:15: error: expected nested-name-specifier before 'enum'
 1577 |         using enum Number::rounding_mode;
      |               ^~~~
```

Fix #4693
2025-06-17 12:16:01 +09:00
Scott Determan
11849215b4 fix(XLS-38): disallow the same bridge on one chain: (#4720)
Modify the `XChainBridge` amendment.

Before this patch, two door accounts on the same chain could own
the same bridge spec (of course, one would have to be the issuer and one
would have to be the locker). While this is silly, it does not violate
any bridge invariants. However, on further review, if we allow this then
the `claim` transactions would need to change. Since it's hard to see a
use case for two doors to own the same bridge, this patch disallows
it. (The transaction will return tecDUPLICATE).
2025-06-17 12:16:01 +09:00
Scott Schurr
e9f83a7808 fix: stabilize voting threshold for amendment majority mechanism (#4410)
Amendment "flapping" (an amendment repeatedly gaining and losing
majority) usually occurs when an amendment is on the verge of gaining
majority, and a validator not in favor of the amendment goes offline or
loses sync. This fix makes two changes:

1. The number of validators in the UNL determines the threshold required
   for an amendment to gain majority.
2. The AmendmentTable keeps a record of the most recent Amendment vote
   received from each trusted validator (and, with `trustChanged`, stays
   up-to-date when the set of trusted validators changes). If no
   validation arrives from a given validator, then the AmendmentTable
   assumes that the previously-received vote has not changed.

In other words, when missing an `STValidation` from a remote validator,
each server now uses the last vote seen. There is a 24 hour timeout for
recorded validator votes.

These changes do not require an amendment because they do not impact
transaction processing, but only the threshold at which each individual
validator decides to propose an EnableAmendment pseudo-transaction.

Fix #4350
2025-06-17 12:16:00 +09:00
Ed Hennis
8b0cb51d24 fix(build): uint is not defined on Windows platform (#4731)
Fix the Windows build by using `unsigned int` (instead of `uint`).

The error, introduced by #4618, looks something like:
  rpc\impl\RPCHelpers.h(299,5): error C2061: syntax error: identifier
  'uint' (compiling source file app\ledger\Ledger.cpp)
2025-06-17 12:16:00 +09:00
Nik Bougalis
9838bdf214 Eliminate the built-in SNTP support (fixes #4207): (#4628) 2025-06-17 12:15:59 +09:00
John Freeman
6cf6416f15 fix: accept all valid currency codes in API (#4566)
A few methods, including `book_offers`, take currency codes as
parameters. The XRPL doesn't care if the letters in those codes are
lowercase or uppercase, as long as they come from an alphabet defined
internally. rippled doesn't care either, when they are submitted in a
hex representation. When they are submitted in an ASCII string
representation, rippled, but not XRPL, is more restrictive, preventing
clients from interacting with some currencies already in the XRPL.

This change gets rippled out of the way and lets clients submit currency
codes in ASCII using the full alphabet.

Fixes #4112
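
For example, a `book_offers` request using a lowercase ASCII currency
code, which rippled previously rejected even though the XRPL accepts it
(the issuer address is a placeholder):

```js
> {
    "command": "book_offers",
    "taker_pays": {"currency": "XRP"},
    "taker_gets": {"currency": "usd", "issuer": "rPlaceholderIssuer"}
}
```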
2025-06-17 12:15:59 +09:00
Bronek Kozicki
5e1dd22bf3 chore: add .build to .gitignore (#4722)
Currently, the `BUILD.md` instructions suggest using `.build` as the
build directory, so this change helps to reduce confusion.

An alternative would be to instruct developers to add `/.build/` to
`.git/info/exclude` or to user-level `.gitignore` (although the latter
is very intrusive). However, it is being added here because it is a good
practice to have a sensible default that's consistent with the build
instructions.
2025-06-17 12:15:59 +09:00
ForwardSlashBack
dc08a666f9 Fix typo in BUILD.md (#4718)
Co-authored-by: Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
2025-06-17 12:15:58 +09:00
Peter Chen
07dda63bd5 APIv2(gateway_balances, channel_authorize): update errors (#4618)
gateway_balances
* When `account` does not exist in the ledger, return `actNotFound`
  * (Previously, a normal response was returned)
  * Fix #4290
* When required field(s) are missing, return `invalidParams`
  * (Previously, `invalidHotWallet` was incorrectly returned)
  * Fix #4548

channel_authorize
* When the specified `key_type` is invalid, return `badKeyType`
  * (Previously, `invalidParams` was returned)
  * Fix #4289

Since these are breaking changes, they apply only to API version 2.

Supersedes #4577
2025-06-17 12:15:58 +09:00
John Freeman
53cdb040cf build: use Boost 1.82 and link Boost.Json (#4632)
Add Boost::json to the list of linked Boost libraries.

This seems to be required for macOS.
2025-06-17 12:15:57 +09:00
Chenna Keshava B S
01890e863a docs(overlay): add URL of blog post and clarify wording (#4635) 2025-06-17 12:15:57 +09:00
Elliot Lee
6057c65027 docs(RELEASENOTES): update 1.12.0 notes to match dev blog (#4691)
* Reorganize some changelog entries
* Add note about portable binaries
* Dev blog: https://xrpl.org/blog
2025-06-17 12:15:56 +09:00
John Freeman
0815ec39f8 Update secp256k1 to 0.3.2 (#4653)
Copy the new code to `src/secp256k1` without changes:
`src/secp256k1` is identical to bitcoin-core/secp256k1@acf5c55 (v0.3.2).

We could consider changing to a Git submodule, though that would require
changes to the build instructions because we are not using submodules
anywhere else.
2025-06-17 12:15:56 +09:00
Chenna Keshava B S
c60eb416d2 docs: fix comment for LedgerHistory::fixIndex return value (#4574)
`LedgerHistory::fixIndex` returns `false` if a repair was performed.

Fix #4572
2025-06-17 12:15:55 +09:00
Ed Hennis
3ea653cccb fix: remove unused variable causing clang 14 build errors (#4672)
Removed the unused variable `none` from `Writer.cpp` which was causing
build errors on clang version 14.
2025-06-17 12:15:55 +09:00
Elliot Lee
9605fa0e53 docs(BUILD): make it easier to find environment.md (#4507)
Make the instructions a bit easier to follow. Users on different
platforms can look for their platform name to find relevant information.
2025-06-17 12:15:55 +09:00
Michael Legleux
5741a0d5cb Revert CMake changes (#4707)
This was likely put back when #4292 was rebased.
2025-06-17 12:15:54 +09:00
Scott Determan
11b1602814 Change XChainBridge amendment to Supported::yes (#4709) 2025-06-17 12:15:54 +09:00
Scott Determan
8667480406 Fix Windows build by removing two unused declarations (#4708)
Remove the `verify` and `message` function declarations. The explicit
instantiation requests could not be completed because there were no
implementations for those two member functions. It is helpful that the
Microsoft (MSVC) compiler on Windows appears to be strict when it comes
to template instantiation.

This resolves the warning:

  XChainAttestations.h(450): warning C4661: 'bool
  ripple::XChainAttestationsBase<ripple::XChainClaimAttestation>::verify(void)
  const': no suitable definition provided for explicit template
  instantiation request
2025-06-17 12:15:53 +09:00
Ed Hennis
cff548bcc8 Match unit tests on start of test name (#4634)
* For example, without this change, to run the TxQ tests, you must
  specify `--unittest=TxQ1,TxQ2` on the command line. With this change,
  you can use `--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
  name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
  might be useful. If not, the shorter-named test(s) can be renamed. For
  example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
  up parallel tests by a factor of 5.
2025-06-17 12:15:53 +09:00
Howard Hinnant
acf7486c8d Revert ThreadName due to problems on Windows (#4702)
* Revert "Remove CurrentThreadName.h from RippledCore.cmake (#4697)"

This reverts commit 3b5fcd587313f5ebc762bc21c6a4ec3e6c275e83.

* Revert "Introduce replacement for getting and setting thread name: (#4312)"

This reverts commit 36cb5f90e233f975eb3f80d819b2fbadab0a9387.
2025-06-17 12:15:52 +09:00
Scott Determan
6de5de02cb XChainBridge: Introduce sidechain support (XLS-38): (#4292)
A bridge connects two blockchains: a locking chain and an issuing
chain (also called a mainchain and a sidechain). Both are independent
ledgers, with their own validators and potentially their own custom
transactions. Importantly, there is a way to move assets from the
locking chain to the issuing chain and a way to return those assets from
the issuing chain back to the locking chain: the bridge. This key
operation is called a cross-chain transfer. A cross-chain transfer is
not a single transaction. It happens on two chains, requires multiple
transactions, and involves an additional server type called a "witness".

A bridge does not exchange assets between two ledgers. Instead, it locks
assets on one ledger (the "locking chain") and represents those assets
with wrapped assets on another chain (the "issuing chain"). A good model
to keep in mind is a box with an infinite supply of wrapped assets.
Putting an asset from the locking chain into the box will release a
wrapped asset onto the issuing chain. Putting a wrapped asset from the
issuing chain back into the box will release one of the existing locking
chain assets back onto the locking chain. There is no other way to get
assets into or out of the box. Note that there is no way for the box to
"run out of" wrapped assets - it has an infinite supply.

Co-authored-by: Gregory Popovitch <greg7mdp@gmail.com>
2025-06-17 12:15:52 +09:00
Peter Chen
2c5ecfa75d APIv2(account_tx, noripple_check): return error on invalid input (#4620)
For the `account_tx` and `noripple_check` methods, perform input
validation for optional parameters such as "binary", "forward",
"strict", "transactions". Previously, when these parameters had invalid
values (e.g. not a bool), no error would be returned. Now, it returns an
`invalidParams` error.

* This updates the behavior to match Clio
  (https://github.com/XRPLF/clio).
* Since this is potentially a breaking change, it only applies to
  requests specifying api_version: 2.
* Fix #4543.
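
A sketch of a request that now fails under api_version 2 because
`binary` is not a boolean; the account address is a placeholder and the
response is abbreviated:

```js
> {
    "command": "account_tx",
    "api_version": 2,
    "account": "rPlaceholderAccount",
    "binary": "yes"
}
< {"error": "invalidParams", "status": "error", ...
```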
2025-06-17 12:15:52 +09:00
Howard Hinnant
bbc943ca10 Remove CurrentThreadName.h from RippledCore.cmake (#4697)
(File was already removed from the source)
2025-06-17 12:15:51 +09:00
Mayukha Vadari
5d3b8976f7 refactor: simplify TxFormats common fields logic (#4637)
Minor refactor to `TxFormats.cpp`:
- Rename `commonFields` to `pseudoCommonFields` (since it is the common fields
  that all pseudo-transactions need)
- Add a new static variable, `commonFields`, which represents all the common
  fields that non-pseudo transactions need (essentially everything that
  `pseudoCommonFields` contains, plus `sfTicketSequence`)

This makes it harder to accidentally leave out `sfTicketSequence` in a new
transaction.
2025-06-17 12:15:51 +09:00
Peter Chen
cabb9cfd50 APIv2(ledger_entry): return invalidParams for bad parameters (#4630)
- Verify "check", used to retrieve a Check object, is a string.
- Verify "nft_page", used to retrieve an NFT Page, is a string.
- Verify "index", used to retrieve any type of ledger object by its
  unique ID, is a string.
- Verify "directory", used to retrieve a DirectoryNode, is a string or
  an object.

This change only impacts api_version 2 since it is a breaking change.

https://xrpl.org/ledger_entry.html

Fix #4550
2025-06-17 12:15:50 +09:00
Mark Pevec
27c1ab5fba docs(rippled-example.cfg): clarify ssl_cert vs ssl_chain (#4667)
Clarify usage of ssl_cert vs ssl_chain
2025-06-17 12:15:50 +09:00
Howard Hinnant
0f4bc92f77 Introduce replacement for getting and setting thread name: (#4312)
* In namespace ripple, introduces get_name function that takes a
  std::thread::native_handle_type and returns a std::string.
* In namespace ripple, introduces get_name function that takes a
  std::thread or std::jthread and returns a std::string.
* In namespace ripple::this_thread, introduces get_name function
  that takes no parameters and returns the name of the current
  thread as a std::string.
* In namespace ripple::this_thread, introduces set_name function
  that takes a std::string_view and sets the name of the current
  thread.
* Intended to replace the beast utilities setCurrentThreadName
  and getCurrentThreadName.
2025-06-17 12:15:49 +09:00
tequ
f6d2bf819d Fix governance vote purge (#221)
governance hook should be independently and deterministically recompiled before being voted in
2025-06-16 17:12:06 +10:00
tequ
5f02b98066 fix bad merge (Remarks) 2025-06-16 00:19:12 +09:00
John Freeman
52d3babf1b Update dependencies (#4595)
Use the most recent versions in ConanCenter.

* Due to a bug in Clang 16, you may get a compile error:
  "call to 'async_teardown' is ambiguous"
  * A compiler flag workaround is documented in `BUILD.md`.
* At this time, building this with gcc 13 may require editing some files
  in `.conan/data`
  * A patch to support gcc13 may be added in a later PR.

---------

Co-authored-by: Scott Schurr <scott@ripple.com>
2025-06-15 23:18:30 +09:00
Gregory Tsipenyuk
9a82bf9ec2 amm_info: fetch by amm account id; add AMM object entry (#4682)
- Update amm_info to fetch AMM by amm account id.
  - This is an additional way to retrieve an AMM object.
  - Alternatively, AMM can still be fetched by the asset pair as well.
- Add owner directory entry for AMM object.

Context:

- Add back the AMM object directory entry, which was deleted by #4626.
  - This fixes `account_objects` for `amm` type.
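
A sketch of the additional lookup path; treat the exact parameter name
(`amm_account`) and the address as assumptions for illustration:

```js
> {
    "command": "amm_info",
    "amm_account": "rPlaceholderAMMAccount"
}
```

The existing asset-pair lookup continues to work as before.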
2025-06-15 23:14:45 +09:00
Rome Reginelli
f94e1c1be2 AMMBid: use tecINTERNAL for 'impossible' errors (#4674)
Modify two error cases in AMMBid transactor to return `tecINTERNAL` to
more clearly indicate that these errors should not be possible unless
operating in unforeseen circumstances. It likely indicates a bug.

The log level has been updated to `fatal()` since it indicates a
(potentially network-wide) unexpected condition when either of these
errors occurs.

Details:

The two specific transaction error cases changed are:

- `tecAMM_BALANCE` - In this case, this error (total LP Tokens
  outstanding is lower than the amount to be burned for the bid) is a
  subset of the case where the user doesn't have enough LP Tokens to pay
  for the bid. When this case is reached, the bidder's LP Tokens balance
  has already been checked first. The user's LP Tokens should always be
  a subset of total LP Tokens issued, so this should be impossible.
- `tecINSUFFICIENT_PAYMENT` - In this case, the amount to be refunded as
  a result of the bid is greater than the price paid for the auction
  slot. This should never occur unless something is wrong with the math
  for calculating the refund amount.

Both error cases in question are "defense in depth" measures meant to
protect against making things worse if the code has already reached a
state that is supposed to be impossible, likely due to a bug elsewhere.

Such "shouldn't ever occur" checks should use an error code that
categorically indicates a larger problem. This is similar to how
`tecINVARIANT_FAILED` is a warning sign that something went wrong and
likely could've been worse, but since there isn't an Invariant Check
applying here, `tecINTERNAL` is the appropriate error code.

This is "debatably" a transaction processing change since it could
hypothetically change how transactions are processed if there's a bug we
don't know about.
2025-06-15 23:14:44 +09:00
Ikko Eltociear Ashimine
9e1831cacf refactor: fix typo in FeeUnits.h (#4644)
covert -> convert
2025-06-15 23:14:44 +09:00
Arihant Kothari
3eb8a64e64 test: add forAllApiVersions helper function (#4611)
Introduce a new variadic template helper function, `forAllApiVersions`,
that accepts callables to execute a set of functions over a range of
versions - from RPC::apiMinimumSupportedVersion to RPC::apiBetaVersion.
This avoids the duplication of code.

Context: #4552
2025-06-15 23:14:43 +09:00
Mayukha Vadari
61fd0d0164 add view updates for account SLEs (#4629)
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2025-06-15 23:14:43 +09:00
John Freeman
314cf50863 Fix the package recipe for consumers of libxrpl (#4631)
- "Rename" the type `LedgerInfo` to `LedgerHeader` (but leave an alias
  for `LedgerInfo` to not yet disturb existing uses). Put it in its own
  public header, named after itself, so that it is more easily found.
- Move the type `Fees` and NFT serialization functions into public
  (installed) headers.
- Compile the XRPL and gRPC protocol buffers directly into `libxrpl` and
  install their headers. Fix the Conan recipe to correctly export these
  types.

Addresses change (2) in
https://github.com/XRPLF/XRPL-Standards/discussions/121.

For context: This work supports Clio's dependence on libxrpl. Clio is
just an example consumer. These changes should benefit all current and
future consumers.

---------

Co-authored-by: cyan317 <120398799+cindyyan317@users.noreply.github.com>
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2025-06-15 23:14:43 +09:00
John Freeman
00a6922045 Fix package definition for Conan (#4485)
Fix the libxrpl library target for consumers using Conan.

* Fix installation issues and update includes.
* Update requirements in the Conan package info.
  * libxrpl requires openssl::crypto.

(Conan is a software package manager for C++.)
2025-06-15 23:14:42 +09:00
Alphonse Noni Mousse
cd9facd7fa refactor: improve checking of path lengths (#4519)
Improve the checking of the path lengths during Payments. Previously,
the code that did the check of the payment path lengths was sometimes
executed, but without any effect. This changes it to only check when it
matters, and to not make unnecessary copies of the path vectors.

Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2025-06-15 23:09:45 +09:00
Alphonse N. Mousse
39b2e3334a refactor: use C++20 function std::popcount (#4389)
- Replace custom popcnt16 implementation with std::popcount from C++20
- Maintain compatibility with older compilers and MacOS by providing a
  conditional compilation fallback to __builtin_popcount and a lookup
  table method
- Move and inline related functions within SHAMapInnerNode for
  performance and readability

Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2025-06-15 23:09:44 +09:00
Gregory Tsipenyuk
f19e254366 fix(AMM): prevent orphaned objects, inconsistent ledger state: (#4626)
When an AMM account is deleted, the owner directory entries must be
deleted in order to ensure consistent ledger state.

* When deleting AMM account:
  * Clean up AMM owner dir, linking AMM account and AMM object
  * Delete trust lines to AMM
* Disallow `CheckCreate` to AMM accounts
  * AMM cannot cash a check
* Constrain entries in AuthAccounts array to be accounts
  * AuthAccounts is an array of objects for the AMMBid transaction
* SetTrust (TrustSet): Allow on AMM only for LP tokens
  * If the destination is an AMM account and the trust line doesn't
    exist, then:
    * If the asset is not the AMM LP token, then fail the tx with
      `tecNO_PERMISSION`
    * If the AMM is in empty state, then fail the tx with `tecAMM_EMPTY`
      * This disallows trustlines to AMM in empty state
* Add AMMID to AMM root account
  * Remove lsfAMM flag and use sfAMMID instead
* Remove owner dir entry for ltAMM
* Add `AMMDelete` transaction type to handle amortized deletion
  * Limit number of trust lines to delete on final withdraw + AMMDelete
  * Put AMM in empty state when LPTokens is 0 upon final withdraw
  * Add `tfTwoAssetIfEmpty` deposit option in AMM empty state
  * Fail all AMM transactions in AMM empty state except special deposit
  * Add `tecINCOMPLETE` to indicate that not all AMM trust lines are
    deleted (i.e. partial deletion)
    * This is handled in Transactor similar to deleted offers
  * Fail AMMDelete with `tecINTERNAL` if AMM root account is nullptr
  * Don't validate for invalid asset pair in AMMDelete
* AMMWithdraw deletes AMM trust lines and AMM account/object only if the
  number of trust lines is less than max
  * Current `maxDeletableAMMTrustLines` = 512
  * Check no directory left after AMM trust lines are deleted
  * Enable partial trustline deletion in AMMWithdraw
* Add `tecAMM_NOT_EMPTY` to fail any transaction that expects an AMM in
  empty state
* Clawback considerations
  * Disallow clawback out of AMM account
  * Disallow AMM create if issuer can claw back

This patch applies to the AMM implementation in #4294.

Acknowledgements:
Richard Holland and Nik Bougalis for responsibly disclosing this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the project code and urge researchers to
responsibly disclose any issues they may find.

To report a bug, please send a detailed report to:

    bugs@xrpl.org

Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2025-06-15 23:09:42 +09:00
RichardAH
5e083121da feat: support Concise Transaction Identifier (CTID) (XLS-37) (#4418)
* add CTID to tx rpc

---------

Co-authored-by: Rome Reginelli <mduo13@gmail.com>
Co-authored-by: Elliot Lee <github.public@intelliot.com>
Co-authored-by: Denis Angell <dangell@transia.co>
2025-06-15 23:09:04 +09:00
Shawn Xie
683e9ccc1a Rename allowClawback flag to allowTrustLineClawback (#4617)
Reason for this change is here XRPLF/XRPL-Standards#119

We want to be explicit that this flag is exclusively for trust lines. New token types (e.g. CFT) will not use this flag for clawback; instead, they will turn clawback on/off at the token level, which is more versatile.
2025-06-15 23:09:03 +09:00
Gregory Tsipenyuk
8dbc6db079 Introduce AMM support (XLS-30d): (#4294)
Add AMM functionality:
- InstanceCreate
- Deposit
- Withdraw
- Governance
- Auctioning
- payment engine integration

To support this functionality, add:
- New RPC method, `amm_info`, to fetch pool and LPT balances
- AMM Root Account
- trust line for each IOU AMM token
- trust line to track Liquidity Provider Tokens (LPT)
- `ltAMM` object

The `ltAMM` object tracks:
- fee votes
- auction slot bids
- AMM tokens pair
- total outstanding tokens balance
- `AMMID` to AMM `RootAccountID` mapping

Add new classes to facilitate AMM integration into the payment engine.
`BookStep` uses these classes to infer if AMM liquidity can be consumed.

The AMM formula implementation uses the new Number class added in #4192.
IOUAmount and STAmount use Number arithmetic.

Add AMM unit tests for all features.

AMM requires the following amendments:
- featureAMM
- fixUniversalNumber
- featureFlowCross

Notes:
- Current trading fee threshold is 1%
- AMM currency is generated by: 0x03 + 152 bits of sha256{cur1, cur2}
- Current max AMM Offers is 30

---------

Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
2025-06-15 23:09:01 +09:00
Elliot Lee
b7a29cad94 docs(CONTRIBUTING): push beta releases to release (#4589)
Sections that were rewrapped were wrapped to 72 characters, the same as
the recommendation for commit messages.
2025-06-15 23:07:39 +09:00
Arihant Kothari
7054bf64e9 APIv2(ledger_entry): return "invalidParams" when fields missing (#4552)
Improve error handling for ledger_entry by returning an "invalidParams"
error when one or more request fields are specified incorrectly, or one
or more required fields are missing.

For example, if none of the following fields is provided, then the
API should return an invalidParams error:
* index, account_root, directory, offer, ripple_state, check, escrow,
  payment_channel, deposit_preauth, ticket

Prior to this commit, the API returned an "unknownOption" error instead.
Since the error was actually due to invalid parameters, rather than
unknown options, this error was misleading.

Since this is an API breaking change, the "invalidParams" error is only
returned for requests using api_version: 2 and above. To maintain
backward compatibility, the "unknownOption" error is still returned for
api_version: 1.

Related: #4573

Fix #4303
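
For example, a request that specifies none of the listed locator fields
(response abbreviated):

```js
> {
    "command": "ledger_entry",
    "api_version": 2,
    "ledger_index": "validated"
}
< {"error": "invalidParams", "status": "error", ...
```

The same request with api_version 1 still returns `unknownOption`.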
2025-06-15 23:07:39 +09:00
Chenna Keshava B S
3ddf1c99d5 refactor: change the return type of mulDiv to std::optional (#4243)
- Previously, mulDiv had `std::pair<bool, uint64_t>` as the output type.
  - This is an error-prone interface as it is easy to ignore when
    overflow occurs.
- Using a return type of `std::optional` should decrease the likelihood
  of ignoring overflow.
  - It also allows for the use of optional::value_or() as a way to
    explicitly recover from overflow.
- Include limits.h header file preprocessing directive in order to
  satisfy gcc's numeric_limits incomplete_type requirement.

Fix #3495

---------

Co-authored-by: John Freeman <jfreeman08@gmail.com>
2025-06-15 23:07:38 +09:00
Shawn Xie
6d5e7e519b fix: add allowClawback flag for account_info (#4590)
* Update the `account_info` API so that the `allowClawback` flag is
  included in the response.
  * The proposed `Clawback` amendment added an `allowClawback` flag in
    the `AccountRoot` object.
  * In the API response, under `account_flags`, there is now an
    `allowClawback` field with a boolean (`true` or `false`) value.
  * For reference, the XLS-39 Clawback implementation can be found in
    #4553

Fix #4588
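
A rough sketch of the relevant part of an `account_info` response
(other flags and fields omitted, response abbreviated):

```js
< {
    "result": {
        "account_flags": {
            "allowClawback": false,
        ...
```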
2025-06-15 23:07:38 +09:00
Peter Chen
224ae4e70f APIv2(account_info): handle invalid "signer_lists" value (#4585)
When requesting `account_info` with an invalid `signer_lists` value, the
API should return an "invalidParams" error.

`signer_lists` should have a value of type boolean. If it is not a
boolean, then it is invalid input. The response now indicates that.

* This is an API breaking change, so the change is only reflected for
  requests containing `"api_version": 2`
* Fix #4539
2025-06-15 23:07:37 +09:00
Chenna Keshava B S
8531ba5838 fix: Update Handler::Condition enum values #3417 (#4239)
- Use powers of two to clearly indicate the bitmask
- Replace bitmask with explicit if-conditions to better indicate predicates

Change enum values to be powers of two (fix #3417) (#4239).

Implement the simplified condition evaluation, removing the complex
bitwise AND (&) operator. This implements the second solution proposed
in Nik Bougalis's comment on #3417 ("Software does not distinguish
between different Conditions", version 1.5).

I have tested this code change by performing RPC calls with the
commands server_info, server_state, peers and validation_info. These
commands worked as expected.
2025-06-15 23:07:37 +09:00
Peter Chen
73550a4bfc APIv2: add error messages for account_tx (#4571)
Certain inputs for the AccountTx method should return an error. In other
words, an invalid request from a user or client now results in an error
message.

Since this can change the response from the API, it is an API breaking
change. This commit maintains backward compatibility by keeping the
existing behavior for existing requests. When clients specify
"api_version": 2, they will be able to get the updated error messages.

Update unit tests to check the error based on the API version.

* Fix #4288
* Fix #4545
2025-06-15 23:07:36 +09:00
Ed Hennis
e353c9d6eb Fix build references to deleted ServerHandlerImp: (#4592)
* Commits 0b812cd (#4427) and 11e914f (#4516) conflict. The first added
  references to `ServerHandlerImp` in files outside of that class's
  organizational unit (which is technically incorrect). The second
  removed `ServerHandlerImp`, but was not up to date with develop. This
  results in the build failing.
* Fixes the build by changing references to `ServerHandlerImp` to
  the more correct `ServerHandler`.
2025-06-15 23:07:36 +09:00
Scott Schurr
b733d274a0 refactor: rename ServerHandlerImp to ServerHandler (#4516)
Rename `ServerHandlerImp` to `ServerHandler`. There was no other
ServerHandler definition despite the existence of a header suggesting
that there was.

This resolves a piece of historical confusion in the code, which was
identified during a code review.

The changes in the diff may look more extensive than they actually are.
The contents of `impl/ServerHandlerImp.h` were merged into
`ServerHandler.h`, making the latter file appear to have undergone
significant modifications. However, this is a non-breaking refactor that
only restructures code.
2025-06-15 23:07:36 +09:00
Chenna Keshava B S
346544e371 fix: remove deprecated fields in ledger method (#4244)
Remove deprecated fields from the ledger command:
* accepted
* hash (use ledger_hash instead)
* seqNum (use ledger_index instead)
* totalCoins (use total_coins instead)

Update SHAMapStore unit tests to use `jss::ledger_hash` instead of the
deprecated `hash` field.

Fix #3214
2025-06-15 23:07:35 +09:00
Denis Angell
5d2d1d4497 refactor: replace hand-rolled lexicalCast (#4473)
Replace hand-rolled code with std::from_chars for better
maintainability.

The C++ std::from_chars function is intended to be as fast as possible,
so it is unlikely to be slower than the code it replaces. This change is
a net gain because it reduces the amount of hand-rolled code.
2025-06-15 23:07:35 +09:00
Shawn Xie
0f0ffda053 XLS-39 Clawback: (#4553)
Introduces:
* AccountRoot flag: lsfAllowClawback
* New Clawback transaction
* More info on clawback spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-39d-clawback
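
A sketch of a Clawback transaction per the spec linked above; the
addresses and value are placeholders, and the `issuer` subfield of
`Amount` identifies the token holder being clawed back from:

```js
> {
    "TransactionType": "Clawback",
    "Account": "rPlaceholderIssuer",
    "Amount": {
        "currency": "USD",
        "issuer": "rPlaceholderHolder",
        "value": "10"
    }
}
```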
2025-06-15 23:07:32 +09:00
Howard Hinnant
37f7734b25 refactor: remove TypedField's move constructor (#4567)
Apply a minor cleanup in `TypedField`:
* Remove a non-working and unused move constructor.
* Constrain the remaining constructor so that it is not generic enough
  to be used as a copy or move constructor.
2025-06-15 23:06:45 +09:00
drlongle
f8dc0cab65 Add RPC/WS ports to server_info (#4427)
Enhance the /crawl endpoint by publishing WebSocket/RPC ports in the
server_info response. The function processing requests to the /crawl
endpoint actually calls server_info internally, so this change enables a
server to advertise its WebSocket/RPC port(s) to peers via the /crawl
endpoint. `grpc` and `peer` ports are included as well.

The new `ports` array contains objects, each containing a `port` for the
listening port (number string), and an array `protocol` listing the
supported protocol(s).

This allows crawlers to build a richer topology without needing to
port-scan nodes. For non-admin users (including peers), the info about
*admin* ports is excluded.

Also increase test coverage for RPC ServerInfo.

Fix #2837.
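
A sketch of the new `ports` array as described above; the port numbers
and protocol lists are placeholders (response abbreviated):

```js
< {
    "result": {
        "info": {
            "ports": [
                {"port": "5005", "protocol": ["http"]},
                {"port": "6006", "protocol": ["ws", "ws2"]},
                {"port": "51235", "protocol": ["peer"]}
            ],
        ...
```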
2025-06-15 23:06:45 +09:00
Scott Schurr
3c4731a676 fixReducedOffersV1: prevent offers from blocking order books: (#4512)
Curtail the occurrence of order books that are blocked by reduced offers
with the implementation of the fixReducedOffersV1 amendment.

This commit identifies three ways in which offers can be reduced:

1. A new offer can be partially crossed by existing offers, so the new
   offer is reduced when placed in the ledger.

2. An in-ledger offer can be partially crossed by a new offer in a
   transaction. So the in-ledger offer is reduced by the new offer.

3. An in-ledger offer may be under-funded. In this case the in-ledger
   offer is scaled down to match the available funds.

Reduced offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer (from the
perspective of the taker). It turns out that, for small values, the
quality of the reduced offer can be significantly affected by the
rounding mode used during scaling computations.

This commit adjusts some rounding modes so that the quality of a reduced
offer is always at least as good (from the taker's perspective) as the
original offer.

The amendment is titled fixReducedOffersV1 because additional ways of
producing reduced offers may come to light. Therefore, there may be a
future need for a V2 amendment.
2025-06-15 23:06:42 +09:00
Ed Hennis
995e70c2b0 Enable the Beta RPC API (v2) for all unit tests: (#4573)
* Enable api_version 2, which is currently in beta. It is expected to be
  marked stable by the next stable release.
* This does not change any defaults.
* The only existing tests changed were one that set the same flag, which
  was now redundant, and a couple that tested versioning explicitly.
2025-06-15 23:05:35 +09:00
Denis Angell
a5ea86fdfc Add Conan Building For Development (#432) 2025-05-14 14:00:20 +10:00
RichardAH
615f56570a Sus pat (#507) 2025-05-01 17:23:56 +10:00
RichardAH
5e005cd6ee remove false positives from sus pat finder (#506) 2025-05-01 09:54:41 +10:00
Denis Angell
80a7197590 fix warnings (#505) 2025-04-30 11:51:58 +02:00
tequ
7b581443d1 Suppress build warning introduced in Catalogue (#499) 2025-04-29 08:25:55 +10:00
tequ
5400f43359 Suppress logs for Catalogue_test, Import_test (#495) 2025-04-24 17:46:09 +10:00
Denis Angell
8cf7d485ab fix: ledger_index (#498) 2025-04-24 16:45:01 +10:00
tequ
372f25d09b Remove #ifndef DEBUG guards and exception handling wrappers (#496) 2025-04-24 16:38:14 +10:00
Denis Angell
401395a204 patch remarks (#497) 2025-04-24 16:36:57 +10:00
tequ
4221dcf568 Add tests for SetRemarks (#491) 2025-04-18 09:34:44 +10:00
tequ
989532702d Update clang-format workflow (#490) 2025-04-17 16:16:59 +10:00
RichardAH
f9cd2e0d21 Remarks amendment (#301)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-04-16 08:42:04 +10:00
tequ
59e334c099 fixRewardClaimFlags (#487) 2025-04-15 20:08:19 +10:00
tequ
9018596532 HookCanEmit (#392) 2025-04-15 13:32:35 +10:00
Niq Dudfield
b827f0170d feat(catalogue): add cli commands and fix file_size (#486)
* feat(catalogue): add cli commands and fix file_size

* feat(catalogue): fix tests

* feat(catalogue): use formatBytesIEC

* feat: add file_size_estimated
2025-04-15 08:50:15 +10:00
tequ
e4b7e8f0f2 Update sfcodes script (#479) 2025-04-10 09:44:31 +10:00
tequ
1485078d91 Update CHooks build script (#465) 2025-04-09 20:22:34 +10:00
tequ
6625d2be92 Add xpop_slot test (#470) 2025-04-09 20:20:23 +10:00
tequ
2fb5c92140 feat: Run unittests in parallel with Github Actions (#483)
Implement parallel execution for unit tests using Github Actions to improve CI pipeline efficiency and reduce build times.
2025-04-04 19:32:47 +02:00
Niq Dudfield
c4b5ae3787 Fix missing includes in Catalogue.cpp for non-unity builds (#485) 2025-04-04 12:53:45 +10:00
Niq Dudfield
d546d761ce Fix using using Status with rpcError (#484) 2025-04-01 21:00:13 +10:00
RichardAH
e84a36867b Catalogue (#443) 2025-04-01 16:47:48 +10:00
Niq Dudfield
0b675465b4 Fix ServerDefinitions_test regression intro in #475 (#477) 2025-03-19 12:32:27 +10:00
Niq Dudfield
d088ad61a9 Prevent dangling reference in getHash() (#475)
Replace temporary uint256 with static variable when returning fallback hash
to avoid returning a const reference to a local temporary object.
2025-03-18 18:37:18 +10:00
Niq Dudfield
ef77b02d7f CI Release Builder (#455) 2025-03-11 13:19:28 +01:00
RichardAH
7385828983 Touch Amendment (#294) 2025-03-06 08:25:42 +01:00
Niq Dudfield
88b01514c1 fix: remove negative rate test failing on MacOS (#452) 2025-03-03 13:12:13 +01:00
Denis Angell
aeece15096 [fix] github runner (#451)
Co-authored-by: Niq Dudfield <ndudfield@gmail.com>
2025-03-03 09:55:51 +01:00
tequ
89cacb1258 Enhance shell script error handling and debugging on GHA (#447) 2025-02-24 10:33:21 +01:00
tequ
8ccff44e8c Fix Error handling on build action (#412) 2025-02-24 18:16:21 +10:00
tequ
420240a2ab Avoid using a large fixed range in magic_enum. (#436) 2025-02-24 17:46:42 +10:00
Richard Holland
230873f196 debug gh builds 2025-02-06 15:21:37 +11:00
Wietse Wind
1fb1a99ea2 Update build-in-docker.yml 2025-02-05 08:23:49 +01:00
Richard Holland
e0b63ac70e Revert "debug account tx tests under release builder"
This reverts commit da8df63be3.

Revert "add strict filtering to account_tx api (#429)"

This reverts commit 317bd4bc6e.
2025-02-05 14:59:33 +11:00
Richard Holland
da8df63be3 debug account tx tests under release builder 2025-02-04 17:02:17 +11:00
RichardAH
317bd4bc6e add strict filtering to account_tx api (#429) 2025-02-03 17:56:08 +10:00
RichardAH
2fd465bb3f fix20250131 (#428)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-02-03 10:33:19 +10:00
Wietse Wind
fa71bda29c Artifact v4 continue on error 2025-02-01 08:58:13 +01:00
Wietse Wind
412593d7bc Update artifact 2025-02-01 08:57:48 +01:00
Wietse Wind
12d8342c34 Update artifact 2025-02-01 08:57:25 +01:00
tequ
d17f7151ab Fix HookResult(ExitType) when accept() is not called (#415) 2025-01-22 13:33:59 +10:00
tequ
4466175231 Update boost link for build-full.sh (#421) 2025-01-22 08:38:12 +10:00
tequ
621ca9c865 Add space to trace_float log (#424) 2025-01-22 08:34:33 +10:00
tequ
85a752235a add URITokenIssuer to account_flags for account_info (#404) 2024-12-16 16:10:01 +10:00
RichardAH
d878fd4a6e allow multiple datagram monitor endpoints (#408) 2024-12-14 08:44:40 +10:00
Richard Holland
532a471a35 fixReduceImport (#398)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-12-11 13:29:37 +11:00
RichardAH
e9468d8b4a Datagram monitor (#400)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-12-11 13:29:30 +11:00
Denis Angell
9d54da3880 Fix: failing assert (#397) 2024-12-11 13:08:50 +11:00
Ekiserrepé
542172f0a1 Update README.md (#396)
Updated Xaman link.
2024-12-11 13:08:50 +11:00
Richard Holland
e086724772 UDP RPC (admin) support (#390) 2024-12-11 13:08:44 +11:00
RichardAH
21863b05f3 Limit xahau genesis to networks starting with 2133X (#395) 2024-11-23 21:19:09 +10:00
Denis Angell
61ac04aacc Sync: Ripple(d) 1.11.0 (#299)
* Add jss fields used by Clio `nft_info`: (#4320)

Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.

Clio can be found here: https://github.com/XRPLF/clio

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>

* Introduce support for a slabbed allocator: (#4218)

When instantiating a large amount of fixed-sized objects on the heap
the overhead that dynamic memory allocation APIs impose will quickly
become significant.

In some cases, allocating a large amount of memory at once and using
a slabbing allocator to carve the large block into fixed-sized units
that are used to service requests for memory will help to reduce
memory fragmentation significantly and, potentially, improve overall
performance.

This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator` but it is
not meant to be a general-purpose allocator.

It should not be used unless profiling and analysis of specific memory
allocation patterns indicates that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.

A helper class, `SlabAllocatorSet<>` simplifies handling of variably
sized objects that benefit from slab allocations.

This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).

Commit 1 of 3 in #4218.
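
For intuition only, a minimal fixed-size slab can be sketched as below. This is
an illustrative toy (the class name `FixedSlab` is invented here), not the
`SlabAllocator<>` interface shipped in the codebase:

```cpp
#include <cstddef>
#include <vector>

// Illustration only: one large upfront allocation is carved into fixed-size
// blocks, and freed blocks are recycled through a simple free list. The real
// SlabAllocator<> has a different interface and stronger guarantees.
class FixedSlab
{
    std::size_t const blockSize_;
    std::vector<std::byte> storage_;  // the single large allocation
    std::vector<void*> freeList_;     // blocks available for reuse

public:
    FixedSlab(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize), storage_(blockSize * blockCount)
    {
        freeList_.reserve(blockCount);
        for (std::size_t i = 0; i != blockCount; ++i)
            freeList_.push_back(storage_.data() + i * blockSize_);
    }

    // Returns nullptr when the slab is exhausted; a production allocator
    // would fall back to the general-purpose heap instead.
    void*
    allocate()
    {
        if (freeList_.empty())
            return nullptr;
        void* block = freeList_.back();
        freeList_.pop_back();
        return block;
    }

    void
    deallocate(void* block)
    {
        freeList_.push_back(block);
    }
};
```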

* Optimize `SHAMapItem` and leverage new slab allocator: (#4218)

The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.

Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped in a `std::shared_ptr`, this meant
that a single instantiation might result in up to three separate
memory allocations.

This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.

In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.

Commit 2 of 3 in #4218.
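
The mechanics of an intrusive reference count can be sketched like this;
it is illustrative only (the `Item` type is invented), not the actual
`SHAMapItem` definition:

```cpp
#include <atomic>
#include <boost/intrusive_ptr.hpp>

// Sketch of an intrusively counted object: the reference count lives inside
// the object itself, so boost::intrusive_ptr needs no separate control block
// the way std::shared_ptr does.
class Item
{
    mutable std::atomic<int> refs_{0};

    friend void intrusive_ptr_add_ref(Item const* p)
    {
        p->refs_.fetch_add(1, std::memory_order_relaxed);
    }

    friend void intrusive_ptr_release(Item const* p)
    {
        if (p->refs_.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;
    }

public:
    int payload = 0;
};

int main()
{
    boost::intrusive_ptr<Item> a(new Item);  // count becomes 1
    boost::intrusive_ptr<Item> b = a;        // count becomes 2
    b->payload = 42;
}  // both pointers released; object deleted when the count reaches zero
```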

* Avoid using std::shared_ptr when not necessary: (#4218)

The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.

The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocations were avoided by using stack-based
alternatives.

Commit 3 of 3 in #4218.

* Prevent replay attacks with NetworkID field: (#4370)

Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.

The new field must be used when the server is using a network id > 1024.

To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.

Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.

The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.

This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:

- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`

To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html

Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html

In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.

A `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.
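
A hedged sketch of the rule described above, using the listed error codes as
plain enumerators; the real check lives in the transactor code and differs in
detail (the function and enum names here are invented for illustration):

```cpp
#include <cstdint>
#include <optional>

enum class TxResult {
    Ok,
    telNETWORK_ID_MAKES_TX_NON_CANONICAL,  // field present on a legacy chain
    telREQUIRES_NETWORK_ID,                // field missing where it is required
    telWRONG_NETWORK,                      // field present but for another chain
};

// Illustrative only: chains with a network id of 1024 or lower keep the legacy
// behaviour (no NetworkID in transactions); newer chains require the field and
// it must match the node's configured network id.
TxResult
checkNetworkID(
    std::uint32_t nodeNetworkID,
    std::optional<std::uint32_t> txNetworkID)
{
    bool const legacyChain = nodeNetworkID <= 1024;

    if (legacyChain)
        return txNetworkID ? TxResult::telNETWORK_ID_MAKES_TX_NON_CANONICAL
                           : TxResult::Ok;

    if (!txNetworkID)
        return TxResult::telREQUIRES_NETWORK_ID;

    return *txNetworkID == nodeNetworkID ? TxResult::Ok
                                         : TxResult::telWRONG_NETWORK;
}
```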

* Fix the fix for std::result_of (#4496)

Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.

This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
  forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
  * Define `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
    1.77 with a newer compiler.

* Use quorum specified via command line: (#4489)

If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.

Fix #4488.

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>

* Fix errors for Clang 16: (#4501)

Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).

- `std::{u,bi}nary_function` were removed in C++17. They were empty
  classes with a few associated types. We already have conditional code
  to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type
  inherits from a binary function class in the standard library, e.g.
  `std::equal_to`, and this causes an error when the type privately
  inherits from such a class. Change these instances to public
  inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
  inherit directly from an empty class than from
  `beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
  compilation of assertions. Add a `[[maybe_unused]]` annotation to
  suppress it.
- As a drive-by clean-up, remove commented code.

See related work in #4486.

* Fix typo (#4508)

* fix!: Prevent API from accepting seed or public key for account (#4404)

The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.

Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.

Resolves: #3329, #3330, #4337

BREAKING CHANGE: Remove non-strict account parsing (#3330)

* Add nftoken_id, nftoken_ids, offer_id fields for NFTokens (#4447)

Three new fields are added to the `Tx` responses for NFTs:

1. `nftoken_id`: This field is included in the `Tx` responses for
   `NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
   `NFTokenID` for the `NFToken` that was modified on the ledger by the
   transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
   `NFTokenCancelOffer`. This field provides a list of all the
   `NFTokenID`s for the `NFToken`s that were modified on the ledger by
   the transaction.
3. `offer_id`: This field is included in the `Tx` response for
   `NFTokenCreateOffer` transactions and shows the OfferID of the
   `NFTokenOffer` created.

The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.

* Ensure that switchover vars are initialized before use: (#4527)

Global variables in different TUs are initialized in an undefined order.
At least one global variable's initializer read a global switchover
variable, so the switchover variable could be accessed before it had
been initialized.

Since the switchover is always explicitly set before transaction
processing, this bug cannot affect transaction processing, but it could
affect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.
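
For context, the underlying hazard is the classic cross-TU static
initialization order problem; a common mitigation, shown here only as an
illustration and not necessarily what this patch does, is to put the state
behind a function-local static:

```cpp
// Globals in different translation units are initialized in an unspecified
// relative order, so another TU's global initializer must not read this one
// directly:
//
//   // a.cpp
//   bool switchoverEnabled = computeDefault();
//
//   // b.cpp
//   extern bool switchoverEnabled;
//   int derived = switchoverEnabled ? 1 : 0;  // may run before a.cpp's init
//
// Wrapping the state in a function-local static defers initialization to the
// first call, which is well defined regardless of TU order:
inline bool&
switchoverEnabled()
{
    static bool enabled = false;  // set explicitly before tx processing
    return enabled;
}
```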

* Move faulty assert (#4533)

This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.

The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.

Note that asserts are normally checked only in debug builds, so release
packages should not be affected.

Introduced in: #4319 (66627b26cf)

* Fix unaligned load and stores: (#4528) (#4531)

Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.

For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).
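
The replacement pattern is the standard `memcpy` idiom for unaligned access,
roughly as follows (a sketch, not the exact code in the slab allocator):

```cpp
#include <cstdint>
#include <cstring>

// UB: dereferencing a pointer that may not satisfy alignof(std::uint64_t).
//   return *reinterpret_cast<std::uint64_t const*>(p);

// Well-defined: memcpy into a properly aligned local. Optimizing compilers
// emit the same single load/store instruction on x86-64 and ARM, so there is
// no performance penalty.
std::uint64_t
load_unaligned(unsigned char const* p)
{
    std::uint64_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}

void
store_unaligned(unsigned char* p, std::uint64_t v)
{
    std::memcpy(p, &v, sizeof(v));
}
```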

* Add missing includes for gcc 13.1: (#4555)

gcc 13.1 failed to compile due to missing headers. This patch adds the
needed headers.

* Trivial: add comments for NFToken-related invariants (#4558)

* fix node size estimation (#4536)

Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation for the total amount
of RAM available, which was previously returned in bytes, but is now
being returned in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.
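
The flavor of calculation involved can be sketched with POSIX calls; the
thresholds and names below are invented for illustration and do not reflect
the actual `Config.cpp` logic:

```cpp
#include <cstdint>
#include <thread>
#include <unistd.h>

// Total physical RAM in bytes is page count times page size; the bug being
// fixed stemmed from confusing bytes with the system's memory unit.
std::uint64_t
totalRamBytes()
{
    auto const pages = static_cast<std::uint64_t>(sysconf(_SC_PHYS_PAGES));
    auto const pageSize = static_cast<std::uint64_t>(sysconf(_SC_PAGE_SIZE));
    return pages * pageSize;
}

// A node-size guess that also considers hardware concurrency, in the spirit
// of the patch; the cutoffs here are made up.
char const*
guessNodeSize()
{
    auto const gib = totalRamBytes() / (1024ull * 1024 * 1024);
    auto const threads = std::thread::hardware_concurrency();

    if (gib >= 64 && threads >= 16)
        return "huge";
    if (gib >= 32 && threads >= 8)
        return "large";
    if (gib >= 16)
        return "medium";
    return "small";
}
```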

* fix: remove redundant moves (#4565)

- Resolve gcc compiler warning:
      AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
  - The std::move() operation on trivially copyable types may generate a
    compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.
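
As the note above says, newer gcc may flag `std::move` applied to trivially
copyable values; a hypothetical example of the pattern being cleaned up
(names invented, and whether a given gcc version warns depends on the exact
code):

```cpp
#include <utility>

struct Counters
{
    int reads = 0;
    int writes = 0;
};  // trivially copyable

Counters
snapshot(Counters const& src)
{
    // Newer gcc may report "redundant move in initialization
    // [-Wredundant-move]" here: moving a const, trivially copyable value is
    // just a copy, so the std::move adds nothing and should be dropped.
    Counters copy = std::move(src);
    return copy;
}
```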

* Revert "Fix the fix for std::result_of (#4496)"

This reverts commit cee8409d60.

* Revert "Fix typo (#4508)"

This reverts commit 2956f14de8.

* clang

* [fold] bad merge

* [fold] fix bad merge

- add back filter for ripple state on account_channels
- add back network id test (env auto adds network id in xahau)

* [fold] fix build error

---------

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: Nik Bougalis <nikb@bougalis.net>
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Co-authored-by: John Freeman <jfreeman08@gmail.com>
Co-authored-by: Mark Travis <mtrippled@users.noreply.github.com>
Co-authored-by: solmsted <steven.olm@gmail.com>
Co-authored-by: drlongle <drlongle@gmail.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
Co-authored-by: Scott Determan <scott.determan@yahoo.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Scott Schurr <scott@ripple.com>
Co-authored-by: Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
2024-11-20 10:54:03 +10:00
tequ
57a1329bff Fix lexicographical_compare_three_way build error at macos (#391)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-11-15 08:33:55 +10:00
Denis Angell
daf22b3b85 Fix: RWDB (#389) 2024-11-15 07:31:55 +10:00
RichardAH
2b225977e2 Feature: RWDB (#378)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-11-12 08:55:56 +10:00
Denis Angell
58b22901cb Fix: float_divide rounding error (#351)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2024-11-09 15:17:00 +10:00
Denis Angell
8ba37a3138 Add Script for SfCode generation (#358)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Co-authored-by: tequ <git@tequ.dev>
2024-11-09 14:17:49 +10:00
tequ
8cffd3054d add trace message to exception on etxn_fee_base (#387) 2024-11-09 14:00:59 +10:00
Denis Angell
6b26045cbc Update settings.json (#342) 2024-10-25 11:56:16 +10:00
Wietse Wind
08f13b7cfe Fix account_tx sluggishness as per https://github.com/XRPLF/rippled/commit/2e9261cb (#308) 2024-10-25 11:13:42 +10:00
tequ
766f5d7ee1 Update macro.h (#366) 2024-10-25 10:10:43 +10:00
Wietse Wind
287c01ad04 Improve Admin command RPC Post (#384)
* Improve ADMIN HTTP POST RPC notifications: no queue limit, shorter HTTP call TTL
2024-10-25 10:10:14 +10:00
tequ
4239124750 Update amendments for rippled-standalone.cfg (#385) 2024-10-25 09:10:45 +10:00
RichardAH
1e45d4120c Update to boost186 (#377)
Co-Authored-By: Denis Angell <dangell@transia.co>
2024-10-17 01:29:17 +02:00
Denis Angell
9e446bcc85 Fix: Missing Headers - Linker Errors (#300) 2024-10-16 18:19:21 +10:00
RichardAH
376727d20c use std::lexicographical_compare_three_way for uint spaceship operator (#374)
* use std::lexicographical_compare_three_way for uint spaceship operator

* clang
2024-10-16 11:37:26 +10:00
Richard Holland
d921c87c88 also update max transactions for tx queue 2024-10-11 09:59:04 +11:00
RichardAH
7b94d3d99d increase txn in ledger target to 1000 (#372) 2024-10-11 08:21:23 +10:00
Denis Angell
79d83bd424 fix240911 (#363) 2024-09-11 13:43:03 +10:00
Richard Holland
1a4d54f9d9 force build 2024-09-07 15:41:00 +10:00
Wietse Wind
26cd629d28 Merge/2.2.2 jobqueue (#360)
* Merge fbbea9e6e2

* Merge 7741483894

* clang-format

* Oops

---------

Co-authored-by: Wietse Wind <wrw@Wietses-MacBook-Pro.local>
2024-09-07 15:22:15 +10:00
RichardAH
2fb93f874b fixPageCap (#359)
* page cap fix

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2024-09-07 11:39:24 +10:00
RichardAH
833df20fce Fix240819 (#350)
fix240918
---------

Co-authored-by: Denis Angell <dangell@transia.co>
2024-08-20 09:40:31 +10:00
Wietse Wind
5737c2b6e8 Workaround CentOS7 EOL 2024-08-18 01:50:44 +02:00
Richard Holland
a15d0b2ecc set huge mode nudb cache to 64mib 2024-08-14 16:40:07 +10:00
Denis Angell
18d76d3082 fix delivered amount (#310)
* fix delivered amount
2024-07-16 08:56:30 +10:00
Wietse Wind
849a4435e0 CI Split jobs with prev job dependency & CI on jshooks (#320)
* CI on `jshooks` branch

* CI Split jobs with prev job dependency

* No multi branch worker in parallel

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2024-05-29 13:45:59 +02:00
Wietse Wind
247e9d98bf CI on jshooks branch (#317) 2024-05-24 10:10:40 +10:00
Denis Angell
acd455f5df Fix: Server Definitions Typo (#306) 2024-04-19 07:39:21 +10:00
Denis Angell
6636e3b6fd Add RPC Tests (#295)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2024-04-18 18:22:46 +10:00
Denis Angell
3c5f118b59 Fix: Add Tx Flags To Server Definitions (#304) 2024-04-18 15:43:48 +10:00
Vasu
88308126cc Removing duplicate macro #define LPAREN ( (#233) 2024-03-25 09:11:21 +11:00
Denis Angell
497e52fcc6 Update Macros (#279)
* add common macros
* remove old/unused macros
2024-03-25 08:43:46 +11:00
Denis Angell
a3852763e7 Fix: Namespace Delete (OwnerCount) (#296)
* fix ns delete owner count
* add a new success code and refactor success checks, limit ns delete operations to 256 entries per txn
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-03-25 08:37:08 +11:00
RichardAH
7cd8f0a03a ZeroB2M amendment (#293)
* ZeroB2M amendment

Co-authored-by: Denis Angell <dangell@transia.co>
2024-03-22 10:49:35 +11:00
Denis Angell
d24c134612 add emitted order test (#273) 2024-03-11 11:46:03 +11:00
Denis Angell
cdac69a111 Fix: URIToken Test (#254)
* update tests for fixXahauV1
2024-03-11 10:06:15 +11:00
Denis Angell
1500522427 fix tsh on nftoken (#269) 2024-03-11 09:38:45 +11:00
Denis Angell
75aba531d6 Amendment: featureRemit (#278)
* Remit Amendment

Co-authored-by: Denis Angell <dangell@transia.co>

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-03-11 09:29:39 +11:00
Wietse Wind
caa8b382d8 🤦 2024-02-22 23:28:19 +01:00
Wietse Wind
82e04073be Revert checkout v3 2024-02-14 15:21:43 +01:00
Wietse Wind
e1b78f9682 Do clean 2024-02-14 15:20:24 +01:00
Wietse Wind
901d1d4e8d Update checkout CI to v4 2024-02-14 15:17:45 +01:00
Wietse Wind
aca5241515 Build Container per user 2024-02-14 15:15:12 +01:00
RichardAH
780378c221 fix hook emission ordering (#270) 2024-01-24 19:55:28 +01:00
RichardAH
2dc5e670ac fix buildinfo test (#266) 2024-01-22 12:34:13 +01:00
RichardAH
4dff5a5c8e fix permissions on inject (#264) 2024-01-22 10:48:01 +01:00
Denis Angell
f64e626a3f Fix: TSH Updates & Emitted Txn (#261)
fixXahauV2
* refactor tsh
* add uritoken mint/cancel tsh
* add flags to hookexecutions meta and nonce to hookemissions meta

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-01-22 10:25:36 +01:00
RichardAH
f21d3e1e97 Merge pull request #260 from Xahau/emit_guard
Fix: EmittedTxn Reliability
2024-01-19 14:05:06 +01:00
Denis Angell
858055c811 clang-format 2024-01-19 11:07:22 +01:00
Denis Angell
5d66f17574 add test 2024-01-19 11:05:45 +01:00
Denis Angell
00328cb782 Merge branch 'dev' into emit_guard 2024-01-19 10:24:34 +01:00
Richard Holland
fdf7ea4174 dbg inject clang + permission check 2024-01-17 18:15:17 +00:00
Richard Holland
7877ed9704 debug txn injector 2024-01-17 18:07:05 +00:00
Richard Holland
17ccec9ac5 Add additional checks for emitted txns 2024-01-17 15:39:02 +00:00
RichardAH
de522ac4ae Merge pull request #255 from Xahau/candidate
Candidate/release/sync
2023-12-29 22:18:04 +01:00
RichardAH
74c83a9271 Merge branch 'release' into candidate 2023-12-29 21:56:05 +01:00
Wietse Wind
66ee96d456 Build on release after all 2023-12-29 15:43:33 +01:00
Wietse Wind
b476aea55b Do not auto build on release 2023-12-29 15:39:48 +01:00
RichardAH
4ad697069f Fix xahau v1audit (#250) 2023-12-27 14:53:38 +01:00
Denis Angell
97acfe9f97 Fix: URIToken Test (#247) 2023-12-22 11:54:54 +01:00
Denis Angell
475b6f7347 Amendment: Fix Xahau v1 (#231)
* FXV1: Meta Amount (#225)

* FXV1: Optional Offer Sequence (#224)

* FXV1: Patch Hooks OwnerDir (#236)

* FXV1:  Fix `Import` Quorum (#235)

* FXV1: Namespace Limit (#220)

* FXV1: allow duplicate entries in genesis mint transactor (#239)

* FXV1: Fix URIToken (#243)

* lite fixes for tsh issues (#244)

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2023-12-21 16:21:17 +01:00
RichardAH
64a07e5539 move utf8 checking to header file (#241)
Co-authored-by: Denis Angell <dangell@transia.co>
2023-12-19 19:52:34 +01:00
Denis Angell
c49b7a2301 Fix: Verify Emitted Result on GenesisMint tests (#242) 2023-12-19 10:08:51 +01:00
Denis Angell
ac694c7c90 Add HookParameters fee calc to all txs (#223) 2023-12-07 14:30:30 +01:00
Denis Angell
3be6baded9 update definitions test (#228) 2023-12-07 13:40:18 +01:00
Richard Holland
49798b1792 make server_definitions caching thread_local 2023-12-05 13:47:54 +00:00
Denis Angell
27f43ba9ee TSH Tests (#222) 2023-12-03 10:13:05 +01:00
Wietse Wind
7881b7f69f Build on most cores (nproc / 3 » nproc / 1.337) (#226)
Use those resources! 🎉🚀
2023-12-01 12:47:50 +01:00
Denis Angell
d2c45bf560 Refactor Testcases (#219)
* Refactor Testcases (#216)

* remove unused function

* add offer id tests

* add escrow id tests

* clang-format

* fix offer

* fix escrow

* fix offer test
2023-11-24 11:41:16 +01:00
RichardAH
c917977eb0 wildcard network (id=65535) ignores signature verification (#201)
* wildcard network (id=65535) ignores signature verification

* move isWildcardNetwork deeper to mimic normal signature checking in multisig

* again

* add wildcard test

* Update Transactor.cpp

* clang-format

* fix UNLReport

* move free pass

* update test for multisign

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-11-21 18:47:08 +01:00
tequ
4cb93f943b Update RELEASENOTES.XAHAUD.md (#208) 2023-11-17 12:10:35 +01:00
Denis Angell
5447c4010d Add features to server_definitions (#190)
* add features to `server_definitions`

* clang-format

* Update RPCCall.cpp

* only return features without params

* clang-format

* include features in hashed value

* clang-format

* rework features addition to server_definitions to be cached at flag ledgers

* fix clang, duplicate hash key

---------

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2023-11-16 23:38:13 +01:00
Denis Angell
b77b0e70e3 Add Script to check for suspicious patterns (#199)
* Create check_keys.sh

* add to workflow
2023-11-16 16:23:55 +01:00
RichardAH
98833e4934 Fix server version (#195)
* modify the way server_version int is built to include build number

* clang

* Update BuildInfo_test.cpp

* clang-format

* Update BuildInfo_test.cpp

* clang-format

* Update BuildInfo_test.cpp

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-11-16 16:20:32 +01:00
dorianlynn
b74697ff9a Update settings.json (#205) 2023-11-16 12:47:57 +01:00
Denis Angell
ac1883bbf7 Update Documentation & README (#192)
* update md files

* Update RELEASENOTES.XAHAUD.md

* fixup

* Update README.md

* update readme

* Update README.md

* Update README.md

* update review

* misc fixup

* Update RELEASENOTES.XAHAUD.md

* Update README.md
2023-11-16 12:01:51 +01:00
Denis Angell
4ada3f85bb fix: uritoken destination & amount preflight check (#188)
* fix: uritoken destination & amount

* Update URIToken.cpp

* add lsfBurnable flag

* make uritoken patch a fix amendment

* clang-format
2023-11-10 10:42:57 +01:00
Denis Angell
559b504c7d Update build-in-docker.yml (#196) 2023-11-09 19:44:28 +01:00
Denis Angell
43cb255337 Update Workflow (#193) 2023-11-09 19:01:50 +01:00
Denis Angell
63bb1906ed remove rippled release cmake ci/dpkg (#183)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2023-11-09 12:58:27 +01:00
Denis Angell
ac6c102876 add response message and remove unused response code (#185) 2023-11-09 12:31:15 +01:00
Denis Angell
195904574c Change TER response codes from _XRP to _NATIVE. (#184)
* Change `_XRP` response codes to `_NATIVE`

* Update ServerDefinitions_test.cpp
2023-11-09 12:25:09 +01:00
Denis Angell
2a18ec563d Fix HBB workflow (#191)
* Update build-in-docker.yml

* Update build-in-docker.yml

* Update build-in-docker.yml

* Update build-core.sh

* update workflow

* Update build-core.sh

* Update build-core.sh

* Update build-core.sh

* Update build-core.sh

* update workflow

* fix workflow

* Update xahaud.binary.dockerfile

* fixup

* fixup

* fixup

* fixup

* fixup

* fixup

* Update build-in-docker.yml
2023-11-09 12:20:51 +01:00
Denis Angell
91f9e424a3 Server Definitions Cleanup (#174)
* update server-definitions + test

* Update ServerDefinitions.cpp

* clang-format
2023-11-06 11:01:37 +01:00
RichardAH
c2e8bd2be1 add native currency to various rpc responses (#179) 2023-11-02 12:15:45 +01:00
RichardAH
4d2ec0004b fix for ctid on tx rpc (#177)
* fix for ctid on tx rpc

* clang-format

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-11-02 11:47:57 +01:00
RichardAH
b564fe3c92 fix for delivered_amount (#178) 2023-11-02 11:47:30 +01:00
Denis Angell
04ceb5af9a update workflow (#175) 2023-11-01 17:04:13 +01:00
Denis Angell
7dee512991 Housekeeping (#170)
* Update pull_request_template.md

* strip out unused files

* Update CONTRIBUTING.md

* fix docker

* remove conan stuff

* update license

* update hook directory

* Delete .codecov.yml

* Update .dockerignore

* Update genesis.json

* update validator list example

* Update rippled-standalone.cfg

* Update CONTRIBUTING.md
2023-11-01 14:56:12 +01:00
Denis Angell
70bd7c2ce7 Reintroduce Clang-Format & Levelization (#171)
* clang-format

* levelization

* clang-format

* update workflow (#172)

* update workflow

* Update build-in-docker.yml

* fix from `clang-format`

* Update Enum.h
2023-11-01 14:12:24 +01:00
Denis Angell
1b9373e220 add array xpop path 2023-10-30 15:53:48 +01:00
Denis Angell
c3a0e1df44 add array xpop path 2023-10-30 15:53:01 +01:00
Richard Holland
c65ee9d7f9 final distributions 2023-10-30 11:09:23 +00:00
Richard Holland
090b64728e fix for https://github.com/Xahau/xahaud/issues/161 2023-10-28 14:01:05 +00:00
Richard Holland
283ec19976 additional fix for https://github.com/Xahau/xahaud/issues/158 2023-10-28 13:37:33 +00:00
Richard Holland
ee450926a7 change tequ addr in xahaugenesis 2023-10-28 09:55:41 +00:00
Richard Holland
9ae0a907f5 fix for https://github.com/Xahau/xahaud/issues/158 2023-10-27 16:43:08 +00:00
Richard Holland
81b0bf8041 fix for https://github.com/Xahau/xahaud/issues/155 2023-10-27 10:46:19 +02:00
Denis Angell
c54377eb7a add uritoken delete test 2023-10-27 10:46:19 +02:00
Richard Holland
1516900c4f issuing URITokens is account deletion blocker 2023-10-27 10:46:19 +02:00
Richard Holland
ac5ff4ca32 remove completely the codepath for undoing l2 votes 2023-10-25 11:54:58 +00:00
Denis Angell
e48e4db9e6 Fix TxQ tests (#151) 2023-10-25 02:38:12 +02:00
Denis Angell
97cac67bba Patch import (#150)
* add index test

* remove comment

* remove warn env
2023-10-24 23:14:19 +02:00
Denis Angell
5c6a855ae3 add lut bad currency check (#147) 2023-10-24 23:12:42 +02:00
Richard Holland
5fcebdabc5 replace vecFromAcc with HBB compiler-friendly version 2023-10-24 17:32:04 +00:00
Richard Holland
69d7d653be testing CI tests without standalone check 2023-10-24 14:04:33 +00:00
Richard Holland
d5d29e319c fix for https://github.com/Xahau/xahaud/issues/148; breaks some tests that lack the index field. XLS-41 needs to be updated to include the index field. 2023-10-24 09:11:40 +00:00
Denis Angell
1f052020cb Fixup Import Signers (#138)
* add guard and tests

* disable xpop array feature
2023-10-24 10:57:58 +02:00
Richard Holland
263c6342cf fix for https://github.com/Xahau/xahaud/issues/149 2023-10-20 13:21:30 +00:00
Richard Holland
fcd935cecf fix for https://github.com/Xahau/xahaud/issues/146 2023-10-19 12:22:59 +00:00
Richard Holland
96cf8f089d fix for https://github.com/Xahau/xahaud/issues/145 2023-10-19 12:18:29 +00:00
Denis Angell
adf30fb819 XRP -> XAH (#134)
* `XRP` -> `XAH`

* update rpc

* fix badCurrency

* revert consensus update

* `xah` -> `native` in rpc commands

* Update Consensus.h
2023-10-19 12:11:12 +02:00
Denis Angell
a2c41016b0 add ticket tests (#129) 2023-10-18 10:10:49 +02:00
Denis Angell
251a79e897 add claim reward flag (#139) 2023-10-18 10:09:47 +02:00
Denis Angell
d734fe600b Fix tests re: xahau genesis (#135)
* Update AccountTxPaging_test.cpp

* fix failing tests

* fix failing tests

* Update Import_test.cpp
2023-10-18 10:06:29 +02:00
Richard Holland
a6c86b5a9e fix for https://github.com/Xahau/xahaud/issues/144 2023-10-18 08:04:40 +00:00
Richard Holland
dae172ba22 fix for https://github.com/Xahau/xahaud/issues/143 2023-10-18 08:00:44 +00:00
Richard Holland
d716787378 fix for: https://github.com/Xahau/xahaud/issues/142 2023-10-18 07:41:45 +00:00
Richard Holland
7bfc60c6ad fix for https://github.com/Xahau/xahaud/issues/141 2023-10-16 13:10:26 +00:00
Denis Angell
f341558735 fix import signers check (#136) 2023-10-09 22:26:00 +02:00
Richard Holland
9317562d8b Merge branch 'dev' of github.com:Xahau/xahaud into dev 2023-10-09 13:03:00 +00:00
Richard Holland
3bf3010f50 Xahau mainnet distribution 2023-10-09 13:02:51 +00:00
RichardAH
b4022c35b3 make account starting seq the parent close time to prevent replay attacks in reset networks (#131)
* make account starting seq the parent close time to prevent replay attacks in reset networks

* add tests for activation

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-10-09 14:18:23 +02:00
Richard Holland
428009c8ca ensure XahauGenesis is enabled correctly just after the genesis ledger 2023-10-09 12:15:24 +00:00
2019 changed files with 250944 additions and 119267 deletions

View File

@@ -45,9 +45,11 @@ DisableFormat: false
ExperimentalAutoDetectBinPacking: false
ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
IncludeCategories:
- Regex: '^<(BeastConfig)'
- Regex: '^<(test)/'
Priority: 0
- Regex: '^<(ripple)/'
- Regex: '^<(xrpld)/'
Priority: 1
- Regex: '^<(xrpl)/'
Priority: 2
- Regex: '^<(boost)/'
Priority: 3
@@ -56,6 +58,7 @@ IncludeCategories:
IncludeIsMainRegex: '$'
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
IndentRequiresClause: true
IndentWidth: 4
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: false
@@ -71,6 +74,7 @@ PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
ReflowComments: true
RequiresClausePosition: OwnLine
SortIncludes: true
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true

View File

@@ -1,5 +0,0 @@
codecov:
ci:
- !appveyor
- travis

View File

@@ -2,6 +2,6 @@
*
# Allow files and directories
!/build-inner.sh
!/release-builder.sh
!/build-core.sh
!/build-full.sh
!/release-builder.sh

View File

@@ -2,3 +2,7 @@
# To use it by default in git blame:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
50760c693510894ca368e90369b0cc2dabfd07f3
e2384885f5f630c8f0ffe4bf21a169b433a16858
241b9ddde9e11beb7480600fd5ed90e1ef109b21
760f16f56835663d9286bd29294d074de26a7ba6
0eebe6a5f4246fced516d52b83ec4e7f47373edd

View File

@@ -0,0 +1,31 @@
name: 'Configure ccache'
description: 'Sets up ccache with consistent configuration'
inputs:
max_size:
description: 'Maximum cache size'
required: false
default: '2G'
hash_dir:
description: 'Whether to include directory paths in hash'
required: false
default: 'true'
compiler_check:
description: 'How to check compiler for changes'
required: false
default: 'content'
runs:
using: 'composite'
steps:
- name: Configure ccache
shell: bash
run: |
mkdir -p ~/.ccache
export CONF_PATH="${CCACHE_CONFIGPATH:-${CCACHE_DIR:-$HOME/.ccache}/ccache.conf}"
mkdir -p $(dirname "$CONF_PATH")
echo "max_size = ${{ inputs.max_size }}" > "$CONF_PATH"
echo "hash_dir = ${{ inputs.hash_dir }}" >> "$CONF_PATH"
echo "compiler_check = ${{ inputs.compiler_check }}" >> "$CONF_PATH"
ccache -p # Print config for verification
ccache -z # Zero statistics before the build

View File

@@ -0,0 +1,110 @@
name: build
description: 'Builds the project with ccache integration'
inputs:
generator:
description: 'CMake generator to use'
required: true
configuration:
description: 'Build configuration (Debug, Release, etc.)'
required: true
build_dir:
description: 'Directory to build in'
required: false
default: '.build'
cc:
description: 'C compiler to use'
required: false
default: ''
cxx:
description: 'C++ compiler to use'
required: false
default: ''
compiler-id:
description: 'Unique identifier for compiler/version combination used for cache keys'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
ccache_enabled:
description: 'Whether to use ccache'
required: false
default: 'true'
main_branch:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
runs:
using: 'composite'
steps:
- name: Generate safe branch name
if: inputs.ccache_enabled == 'true'
id: safe-branch
shell: bash
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | tr -c 'a-zA-Z0-9_.-' '-')
echo "name=${SAFE_BRANCH}" >> $GITHUB_OUTPUT
- name: Restore ccache directory
if: inputs.ccache_enabled == 'true'
id: ccache-restore
uses: actions/cache/restore@v4
with:
path: ~/.ccache
key: ${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ steps.safe-branch.outputs.name }}
restore-keys: |
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ inputs.main_branch }}
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-
- name: Configure project
shell: bash
run: |
mkdir -p ${{ inputs.build_dir }}
cd ${{ inputs.build_dir }}
# Set compiler environment variables if provided
if [ -n "${{ inputs.cc }}" ]; then
export CC="${{ inputs.cc }}"
fi
if [ -n "${{ inputs.cxx }}" ]; then
export CXX="${{ inputs.cxx }}"
fi
# Configure ccache launcher args
CCACHE_ARGS=""
if [ "${{ inputs.ccache_enabled }}" = "true" ]; then
CCACHE_ARGS="-DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
# Run CMake configure
cmake .. \
-G "${{ inputs.generator }}" \
$CCACHE_ARGS \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${{ inputs.configuration }} \
-Dtests=TRUE \
-Dxrpld=TRUE
- name: Build project
shell: bash
run: |
cd ${{ inputs.build_dir }}
cmake --build . --config ${{ inputs.configuration }} --parallel $(nproc)
- name: Show ccache statistics
if: inputs.ccache_enabled == 'true'
shell: bash
run: ccache -s
- name: Save ccache directory
if: inputs.ccache_enabled == 'true'
uses: actions/cache/save@v4
with:
path: ~/.ccache
key: ${{ steps.ccache-restore.outputs.cache-primary-key }}

View File

@@ -0,0 +1,86 @@
name: dependencies
description: 'Installs build dependencies with caching'
inputs:
configuration:
description: 'Build configuration (Debug, Release, etc.)'
required: true
build_dir:
description: 'Directory to build dependencies in'
required: false
default: '.build'
compiler-id:
description: 'Unique identifier for compiler/version combination used for cache keys'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
cache_enabled:
description: 'Whether to use caching'
required: false
default: 'true'
main_branch:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
outputs:
cache-hit:
description: 'Whether there was a cache hit'
value: ${{ steps.cache-restore-conan.outputs.cache-hit }}
runs:
using: 'composite'
steps:
- name: Generate safe branch name
if: inputs.cache_enabled == 'true'
id: safe-branch
shell: bash
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | tr -c 'a-zA-Z0-9_.-' '-')
echo "name=${SAFE_BRANCH}" >> $GITHUB_OUTPUT
- name: Restore Conan cache
if: inputs.cache_enabled == 'true'
id: cache-restore-conan
uses: actions/cache/restore@v4
with:
path: |
~/.conan
~/.conan2
key: ${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-${{ inputs.configuration }}
restore-keys: |
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
${{ runner.os }}-conan-v${{ inputs.cache_version }}-
- name: Export custom recipes
shell: bash
run: |
conan export external/snappy snappy/1.1.10@
conan export external/soci soci/4.0.3@
- name: Install dependencies
shell: bash
run: |
# Create build directory
mkdir -p ${{ inputs.build_dir }}
cd ${{ inputs.build_dir }}
# Install dependencies using conan
conan install \
--output-folder . \
--build missing \
--settings build_type=${{ inputs.configuration }} \
..
- name: Save Conan cache
if: inputs.cache_enabled == 'true' && steps.cache-restore-conan.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.conan
~/.conan2
key: ${{ steps.cache-restore-conan.outputs.cache-primary-key }}

View File

@@ -33,14 +33,38 @@ Please check [x] relevant options, delete irrelevant ones.
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (non-breaking change that only restructures code)
- [ ] Tests (You added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation Updates
- [ ] Performance (increase or change in throughput and/or latency)
- [ ] Tests (you added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation update
- [ ] Chore (no impact to binary, e.g. `.gitignore`, formatting, dropping support for older tooling)
- [ ] Release
### API Impact
<!--
Please check [x] relevant options, delete irrelevant ones.
* If there is any impact to the public API methods (HTTP / WebSocket), please update https://github.com/xrplf/rippled/blob/develop/API-CHANGELOG.md
* Update API-CHANGELOG.md and add the change directly in this PR by pushing to your PR branch.
* libxrpl: See https://github.com/XRPLF/rippled/blob/develop/docs/build/depend.md
* Peer Protocol: See https://xrpl.org/peer-protocol.html
-->
- [ ] Public API: New feature (new methods and/or new fields)
- [ ] Public API: Breaking change (in general, breaking changes should only impact the next api_version)
- [ ] `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)
- [ ] Peer protocol change (must be backward compatible or bump the peer protocol version)
<!--
## Before / After
If relevant, use this section for an English description of the change at a technical level.
If this change affects an API, examples should be included here.
For performance-impacting changes, please provide these details:
1. Is this a new feature, bug fix, or improvement to existing functionality?
2. What behavior/functionality does the change impact?
3. In what processing can the impact be measured? Be as specific as possible - e.g. RPC client call, payment transaction that involves LOB, AMM, caching, DB operations, etc.
4. Does this change affect concurrent processing - e.g. does it involve acquiring locks, multi-threaded processing, or async processing?
-->
<!--
@@ -52,4 +76,4 @@ This section may not be needed if your change includes thoroughly commented unit
<!--
## Future Tasks
For future tasks related to PR.
-->
-->

View File

@@ -2,34 +2,94 @@ name: Build using Docker
on:
push:
branches: [ "dev", "test", "release" ]
branches: ["dev", "candidate", "release", "jshooks"]
pull_request:
branches: [ "dev", "release" ]
branches: ["dev", "candidate", "release", "jshooks"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
DEBUG_BUILD_CONTAINERS_AFTER_CLEANUP: 1
jobs:
builder:
checkout:
runs-on: [self-hosted, vanity]
outputs:
checkout_path: ${{ steps.vars.outputs.checkout_path }}
steps:
- uses: actions/checkout@v3
with:
clean: false
- name: Build using Docker
run: /bin/bash release-builder.sh
unittests:
needs: builder
runs-on: [self-hosted, vanity]
steps:
- name: Unit tests
run: /bin/bash docker-unit-tests.sh
- name: Prepare checkout path
id: vars
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | sed -e 's/[^a-zA-Z0-9._-]/-/g')
CHECKOUT_PATH="${SAFE_BRANCH}-${{ github.sha }}"
echo "checkout_path=${CHECKOUT_PATH}" >> "$GITHUB_OUTPUT"
# publisher:
# needs: builder
# runs-on: [self-hosted, vanity]
# steps:
# - uses: actions/upload-artifact@v3
# with:
# name: build-${{ github.run_number }}
# path: |
# release-build/xahaud
# release-build/release.info
- uses: actions/checkout@v4
with:
path: ${{ steps.vars.outputs.checkout_path }}
clean: true
fetch-depth: 2 # Only get the last 2 commits, to avoid fetching all history
build:
runs-on: [self-hosted, vanity]
needs: [checkout]
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Set Cleanup Script Path
run: |
echo "JOB_CLEANUP_SCRIPT=$(mktemp)" >> $GITHUB_ENV
- name: Build using Docker
run: /bin/bash release-builder.sh
- name: Stop Container (Cleanup)
if: always()
run: |
echo "Running cleanup script: $JOB_CLEANUP_SCRIPT"
/bin/bash -e -x "$JOB_CLEANUP_SCRIPT"
CLEANUP_EXIT_CODE=$?
if [[ "$CLEANUP_EXIT_CODE" -eq 0 ]]; then
echo "Cleanup script succeeded."
rm -f "$JOB_CLEANUP_SCRIPT"
echo "Cleanup script removed."
else
echo "⚠️ Cleanup script failed! Keeping for debugging: $JOB_CLEANUP_SCRIPT"
fi
if [[ "${DEBUG_BUILD_CONTAINERS_AFTER_CLEANUP}" == "1" ]]; then
echo "🔍 Checking for leftover containers..."
BUILD_CONTAINERS=$(docker ps --format '{{.Names}}' | grep '^xahaud_cached_builder' || echo "")
if [[ -n "$BUILD_CONTAINERS" ]]; then
echo "⚠️ WARNING: Some build containers are still running"
echo "$BUILD_CONTAINERS"
else
echo "✅ No build containers found"
fi
fi
tests:
runs-on: [self-hosted, vanity]
needs: [build, checkout]
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Unit tests
run: /bin/bash docker-unit-tests.sh
cleanup:
runs-on: [self-hosted, vanity]
needs: [tests, checkout]
if: always()
steps:
- name: Cleanup workspace
run: |
CHECKOUT_PATH="${{ needs.checkout.outputs.checkout_path }}"
echo "Cleaning workspace for ${CHECKOUT_PATH}"
rm -rf "${{ github.workspace }}/${CHECKOUT_PATH}"

View File

@@ -4,11 +4,11 @@ on: [push, pull_request]
jobs:
check:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
env:
CLANG_VERSION: 10
CLANG_VERSION: 18
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Install clang-format
run: |
codename=$( lsb_release --codename --short )
@@ -19,10 +19,8 @@ jobs:
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
sudo apt-get update
sudo apt-get install clang-format-${CLANG_VERSION}
- name: Format src/ripple
run: find src/ripple -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
- name: Format src/test
run: find src/test -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
- name: Format first-party sources
run: find include src -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -not -path "src/magic/magic_enum.h" -exec clang-format-${CLANG_VERSION} -i {} +
- name: Check for differences
id: assert
run: |
@@ -30,7 +28,7 @@ jobs:
git diff --exit-code | tee "clang-format.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: clang-format.patch
@@ -52,7 +50,7 @@ jobs:
To fix it, you can do one of two things:
1. Download and apply the patch generated as an artifact of this
job to your repo, commit, and push.
2. Run 'git-clang-format --extensions c,cpp,h,cxx,ipp develop'
2. Run 'git-clang-format --extensions cpp,h,hpp,ipp develop'
in your repo, commit, and push.
run: |
echo "${PREAMBLE}"

View File

@@ -1,25 +0,0 @@
name: Build and publish Doxygen documentation
on:
push:
branches:
- develop
jobs:
job:
runs-on: ubuntu-latest
container:
image: docker://rippleci/rippled-ci-builder:2944b78d22db
steps:
- name: checkout
uses: actions/checkout@v2
- name: build
run: |
mkdir build
cd build
cmake -DBoost_NO_BOOST_CMAKE=ON ..
cmake --build . --target docs --parallel $(nproc)
- name: publish
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: build/docs/html

.github/workflows/instrumentation.yml (new file, 104 lines)
View File

@@ -0,0 +1,104 @@
name: instrumentation
on:
pull_request:
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows, instrumentation)
branches:
# Always build the package branches
- develop
- release
- master
# Branches that opt-in to running
- 'ci/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
# NOTE we are not using dependencies built inside nix because nix is lagging
# with compiler versions. Instrumentation requires clang version 16 or later
instrumentation-build:
if: false # disable for now
env:
CLANG_RELEASE: 16
strategy:
fail-fast: false
runs-on: [self-hosted, heavy]
container: debian:bookworm
steps:
- name: install prerequisites
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update
apt-get install --yes --no-install-recommends \
clang-${CLANG_RELEASE} clang++-${CLANG_RELEASE} \
python3-pip python-is-python3 make cmake git wget
apt-get clean
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-${CLANG_RELEASE} 100 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-${CLANG_RELEASE}
update-alternatives --auto clang
pip install --no-cache --break-system-packages "conan<2"
- name: checkout
uses: actions/checkout@v4
- name: prepare environment
run: |
mkdir ${GITHUB_WORKSPACE}/.build
echo "SOURCE_DIR=$GITHUB_WORKSPACE" >> $GITHUB_ENV
echo "BUILD_DIR=$GITHUB_WORKSPACE/.build" >> $GITHUB_ENV
echo "CC=/usr/bin/clang" >> $GITHUB_ENV
echo "CXX=/usr/bin/clang++" >> $GITHUB_ENV
- name: configure Conan
run: |
conan profile new --detect default
conan profile update settings.compiler=clang default
conan profile update settings.compiler.version=${CLANG_RELEASE} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update settings.compiler.cppstd=20 default
conan profile update options.rocksdb=False default
conan profile update \
'conf.tools.build:compiler_executables={"c": "/usr/bin/clang", "cpp": "/usr/bin/clang++"}' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
conan export external/snappy snappy/1.1.10@
conan export external/soci soci/4.0.3@
- name: build dependencies
run: |
cd ${BUILD_DIR}
conan install ${SOURCE_DIR} \
--output-folder ${BUILD_DIR} \
--install-folder ${BUILD_DIR} \
--build missing \
--settings build_type=Debug
- name: build with instrumentation
run: |
cd ${BUILD_DIR}
cmake -S ${SOURCE_DIR} -B ${BUILD_DIR} \
-Dvoidstar=ON \
-Dtests=ON \
-Dxrpld=ON \
-DCMAKE_BUILD_TYPE=Debug \
-DSECP256K1_BUILD_BENCHMARK=OFF \
-DSECP256K1_BUILD_TESTS=OFF \
-DSECP256K1_BUILD_EXHAUSTIVE_TESTS=OFF \
-DCMAKE_TOOLCHAIN_FILE=${BUILD_DIR}/build/generators/conan_toolchain.cmake
cmake --build . --parallel $(nproc)
- name: verify instrumentation enabled
run: |
cd ${BUILD_DIR}
./rippled --version | grep libvoidstar
- name: run unit tests
run: |
cd ${BUILD_DIR}
./rippled -u --unittest-jobs $(( $(nproc)/4 ))

View File

@@ -8,7 +8,7 @@ jobs:
env:
CLANG_VERSION: 10
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Check levelization
run: Builds/levelization/levelization.sh
- name: Check for differences
@@ -18,7 +18,7 @@ jobs:
git diff --exit-code | tee "levelization.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: levelization.patch

.github/workflows/xahau-ga-macos.yml (new file, 116 lines)
View File

@@ -0,0 +1,116 @@
name: MacOS - GA Runner
on:
push:
branches: ["dev", "candidate", "release"]
pull_request:
branches: ["dev", "candidate", "release"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
strategy:
matrix:
generator:
- Ninja
configuration:
- Debug
runs-on: macos-15
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 1
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Conan
run: |
brew install conan@1
# Add Conan 1 to the PATH for this job
echo "$(brew --prefix conan@1)/bin" >> $GITHUB_PATH
- name: Install Coreutils
run: |
brew install coreutils
echo "Num proc: $(nproc)"
- name: Install Ninja
if: matrix.generator == 'Ninja'
run: brew install ninja
- name: Install Python
run: |
if which python3 > /dev/null 2>&1; then
echo "Python 3 executable exists"
python3 --version
else
brew install python@3.12
fi
# Create 'python' symlink if it doesn't exist (for tools expecting 'python')
if ! which python > /dev/null 2>&1; then
sudo ln -sf $(which python3) /usr/local/bin/python
fi
- name: Install CMake
run: |
if which cmake > /dev/null 2>&1; then
echo "cmake executable exists"
cmake --version
else
brew install cmake
fi
- name: Install ccache
run: brew install ccache
- name: Configure ccache
uses: ./.github/actions/xahau-configure-ccache
with:
max_size: 2G
hash_dir: true
compiler_check: content
- name: Check environment
run: |
echo "PATH:"
echo "${PATH}" | tr ':' '\n'
which python && python --version || echo "Python not found"
which conan && conan --version || echo "Conan not found"
which cmake && cmake --version || echo "CMake not found"
clang --version
ccache --version
echo "---- Full Environment ----"
env
- name: Configure Conan
run: |
conan profile new default --detect || true # Ignore error if profile exists
conan profile update settings.compiler.cppstd=20 default
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
with:
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Build
uses: ./.github/actions/xahau-ga-build
with:
generator: ${{ matrix.generator }}
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Test
run: |
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)

.github/workflows/xahau-ga-nix.yml (new file, 123 lines)
View File

@@ -0,0 +1,123 @@
name: Nix - GA Runner
on:
push:
branches: ["dev", "candidate", "release"]
pull_request:
branches: ["dev", "candidate", "release"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build-job:
runs-on: ubuntu-latest
outputs:
artifact_name: ${{ steps.set-artifact-name.outputs.artifact_name }}
strategy:
fail-fast: false
matrix:
compiler: [gcc]
configuration: [Debug]
include:
- compiler: gcc
cc: gcc-11
cxx: g++-11
compiler_id: gcc-11
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 1
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install build dependencies
run: |
sudo apt-get update
sudo apt-get install -y ninja-build ${{ matrix.cc }} ${{ matrix.cxx }} ccache
# Install specific Conan version needed
pip install --upgrade "conan<2"
- name: Configure ccache
uses: ./.github/actions/xahau-configure-ccache
with:
max_size: 2G
hash_dir: true
compiler_check: content
- name: Configure Conan
run: |
conan profile new default --detect || true # Ignore error if profile exists
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler=${{ matrix.compiler }} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update env.CC=/usr/bin/${{ matrix.cc }} default
conan profile update env.CXX=/usr/bin/${{ matrix.cxx }} default
conan profile update conf.tools.build:compiler_executables='{"c": "/usr/bin/${{ matrix.cc }}", "cpp": "/usr/bin/${{ matrix.cxx }}"}' default
# Set correct compiler version based on matrix.compiler
if [ "${{ matrix.compiler }}" = "gcc" ]; then
conan profile update settings.compiler.version=11 default
elif [ "${{ matrix.compiler }}" = "clang" ]; then
conan profile update settings.compiler.version=14 default
fi
# Display profile for verification
conan profile show default
- name: Check environment
run: |
echo "PATH:"
echo "${PATH}" | tr ':' '\n'
which conan && conan --version || echo "Conan not found"
which cmake && cmake --version || echo "CMake not found"
which ${{ matrix.cc }} && ${{ matrix.cc }} --version || echo "${{ matrix.cc }} not found"
which ${{ matrix.cxx }} && ${{ matrix.cxx }} --version || echo "${{ matrix.cxx }} not found"
which ccache && ccache --version || echo "ccache not found"
echo "---- Full Environment ----"
env
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
with:
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Build
uses: ./.github/actions/xahau-ga-build
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
cc: ${{ matrix.cc }}
cxx: ${{ matrix.cxx }}
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
- name: Set artifact name
id: set-artifact-name
run: |
ARTIFACT_NAME="build-output-nix-${{ github.run_id }}-${{ matrix.compiler }}-${{ matrix.configuration }}"
echo "artifact_name=${ARTIFACT_NAME}" >> "$GITHUB_OUTPUT"
echo "Using artifact name: ${ARTIFACT_NAME}"
- name: Debug build directory
run: |
echo "Checking build directory contents: ${{ env.build_dir }}"
ls -la ${{ env.build_dir }} || echo "Build directory not found or empty"
- name: Run tests
run: |
# Ensure the binary exists before trying to run
if [ -f "${{ env.build_dir }}/rippled" ]; then
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)
else
echo "Error: rippled executable not found in ${{ env.build_dir }}"
exit 1
fi

.gitignore (6 changed lines)
View File

@@ -114,3 +114,9 @@ pkg_out
pkg
CMakeUserPresets.json
bld.rippled/
generated
.vscode
# Suggested in-tree build directory
/.build/

View File

@@ -1,169 +0,0 @@
# I don't know what the minimum size is, but we cannot build on t3.micro.
# TODO: Factor common builds between different tests.
# The parameters for our job matrix:
#
# 1. Generator (Make, Ninja, MSBuild)
# 2. Compiler (GCC, Clang, MSVC)
# 3. Build type (Debug, Release)
# 4. Definitions (-Dunity=OFF, -Dassert=ON, ...)
.job_linux_build_test:
only:
variables:
- $CI_PROJECT_URL =~ /^https?:\/\/gitlab.com\//
stage: build
tags:
- linux
- c5.2xlarge
image: thejohnfreeman/rippled-build-ubuntu:4b73694e07f0
script:
- bin/ci/build.sh
- bin/ci/test.sh
cache:
# Use a different key for each unique combination of (generator, compiler,
# build type). Caches are stored as `.zip` files; they are not merged.
# Generate a new key whenever you want to bust the cache, e.g. when the
# dependency versions have been bumped.
# By default, jobs pull the cache. Only a few specially chosen jobs update
# the cache (with policy `pull-push`); one for each unique combination of
# (generator, compiler, build type).
policy: pull
paths:
- .nih_c/
'build+test Make GCC Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Unix Makefiles
COMPILER: gcc
BUILD_TYPE: Debug
cache:
key: 62ada41c-fc9e-4949-9533-736d4d6512b6
policy: pull-push
'build+test Ninja GCC Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
policy: pull-push
'build+test Ninja GCC Debug -Dstatic=OFF':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dstatic=OFF'
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
'build+test Ninja GCC Debug -Dstatic=OFF -DBUILD_SHARED_LIBS=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dstatic=OFF -DBUILD_SHARED_LIBS=ON'
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
'build+test Ninja GCC Debug -Dunity=OFF':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF'
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
'build+test Ninja GCC Release -Dassert=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Release
CMAKE_ARGS: '-Dassert=ON'
cache:
key: c45ec125-9625-4c19-acf7-4e889d5f90bd
policy: pull-push
'build+test(manual) Ninja GCC Release -Dassert=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Release
CMAKE_ARGS: '-Dassert=ON'
MANUAL_TEST: 'true'
cache:
key: c45ec125-9625-4c19-acf7-4e889d5f90bd
'build+test Make clang Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Unix Makefiles
COMPILER: clang
BUILD_TYPE: Debug
cache:
key: bf578dc2-5277-4580-8de5-6b9523118b19
policy: pull-push
'build+test Ninja clang Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
policy: pull-push
'build+test Ninja clang Debug -Dunity=OFF':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF'
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
'build+test Ninja clang Debug -Dunity=OFF -Dsan=address':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF -Dsan=address'
CONCURRENT_TESTS: 1
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
'build+test Ninja clang Debug -Dunity=OFF -Dsan=undefined':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF -Dsan=undefined'
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
'build+test Ninja clang Release -Dassert=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Release
CMAKE_ARGS: '-Dassert=ON'
cache:
key: 7751be37-2358-4f08-b1d0-7e72e0ad266d
policy: pull-push

6
.pre-commit-config.yaml Normal file

@@ -0,0 +1,6 @@
# .pre-commit-config.yaml
repos:
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v18.1.3
hooks:
- id: clang-format
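With this configuration in place, the hook is typically activated with the standard `pre-commit` CLI (this assumes Python and `pip` are available; the commands below are the generic pre-commit workflow, not something specific to this repository):
```
pip install pre-commit
pre-commit install          # install the git hook so clang-format runs on each commit
pre-commit run --all-files  # optionally check/format the whole tree once
```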

View File

@@ -1,460 +0,0 @@
# There is a known issue where Travis will have trouble fetching the cache,
# particularly on non-linux builds. Try restarting the individual build
# (probably will not be necessary in the "windep" stages) if the end of the
# log looks like:
#
#---------------------------------------
# attempting to download cache archive
# fetching travisorder/cache--windows-1809-containers-f2bf1c76c7fb4095c897a4999bd7c9b3fb830414dfe91f33d665443b52416d39--compiler-gpp.tgz
# found cache
# adding C:/Users/travis/_cache to cache
# creating directory C:/Users/travis/_cache
# No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
# Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received
# The build has been terminated
#---------------------------------------
language: cpp
dist: bionic
services:
- docker
stages:
- windep-vcpkg
- windep-boost
- build
env:
global:
- DOCKER_IMAGE="rippleci/rippled-ci-builder:2020-01-08"
- CMAKE_EXTRA_ARGS="-Dwerr=ON -Dwextra=ON"
- NINJA_BUILD=true
# change this if we get more VM capacity
- MAX_TIME_MIN=80
- CACHE_DIR=${TRAVIS_HOME}/_cache
- NIH_CACHE_ROOT=${CACHE_DIR}/nih_c
- PARALLEL_TESTS=true
# this is NOT used by linux container based builds (which already have boost installed)
- BOOST_URL='https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/boost_1_75_0.tar.gz'
# Alternate download location
- BOOST_URL2='https://downloads.sourceforge.net/project/boost/boost/1.75.0/boost_1_75_0.tar.bz2?r=&ts=1594393912&use_mirror=newcontinuum'
# Travis downloader doesn't seem to have updated certs. Using this option
# introduces obvious security risks, but they're Travis's risks.
# Note that this option is only used if the "normal" build fails.
- BOOST_WGET_OPTIONS='--no-check-certificate'
- VCPKG_DIR=${CACHE_DIR}/vcpkg
- USE_CCACHE=true
- CCACHE_BASEDIR=${TRAVIS_HOME}"
- CCACHE_NOHASHDIR=true
- CCACHE_DIR=${CACHE_DIR}/ccache
before_install:
- export NUM_PROCESSORS=$(nproc)
- echo "NUM PROC is ${NUM_PROCESSORS}"
- if [ "$(uname)" = "Linux" ] ; then docker pull ${DOCKER_IMAGE}; fi
- if [ "${MATRIX_EVAL}" != "" ] ; then eval "${MATRIX_EVAL}"; fi
- if [ "${CMAKE_ADD}" != "" ] ; then export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} ${CMAKE_ADD}"; fi
- bin/ci/ubuntu/travis-cache-start.sh
matrix:
fast_finish: true
allow_failures:
# TODO these need more investigation
#
# there are a number of UBs caught currently that need triage
- name: ubsan, clang-8
# this one often runs out of memory:
- name: manual tests, gcc-8, release
# The Windows build may fail if any of the dependencies fail, but
# allow the rest of the builds to continue. They may succeed if the
# dependency is already cached. These do not need to be retried if
# _any_ of the Windows builds succeed.
- stage: windep-vcpkg
- stage: windep-boost
# https://docs.travis-ci.com/user/build-config-yaml#usage-of-yaml-anchors-and-aliases
include:
# debug builds
- &linux
stage: build
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
compiler: gcc-8
name: gcc-8, debug
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
script:
- sudo chmod -R a+rw ${CACHE_DIR}
- ccache -s
- travis_wait ${MAX_TIME_MIN} bin/ci/ubuntu/build-in-docker.sh
- ccache -s
- <<: *linux
compiler: clang-8
name: clang-8, debug
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- <<: *linux
compiler: clang-8
name: reporting, clang-8, debug
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dreporting=ON"
# coverage builds
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_cov/
compiler: gcc-8
name: coverage, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dcoverage=ON"
- TARGET=coverage_report
- SKIP_TESTS=true
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_cov/
compiler: clang-8
name: coverage, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dcoverage=ON"
- TARGET=coverage_report
- SKIP_TESTS=true
# test-free builds
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
compiler: gcc-8
name: no-tests-unity, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dtests=OFF"
- SKIP_TESTS=true
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
compiler: clang-8
name: no-tests-non-unity, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dtests=OFF -Dunity=OFF"
- SKIP_TESTS=true
# nounity
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_nounity/
compiler: gcc-8
name: non-unity, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dunity=OFF"
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_nounity/
compiler: clang-8
name: non-unity, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dunity=OFF"
# manual tests
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_man/
compiler: gcc-8
name: manual tests, gcc-8, debug
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- MANUAL_TESTS=true
# manual tests
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_man/
compiler: gcc-8
name: manual tests, gcc-8, release
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON -Dunity=OFF"
- MANUAL_TESTS=true
# release builds
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_release/
compiler: gcc-8
name: gcc-8, release
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON -Dunity=OFF"
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_release/
compiler: clang-8
name: clang-8, release
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON"
# asan
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
compiler: clang-8
name: asan, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dsan=address"
- ASAN_OPTIONS="print_stats=true:atexit=true"
#- LSAN_OPTIONS="verbosity=1:log_threads=1"
- PARALLEL_TESTS=false
# ubsan
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
compiler: clang-8
name: ubsan, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dsan=undefined"
# once we can run clean under ubsan, add halt_on_error=1 to options below
- UBSAN_OPTIONS="print_stacktrace=1:report_error_type=1"
- PARALLEL_TESTS=false
# tsan
# current tsan failure *might* be related to:
# https://github.com/google/sanitizers/issues/1104
# but we can't get it to run, so leave it disabled for now
# - <<: *linux
# if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
# compiler: clang-8
# name: tsan, clang-8
# env:
# - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
# - BUILD_TYPE=Release
# - CMAKE_ADD="-Dsan=thread"
# - TSAN_OPTIONS="history_size=3 external_symbolizer_path=/usr/bin/llvm-symbolizer verbosity=1"
# - PARALLEL_TESTS=false
# dynamic lib builds
- <<: *linux
compiler: gcc-8
name: non-static, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dstatic=OFF"
- <<: *linux
compiler: gcc-8
name: non-static + BUILD_SHARED_LIBS, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dstatic=OFF -DBUILD_SHARED_LIBS=ON"
# makefile
- <<: *linux
compiler: gcc-8
name: makefile generator, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- NINJA_BUILD=false
# misc alternative compilers
- <<: *linux
compiler: gcc-9
name: gcc-9
env:
- MATRIX_EVAL="CC=gcc-9 && CXX=g++-9"
- BUILD_TYPE=Debug
- <<: *linux
compiler: clang-9
name: clang-9, debug
env:
- MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
- BUILD_TYPE=Debug
- <<: *linux
compiler: clang-9
name: clang-9, release
env:
- MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
- BUILD_TYPE=Release
# verify build with min version of cmake
- <<: *linux
compiler: gcc-8
name: min cmake version
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_EXE=/opt/local/cmake/bin/cmake
- SKIP_TESTS=true
# validator keys project as subproj of rippled
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_vkeys/
compiler: gcc-8
name: validator-keys
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dvalidator_keys=ON"
- TARGET=validator-keys
# macos
- &macos
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_mac/
stage: build
os: osx
osx_image: xcode13.1
name: xcode13.1, debug
env:
# put NIH in non-cache location since it seems to
# cause failures when homebrew updates
- NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
- BLD_CONFIG=Debug
- TEST_EXTRA_ARGS=""
- BOOST_ROOT=${CACHE_DIR}/boost_1_75_0
- >-
CMAKE_ADD="
-DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
-DBoost_ARCHITECTURE=-x64
-DBoost_NO_SYSTEM_PATHS=ON
-DCMAKE_VERBOSE_MAKEFILE=ON"
addons:
homebrew:
packages:
- protobuf
- grpc
- pkg-config
- bash
- ninja
- cmake
- wget
- zstd
- libarchive
- openssl@1.1
update: true
install:
- export OPENSSL_ROOT=$(brew --prefix openssl@1.1)
- travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
- brew uninstall --ignore-dependencies boost
script:
- mkdir -p build.macos && cd build.macos
- cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
- ./rippled --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS} ${TEST_EXTRA_ARGS}
- <<: *macos
name: xcode13.1, release
before_script:
- export BLD_CONFIG=Release
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -Dassert=ON"
- <<: *macos
name: ipv6 (macos)
before_script:
- export TEST_EXTRA_ARGS="--unittest-ipv6"
- <<: *macos
osx_image: xcode13.1
name: xcode13.1, debug
# windows
- &windows
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_win/
os: windows
env:
# put NIH in a non-cached location until
# we come up with a way to stabilize that
# cache on windows (minimize incremental changes)
- CACHE_NAME=win_01
- NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
- VCPKG_DEFAULT_TRIPLET="x64-windows-static"
- MATRIX_EVAL="CC=cl.exe && CXX=cl.exe"
- BOOST_ROOT=${CACHE_DIR}/boost_1_75
- >-
CMAKE_ADD="
-DCMAKE_PREFIX_PATH=${BOOST_ROOT}/_INSTALLED_
-DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
-DBoost_ROOT=${BOOST_ROOT}/_INSTALLED_
-DBoost_DIR=${BOOST_ROOT}/_INSTALLED_/lib/cmake/Boost-1.75.0
-DBoost_COMPILER=vc141
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_TOOLCHAIN_FILE=${VCPKG_DIR}/scripts/buildsystems/vcpkg.cmake
-DVCPKG_TARGET_TRIPLET=x64-windows-static"
stage: windep-vcpkg
name: prereq-vcpkg
install:
- choco upgrade cmake.install
- choco install ninja visualstudio2017-workload-vctools -y
script:
- df -h
- env
- travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh openssl
- travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh grpc
- travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh libarchive[lz4]
# TBD consider rocksdb via vcpkg if/when we can build with the
# vcpkg version
# - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh rocksdb[snappy,lz4,zlib]
- <<: *windows
stage: windep-boost
name: prereq-keep-boost
install:
- choco upgrade cmake.install
- choco install ninja visualstudio2017-workload-vctools -y
- choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
script:
- export BOOST_TOOLSET=msvc-14.1
- travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
- &windows-bld
<<: *windows
stage: build
name: windows, debug
before_script:
- export BLD_CONFIG=Debug
script:
- df -h
- . ./bin/sh/setup-msvc.sh
- mkdir -p build.ms && cd build.ms
- cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
# override num procs to force fewer unit test jobs
- export NUM_PROCESSORS=2
- travis_wait ${MAX_TIME_MIN} ./rippled.exe --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
- <<: *windows-bld
name: windows, release
before_script:
- export BLD_CONFIG=Release
- <<: *windows-bld
name: windows, visual studio, debug
script:
- mkdir -p build.ms && cd build.ms
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -DCMAKE_GENERATOR_TOOLSET=host=x64"
- cmake -G "Visual Studio 15 2017 Win64" ${CMAKE_EXTRA_ARGS} ..
- export DESTDIR=${PWD}/_installed_
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose --config ${BLD_CONFIG} --target install
# override num procs to force fewer unit test jobs
- export NUM_PROCESSORS=2
- >-
travis_wait ${MAX_TIME_MIN} "./_installed_/Program Files/rippled/bin/rippled.exe" --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
- <<: *windows-bld
name: windows, vc2019
install:
- choco upgrade cmake.install
- choco install ninja -y
- choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
before_script:
- export BLD_CONFIG=Release
# we want to use the boost build from cache, which was built using the
# vs2017 compiler so we need to specify the Boost_COMPILER. BUT, we
# can't use the cmake config files generated by boost b/c they are
# broken for Boost_COMPILER override, so we need to specify both
# Boost_NO_BOOST_CMAKE and a slightly different Boost_COMPILER string
# to make the legacy find module work for us. If the cmake configs are
# fixed in the future, it should be possible to remove these
# workarounds.
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -DBoost_NO_BOOST_CMAKE=ON -DBoost_COMPILER=-vc141"
before_cache:
- if [ $(uname) = "Linux" ] ; then SUDO="sudo"; else SUDO=""; fi
- cd ${TRAVIS_HOME}
- if [ -f cache_ignore.tar ] ; then $SUDO tar xvf cache_ignore.tar; fi
- cd ${TRAVIS_BUILD_DIR}
cache:
timeout: 900
directories:
- $CACHE_DIR
notifications:
email: false

View File

@@ -1,13 +1,13 @@
{
"C_Cpp.formatting": "clangFormat",
"C_Cpp.clang_format_path": "/Users/dustedfloor/projects/transia-rnd/rippled-icv2/.clang-format",
"C_Cpp.clang_format_path": ".clang-format",
"C_Cpp.clang_format_fallbackStyle": "{ ColumnLimit: 0 }",
"[cpp]":{
"editor.wordBasedSuggestions": false,
"editor.wordBasedSuggestions": "off",
"editor.suggest.insertMode": "replace",
"editor.semanticHighlighting.enabled": true,
"editor.tabSize": 4,
"editor.defaultFormatter": "xaver.clang-format",
"editor.formatOnSave": false
}
}
}

217
API-CHANGELOG.md Normal file

@@ -0,0 +1,217 @@
# API Changelog
This changelog is intended to list all updates to the [public API methods](https://xrpl.org/public-api-methods.html).
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
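For example, a JSON-RPC request that opts into API version 2 might look like the sketch below; the port (5005) and the account address are placeholders for illustration only. (For WebSocket clients, `api_version` is set in the request object itself.)
```
# Ask a local rippled (JSON-RPC port assumed) for account_info using API version 2.
curl -s -X POST http://localhost:5005/ -H 'Content-Type: application/json' -d '{
  "method": "account_info",
  "params": [{
    "api_version": 2,
    "account": "rEXAMPLEaddressXXXXXXXXXXXXXXXXXXX"
  }]
}'
```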
#### Removed methods
In API version 2, the following deprecated methods are no longer available: (https://github.com/XRPLF/rippled/pull/4759)
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
#### Modifications to JSON transaction element in V2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. (https://github.com/XRPLF/rippled/pull/4775)
This helps to unify the JSON serialization format of transactions. (https://github.com/XRPLF/clio/issues/722, https://github.com/XRPLF/rippled/issues/4727)
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
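To see the renamed elements in practice, a `tx` request can be made under API version 2 (the transaction hash below is a placeholder). Per the list above, the response carries the JSON transaction in `tx_json` (or `tx_blob` for binary output), metadata in `meta` (or `meta_blob`), with `hash`, `ledger_index`, `ledger_hash`, `close_time_iso`, and `validated` as sibling elements.
```
# Fetch one transaction with API version 2; the hash is a placeholder.
curl -s -X POST http://localhost:5005/ -H 'Content-Type: application/json' -d '{
  "method": "tx",
  "params": [{
    "api_version": 2,
    "transaction": "0000000000000000000000000000000000000000000000000000000000000000"
  }]
}'
```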
#### Modification to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. (https://github.com/XRPLF/rippled/pull/4733)
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: (https://github.com/XRPLF/rippled/pull/4733)
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
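As an illustration of the rename, a `Payment` under API version 2 carries `DeliverMax` where version 1 used `Amount`. The sketch below only shows the transaction body; the addresses and value are placeholders, and the JSON would still be signed and submitted through the usual methods.
```
# Sketch of a Payment body using DeliverMax (API version 2 naming); values are placeholders.
cat > payment.json <<'EOF'
{
  "TransactionType": "Payment",
  "Account": "rSENDERaddressXXXXXXXXXXXXXXXXXXXX",
  "Destination": "rDESTaddressXXXXXXXXXXXXXXXXXXXXXX",
  "DeliverMax": "1000000"
}
EOF
```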
#### Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) (https://github.com/XRPLF/rippled/pull/3770)
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. (https://github.com/XRPLF/rippled/pull/4585)
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
#### Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min` or `ledger_index_max` together with `ledger_index` returns `invalidParams`, because it does not make sense to specify both a ledger range and a single `ledger_index`. In API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4571)
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. (https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579)
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/issues/4288)
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
#### Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
## API Version 1
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc use by humans, not for programs, automated scripts, or production code.
### Inconsistency: server_info - network_id
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.4.0
As of 2025-01-28, version 2.4.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.4.0
- `ledger_entry`: `state` is added as an alias for `ripple_state`.
- `ledger_entry`: Enables case-insensitive filtering by canonical name in addition to case-sensitive filtering by RPC name.
- `validators`: Added new field `validator_list_threshold` in response.
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate)
- Signing methods autofill fees better and properly handle transactions that don't have a base fee, and will also autofill the `NetworkID` field.
## XRP Ledger server version 2.3.0
[Version 2.3.0](https://github.com/XRPLF/rippled/releases/tag/2.3.0) was released on Nov 25, 2024.
### Breaking changes in 2.3.0
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Additions and bugfixes in 2.3.0
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
## XRP Ledger server version 2.2.0
[Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive):
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.0
[Version 2.0.0](https://github.com/XRPLF/rippled/releases/tag/2.0.0) was released on Jan 9, 2024. The following additions are non-breaking (because they are purely additive):
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting).
## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive).
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. (https://github.com/XRPLF/rippled/pull/4427)
- `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port.
- This allows crawlers to build a more detailed topology without needing to port-scan nodes.
- (For peers and other non-admin clients, the info about admin ports is excluded.)
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). (https://github.com/XRPLF/rippled/pull/4553)
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback` (https://github.com/XRPLF/rippled/pull/4617)
- Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well.
- Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks.
- Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields (a sketch follows this list):
- `Account`: The issuer of the asset being clawed back. Must also be the sender of the transaction.
- `Amount`: The amount being clawed back, with the `Amount.issuer` being the token holder's address.
- Adds [AMM](https://github.com/XRPLF/XRPL-Standards/discussions/78) ([#4294](https://github.com/XRPLF/rippled/pull/4294), [#4626](https://github.com/XRPLF/rippled/pull/4626)) feature:
- Adds `amm_info` API to retrieve AMM information for a given tokens pair.
- Adds `AMMCreate` transaction type to create `AMM` instance.
- Adds `AMMDeposit` transaction type to deposit funds into `AMM` instance.
- Adds `AMMWithdraw` transaction type to withdraw funds from `AMM` instance.
- Adds `AMMVote` transaction type to vote for the trading fee of `AMM` instance.
- Adds `AMMBid` transaction type to bid for the Auction Slot of `AMM` instance.
- Adds `AMMDelete` transaction type to delete `AMM` instance.
- Adds `sfAMMID` to `AccountRoot` to indicate that the account is `AMM`'s account. `AMMID` is used to fetch `ltAMM`.
- Adds `lsfAMMNode` `TrustLine` flag to indicate that one side of the `TrustLine` is `AMM` account.
- Adds `tfLPToken`, `tfSingleAsset`, `tfTwoAsset`, `tfOneAssetLPToken`, `tfLimitLPToken`, `tfTwoAssetIfEmpty`,
`tfWithdrawAll`, `tfOneAssetWithdrawAll` which allow a trader to specify different fields combination
for `AMMDeposit` and `AMMWithdraw` transactions.
- Adds new transaction result codes:
- tecUNFUNDED_AMM: insufficient balance to fund AMM. The account does not have funds for liquidity provision.
- tecAMM_BALANCE: AMM has invalid balance. Calculated balances greater than the current pool balances.
- tecAMM_FAILED: AMM transaction failed. Fails due to a processing failure.
- tecAMM_INVALID_TOKENS: AMM invalid LP tokens. Invalid input values, format, or calculated values.
- tecAMM_EMPTY: AMM is in empty state. Transaction requires AMM in non-empty state (LP tokens > 0).
- tecAMM_NOT_EMPTY: AMM is not in empty state. Transaction requires AMM in empty state (LP tokens == 0).
- tecAMM_ACCOUNT: AMM account. Clawback of AMM account.
- tecINCOMPLETE: Some work was completed, but more submissions required to finish. AMMDelete partially deletes the trustlines.
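As a sketch of the Clawback fields listed above (addresses and the value are placeholders): `Account` is the issuer sending the transaction, and `Amount.issuer` is set to the address of the token holder being clawed back from.
```
# Illustrative Clawback body; the issuer claws back 10 USD it issued to the holder.
cat > clawback.json <<'EOF'
{
  "TransactionType": "Clawback",
  "Account": "rISSUERaddressXXXXXXXXXXXXXXXXXXXX",
  "Amount": {
    "currency": "USD",
    "issuer": "rHOLDERaddressXXXXXXXXXXXXXXXXXXXX",
    "value": "10"
  }
}
EOF
```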
## XRP Ledger server version 1.11.0
[Version 1.11.0](https://github.com/XRPLF/rippled/releases/tag/1.11.0) was released on Jun 20, 2023.
### Breaking changes in 1.11
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. (https://github.com/XRPLF/rippled/pull/4291)
- The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`.
- Removed the acceptance of seeds or public keys in place of account addresses. (https://github.com/XRPLF/rippled/pull/4404)
- This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network).
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. (https://github.com/XRPLF/rippled/pull/4398)
- If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406))
- There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence.
- This is potentially a breaking change if clients have logic for determining whether an account can be deleted.
- NetworkID
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. (https://github.com/XRPLF/rippled/pull/4370)
- This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher.
- The field must be omitted for Mainnet, so there is no change for Mainnet users.
- There are three new local error codes:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`: a `NetworkID` is present but the chain's network ID is less than 1025. Remove the field from the transaction, and try again.
- `telREQUIRES_NETWORK_ID`: a `NetworkID` is required, but is not present. Add the field to the transaction, and try again.
- `telWRONG_NETWORK`: a `NetworkID` is specified, but it is for a different network. Submit the transaction to a different server which is connected to the correct network.
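For example, on a sidechain whose network ID is 1025 or higher, the common field is added to the transaction JSON (the ID and account below are placeholders); on Mainnet the field must be omitted.
```
# Sketch of a transaction carrying NetworkID for a chain with ID >= 1025.
cat > tx.json <<'EOF'
{
  "TransactionType": "AccountSet",
  "Account": "rEXAMPLEaddressXXXXXXXXXXXXXXXXXXX",
  "NetworkID": 1025
}
EOF
```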
### Additions and bug fixes in 1.11
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. (https://github.com/XRPLF/rippled/pull/4447)
- Added an `account_flags` object to the `account_info` method response. (https://github.com/XRPLF/rippled/pull/4459)
- Added `NFTokenPages` to the `account_objects` RPC. (https://github.com/XRPLF/rippled/pull/4352)
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. (https://github.com/XRPLF/rippled/pull/4361)
## XRP Ledger server version 1.10.0
[Version 1.10.0](https://github.com/XRPLF/rippled/releases/tag/1.10.0)
was released on Mar 14, 2023.
### Breaking changes in 1.10
- If the `XRPFees` feature is enabled, the `fee_ref` field will be
removed from the [ledger subscription stream](https://xrpl.org/subscribe.html#ledger-stream), because it will no longer
have any meaning.
# Unit tests for API changes
The following information is useful to developers contributing to this project:
The purpose of unit tests is to catch bugs and prevent regressions. In general, it often makes sense to create a test function when there is a breaking change to the API. For APIs that have changed in a new API version, the tests should be modified so that both the prior version and the new version are properly tested.
To take one example: for `account_info` version 1, WebSocket and JSON-RPC behavior should be tested. The latest API version, i.e. API version 2, should be tested over WebSocket, JSON-RPC, and command line.

617
BUILD.md Normal file

@@ -0,0 +1,617 @@
| :warning: **WARNING** :warning:
|---|
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/ripple/rippled/releases).
```
git checkout master
```
For the latest release candidate, choose the `release` branch.
```
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
git checkout develop
```
## Minimum Requirements
See [System Requirements](https://xrpl.org/system-requirements.html).
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.60](https://conan.io/downloads.html)[^1]
- [CMake 3.16](https://cmake.org/download/)
[^1]: It is possible to build with Conan 2.x,
but the instructions are significantly different,
which is why we are not recommending it yet.
Notably, the `conan profile update` command is removed in 2.x.
Profiles must be edited by hand.
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 11 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
### Linux
The Ubuntu operating system has received the highest level of
quality assurance, testing, and support.
Here are [sample instructions for setting up a C++ development environment on Linux](./docs/build/environment.md#linux).
### Mac
Many rippled engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on macOS](./docs/build/environment.md#macos).
### Windows
Windows is not recommended for production use at this time.
Additionally, 32-bit Windows development is not supported.
[Boost]: https://www.boost.org/
## Steps
### Set Up Conan
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python, Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
These instructions assume a basic familiarity with Conan and CMake.
If you are unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official [Getting Started][3] walkthrough.
You'll need at least one Conan profile:
```
conan profile new default --detect
```
Update the compiler settings:
```
conan profile update settings.compiler.cppstd=20 default
```
Configure Conan (1.x only) to use recipe revisions:
```
conan config set general.revisions_enabled=1
```
**Linux** developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
```
Ensure inter-operability between `boost::string_view` and `std::string_view` types:
```
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_BEAST_USE_STD_STRING_VIEW"]' default
conan profile update 'env.CXXFLAGS="-DBOOST_BEAST_USE_STD_STRING_VIEW"' default
```
If you have other flags in the `conf.tools.build` or `env.CXXFLAGS` sections, make sure to retain the existing flags and append the new ones. You can check them with:
```
conan profile show default
```
**Windows** developers may need to use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```
conan profile update settings.arch=x86_64 default
```
### Multiple compilers
When `/usr/bin/g++` exists on a platform, it is the default C++ compiler. This
default works for some users.
However, if this compiler cannot build rippled or its dependencies, then you can
install another compiler and set Conan and CMake to use it.
Update the `conf.tools.build:compiler_executables` setting in order to set the correct variables (`CMAKE_<LANG>_COMPILER`) in the
generated CMake toolchain file.
For example, on Ubuntu 20, you may have gcc at `/usr/bin/gcc` and g++ at `/usr/bin/g++`; if that is the case, you can select those compilers with:
```
conan profile update 'conf.tools.build:compiler_executables={"c": "/usr/bin/gcc", "cpp": "/usr/bin/g++"}' default
```
Replace `/usr/bin/gcc` and `/usr/bin/g++` with paths to the desired compilers.
It should choose the compiler for dependencies as well,
but not all of them have a Conan recipe that respects this setting (yet).
For the rest, you can set these environment variables.
Replace `<path>` with paths to the desired compilers:
- `conan profile update env.CC=<path> default`
- `conan profile update env.CXX=<path> default`
Export our [Conan recipe for Snappy](./external/snappy).
It does not explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
```
# Conan 1.x
conan export external/snappy snappy/1.1.10@
# Conan 2.x
conan export --version 1.1.10 external/snappy
```
Export our [Conan recipe for RocksDB](./external/rocksdb).
It does not override paths to dependencies when building with Visual Studio.
```
# Conan 1.x
conan export external/rocksdb rocksdb/6.29.5@
# Conan 2.x
conan export --version 6.29.5 external/rocksdb
```
Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
```
# Conan 1.x
conan export external/soci soci/4.0.3@
# Conan 2.x
conan export --version 4.0.3 external/soci
```
Export our [Conan recipe for NuDB](./external/nudb).
It fixes some source files to add missing `#include`s.
```
# Conan 1.x
conan export external/nudb nudb/2.0.8@
# Conan 2.x
conan export --version 2.0.8 external/nudb
```
### Build and Test
1. Create a build directory and move into it.
```
mkdir .build
cd .build
```
You can use any directory name. Conan treats your working directory as an
install folder and generates files with implementation details.
You don't need to worry about these files, but make sure to change
your working directory to your build directory before calling Conan.
**Note:** You can specify a directory for the installation files by adding
the `install-folder` or `-if` option to every `conan install` command
in the next step.
2. Use conan to generate CMake files for every configuration you want to build:
```
conan install .. --output-folder . --build missing --settings build_type=Release
conan install .. --output-folder . --build missing --settings build_type=Debug
```
To build Debug, in the next step, be sure to set `-DCMAKE_BUILD_TYPE=Debug`
For a single-configuration generator, e.g. `Unix Makefiles` or `Ninja`,
you only need to run this command once.
For a multi-configuration generator, e.g. `Visual Studio`, you may want to
run it more than once.
Each of these commands should also have a different `build_type` setting.
A second command with the same `build_type` setting will overwrite the files
generated by the first. You can pass the build type on the command line with
`--settings build_type=$BUILD_TYPE` or in the profile itself,
under the section `[settings]` with the key `build_type`.
If you are using a Microsoft Visual C++ compiler,
then you will need to ensure consistency between the `build_type` setting
and the `compiler.runtime` setting.
When `build_type` is `Release`, `compiler.runtime` should be `MT`.
When `build_type` is `Debug`, `compiler.runtime` should be `MTd`.
```
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
```
3. Configure CMake and pass the toolchain file generated by Conan, located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
Single-config generators:
Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches the one of the `build_type` settings
you chose in the previous step.
For example, to build Debug, replace "Release" with "Debug" in the next command.
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
```
Multi-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -Dxrpld=ON -Dtests=ON ..
```
**Note:** You can pass build options for `rippled` in this step.
4. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
you must pass the option `--config` to select the build configuration.
Single-config generators:
```
cmake --build .
```
Multi-config generators:
```
cmake --build . --config Release
cmake --build . --config Debug
```
5. Test `rippled`.
Single-config generators:
```
./rippled --unittest
```
Multi-config generators:
```
./Release/rippled --unittest
./Debug/rippled --unittest
```
The location of `rippled` in your build directory depends on your CMake
generator. Pass `--help` to see the rest of the command line options.
## Coverage report
The coverage report is intended for developers using the GCC or Clang compilers
(including Apple Clang). It is generated by the build target `coverage`,
which is only enabled when the `coverage` option is set, e.g. with
`--options coverage=True` in `conan` or the `-Dcoverage=ON` variable in `cmake`.
Prerequisites for the coverage report:
- [gcovr tool][gcovr] (can be installed e.g. with [pip][python-pip])
- `gcov` for GCC (installed with the compiler by default) or
- `llvm-cov` for Clang (installed with the compiler by default)
- `Debug` build type
A coverage report is created when the following steps are completed, in order:
1. `rippled` binary built with instrumentation data, enabled by the `coverage`
option mentioned above
2. completed run of unit tests, which populates coverage capture data
3. completed run of the `gcovr` tool (which internally invokes either `gcov` or `llvm-cov`)
to assemble both instrumentation data and the coverage capture data into a coverage report
The above steps are automated into a single target `coverage`. The instrumented
`rippled` binary can also be used for regular development or testing work, at
the cost of extra disk space utilization and a small performance hit
(to store coverage capture). In case of a spurious failure of unit tests, it is
possible to re-run the `coverage` target without rebuilding the `rippled` binary
(since it is simply a dependency of the coverage report target). It is also possible
to select only specific tests for the purpose of the coverage report, by setting
the `coverage_test` variable in `cmake`.
The default coverage report format is `html-details`, but the user
can override it to any of the formats listed in `cmake/CodeCoverage.cmake`
by setting the `coverage_format` variable in `cmake`. It is also possible
to generate more than one format at a time by setting the `coverage_extra_args`
variable in `cmake`. The specific command line used to run the `gcovr` tool will be
displayed if the `CODE_COVERAGE_VERBOSE` variable is set.
By default, the code coverage tool runs parallel unit tests with `--unittest-jobs`
set to the number of available CPU cores. This may cause spurious test
errors on Apple. Developers can override the number of unit test jobs with
the `coverage_test_parallelism` variable in `cmake`.
Example use with some cmake variables set:
```
cd .build
conan install .. --output-folder . --build missing --settings build_type=Debug
cmake -DCMAKE_BUILD_TYPE=Debug -Dcoverage=ON -Dxrpld=ON -Dtests=ON -Dcoverage_test_parallelism=2 -Dcoverage_format=html-details -Dcoverage_extra_args="--json coverage.json" -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
cmake --build . --target coverage
```
After the `coverage` target is completed, the generated coverage report will be
stored inside the build directory, as either of:
- file named `coverage.`_extension_ , with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
## Options
| Option | Default Value | Description |
| --- | ---| ---|
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | ON | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld (`rippled`) application, and not just the libxrpl library. |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
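For example, a Debug configuration that enables assertions and tests, builds the `rippled` application, and disables the unity build could be configured as follows, using the same Conan and CMake steps as above; the flags shown are just one combination of the options in the table.
```
conan install .. --output-folder . --build missing --settings build_type=Debug
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
  -DCMAKE_BUILD_TYPE=Debug -Dxrpld=ON -Dtests=ON -Dunity=OFF -Dassert=ON ..
cmake --build .
```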
## Troubleshooting
### Conan
If you have trouble building dependencies after changing Conan settings,
try removing the Conan cache.
```
rm -rf ~/.conan/data
```
### 'protobuf/port_def.inc' file not found
If `cmake --build .` results in an error due to a missing protobuf file, then you might have generated CMake files for a different `build_type` than the `CMAKE_BUILD_TYPE` you passed to conan.
```
/rippled/.build/pb-xrpl.libpb/xrpl/proto/ripple.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found
10 | #include <google/protobuf/port_def.inc>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
For example, if you want to build Debug:
1. For conan install, pass `--settings build_type=Debug`
2. For cmake, pass `-DCMAKE_BUILD_TYPE=Debug`
### no std::result_of
If your compiler version is recent enough to have removed `std::result_of` as
part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor
definition to your build.
```
conan profile update 'options.boost:extra_b2_flags="define=BOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'conf.tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
```
### call to 'async_teardown' is ambiguous
If you are compiling with an early version of Clang 16, then you might hit
a [regression][6] when compiling C++20 that manifests as an [error in a Boost
header][7]. You can work around it by adding this preprocessor definition:
```
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
```
### recompile with -fPIC
If you get a linker error suggesting that you recompile Boost with
position-independent code, such as:
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):
requires unsupported dynamic reloc 11; recompile with -fPIC
```
Conan most likely downloaded a bad binary distribution of the dependency.
This seems to be a [bug][1] in Conan just for Boost 1.77.0 compiled with GCC
for Linux. The solution is to build the dependency locally by passing
`--build boost` when calling `conan install`.
## Add a Dependency
If you want to experiment with a new package, follow these steps:
1. Search for the package on [Conan Center](https://conan.io/center/).
2. Modify [`conanfile.py`](./conanfile.py):
- Add a version of the package to the `requires` property.
- Change any default options for the package by adding them to the
`default_options` property (with syntax `'$package:$option': $value`).
3. Modify [`CMakeLists.txt`](./CMakeLists.txt):
- Add a call to `find_package($package REQUIRED)`.
- Link a library from the package to the target `ripple_libs`
(search for the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
4. Start coding! Don't forget to include whatever headers you need from the package.
## A crash course in CMake and Conan
To better understand how to use Conan,
we should first understand _why_ we use Conan,
and to understand that,
we need to understand how we use CMake.
### CMake
Technically, you don't need CMake to build this project.
You could manually compile every translation unit into an object file,
using the right compiler options,
and then manually link all those objects together,
using the right linker options.
However, that is very tedious and error-prone,
which is why we lean on tools like CMake.
We have written CMake configuration files
([`CMakeLists.txt`](./CMakeLists.txt) and friends)
for this project so that CMake can be used to correctly compile and link
all of the translation units in it.
Or rather, CMake will generate files for a separate build system
(e.g. Make, Ninja, Visual Studio, Xcode, etc.)
that compile and link all of the translation units.
Even then, CMake has parameters, some of which are platform-specific.
In CMake's parlance, parameters are specially-named **variables** like
[`CMAKE_BUILD_TYPE`][build_type] or
[`CMAKE_MSVC_RUNTIME_LIBRARY`][runtime].
Parameters include:
- what build system to generate files for
- where to find the compiler and linker
- where to find dependencies, e.g. libraries and headers
- how to link dependencies, e.g. any special compiler or linker flags that
need to be used with them, including preprocessor definitions
- how to compile translation units, e.g. with optimizations, debug symbols,
position-independent code, etc.
- on Windows, which runtime library to link with
For some of these parameters, like the build system and compiler,
CMake goes through a complicated search process to choose default values.
For others, like the dependencies,
_we_ had written in the CMake configuration files of this project
our own complicated process to choose defaults.
For most developers, things "just worked"... until they didn't, and then
you were left trying to debug one of these complicated processes, instead of
choosing and manually passing the parameter values yourself.
You can pass every parameter to CMake on the command line,
but writing out these parameters every time we want to configure CMake is
a pain.
Most humans prefer to put them into a configuration file, once, that
CMake can read every time it is configured.
For CMake, that file is a [toolchain file][toolchain].
### Conan
These next few paragraphs on Conan are going to read much like the ones above
for CMake.
Technically, you don't need Conan to build this project.
You could manually download, configure, build, and install all of the
dependencies yourself, and then pass all of the parameters necessary for
CMake to link to those dependencies.
To guarantee ABI compatibility, you must be sure to use the same set of
compiler and linker options for all dependencies _and_ this project.
However, that is very tedious and error-prone, which is why we lean on tools
like Conan.
We have written a Conan configuration file ([`conanfile.py`](./conanfile.py))
so that Conan can be used to correctly download, configure, build, and install
all of the dependencies for this project,
using a single set of compiler and linker options for all of them.
It generates files that contain almost all of the parameters that CMake
expects.
Those files include:
- A single toolchain file.
- For every dependency, a CMake [package configuration file][pcf],
[package version file][pvf], and for every build type, a package
targets file.
Together, these files implement version checking and define `IMPORTED`
targets for the dependencies.
The toolchain file itself amends the search path
([`CMAKE_PREFIX_PATH`][prefix_path]) so that [`find_package()`][find_package]
will [discover][search] the generated package configuration files.
**Nearly all we must do to properly configure CMake is pass the toolchain
file.**
What CMake parameters are left out?
You'll still need to pick a build system generator,
and if you choose a single-configuration generator,
you'll need to pass the `CMAKE_BUILD_TYPE`,
which should match the `build_type` setting you gave to Conan.
Even then, Conan has parameters, some of which are platform-specific.
In Conan's parlance, parameters are either settings or options.
**Settings** are shared by all packages, e.g. the build type.
**Options** are specific to a given package, e.g. whether to build and link
OpenSSL as a shared library.
For settings, Conan goes through a complicated search process to choose
defaults.
For options, each package recipe defines its own defaults.
You can pass every parameter to Conan on the command line,
but it is more convenient to put them in a [profile][profile].
**All we must do to properly configure Conan is edit and pass the profile.**
[1]: https://github.com/conan-io/conan-center-index/issues/13168
[5]: https://en.wikipedia.org/wiki/Unity_build
[6]: https://github.com/boostorg/beast/issues/2648
[7]: https://github.com/boostorg/beast/issues/2661
[gcovr]: https://gcovr.com/en/stable/getting-started.html
[python-pip]: https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/
[build_type]: https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
[runtime]: https://cmake.org/cmake/help/latest/variable/CMAKE_MSVC_RUNTIME_LIBRARY.html
[toolchain]: https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html
[pcf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-configuration-file
[pvf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-version-file
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[search]: https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure
[prefix_path]: https://cmake.org/cmake/help/latest/variable/CMAKE_PREFIX_PATH.html
[profile]: https://docs.conan.io/en/latest/reference/profiles.html


@@ -1,207 +0,0 @@
macro(group_sources_in source_dir curdir)
file(GLOB children RELATIVE ${source_dir}/${curdir}
${source_dir}/${curdir}/*)
foreach (child ${children})
if (IS_DIRECTORY ${source_dir}/${curdir}/${child})
group_sources_in(${source_dir} ${curdir}/${child})
else()
string(REPLACE "/" "\\" groupname ${curdir})
source_group(${groupname} FILES
${source_dir}/${curdir}/${child})
endif()
endforeach()
endmacro()
macro(group_sources curdir)
group_sources_in(${PROJECT_SOURCE_DIR} ${curdir})
endmacro()
macro (exclude_from_default target_)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endmacro ()
macro (exclude_if_included target_)
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
exclude_from_default (${target_})
endif ()
endmacro ()
function (print_ep_logs _target)
ExternalProject_Get_Property (${_target} STAMP_DIR)
add_custom_command(TARGET ${_target} POST_BUILD
COMMENT "${_target} BUILD OUTPUT"
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-out.log
-P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/echo_file.cmake
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-err.log
-P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/echo_file.cmake)
endfunction ()
#[=========================================================[
This is a function override for one function in the
standard ExternalProject module. We want to change
the generated build script slightly to include printing
the build logs in the case of failure. Those modifications
have been made here. This function override could break
in the future if the ExternalProject module changes internal
function names or changes the way it generates the build
scripts.
See:
https://gitlab.kitware.com/cmake/cmake/blob/df1ddeec128d68cc636f2dde6c2acd87af5658b6/Modules/ExternalProject.cmake#L1855-1952
#]=========================================================]
function(_ep_write_log_script name step cmd_var)
ExternalProject_Get_Property(${name} stamp_dir)
set(command "${${cmd_var}}")
set(make "")
set(code_cygpath_make "")
if(command MATCHES "^\\$\\(MAKE\\)")
# GNU make recognizes the string "$(MAKE)" as recursive make, so
# ensure that it appears directly in the makefile.
string(REGEX REPLACE "^\\$\\(MAKE\\)" "\${make}" command "${command}")
set(make "-Dmake=$(MAKE)")
if(WIN32 AND NOT CYGWIN)
set(code_cygpath_make "
if(\${make} MATCHES \"^/\")
execute_process(
COMMAND cygpath -w \${make}
OUTPUT_VARIABLE cygpath_make
ERROR_VARIABLE cygpath_make
RESULT_VARIABLE cygpath_error
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if(NOT cygpath_error)
set(make \${cygpath_make})
endif()
endif()
")
endif()
endif()
set(config "")
if("${CMAKE_CFG_INTDIR}" MATCHES "^\\$")
string(REPLACE "${CMAKE_CFG_INTDIR}" "\${config}" command "${command}")
set(config "-Dconfig=${CMAKE_CFG_INTDIR}")
endif()
# Wrap multiple 'COMMAND' lines up into a second-level wrapper
# script so all output can be sent to one log file.
if(command MATCHES "(^|;)COMMAND;")
set(code_execute_process "
${code_cygpath_make}
execute_process(COMMAND \${command} RESULT_VARIABLE result)
if(result)
set(msg \"Command failed (\${result}):\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
message(FATAL_ERROR \"\${msg}\")
endif()
")
set(code "")
set(cmd "")
set(sep "")
foreach(arg IN LISTS command)
if("x${arg}" STREQUAL "xCOMMAND")
if(NOT "x${cmd}" STREQUAL "x")
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
endif()
set(cmd "")
set(sep "")
else()
string(APPEND cmd "${sep}${arg}")
set(sep ";")
endif()
endforeach()
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
file(GENERATE OUTPUT "${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake" CONTENT "${code}")
set(command ${CMAKE_COMMAND} "-Dmake=\${make}" "-Dconfig=\${config}" -P ${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake)
endif()
# Wrap the command in a script to log output to files.
set(script ${stamp_dir}/${name}-${step}-$<CONFIG>.cmake)
set(logbase ${stamp_dir}/${name}-${step})
set(code "
${code_cygpath_make}
function (_echo_file _fil)
file (READ \${_fil} _cont)
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"\${_cont}\")
endfunction ()
set(command \"${command}\")
execute_process(
COMMAND \${command}
RESULT_VARIABLE result
OUTPUT_FILE \"${logbase}-out.log\"
ERROR_FILE \"${logbase}-err.log\"
)
if(result)
set(msg \"Command failed: \${result}\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"Build output for ${logbase} : \")
_echo_file (\"${logbase}-out.log\")
_echo_file (\"${logbase}-err.log\")
set(msg \"\${msg}\\nSee above\\n\")
message(FATAL_ERROR \"\${msg}\")
else()
set(msg \"${name} ${step} command succeeded. See also ${logbase}-*.log\")
message(STATUS \"\${msg}\")
endif()
")
file(GENERATE OUTPUT "${script}" CONTENT "${code}")
set(command ${CMAKE_COMMAND} ${make} ${config} -P ${script})
set(${cmd_var} "${command}" PARENT_SCOPE)
endfunction()
find_package(Git)
# function that calls git log to get current hash
function (git_hash hash_val)
# note: optional second extra string argument not in signature
if (NOT GIT_FOUND)
return ()
endif ()
set (_hash "")
set (_format "%H")
if (ARGC GREATER_EQUAL 2)
string (TOLOWER ${ARGV1} _short)
if (_short STREQUAL "short")
set (_format "%h")
endif ()
endif ()
execute_process (COMMAND ${GIT_EXECUTABLE} "log" "--pretty=${_format}" "-n1"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_hash
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_hash ${_temp_hash})
endif ()
set (${hash_val} "${_hash}" PARENT_SCOPE)
endfunction ()
function (git_branch branch_val)
if (NOT GIT_FOUND)
return ()
endif ()
set (_branch "")
execute_process (COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_branch ${_temp_branch})
endif ()
set (${branch_val} "${_branch}" PARENT_SCOPE)
endfunction ()


@@ -1,60 +0,0 @@
#[=========================================================[
SQLITE doesn't provide build files in the
standard source-only distribution. So we wrote
a simple cmake file and we copy it to the
external project folder so that we can use
this file to build the lib with ExternalProject
#]=========================================================]
add_library (sqlite3 STATIC sqlite3.c)
#[=========================================================[
When compiled with SQLITE_THREADSAFE=1, SQLite operates
in serialized mode. In this mode, SQLite can be safely
used by multiple threads with no restriction.
NOTE: This implies a global mutex!
When compiled with SQLITE_THREADSAFE=2, SQLite can be
used in a multithreaded program so long as no two
threads attempt to use the same database connection at
the same time.
NOTE: This is the preferred threading model, but not
currently enabled because we need to investigate our
use-model and concurrency requirements.
TODO: consider whether any other options should be
used: https://www.sqlite.org/compile.html
#]=========================================================]
target_compile_definitions (sqlite3
PRIVATE
SQLITE_THREADSAFE=1
HAVE_USLEEP=1)
target_compile_options (sqlite3
PRIVATE
$<$<BOOL:${MSVC}>:
-wd4100
-wd4127
-wd4232
-wd4244
-wd4701
-wd4706
-wd4996
>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-array-bounds>)
install (
TARGETS
sqlite3
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
install (
FILES
sqlite3.h
sqlite3ext.h
DESTINATION include)

File diff suppressed because it is too large


@@ -1,98 +0,0 @@
#[===================================================================[
coverage report target
#]===================================================================]
if (coverage)
if (is_clang)
if (APPLE)
execute_process (COMMAND xcrun -f llvm-profdata
OUTPUT_VARIABLE LLVM_PROFDATA
OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program (LLVM_PROFDATA llvm-profdata)
endif ()
if (NOT LLVM_PROFDATA)
message (WARNING "unable to find llvm-profdata - skipping coverage_report target")
endif ()
if (APPLE)
execute_process (COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVM_COV
OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program (LLVM_COV llvm-cov)
endif ()
if (NOT LLVM_COV)
message (WARNING "unable to find llvm-cov - skipping coverage_report target")
endif ()
set (extract_pattern "")
if (coverage_core_only)
set (extract_pattern "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/")
endif ()
if (LLVM_COV AND LLVM_PROFDATA)
add_custom_target (coverage_report
USES_TERMINAL
COMMAND ${CMAKE_COMMAND} -E echo "Generating coverage - results will be in ${CMAKE_BINARY_DIR}/coverage/index.html."
COMMAND ${CMAKE_COMMAND} -E echo "Running rippled tests."
COMMAND rippled --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --quiet --unittest-log
COMMAND ${LLVM_PROFDATA}
merge -sparse default.profraw -o rip.profdata
COMMAND ${CMAKE_COMMAND} -E echo "Summary of coverage:"
COMMAND ${LLVM_COV}
report -instr-profile=rip.profdata
$<TARGET_FILE:rippled> ${extract_pattern}
# generate html report
COMMAND ${LLVM_COV}
show -format=html -output-dir=${CMAKE_BINARY_DIR}/coverage
-instr-profile=rip.profdata
$<TARGET_FILE:rippled> ${extract_pattern}
BYPRODUCTS coverage/index.html)
endif ()
elseif (is_gcc)
find_program (LCOV lcov)
if (NOT LCOV)
message (WARNING "unable to find lcov - skipping coverage_report target")
endif ()
find_program (GENHTML genhtml)
if (NOT GENHTML)
message (WARNING "unable to find genhtml - skipping coverage_report target")
endif ()
set (extract_pattern "*")
if (coverage_core_only)
set (extract_pattern "*/src/ripple/*")
endif ()
if (LCOV AND GENHTML)
add_custom_target (coverage_report
USES_TERMINAL
COMMAND ${CMAKE_COMMAND} -E echo "Generating coverage - results will be in ${CMAKE_BINARY_DIR}/coverage/index.html."
# create baseline info file
COMMAND ${LCOV}
--no-external -d "${CMAKE_CURRENT_SOURCE_DIR}" -c -d . -i -o baseline.info
| grep -v "ignoring data for external file"
# run tests
COMMAND ${CMAKE_COMMAND} -E echo "Running rippled tests for coverage report."
COMMAND rippled --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --quiet --unittest-log
# Create test coverage data file
COMMAND ${LCOV}
--no-external -d "${CMAKE_CURRENT_SOURCE_DIR}" -c -d . -o tests.info
| grep -v "ignoring data for external file"
# Combine baseline and test coverage data
COMMAND ${LCOV}
-a baseline.info -a tests.info -o lcov-all.info
# extract our files
COMMAND ${LCOV}
-e lcov-all.info "${extract_pattern}" -o lcov.info
COMMAND ${CMAKE_COMMAND} -E echo "Summary of coverage:"
COMMAND ${LCOV} --summary lcov.info
# generate HTML report
COMMAND ${GENHTML}
-o ${CMAKE_BINARY_DIR}/coverage lcov.info
BYPRODUCTS coverage/index.html)
endif ()
endif ()
endif ()


@@ -1,39 +0,0 @@
#[===================================================================[
multiconfig misc
#]===================================================================]
if (is_multiconfig)
# This code finds all source files in the src subdirectory for inclusion
# in the IDE file tree as non-compiled sources. Since this file list will
# have some overlap with files we have already added to our targets to
# be compiled, we explicitly remove any of these target source files from
# this list.
file (GLOB_RECURSE all_sources RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}
CONFIGURE_DEPENDS
src/*.* Builds/*.md docs/*.md src/*.md Builds/*.cmake)
file(GLOB md_files RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} CONFIGURE_DEPENDS
*.md)
LIST(APPEND all_sources ${md_files})
foreach (_target secp256k1 ed25519-donna pbufs xrpl_core rippled)
get_target_property (_type ${_target} TYPE)
if(_type STREQUAL "INTERFACE_LIBRARY")
continue()
endif()
get_target_property (_src ${_target} SOURCES)
list (REMOVE_ITEM all_sources ${_src})
endforeach ()
target_sources (rippled PRIVATE ${all_sources})
set_property (
SOURCE ${all_sources}
APPEND
PROPERTY HEADER_FILE_ONLY true)
if (MSVC)
set_property(
DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
PROPERTY VS_STARTUP_PROJECT rippled)
endif ()
group_sources(src)
group_sources(docs)
group_sources(Builds)
endif ()


@@ -1,197 +0,0 @@
#[===================================================================[
package/container targets - (optional)
#]===================================================================]
if (is_root_project)
if (NOT DOCKER)
find_program (DOCKER docker)
endif ()
if (DOCKER)
# if no container label is provided, use current git hash
git_hash (commit_hash)
if (NOT container_label)
set (container_label ${commit_hash})
endif ()
message (STATUS "using [${container_label}] as build container tag...")
file (MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/packages)
file (MAKE_DIRECTORY ${NIH_CACHE_ROOT}/pkgbuild)
if (is_linux)
execute_process (COMMAND id -u
OUTPUT_VARIABLE DOCKER_USER_ID
OUTPUT_STRIP_TRAILING_WHITESPACE)
message (STATUS "docker local user id: ${DOCKER_USER_ID}")
execute_process (COMMAND id -g
OUTPUT_VARIABLE DOCKER_GROUP_ID
OUTPUT_STRIP_TRAILING_WHITESPACE)
message (STATUS "docker local group id: ${DOCKER_GROUP_ID}")
endif ()
if (DOCKER_USER_ID AND DOCKER_GROUP_ID)
set(map_user TRUE)
endif ()
#[===================================================================[
rpm
#]===================================================================]
add_custom_target (rpm_container
docker build
--pull
--build-arg GIT_COMMIT=${commit_hash}
-t rippleci/rippled-rpm-builder:${container_label}
$<$<BOOL:${rpm_cache_from}>:--cache-from=${rpm_cache_from}>
-f centos-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/centos-builder/Dockerfile
Builds/containers/centos-builder/centos_setup.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/shared/update_sources.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/rippled-reporting.service
Builds/containers/packaging/rpm/rippled.spec
Builds/containers/packaging/rpm/build_rpm.sh
Builds/containers/packaging/rpm/50-rippled.preset
Builds/containers/packaging/rpm/50-rippled-reporting.preset
bin/getRippledInfo
)
exclude_from_default (rpm_container)
add_custom_target (rpm
docker run
-e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
-v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
-t rippleci/rippled-rpm-builder:${container_label}
/bin/bash -c "cp -fpu rippled/Builds/containers/packaging/rpm/build_rpm.sh . && ./build_rpm.sh"
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/rpm/rippled.spec
)
exclude_from_default (rpm)
if (NOT have_package_container)
add_dependencies(rpm rpm_container)
endif ()
#[===================================================================[
dpkg
#]===================================================================]
# currently use ubuntu 18.04 as a base b/c it has one of
# the lower versions of libc among ubuntu and debian releases.
# we could change this in the future and build with some other deb
# based system.
add_custom_target (dpkg_container
docker build
--pull
--build-arg DIST_TAG=18.04
--build-arg GIT_COMMIT=${commit_hash}
-t rippled-dpkg-builder:${container_label}
$<$<BOOL:${dpkg_cache_from}>:--cache-from=${dpkg_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/dpkg/debian/rippled-reporting.links
Builds/containers/packaging/dpkg/debian/copyright
Builds/containers/packaging/dpkg/debian/rules
Builds/containers/packaging/dpkg/debian/rippled-reporting.install
Builds/containers/packaging/dpkg/debian/rippled-reporting.postinst
Builds/containers/packaging/dpkg/debian/rippled.links
Builds/containers/packaging/dpkg/debian/rippled.prerm
Builds/containers/packaging/dpkg/debian/rippled.postinst
Builds/containers/packaging/dpkg/debian/rippled-dev.install
Builds/containers/packaging/dpkg/debian/dirs
Builds/containers/packaging/dpkg/debian/rippled.postrm
Builds/containers/packaging/dpkg/debian/rippled.conffiles
Builds/containers/packaging/dpkg/debian/compat
Builds/containers/packaging/dpkg/debian/source/format
Builds/containers/packaging/dpkg/debian/source/local-options
Builds/containers/packaging/dpkg/debian/README.Debian
Builds/containers/packaging/dpkg/debian/rippled.install
Builds/containers/packaging/dpkg/debian/rippled.preinst
Builds/containers/packaging/dpkg/debian/docs
Builds/containers/packaging/dpkg/debian/control
Builds/containers/packaging/dpkg/debian/rippled-reporting.dirs
Builds/containers/packaging/dpkg/build_dpkg.sh
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
bin/getRippledInfo
Builds/containers/shared/install_cmake.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/shared/update_sources.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/rippled-reporting.service
Builds/containers/shared/rippled-logrotate
Builds/containers/shared/update-rippled-cron
)
exclude_from_default (dpkg_container)
add_custom_target (dpkg
docker run
-e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
-v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
-t rippled-dpkg-builder:${container_label}
/bin/bash -c "cp -fpu rippled/Builds/containers/packaging/dpkg/build_dpkg.sh . && ./build_dpkg.sh"
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/dpkg/debian/control
)
exclude_from_default (dpkg)
if (NOT have_package_container)
add_dependencies(dpkg dpkg_container)
endif ()
#[===================================================================[
ci container
#]===================================================================]
# now use the same ubuntu image for our travis-ci docker images,
# but we use a newer distro (18.04 vs 16.04).
#
# the following steps assume the github pkg repo, but it's possible to
# adapt these for other docker hub repositories.
#
# steps for publishing a new CI image when you make changes:
#
# mkdir bld.ci && cd bld.ci && cmake -Dpackages_only=ON -Dcontainer_label=CI_LATEST
# cmake --build . --target ci_container --verbose
# docker tag rippled-ci-builder:CI_LATEST <HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD
# (NOTE: change YYYY-MM-DD to match current date, or use a different
# tag/version scheme if you prefer)
# docker push <HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD
# (NOTE: <HUB REPO PATH> is probably your user or org name if using
# docker hub, or it might be something like
# docker.pkg.github.com/ripple/rippled if using the github pkg
# registry. for any registry, you will need to be logged-in via
# docker and have push access.)
#
# ...then change the DOCKER_IMAGE line in .travis.yml :
# - DOCKER_IMAGE="<HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD"
add_custom_target (ci_container
docker build
--pull
--build-arg DIST_TAG=18.04
--build-arg GIT_COMMIT=${commit_hash}
--build-arg CI_USE=true
-t rippled-ci-builder:${container_label}
$<$<BOOL:${ci_cache_from}>:--cache-from=${ci_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
)
exclude_from_default (ci_container)
else ()
message (STATUS "docker NOT found -- won't be able to build containers for packaging")
endif ()
endif ()


@@ -1,122 +0,0 @@
#[===================================================================[
declare user options/settings
#]===================================================================]
option (assert "Enables asserts, even in release builds" OFF)
option (reporting "Build rippled with reporting mode enabled" OFF)
option (tests "Build tests" ON)
option (unity "Creates a build using UNITY support in cmake. This is the default" ON)
if (unity)
if (NOT is_ci)
set (CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif ()
endif ()
if (is_gcc OR is_clang)
option (coverage "Generates coverage info." OFF)
option (profile "Add profiling flags" OFF)
set (coverage_test "" CACHE STRING
"On gcc & clang, the specific unit test(s) to run for coverage. Default is all tests.")
if (coverage_test AND NOT coverage)
set (coverage ON CACHE BOOL "gcc/clang only" FORCE)
endif ()
option (coverage_core_only
"Include only src/ripple files when generating coverage report. \
Set to OFF to include all sources in coverage report."
ON)
option (wextra "compile with extra gcc/clang warnings enabled" ON)
else ()
set (profile OFF CACHE BOOL "gcc/clang only" FORCE)
set (coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set (wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif ()
if (is_linux)
option (BUILD_SHARED_LIBS "build shared ripple libraries" OFF)
option (static "link protobuf, openssl, libc++, and boost statically" ON)
option (perf "Enables flags that assist with perf recording" OFF)
option (use_gold "enables detection of gold (binutils) linker" ON)
else ()
# we are not ready to allow shared-libs on windows because it would require
# export declarations. On macos it's more feasible, but static openssl
# produces odd linker errors, thus we disable shared lib builds for now.
set (BUILD_SHARED_LIBS OFF CACHE BOOL "build shared ripple libraries - OFF for win/macos" FORCE)
set (static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set (perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set (use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
endif ()
if (is_clang)
option (use_lld "enables detection of lld linker" ON)
else ()
set (use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif ()
option (jemalloc "Enables jemalloc for heap profiling" OFF)
option (werr "treat warnings as errors" OFF)
option (local_protobuf
"Force a local build of protobuf instead of looking for an installed version." OFF)
option (local_grpc
"Force a local build of gRPC instead of looking for an installed version." OFF)
# this one is a string and therefore can't be an option
set (san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property (CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if (san)
string (TOLOWER ${san} san)
set (SAN_FLAG "-fsanitize=${san}")
set (SAN_LIB "")
if (is_gcc)
if (san STREQUAL "address")
set (SAN_LIB "asan")
elseif (san STREQUAL "thread")
set (SAN_LIB "tsan")
elseif (san STREQUAL "memory")
set (SAN_LIB "msan")
elseif (san STREQUAL "undefined")
set (SAN_LIB "ubsan")
endif ()
endif ()
set (_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set (CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag (${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set (CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if (NOT COMPILER_SUPPORTS_SAN)
message (FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif ()
endif ()
set (container_label "" CACHE STRING "tag to use for package building containers")
option (packages_only
"ONLY generate package building targets. This is special use-case and almost \
certainly not what you want. Use with caution as you won't be able to build \
any compiled targets locally." OFF)
option (have_package_container
"Sometimes you already have the tagged container you want to use for package \
building and you don't want docker to rebuild it. This flag will detach the \
dependency of the package build from the container build. It's an advanced \
use case and most likely you should not be touching this flag." OFF)
# the remaining options are obscure and rarely used
option (beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF)
option (single_io_service_thread
"Restricts the number of threads calling io_service::run to one. \
This can be useful when debugging."
OFF)
option (boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF)
option (beast_hashers
"Use local implementations for sha/ripemd hashes (experimental, not recommended)"
OFF)
if (WIN32)
option (beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else ()
set (beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif ()
if (coverage)
message (STATUS "coverage build requested - forcing Debug build")
set (CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif ()


@@ -1,15 +0,0 @@
#[===================================================================[
read version from source
#]===================================================================]
file (STRINGS src/ripple/protocol/impl/BuildInfo.cpp BUILD_INFO)
foreach (line_ ${BUILD_INFO})
if (line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set (rippled_version ${CMAKE_MATCH_1})
endif ()
endforeach ()
if (rippled_version)
message (STATUS "rippled version: ${rippled_version}")
else ()
message (FATAL_ERROR "unable to determine rippled version")
endif ()


@@ -1,106 +0,0 @@
################################################################################
# SociConfig.cmake - CMake build configuration of SOCI library
################################################################################
# Copyright (C) 2010 Mateusz Loskot <mateusz@loskot.net>
#
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or copy at
# http://www.boost.org/LICENSE_1_0.txt)
################################################################################
include(CheckCXXSymbolExists)
if(WIN32)
check_cxx_symbol_exists("_M_AMD64" "" SOCI_TARGET_ARCH_X64)
if(NOT RTC_ARCH_X64)
check_cxx_symbol_exists("_M_IX86" "" SOCI_TARGET_ARCH_X86)
endif(NOT RTC_ARCH_X64)
# add check for arm here
# see http://msdn.microsoft.com/en-us/library/b0084kay.aspx
else(WIN32)
check_cxx_symbol_exists("__i386__" "" SOCI_TARGET_ARCH_X86)
check_cxx_symbol_exists("__x86_64__" "" SOCI_TARGET_ARCH_X64)
check_cxx_symbol_exists("__arm__" "" SOCI_TARGET_ARCH_ARM)
endif(WIN32)
if(NOT DEFINED LIB_SUFFIX)
if(SOCI_TARGET_ARCH_X64)
set(_lib_suffix "64")
else()
set(_lib_suffix "")
endif()
set(LIB_SUFFIX ${_lib_suffix} CACHE STRING "Specifies suffix for the lib directory")
endif()
#
# C++11 Option
#
if(NOT SOCI_CXX_C11)
set (SOCI_CXX_C11 OFF CACHE BOOL "Build to the C++11 standard")
endif()
#
# Force compilation flags and set desired warnings level
#
if (MSVC)
add_definitions(-D_CRT_SECURE_NO_DEPRECATE)
add_definitions(-D_CRT_SECURE_NO_WARNINGS)
add_definitions(-D_CRT_NONSTDC_NO_WARNING)
add_definitions(-D_SCL_SECURE_NO_WARNINGS)
if(CMAKE_CXX_FLAGS MATCHES "/W[0-4]")
string(REGEX REPLACE "/W[0-4]" "/W4" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /W4 /we4266")
endif()
else()
set(SOCI_GCC_CLANG_COMMON_FLAGS "")
# "-pedantic -Werror -Wno-error=parentheses -Wall -Wextra -Wpointer-arith -Wcast-align -Wcast-qual -Wfloat-equal -Woverloaded-virtual -Wredundant-decls -Wno-long-long")
if (SOCI_CXX_C11)
set(SOCI_CXX_VERSION_FLAGS "-std=c++11")
else()
set(SOCI_CXX_VERSION_FLAGS "-std=gnu++98")
endif()
if("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang" OR "${CMAKE_CXX_COMPILER}" MATCHES "clang")
if(NOT CMAKE_CXX_COMPILER_VERSION LESS 3.1 AND SOCI_ASAN)
set(SOCI_GCC_CLANG_COMMON_FLAGS "${SOCI_GCC_CLANG_COMMON_FLAGS} -fsanitize=address")
endif()
# enforce C++11 for Clang
set(SOCI_CXX_C11 ON)
set(SOCI_CXX_VERSION_FLAGS "-std=c++11")
add_definitions(-DCATCH_CONFIG_CPP11_NO_IS_ENUM)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SOCI_GCC_CLANG_COMMON_FLAGS} ${SOCI_CXX_VERSION_FLAGS}")
elseif(CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_GNUCXX)
if(NOT CMAKE_CXX_COMPILER_VERSION LESS 4.8 AND SOCI_ASAN)
set(SOCI_GCC_CLANG_COMMON_FLAGS "${SOCI_GCC_CLANG_COMMON_FLAGS} -fsanitize=address")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SOCI_GCC_CLANG_COMMON_FLAGS} ${SOCI_CXX_VERSION_FLAGS} ")
if (CMAKE_COMPILER_IS_GNUCXX)
if (CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-variadic-macros")
endif()
endif()
else()
message(WARNING "Unknown toolset - using default flags to build SOCI")
endif()
endif()
# Set SOCI_HAVE_* variables for soci-config.h generator
set(SOCI_HAVE_CXX_C11 ${SOCI_CXX_C11} CACHE INTERNAL "Enables C++11 support")


@@ -1,17 +0,0 @@
#[=========================================================[
This is a CMake script file that is used to write
the contents of a file to stdout (using the cmake
echo command). The input file is passed via the
IN_FILE variable.
#]=========================================================]
if (EXISTS ${IN_FILE})
file (READ ${IN_FILE} contents)
## only print files that actually have some text in them
if (contents MATCHES "[a-z0-9A-Z]+")
execute_process(
COMMAND
${CMAKE_COMMAND} -E echo "${contents}")
endif ()
endif ()


@@ -1 +0,0 @@
[Please see the BUILD instructions here](../BUILD.md)


@@ -1,405 +0,0 @@
#!/usr/bin/env python
# This file is part of rippled: https://github.com/ripple/rippled
# Copyright (c) 2012 - 2017 Ripple Labs Inc.
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""
Invocation:
./Builds/Test.py - builds and tests all configurations
The build must succeed without shell aliases for this to work.
To pass flags to cmake, put them at the very end of the command line, after
the -- flag - like this:
./Builds/Test.py -- -j4 # Pass -j4 to cmake --build
Common problems:
1) Boost not found. Solution: export BOOST_ROOT=[path to boost folder]
2) OpenSSL not found. Solution: export OPENSSL_ROOT=[path to OpenSSL folder]
3) cmake is not found. Solution: Be sure cmake directory is on your $PATH
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import itertools
import os
import platform
import re
import shutil
import sys
import subprocess
def powerset(iterable):
"""powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"""
s = list(iterable)
return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s) + 1))
IS_WINDOWS = platform.system().lower() == 'windows'
IS_OS_X = platform.system().lower() == 'darwin'
# CMake
if IS_WINDOWS:
CMAKE_UNITY_CONFIGS = ['Debug', 'Release']
CMAKE_NONUNITY_CONFIGS = ['Debug', 'Release']
else:
CMAKE_UNITY_CONFIGS = []
CMAKE_NONUNITY_CONFIGS = []
CMAKE_UNITY_COMBOS = { '' : [['rippled'], CMAKE_UNITY_CONFIGS],
'.nounity' : [['rippled'], CMAKE_NONUNITY_CONFIGS] }
if IS_WINDOWS:
CMAKE_DIR_TARGETS = { ('msvc' + unity,) : targets for unity, targets in
CMAKE_UNITY_COMBOS.items() }
elif IS_OS_X:
CMAKE_DIR_TARGETS = { (build + unity,) : targets
for build in ['debug', 'release']
for unity, targets in CMAKE_UNITY_COMBOS.items() }
else:
CMAKE_DIR_TARGETS = { (cc + "." + build + unity,) : targets
for cc in ['gcc', 'clang']
for build in ['debug', 'release', 'coverage', 'profile']
for unity, targets in CMAKE_UNITY_COMBOS.items() }
# list of tuples of all possible options
if IS_WINDOWS or IS_OS_X:
CMAKE_ALL_GENERATE_OPTIONS = [tuple(x) for x in powerset(['-GNinja', '-Dassert=true'])]
else:
CMAKE_ALL_GENERATE_OPTIONS = list(set(
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=address'])] +
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=thread'])]))
parser = argparse.ArgumentParser(
description='Test.py - run ripple tests'
)
parser.add_argument(
'--all', '-a',
action='store_true',
help='Build all configurations.',
)
parser.add_argument(
'--keep_going', '-k',
action='store_true',
help='Keep going after one configuration has failed.',
)
parser.add_argument(
'--silent', '-s',
action='store_true',
help='Silence all messages except errors',
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help=('Report more information about which commands are executed and the '
'results.'),
)
parser.add_argument(
'--test', '-t',
default='',
help='Add a prefix for unit tests',
)
parser.add_argument(
'--testjobs',
default='0',
type=int,
help='Run tests in parallel'
)
parser.add_argument(
'--ipv6',
action='store_true',
help='Use IPv6 localhost when running unit tests.',
)
parser.add_argument(
'--clean', '-c',
action='store_true',
help='delete all build artifacts after testing',
)
parser.add_argument(
'--quiet', '-q',
action='store_true',
help='Reduce output where possible (unit tests)',
)
parser.add_argument(
'--dir', '-d',
default=(),
nargs='*',
help='Specify one or more CMake dir names. '
'Will also be used as -Dtarget=<dir> running cmake.'
)
parser.add_argument(
'--target',
default=(),
nargs='*',
help='Specify one or more CMake build targets. '
'Will be used as --target <target> running cmake --build.'
)
parser.add_argument(
'--config',
default=(),
nargs='*',
help='Specify one or more CMake build configs. '
'Will be used as --config <config> running cmake --build.'
)
parser.add_argument(
'--generator_option',
action='append',
help='Specify a CMake generator option. Repeat for multiple options. '
'Will be passed to the cmake generator. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --generator_option=-GNinja.')
parser.add_argument(
'--build_option',
action='append',
help='Specify a build option. Repeat for multiple options. '
'Will be passed to the build tool via cmake --build. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --build_option=-j8.')
parser.add_argument(
'extra_args',
default=(),
nargs='*',
help='Extra arguments are passed through to the tools'
)
ARGS = parser.parse_args()
def decodeString(line):
# Python 2 vs. Python 3
if isinstance(line, str):
return line
else:
return line.decode()
def shell(cmd, args=(), silent=False, cust_env=None):
""""Execute a shell command and return the output."""
silent = ARGS.silent or silent
verbose = not silent and ARGS.verbose
if verbose:
print('$' + cmd, *args)
command = (cmd,) + args
# shell is needed in Windows to find executable in the path
process = subprocess.Popen(
command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env=cust_env,
shell=IS_WINDOWS)
lines = []
count = 0
# readline returns '' at EOF
for line in iter(process.stdout.readline, ''):
if process.poll() is None:
decoded = decodeString(line)
lines.append(decoded)
if verbose:
print(decoded, end='')
elif not silent:
count += 1
if count >= 80:
print()
count = 0
else:
print('.', end='')
else:
break
if not verbose and count:
print()
process.wait()
return process.returncode, lines
def get_cmake_dir(cmake_dir):
return os.path.join('build' , 'cmake' , cmake_dir)
def run_cmake(directory, cmake_dir, args):
print('Generating build in', directory, 'with', *args or ('default options',))
old_dir = os.getcwd()
if not os.path.exists(directory):
os.makedirs(directory)
os.chdir(directory)
if IS_WINDOWS and not any(arg.startswith("-G") for arg in args) and not os.path.exists("CMakeCache.txt"):
if '--ninja' in args:
args += ( '-GNinja', )
else:
args += ( '-GVisual Studio 14 2015 Win64', )
# hack to extract cmake options/args from the legacy target format
if re.search(r'\.unity', cmake_dir):
args += ( '-Dunity=ON', )
if re.search(r'\.nounity', cmake_dir):
args += ( '-Dunity=OFF', )
if re.search('coverage', cmake_dir):
args += ( '-Dcoverage=ON', )
if re.search('profile', cmake_dir):
args += ( '-Dprofile=ON', )
if re.search('debug', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Debug', )
if re.search('release', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Release', )
m = re.search('gcc(-[^.]*)', cmake_dir)
if m:
args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
'-DCMAKE_CXX_COMPILER=g++' + m.group(1), )
elif re.search('gcc', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=gcc', '-DCMAKE_CXX_COMPILER=g++', )
m = re.search('clang(-[^.]*)', cmake_dir)
if m:
args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
'-DCMAKE_CXX_COMPILER=clang++' + m.group(1), )
elif re.search('clang', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=clang', '-DCMAKE_CXX_COMPILER=clang++', )
args += ( os.path.join('..', '..', '..'), )
resultcode, lines = shell('cmake', args)
if resultcode:
print('Generating FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
os.chdir(old_dir)
def run_cmake_build(directory, target, config, args):
print('Building', target, config, 'in', directory, 'with', *args or ('default options',))
build_args=('--build', directory)
if target:
build_args += ('--target', target)
if config:
build_args += ('--config', config)
if args:
build_args += ('--',)
build_args += tuple(args)
resultcode, lines = shell('cmake', build_args)
if resultcode:
print('Build FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
def run_cmake_tests(directory, target, config):
failed = []
if IS_WINDOWS:
target += '.exe'
executable = os.path.join(directory, config if config else 'Debug', target)
if(not os.path.exists(executable)):
executable = os.path.join(directory, target)
print('Unit tests for', executable)
testflag = '--unittest'
quiet = ''
testjobs = ''
ipv6 = ''
if ARGS.test:
testflag += ('=' + ARGS.test)
if ARGS.quiet:
quiet = '-q'
if ARGS.ipv6:
ipv6 = '--unittest-ipv6'
if ARGS.testjobs:
testjobs = ('--unittest-jobs=' + str(ARGS.testjobs))
resultcode, lines = shell(executable, (testflag, quiet, testjobs, ipv6))
if resultcode:
if not ARGS.verbose:
print('ERROR:', *lines, sep='')
failed.append([target, 'unittest'])
return failed
def main():
all_failed = []
if ARGS.all:
build_dir_targets = CMAKE_DIR_TARGETS
generator_options = CMAKE_ALL_GENERATE_OPTIONS
else:
build_dir_targets = { tuple(ARGS.dir) : [ARGS.target, ARGS.config] }
if ARGS.generator_option:
generator_options = [tuple(ARGS.generator_option)]
else:
generator_options = [tuple()]
if not build_dir_targets:
# Let CMake choose the build tool.
build_dir_targets = { () : [] }
if ARGS.build_option:
ARGS.build_option = ARGS.build_option + list(ARGS.extra_args)
else:
ARGS.build_option = list(ARGS.extra_args)
for args in generator_options:
for build_dirs, (build_targets, build_configs) in build_dir_targets.items():
if not build_dirs:
build_dirs = ('default',)
if not build_targets:
build_targets = ('rippled',)
if not build_configs:
build_configs = ('',)
for cmake_dir in build_dirs:
cmake_full_dir = get_cmake_dir(cmake_dir)
run_cmake(cmake_full_dir, cmake_dir, args)
for target in build_targets:
for config in build_configs:
run_cmake_build(cmake_full_dir, target, config, ARGS.build_option)
failed = run_cmake_tests(cmake_full_dir, target, config)
if failed:
print('FAILED:', *(':'.join(f) for f in failed))
if not ARGS.keep_going:
sys.exit(1)
else:
all_failed.extend([decodeString(cmake_dir +
"." + target + "." + config), ':'.join(f)]
for f in failed)
else:
print('Success')
if ARGS.clean:
shutil.rmtree(cmake_full_dir)
if all_failed:
if len(all_failed) > 1:
print()
print('FAILED:', *(':'.join(f) for f in all_failed))
sys.exit(1)
if __name__ == '__main__':
main()
sys.exit(0)


@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)


@@ -1,45 +0,0 @@
{
// See https://go.microsoft.com//fwlink//?linkid=834763 for more information about this file.
"configurations": [
{
"name": "x64-Debug",
"generator": "Visual Studio 16 2019",
"configurationType": "Debug",
"inheritEnvironments": [ "msvc_x64_x64" ],
"buildRoot": "${thisFileDir}\\build\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v:minimal",
"ctestCommandArgs": "",
"variables": [
{
"name": "BOOST_ROOT",
"value": "C:\\lib\\boost"
},
{
"name": "OPENSSL_ROOT",
"value": "C:\\lib\\OpenSSL-Win64"
}
]
},
{
"name": "x64-Release",
"generator": "Visual Studio 16 2019",
"configurationType": "Release",
"inheritEnvironments": [ "msvc_x64_x64" ],
"buildRoot": "${thisFileDir}\\build\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v:minimal",
"ctestCommandArgs": "",
"variables": [
{
"name": "BOOST_ROOT",
"value": "C:\\lib\\boost"
},
{
"name": "OPENSSL_ROOT",
"value": "C:\\lib\\OpenSSL-Win64"
}
]
}
]
}


@@ -1,263 +0,0 @@
# Visual Studio 2019 Build Instructions
## Important
We do not recommend Windows for rippled production use at this time. Currently,
the Ubuntu platform has received the highest level of quality assurance,
testing, and support. Additionally, 32-bit Windows versions are not supported.
## Prerequisites
To clone the source code repository, create branches for inspection or
modification, build rippled under Visual Studio, and run the unit tests you will
need these software components
| Component | Minimum Recommended Version |
|-----------|-----------------------|
| [Visual Studio 2019](README.md#install-visual-studio-2019)| 15.5.4 |
| [Git for Windows](README.md#install-git-for-windows)| 2.16.1 |
| [OpenSSL Library](README.md#install-openssl) | 1.1.1L |
| [Boost library](README.md#build-boost) | 1.70.0 |
| [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.12 |
\* Only needed if you are not using the integrated CMake in VS 2019 and prefer to generate dedicated project/solution files.
## Install Software
### Install Visual Studio 2019
If not already installed on your system, download your choice of installer from
the [Visual Studio 2019
Download](https://www.visualstudio.com/downloads/download-visual-studio-vs)
page, run the installer, and follow the directions. **You may need to choose the
`Desktop development with C++` workload to install all necessary C++ features.**
Any version of Visual Studio 2019 may be used to build rippled. The **Visual
Studio 2019 Community** edition is available free of charge (see [the product
page](https://www.visualstudio.com/products/visual-studio-community-vs) for
licensing details), while paid editions may be used for an initial free-trial
period.
### Install Git for Windows
Git is a distributed revision control system. The Windows version also provides
the bash shell and many Windows versions of Unix commands. While there are other
varieties of Git (such as TortoiseGit, which has a native Windows interface and
integrates with the Explorer shell), we recommend installing [Git for
Windows](https://git-scm.com/) since it provides a Unix-like command line
environment useful for running shell scripts. Use of the bash shell under
Windows is mandatory for running the unit tests.
### Install OpenSSL
[Download the latest version of
OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html) There will be
several 64-bit `Win64` variants available; you want the non-light
`v1.1` line. As of this writing, you **should** select
* Win64 OpenSSL v1.1.1q
and should **not** select
* Anything with "Win32" in the name
* Anything with "light" in the name
* Anything with "EXPERIMENTAL" in the name
* Anything in the 3.0 line - rippled won't currently build with this version.
Run the installer, and choose an appropriate location for your OpenSSL
installation. In this guide we use `C:\lib\OpenSSL-Win64` as the destination
location.
When running the installer, you may be informed that "Visual C++ 2008
Redistributables" must be installed first. If so, download it from the
[same page](http://slproweb.com/products/Win32OpenSSL.html), again making sure
to get the correct 32-/64-bit variant.
* NOTE: Since rippled links statically to OpenSSL, it does not matter where the
OpenSSL .DLL files are placed, or what version they are. rippled does not use
or require any external .DLL files to run other than the standard operating
system ones.
### Build Boost
Boost 1.70 or later is required.
[Download boost](http://www.boost.org/users/download/) and unpack it
to `c:\lib`. As of this writing, the most recent version of boost is 1.80.0,
which will unpack into a directory named `boost_1_80_0`. We recommend either
renaming this directory to `boost`, or creating a junction link `mklink /J boost
boost_1_80_0`, so that you can more easily switch between versions.
Next, open **Developer Command Prompt** and type the following commands
```powershell
cd C:\lib\boost
bootstrap
```
The rippled application is linked statically to the standard runtimes and
external dependencies on Windows, to ensure that the behavior of the executable
is not affected by changes in outside files. Therefore, it is necessary to build
the required boost static libraries using this command:
```powershell
b2 -j<Num Parallel> --toolset=msvc-14.2 address-model=64 architecture=x86 link=static threading=multi runtime-link=shared,static stage
```
where you should replace `<Num Parallel>` with the number of parallel
invocations to use for the build, e.g. `b2 -j8 ...` would use up to 8
concurrent build shell commands.
Building the boost libraries may take considerable time. When the build process
is completed, take note of both the reported compiler include paths and linker
library paths as they will be required later.
### (Optional) Install CMake for Windows
[CMake](http://cmake.org) is a cross platform build system generator. Visual
Studio 2019 includes an integrated version of CMake that avoids having to
manually run CMake, but it is undergoing continuous improvement. Users that
prefer to use standard Visual Studio project and solution files need to install
a dedicated version of CMake to generate them. The latest version can be found
at the [CMake download site](https://cmake.org/download/). It is recommended you
select the install option to add CMake to your path.
## Clone the rippled repository
If you are familiar with cloning github repositories, just follow your normal
process and clone `git@github.com:ripple/rippled.git`. Otherwise follow this
section for instructions.
1. If you don't have a github account, sign up for one at
[github.com](https://github.com/).
2. Make sure you have Github ssh keys. For help see
[generating-ssh-keys](https://help.github.com/articles/generating-ssh-keys).
Open the "Git Bash" shell that was installed with "Git for Windows" in the step
above. Navigate to the directory where you want to clone rippled (git bash uses
`/c` for windows's `C:` and forward slash where windows uses backslash, so
`C:\Users\joe\projs` would be `/c/Users/joe/projs` in git bash). Now clone the
repository and optionally switch to the *master* branch. Type the following at
the bash prompt:
```powershell
git clone git@github.com:XRPLF/rippled.git
cd rippled
```
If you receive an error about not having the "correct access rights" make sure
you have Github ssh keys, as described above.
For a stable release, choose the `master` branch or one of the tagged releases
listed on [rippled's GitHub page](https://github.com/ripple/rippled/releases).
```
git checkout master
```
To test the latest release candidate, choose the `release` branch.
```
git checkout release
```
If you are doing development work and want the latest set of beta features,
you can consider using the `develop` branch instead.
```
git checkout develop
```
# Build using Visual Studio integrated CMake
In Visual Studio 2017, Microsoft added [integrated IDE support for
cmake](https://blogs.msdn.microsoft.com/vcblog/2016/10/05/cmake-support-in-visual-studio/).
To begin, simply:
1. Launch Visual Studio and choose **File | Open | Folder**, navigating to the
cloned rippled folder.
2. Right-click on `CMakeLists.txt` in the **Solution Explorer - Folder View** to
generate a `CMakeSettings.json` file. A sample settings file is provided
[here](/Builds/VisualStudio2019/CMakeSettings-example.json). Customize the
settings for `BOOST_ROOT`, `OPENSSL_ROOT` to match the install paths if they
differ from those in the file.
3. Select either the `x64-Release` or `x64-Debug` configuration from the
**Project Settings** drop-down. This should invoke the built-in CMake project
generator. If not, you can right-click on the `CMakeLists.txt` file and
choose **Configure rippled**.
4. Select the `rippled.exe`
option in the **Select Startup Item** drop-down. This will be the target
built when you press F7. Alternatively, you can choose a target to build from
the top-level **CMake | Build** menu. Note that at this time, there are other
targets listed that come from third party visual studio files embedded in the
rippled repo, e.g. `datagen.vcxproj`. Please ignore them.
For details on configuring debugging sessions or further customization of CMake,
please refer to the [CMake tools for VS
documentation](https://docs.microsoft.com/en-us/cpp/ide/cmake-tools-for-visual-cpp).
If using the provided `CMakeSettings.json` file, the executable will be in
```
.\build\x64-Release\Release\rippled.exe
```
or
```
.\build\x64-Debug\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Build using stand-alone CMake
This requires having installed [CMake for
Windows](README.md#optional-install-cmake-for-windows). We do not recommend
mixing this method with the integrated CMake method for the same repository
clone. Assuming you included the cmake executable folder in your path,
execute the following commands within your `rippled` cloned repository:
```
mkdir build\cmake
cd build\cmake
cmake ..\.. -G"Visual Studio 16 2019" -Ax64 -DBOOST_ROOT="C:\lib\boost" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64
```
Now launch Visual Studio 2019 and select **File | Open | Project/Solution**.
Navigate to the `build\cmake` folder created above and select the `rippled.sln`
file. You can then choose whether to build the `Debug` or `Release` solution
configuration.
The executable will be in
```
.\build\cmake\Release\rippled.exe
```
or
```
.\build\cmake\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Unity/No-Unity Builds
The rippled build system defaults to using
[unity source files](http://onqtam.com/programming/2018-07-07-unity-builds/)
to improve build times. In some cases it might be desirable to disable the
unity build and compile individual translation units. Here is how you can
switch to a "no-unity" build configuration:
## Visual Studio Integrated CMake
Edit your `CmakeSettings.json` (described above) by adding `-Dunity=OFF`
to the `cmakeCommandArgs` entry for each build configuration.
## Standalone CMake Builds
When running cmake to generate the Visual Studio project files, add
`-Dunity=OFF` to the command line options passed to cmake.
**Note:** you will need to re-run the cmake configuration step anytime you
want to switch between unity/no-unity builds.
# Unit Test (Recommended)
`rippled` builds a set of unit tests into the server executable. To run these
unit tests after building, pass the `--unittest` option to the compiled
`rippled` executable. The executable will exit with summary info after running
the unit tests.


@@ -1,7 +0,0 @@
#!/usr/bin/env bash
num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # number of physical cores
path=$(cd $(dirname $0) && pwd)
cd $(dirname $path)
${path}/Test.py -a -c --testjobs=${num_procs} -- -j${num_procs}


@@ -1,31 +0,0 @@
# rippled Packaging and Containers
This folder contains docker container definitions and configuration
files to support building rpm and deb packages of rippled. The container
definitions include some additional software/packages that are used
for general build/test CI workflows of rippled but are not explicitly
needed for the package building workflow.
## CMake Targets
If you have docker installed on your local system, then the main
CMake file will enable several targets related to building packages:
`rpm_container`, `rpm`, `dpkg_container`, and `dpkg`. The package targets
depend on the container targets and will trigger a build of those first.
The container builds can take several dozen minutes to complete (depending
on hardware specs), so quick build cycles are not possible currently. As
such, these targets are often best suited to CI/automated build systems.
The package build can be invoked like any other cmake target from the
rippled root folder:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target rpm
```
Upon successful completion, the generated package files will be in
the `build/pkg/packages` directory. For deb packages, simply replace
`rpm` with `dpkg` in the build command above.


@@ -1,26 +0,0 @@
FROM rippleci/centos:7
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
COPY centos-builder/centos_setup.sh /tmp/
COPY shared/install_cmake.sh /tmp/
RUN chmod +x /tmp/centos_setup.sh && \
chmod +x /tmp/install_cmake.sh
RUN /tmp/centos_setup.sh
RUN /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16
RUN ln -s /opt/local/cmake-3.16 /opt/local/cmake
ENV PATH="/opt/local/cmake/bin:$PATH"
# TODO: Install latest CMake for testing
RUN if [ "${CI_USE}" = true ] ; then /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16; fi
RUN mkdir -m 777 -p /opt/rippled_bld/pkg
WORKDIR /opt/rippled_bld/pkg
RUN mkdir -m 777 ./rpmbuild
RUN mkdir -m 777 ./rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
COPY packaging/rpm/build_rpm.sh ./
CMD ./build_rpm.sh


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -ex
source /etc/os-release
yum -y upgrade
yum -y update
yum -y install epel-release centos-release-scl
yum -y install \
wget curl time gcc-c++ yum-utils autoconf automake pkgconfig libtool \
libstdc++-static rpm-build gnupg which make cmake \
devtoolset-11 devtoolset-11-gdb devtoolset-11-binutils devtoolset-11-libstdc++-devel \
devtoolset-11-libasan-devel devtoolset-11-libtsan-devel devtoolset-11-libubsan-devel devtoolset-11-liblsan-devel \
flex flex-devel bison bison-devel parallel \
ncurses ncurses-devel ncurses-libs graphviz graphviz-devel \
lzip p7zip bzip2 bzip2-devel lzma-sdk lzma-sdk-devel xz-devel \
zlib zlib-devel zlib-static texinfo openssl openssl-static \
jemalloc jemalloc-devel \
libicu-devel htop \
rh-python38 \
ninja-build git svn \
swig perl-Digest-MD5


@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_NAME}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_NAME}"
else
echo "invalid package type"
exit 1
fi
if docker pull "${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}"; then
echo "found container for latest - using as cache."
docker tag \
"${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}" \
"${container_name}:latest_${CI_COMMIT_REF_SLUG}"
CMAKE_EXTRA="-D${pkgtype}_cache_from=${container_name}:latest_${CI_COMMIT_REF_SLUG}"
fi
cmake --version
test -d build && rm -rf build
mkdir -p build/container && cd build/container
eval time \
cmake -Dpackages_only=ON -DCMAKE_VERBOSE_MAKEFILE=ON ${CMAKE_EXTRA} \
-G Ninja ../..
time cmake --build . --target "${pkgtype}_container" -- -v


@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_FULLNAME}"
container_tag="${RPM_CONTAINER_TAG}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_FULLNAME}"
container_tag="${DPKG_CONTAINER_TAG}"
else
echo "invalid package type"
exit 1
fi
time docker pull "${ARTIFACTORY_HUB}/${container_name}"
docker tag \
"${ARTIFACTORY_HUB}/${container_name}" \
"${container_name}"
docker images
test -d build && rm -rf build
mkdir -p build/${pkgtype} && cd build/${pkgtype}
time cmake \
-Dpackages_only=ON \
-Dcontainer_label="${container_tag}" \
-Dhave_package_container=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dunity=OFF \
-G Ninja ../..
time cmake --build . --target ${pkgtype} -- -v

@@ -1,15 +0,0 @@
#!/usr/bin/env sh
set -e
# used as a before/setup script for docker steps in gitlab-ci
# expects to be run in standard alpine/dind image
echo $(nproc)
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} ${ARTIFACTORY_HUB}
apk add --update py-pip
apk add \
bash util-linux coreutils binutils grep \
make ninja cmake build-base gcc g++ abuild git \
python3 python3-dev
pip3 install awscli
# list curdir contents to build log:
ls -la

@@ -1,16 +0,0 @@
#!/usr/bin/env sh
case ${CI_COMMIT_REF_NAME} in
develop)
export COMPONENT="nightly"
;;
release)
export COMPONENT="unstable"
;;
master)
export COMPONENT="stable"
;;
*)
export COMPONENT="_unknown_"
;;
esac

@@ -1,646 +0,0 @@
#########################################################################
## ##
## gitlab CI defintition for rippled build containers and distro ##
## packages (rpm and dpkg). ##
## ##
#########################################################################
# NOTE: these are sensible defaults for Ripple pipelines. These
# can be overridden by project or group variables as needed.
variables:
# these containers are built manually using the rippled
# cmake build (container targets) and tagged/pushed so they
# can be used here
RPM_CONTAINER_TAG: "2023-02-13"
RPM_CONTAINER_NAME: "rippled-rpm-builder"
RPM_CONTAINER_FULLNAME: "${RPM_CONTAINER_NAME}:${RPM_CONTAINER_TAG}"
DPKG_CONTAINER_TAG: "2023-03-20"
DPKG_CONTAINER_NAME: "rippled-dpkg-builder"
DPKG_CONTAINER_FULLNAME: "${DPKG_CONTAINER_NAME}:${DPKG_CONTAINER_TAG}"
ARTIFACTORY_HOST: "artifactory.ops.ripple.com"
ARTIFACTORY_HUB: "${ARTIFACTORY_HOST}:6555"
GIT_SIGN_PUBKEYS_URL: "https://gitlab.ops.ripple.com/xrpledger/rippled-packages/snippets/49/raw"
PUBLIC_REPO_ROOT: "https://repos.ripple.com/repos"
# also need to define this variable ONLY for the primary
# build/publish pipeline on the mainline repo:
# IS_PRIMARY_REPO = "true"
stages:
- build_packages
- sign_packages
- smoketest
- verify_sig
- tag_images
- push_to_test
- verify_from_test
- wait_approval_prod
- push_to_prod
- verify_from_prod
- get_final_hashes
- build_containers
.dind_template: &dind_param
before_script:
- . ./Builds/containers/gitlab-ci/docker_alpine_setup.sh
variables:
docker_driver: overlay2
DOCKER_TLS_CERTDIR: ""
image:
name: artifactory.ops.ripple.com/docker:latest
services:
# workaround for TLS issues - consider going
# back to unversioned `dind` when issues are resolved
- name: artifactory.ops.ripple.com/docker:stable-dind
alias: docker
tags:
- 4xlarge
.only_primary_template: &only_primary
only:
refs:
- /^(master|release|develop)$/
variables:
- $IS_PRIMARY_REPO == "true"
.smoketest_local_template: &run_local_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh local
.smoketest_repo_template: &run_repo_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh repo
#########################################################################
## ##
## stage: build_packages ##
## ##
## build packages using containers from previous stage. ##
## ##
#########################################################################
rpm_build:
timeout: "1h 30m"
stage: build_packages
<<: *dind_param
artifacts:
paths:
- build/rpm/packages/
script:
- . ./Builds/containers/gitlab-ci/build_package.sh rpm
dpkg_build:
timeout: "1h 30m"
stage: build_packages
<<: *dind_param
artifacts:
paths:
- build/dpkg/packages/
script:
- . ./Builds/containers/gitlab-ci/build_package.sh dpkg
#########################################################################
## ##
## stage: sign_packages ##
## ##
## build packages using containers from previous stage. ##
## ##
#########################################################################
rpm_sign:
stage: sign_packages
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/centos:7
<<: *only_primary
before_script:
- |
# Make sure GnuPG is installed
yum -y install gnupg rpm-sign
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/rpm/packages/
script:
- ls -alh build/rpm/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh rpm
dpkg_sign:
stage: sign_packages
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
<<: *only_primary
before_script:
- |
# make sure we have GnuPG
apt update
apt install -y gpg dpkg-sig
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/dpkg/packages/
script:
- ls -alh build/dpkg/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh dpkg
#########################################################################
## ##
## stage: smoketest ##
## ##
## install unsigned packages from previous step and run unit tests. ##
## ##
#########################################################################
centos_7_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/centos:7
<<: *run_local_smoketest
rocky_8_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
<<: *run_local_smoketest
fedora_37_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/fedora:37
<<: *run_local_smoketest
fedora_38_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/fedora:38
<<: *run_local_smoketest
ubuntu_18_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
<<: *run_local_smoketest
ubuntu_20_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
<<: *run_local_smoketest
ubuntu_22_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
<<: *run_local_smoketest
debian_10_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/debian:10
<<: *run_local_smoketest
debian_11_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/debian:11
<<: *run_local_smoketest
#########################################################################
## ##
## stage: verify_sig ##
## ##
## use git/gpg to verify that HEAD is signed by an approved ##
## committer. The whitelist of pubkeys is manually mantained ##
## and fetched from GIT_SIGN_PUBKEYS_URL (currently a snippet ##
## link). ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
verify_head_signed:
stage: verify_sig
image:
name: artifactory.ops.ripple.com/ubuntu:latest
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/verify_head_commit.sh
#########################################################################
## ##
## stage: tag_images ##
## ##
## apply rippled version tag to containers from previous stage. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
tag_bld_images:
stage: tag_images
variables:
docker_driver: overlay2
DOCKER_TLS_CERTDIR: ""
image:
name: artifactory.ops.ripple.com/docker:latest
services:
# workaround for TLS issues - consider going
# back to unversioned `dind` when issues are resolved
- name: artifactory.ops.ripple.com/docker:stable-dind
alias: docker
tags:
- large
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/tag_docker_image.sh
#########################################################################
## ##
## stage: push_to_test ##
## ##
## push packages to artifactory repositories (test) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_test:
stage: push_to_test
variables:
DEB_REPO: "rippled-deb-test-mirror"
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/alpine:latest
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_test ##
## ##
## install/test packages from test repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
rocky_8_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_37_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/fedora:37
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_38_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/fedora:38
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_18_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "bionic"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_20_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "focal"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_22_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "jammy"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_10_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "buster"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/debian:10
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_11_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "bullseye"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/debian:11
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: wait_approval_prod ##
## ##
## wait for manual approval before proceeding to next stage ##
## which pushes to prod repo. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
wait_before_push_prod:
stage: wait_approval_prod
image:
name: artifactory.ops.ripple.com/alpine:latest
<<: *only_primary
script:
- echo "proceeding to next stage"
when: manual
allow_failure: false
#########################################################################
## ##
## stage: push_to_prod ##
## ##
## push packages to artifactory repositories (prod) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_prod:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/alpine:latest
stage: push_to_prod
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_prod ##
## ##
## install/test packages from prod repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
rocky_8_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_37_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/fedora:37
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_38_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/fedora:38
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_18_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "bionic"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_20_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "focal"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_22_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "jammy"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_10_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "buster"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/debian:10
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_11_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "bullseye"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/debian:11
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: get_final_hashes ##
## ##
## fetch final hashes from artifactory. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
get_prod_hashes:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/alpine:latest
stage: get_final_hashes
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "GET" ".checksums"
#########################################################################
## ##
## stage: build_containers ##
## ##
## build containers from docker definitions. These containers are NOT ##
## used for the package build. This step is only used to ensure that ##
## the package build targets and files are still working properly. ##
## ##
#########################################################################
build_centos_container:
stage: build_containers
<<: *dind_param
script:
- . ./Builds/containers/gitlab-ci/build_container.sh rpm
build_ubuntu_container:
stage: build_containers
<<: *dind_param
script:
- . ./Builds/containers/gitlab-ci/build_container.sh dpkg

@@ -1,92 +0,0 @@
#!/usr/bin/env sh
set -e
action=$1
filter=$2
. ./Builds/containers/gitlab-ci/get_component.sh
apk add curl jq coreutils util-linux
TOPDIR=$(pwd)
# DPKG
cd $TOPDIR
cd build/dpkg/packages
CURLARGS="-sk -X${action} -urippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}"
RIPPLED_PKG=$(ls rippled_*.deb)
RIPPLED_REPORTING_PKG=$(ls rippled-reporting_*.deb)
RIPPLED_DBG_PKG=$(ls rippled-dbgsym_*.*deb)
RIPPLED_REPORTING_DBG_PKG=$(ls rippled-reporting-dbgsym_*.*deb)
# TODO - where to upload src tgz?
RIPPLED_SRC=$(ls rippled_*.orig.tar.gz)
DEB_MATRIX=";deb.component=${COMPONENT};deb.architecture=amd64"
for dist in buster bullseye bionic focal jammy; do
DEB_MATRIX="${DEB_MATRIX};deb.distribution=${dist}"
done
echo "{ \"debs\": {" > "${TOPDIR}/files.info"
for deb in ${RIPPLED_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG} ${RIPPLED_REPORTING_DBG_PKG}; do
# first item doesn't get a comma separator
if [ $deb != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${deb}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${DEB_REPO}/pool/${COMPONENT}/${deb}${DEB_MATRIX}"
ca="${ca} -T${deb}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${DEB_REPO}/pool/${COMPONENT}/${deb}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}," >> "${TOPDIR}/files.info"
# RPM
cd $TOPDIR
cd build/rpm/packages
RIPPLED_PKG=$(ls rippled-[0-9]*.x86_64.rpm)
RIPPLED_DEV_PKG=$(ls rippled-devel*.rpm)
RIPPLED_DBG_PKG=$(ls rippled-debuginfo*.rpm)
RIPPLED_REPORTING_PKG=$(ls rippled-reporting*.rpm)
# TODO - where to upload src rpm ?
RIPPLED_SRC=$(ls rippled-[0-9]*.src.rpm)
echo "\"rpms\": {" >> "${TOPDIR}/files.info"
for rpm in ${RIPPLED_PKG} ${RIPPLED_DEV_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG}; do
# first item doesn't get a comma separator
if [ $rpm != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${rpm}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${RPM_REPO}/${COMPONENT}/"
ca="${ca} -T${rpm}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${RPM_REPO}/${COMPONENT}/${rpm}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}}" >> "${TOPDIR}/files.info"
jq '.' "${TOPDIR}/files.info" > "${TOPDIR}/files.info.tmp"
mv "${TOPDIR}/files.info.tmp" "${TOPDIR}/files.info"
if [ ! -z "${SLACK_NOTIFY_URL}" ] && [ "${action}" = "GET" ] ; then
# extract files.info content to variable and sanitize so it can
# be interpolated into a slack text field below
finfo=$(cat ${TOPDIR}/files.info | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' | sed -E 's/"/\\"/g')
# try posting file info to slack.
# can add channel field to payload if the
# default channel is incorrect. Get rid of
# newlines in payload json since slack doesn't accept them
CONTENT=$(tr -d '[\n]' <<JSON
payload={
"username": "GitlabCI",
"text": "The package build for branch \`${CI_COMMIT_REF_NAME}\` is complete. File hashes are: \`\`\`${finfo}\`\`\`",
"icon_emoji": ":package:"}
JSON
)
curl ${SLACK_NOTIFY_URL} --data-urlencode "${CONTENT}"
fi

@@ -1,38 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
sign_dpkg() {
if [ -n "${GPG_KEYID}" ]; then
dpkg-sig \
-g "--no-tty --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --pinentry-mode=loopback" \
-k "${GPG_KEYID}" \
--sign builder \
"build/dpkg/packages/*.deb"
fi
}
sign_rpm() {
if [ -n "${GPG_KEYID}" ] ; then
find build/rpm/packages -name "*.rpm" -exec bash -c '
echo "yes" | setsid rpm \
--define "_gpg_name ${GPG_KEYID}" \
--define "_signature gpg" \
--define "__gpg_check_password_cmd /bin/true" \
--define "__gpg_sign_cmd %{__gpg} gpg --batch --no-armor --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --no-secmem-warning -u '%{_gpg_name}' --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
--addsign '{} \;
fi
}
case "${1}" in
dpkg)
sign_dpkg
;;
rpm)
sign_rpm
;;
*)
echo "Usage: ${0} (dpkg|rpm)"
;;
esac

@@ -1,108 +0,0 @@
#!/usr/bin/env sh
set -e
install_from=$1
use_private=${2:-0} # this option not currently needed by any CI scripts,
# reserved for possible future use
if [ "$use_private" -gt 0 ] ; then
REPO_ROOT="https://rippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}@${ARTIFACTORY_HOST}/artifactory"
else
REPO_ROOT="${PUBLIC_REPO_ROOT}"
fi
. ./Builds/containers/gitlab-ci/get_component.sh
. /etc/os-release
case ${ID} in
ubuntu|debian)
pkgtype="dpkg"
;;
fedora|centos|rhel|scientific|rocky)
pkgtype="rpm"
;;
*)
echo "unrecognized distro!"
exit 1
;;
esac
# this script provides info variables about pkg version
. build/${pkgtype}/packages/build_vars
if [ "${pkgtype}" = "dpkg" ] ; then
# sometimes update fails and requires a cleanup
updateWithRetry()
{
if ! apt-get -y update ; then
rm -rvf /var/lib/apt/lists/*
apt-get -y clean
apt-get -y update
fi
}
if [ "${install_from}" = "repo" ] ; then
apt-get -y upgrade
updateWithRetry
apt-get -y install apt apt-transport-https ca-certificates coreutils util-linux wget gnupg
wget -q -O - "${REPO_ROOT}/api/gpg/key/public" | apt-key add -
echo "deb ${REPO_ROOT}/${DEB_REPO} ${DISTRO} ${COMPONENT}" >> /etc/apt/sources.list
updateWithRetry
# uncomment this next line if you want to see the available package versions
# apt-cache policy rippled
apt-get -y install rippled=${dpkg_full_version}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
updateWithRetry
apt-get -y install libprotobuf-dev libprotoc-dev protobuf-compiler libssl-dev
rm -f build/dpkg/packages/rippled-dbgsym*.*
dpkg --no-debsig -i build/dpkg/packages/*.deb
else
echo "unrecognized pkg source!"
exit 1
fi
else
yum -y update
if [ "${install_from}" = "repo" ] ; then
pkgs=("yum-utils coreutils util-linux")
if [ "$ID" = "rocky" ]; then
pkgs="${pkgs[@]/coreutils}"
fi
yum install -y $pkgs
REPOFILE="/etc/yum.repos.d/artifactory.repo"
echo "[Artifactory]" > ${REPOFILE}
echo "name=Artifactory" >> ${REPOFILE}
echo "baseurl=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/" >> ${REPOFILE}
echo "enabled=1" >> ${REPOFILE}
echo "gpgcheck=0" >> ${REPOFILE}
echo "gpgkey=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/repodata/repomd.xml.key" >> ${REPOFILE}
echo "repo_gpgcheck=1" >> ${REPOFILE}
yum -y update
# uncomment this next line if you want to see the available package versions
# yum --showduplicates list rippled
yum -y install ${rpm_version_release}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
pkgs=("yum-utils openssl-static zlib-static")
if [[ "$ID" =~ rocky|fedora ]]; then
if [[ "$ID" =~ "rocky" ]]; then
sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/Rocky-PowerTools.repo
fi
pkgs="${pkgs[@]/openssl-static}"
fi
yum install -y $pkgs
rm -f build/rpm/packages/rippled-debug*.rpm
rm -f build/rpm/packages/*.src.rpm
rpm -i build/rpm/packages/*.rpm
else
echo "unrecognized pkg source!"
exit 1
fi
fi
# verify installed version
INSTALLED=$(/opt/ripple/bin/rippled --version | awk '{print $NF}')
if [ "${rippled_version}" != "${INSTALLED}" ] ; then
echo "INSTALLED version ${INSTALLED} does not match ${rippled_version}"
exit 1
fi
# run unit tests
/opt/ripple/bin/rippled --unittest --unittest-jobs $(nproc)
/opt/ripple/bin/validator-keys --unittest

@@ -1,21 +0,0 @@
#!/usr/bin/env sh
set -e
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} "${ARTIFACTORY_HUB}"
# this gives us rippled_version :
source build/rpm/packages/build_vars
docker pull "${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}"
docker pull "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}"
# tag/push two labels...one using the current rippled version and one just using "latest"
for label in ${rippled_version} latest ; do
docker tag \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}" \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker push \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker tag \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}" \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker push \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
done

@@ -1,17 +0,0 @@
#!/usr/bin/env sh
set -ex
apt -y update
DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
apt -y install software-properties-common curl git gnupg
curl -sk -o rippled-pubkeys.txt "${GIT_SIGN_PUBKEYS_URL}"
gpg --import rippled-pubkeys.txt
if git verify-commit HEAD; then
echo "git commit signature check passed"
else
echo "git commit signature check failed"
git log -n 5 --color \
--pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an> [%G?]%Creset' \
--abbrev-commit
exit 1
fi

@@ -1,95 +0,0 @@
#!/usr/bin/env bash
set -ex
# make sure pkg source files are up to date with repo
cd /opt/rippled_bld/pkg
cp -fpru rippled/Builds/containers/packaging/dpkg/debian/. debian/
cp -fpu rippled/Builds/containers/shared/rippled*.service debian/
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh
# Build the dpkg
#dpkg uses - as separator, so we need to change our -bN versions to tilde
RIPPLED_DPKG_VERSION=$(echo "${RIPPLED_VERSION}" | sed 's!-!~!g')
# TODO - decide how to handle the trailing/release
# version here (hardcoded to 1). Does it ever need to change?
RIPPLED_DPKG_FULL_VERSION="${RIPPLED_DPKG_VERSION}-1"
git config --global --add safe.directory /opt/rippled_bld/pkg/rippled
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled-${RIPPLED_DPKG_VERSION}/ -o ../rippled-${RIPPLED_DPKG_VERSION}.tar.gz HEAD
cd ..
# dpkg debmake would normally create this link, but we do it manually
ln -s ./rippled-${RIPPLED_DPKG_VERSION}.tar.gz rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz
tar xvf rippled-${RIPPLED_DPKG_VERSION}.tar.gz
cd rippled-${RIPPLED_DPKG_VERSION}
cp -pr ../debian .
# dpkg requires a changelog. We don't currently maintain
# a useable one, so let's just fake it with our current version
# TODO : not sure if the "unstable" will need to change for
# release packages (?)
NOWSTR=$(TZ=UTC date -R)
cat << CHANGELOG > ./debian/changelog
rippled (${RIPPLED_DPKG_FULL_VERSION}) unstable; urgency=low
* see RELEASENOTES
-- Ripple Labs Inc. <support@ripple.com> ${NOWSTR}
CHANGELOG
# PATH must be preserved for our more modern cmake in /opt/local
# TODO : consider allowing lintian to run in future ?
export DH_BUILD_DDEBS=1
debuild --no-lintian --preserve-envvar PATH --preserve-env -us -uc
rc=$?; if [[ $rc != 0 ]]; then
error "error building dpkg"
fi
cd ..
# copy artifacts
cp rippled-reporting_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.dsc ${PKG_OUTDIR}
# dbgsym suffix is ddeb under newer debuild, but just deb under earlier
cp rippled-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled-reporting-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.build ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.debian.tar.xz ${PKG_OUTDIR}
# buildinfo is only generated by later version of debuild
if [ -e rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ] ; then
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ${PKG_OUTDIR}
fi
cat rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes
# extract the text in the .changes file that appears between
# Checksums-Sha256: ...
# and
# Files: ...
awk '/Checksums-Sha256:/{hit=1;next}/Files:/{hit=0}hit' \
rippled_${RIPPLED_DPKG_VERSION}-1_amd64.changes | \
sed -E 's!^[[:space:]]+!!' > shasums
DEB_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
DBG_SHA256=$(cat shasums | \
grep "rippled-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_DBG_SHA256=$(cat shasums | \
grep "rippled-reporting-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_SHA256=$(cat shasums | \
grep "rippled-reporting_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
SRC_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz" | cut -d " " -f 1)
echo "deb_sha256=${DEB_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=${DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_sha256=${REPORTING_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_dbg_sha256=${REPORTING_DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=${SRC_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=${RIPPLED_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_version=${RIPPLED_DPKG_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_full_version=${RIPPLED_DPKG_FULL_VERSION}" >> ${PKG_OUTDIR}/build_vars

@@ -1,3 +0,0 @@
rippled daemon
-- Mike Ellery <mellery451@gmail.com> Tue, 04 Dec 2018 18:19:03 +0000

@@ -1 +0,0 @@
10

@@ -1,19 +0,0 @@
Source: rippled
Section: misc
Priority: extra
Maintainer: Ripple Labs Inc. <support@ripple.com>
Build-Depends: cmake, debhelper (>=9), zlib1g-dev, dh-systemd, ninja-build
Standards-Version: 3.9.7
Homepage: http://ripple.com/
Package: rippled
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled daemon
Package: rippled-reporting
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled reporting daemon

@@ -1,86 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rippled
Source: https://github.com/ripple/rippled
Files: *
Copyright: 2012-2019 Ripple Labs Inc.
License: __UNKNOWN__
The accompanying files under various copyrights.
Copyright (c) 2012, 2013, 2014 Ripple Labs Inc.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The accompanying files incorporate work covered by the following copyright
and previous license notice:
Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant
Some code from Raw Material Software, Ltd., provided under the terms of the
ISC License. See the corresponding source files for more details.
Copyright (c) 2013 - Raw Material Software Ltd.
Please visit http://www.juce.com
Some code from ASIO examples:
// Copyright (c) 2003-2011 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
Some code from Bitcoin:
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2011 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file license.txt or http://www.opensource.org/licenses/mit-license.php.
Some code from Tom Wu:
This software is covered under the following copyright:
/*
* Copyright (c) 2003-2005 Tom Wu
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
* EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
*
* IN NO EVENT SHALL TOM WU BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
* INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER
* RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF
* THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
* OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*
* In addition, the following condition applies:
*
* All redistributions must retain an intact copy of this copyright notice
* and disclaimer.
*/
Address all questions regarding this license to:
Tom Wu
tjw@cs.Stanford.EDU

@@ -1,3 +0,0 @@
/var/log/rippled/
/var/lib/rippled/
/etc/systemd/system/rippled.service.d/

@@ -1,3 +0,0 @@
README.md
LICENSE.md
RELEASENOTES.md

@@ -1,3 +0,0 @@
opt/ripple/include
opt/ripple/lib/*.a
opt/ripple/lib/cmake/ripple

@@ -1,3 +0,0 @@
/var/log/rippled-reporting/
/var/lib/rippled-reporting/
/etc/systemd/system/rippled-reporting.service.d/

@@ -1,8 +0,0 @@
bld/rippled-reporting/rippled-reporting opt/rippled-reporting/bin
cfg/rippled-reporting.cfg opt/rippled-reporting/etc
debian/tmp/opt/rippled-reporting/etc/validators.txt opt/rippled-reporting/etc
opt/rippled-reporting/bin/update-rippled-reporting.sh
opt/rippled-reporting/bin/getRippledReportingInfo
opt/rippled-reporting/etc/update-rippled-reporting-cron
etc/logrotate.d/rippled-reporting

@@ -1,3 +0,0 @@
opt/rippled-reporting/etc/rippled-reporting.cfg etc/opt/rippled-reporting/rippled-reporting.cfg
opt/rippled-reporting/etc/validators.txt etc/opt/rippled-reporting/validators.txt
opt/rippled-reporting/bin/rippled-reporting usr/local/bin/rippled-reporting

@@ -1,33 +0,0 @@
#!/bin/sh
set -e
USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting
case "$1" in
configure)
id -u $USER_NAME >/dev/null 2>&1 || \
adduser --system --quiet \
--home /nonexistent --no-create-home \
--disabled-password \
--group "$GROUP_NAME"
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
chmod 755 /var/log/rippled-reporting/
chmod 755 /var/lib/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /opt/rippled-reporting
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,2 +0,0 @@
/opt/ripple/etc/rippled.cfg
/opt/ripple/etc/validators.txt

@@ -1,8 +0,0 @@
opt/ripple/bin/rippled
opt/ripple/bin/validator-keys
opt/ripple/bin/update-rippled.sh
opt/ripple/bin/getRippledInfo
opt/ripple/etc/rippled.cfg
opt/ripple/etc/validators.txt
opt/ripple/etc/update-rippled-cron
etc/logrotate.d/rippled

@@ -1,3 +0,0 @@
opt/ripple/etc/rippled.cfg etc/opt/ripple/rippled.cfg
opt/ripple/etc/validators.txt etc/opt/ripple/validators.txt
opt/ripple/bin/rippled usr/local/bin/rippled

@@ -1,35 +0,0 @@
#!/bin/sh
set -e
USER_NAME=rippled
GROUP_NAME=rippled
case "$1" in
configure)
id -u $USER_NAME >/dev/null 2>&1 || \
adduser --system --quiet \
--home /nonexistent --no-create-home \
--disabled-password \
--group "$GROUP_NAME"
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME /opt/ripple
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 /opt/ripple/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME /opt/ripple/etc/update-rippled-cron
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,17 +0,0 @@
#!/bin/sh
set -e
case "$1" in
purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
;;
*)
echo "postrm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,20 +0,0 @@
#!/bin/sh
set -e
case "$1" in
install|upgrade)
;;
abort-upgrade)
;;
*)
echo "preinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,20 +0,0 @@
#!/bin/sh
set -e
case "$1" in
remove|upgrade|deconfigure)
;;
failed-upgrade)
;;
*)
echo "prerm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,80 +0,0 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export DH_OPTIONS = -v
# debuild sets some warnings that don't work well
# for our current build, so try to remove those flags here:
export CFLAGS:=$(subst -Wformat,,$(CFLAGS))
export CFLAGS:=$(subst -Werror=format-security,,$(CFLAGS))
export CXXFLAGS:=$(subst -Wformat,,$(CXXFLAGS))
export CXXFLAGS:=$(subst -Werror=format-security,,$(CXXFLAGS))
%:
dh $@ --with systemd
override_dh_systemd_start:
dh_systemd_start --no-restart-on-upgrade
override_dh_auto_configure:
env
rm -rf bld
conan export external/snappy snappy/1.1.9@
conan install . \
--install-folder bld/rippled \
--build missing \
--build boost \
--build sqlite3 \
--settings build_type=Release
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/opt/ripple \
-Dstatic=ON \
-Dunity=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dvalidator_keys=ON \
-B bld/rippled
conan install . \
--install-folder bld/rippled-reporting \
--build missing \
--build boost \
--build sqlite3 \
--build libuv \
--settings build_type=Release \
--options reporting=True
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/opt/rippled-reporting \
-Dstatic=ON \
-Dunity=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dreporting=ON \
-B bld/rippled-reporting
override_dh_auto_build:
cmake --build bld/rippled --target rippled --target validator-keys -j${nproc}
cmake --build bld/rippled-reporting --target rippled -j${nproc}
override_dh_auto_install:
cmake --install bld/rippled --prefix debian/tmp/opt/ripple
install -D bld/rippled/validator-keys/validator-keys debian/tmp/opt/ripple/bin/validator-keys
install -D Builds/containers/shared/update-rippled.sh debian/tmp/opt/ripple/bin/update-rippled.sh
install -D bin/getRippledInfo debian/tmp/opt/ripple/bin/getRippledInfo
install -D Builds/containers/shared/update-rippled-cron debian/tmp/opt/ripple/etc/update-rippled-cron
install -D Builds/containers/shared/rippled-logrotate debian/tmp/etc/logrotate.d/rippled
rm -rf debian/tmp/opt/ripple/lib64/cmake/date
mkdir -p debian/tmp/opt/rippled-reporting/etc
mkdir -p debian/tmp/opt/rippled-reporting/bin
cp cfg/validators-example.txt debian/tmp/opt/rippled-reporting/etc/validators.txt
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled.sh > debian/tmp/opt/rippled-reporting/bin/update-rippled-reporting.sh
sed -E 's/rippled?/rippled-reporting/g' bin/getRippledInfo > debian/tmp/opt/rippled-reporting/bin/getRippledReportingInfo
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled-cron > debian/tmp/opt/rippled-reporting/etc/update-rippled-reporting-cron
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/rippled-logrotate > debian/tmp/etc/logrotate.d/rippled-reporting

@@ -1 +0,0 @@
3.0 (quilt)

@@ -1,2 +0,0 @@
#abort-on-upstream-changes
#unapply-patches

@@ -1 +0,0 @@
enable rippled-reporting.service

@@ -1 +0,0 @@
enable rippled.service

@@ -1,82 +0,0 @@
#!/usr/bin/env bash
set -ex
cd /opt/rippled_bld/pkg
cp -fpu rippled/Builds/containers/packaging/rpm/rippled.spec .
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh
# Build the rpm
IFS='-' read -r RIPPLED_RPM_VERSION RELEASE <<< "$RIPPLED_VERSION"
export RIPPLED_RPM_VERSION
RPM_RELEASE=${RPM_RELEASE-1}
# post-release version
if [ "hf" = "$(echo "$RELEASE" | cut -c -2)" ]; then
RPM_RELEASE="${RPM_RELEASE}.${RELEASE}"
# pre-release version (-b or -rc)
elif [[ $RELEASE ]]; then
RPM_RELEASE="0.${RPM_RELEASE}.${RELEASE}"
fi
export RPM_RELEASE
if [[ $RPM_PATCH ]]; then
RPM_PATCH=".${RPM_PATCH}"
export RPM_PATCH
fi
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled/ -o ../rpmbuild/SOURCES/rippled.tar.gz HEAD
cd ..
source /opt/rh/devtoolset-11/enable
rpmbuild --define "_topdir ${PWD}/rpmbuild" -ba rippled.spec
rc=$?; if [[ $rc != 0 ]]; then
error "error building rpm"
fi
# Make a tar of the rpm and source rpm
RPM_VERSION_RELEASE=$(rpm -qp --qf='%{NAME}-%{VERSION}-%{RELEASE}' ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm)
tar_file=$RPM_VERSION_RELEASE.tar.gz
cp ./rpmbuild/RPMS/x86_64/* ${PKG_OUTDIR}
cp ./rpmbuild/SRPMS/* ${PKG_OUTDIR}
RPM_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm 2>/dev/null)
DBG_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm 2>/dev/null)
DEV_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm 2>/dev/null)
REP_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm 2>/dev/null)
SRC_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/SRPMS/*.rpm 2>/dev/null)
RPM_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm | awk '{ print $1}')"
DBG_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm | awk '{ print $1}')"
REP_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm | awk '{ print $1}')"
DEV_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm | awk '{ print $1}')"
SRC_SHA256="$(sha256sum ./rpmbuild/SRPMS/*.rpm | awk '{ print $1}')"
echo "rpm_md5sum=$RPM_MD5SUM" > ${PKG_OUTDIR}/build_vars
echo "rep_md5sum=$REP_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dbg_md5sum=$DBG_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dev_md5sum=$DEV_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "src_md5sum=$SRC_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "rpm_sha256=$RPM_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rep_sha256=$REP_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=$DBG_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dev_sha256=$DEV_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=$SRC_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=$RIPPLED_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version=$RIPPLED_RPM_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_file_name=$tar_file" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version_release=$RPM_VERSION_RELEASE" >> ${PKG_OUTDIR}/build_vars

@@ -1,236 +0,0 @@
%define rippled_version %(echo $RIPPLED_RPM_VERSION)
%define rpm_release %(echo $RPM_RELEASE)
%define rpm_patch %(echo $RPM_PATCH)
%define _prefix /opt/ripple
Name: rippled
# Dashes in Version extensions must be converted to underscores
Version: %{rippled_version}
Release: %{rpm_release}%{?dist}%{rpm_patch}
Summary: rippled daemon
License: MIT
URL: http://ripple.com/
Source0: rippled.tar.gz
BuildRequires: cmake zlib-static ninja-build
%description
rippled
%package devel
Summary: Files for development of applications using xrpl core library
Group: Development/Libraries
Requires: zlib-static
%description devel
core library for development of standalone applications that sign transactions.
%package reporting
Summary: Reporting Server for rippled
%description reporting
History server for XRP Ledger
%prep
%setup -c -n rippled
%build
rm -rf ~/.conan/profiles/default
cp /opt/libcstd/libstdc++.so.6.0.22 /usr/lib64
cp /opt/libcstd/libstdc++.so.6.0.22 /lib64
ln -sf /usr/lib64/libstdc++.so.6.0.22 /usr/lib64/libstdc++.so.6
ln -sf /lib64/libstdc++.so.6.0.22 /usr/lib64/libstdc++.so.6
source /opt/rh/rh-python38/enable
pip install "conan<2"
conan profile new default --detect
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update settings.compiler.cppstd=20 default
cd rippled
mkdir -p bld.rippled
conan export external/snappy snappy/1.1.9@
pushd bld.rippled
conan install .. \
--settings build_type=Release \
--output-folder . \
--build missing
cmake -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_INSTALL_PREFIX=%{_prefix} \
-DCMAKE_BUILD_TYPE=Release \
-Dunity=OFF \
-Dstatic=ON \
-Dvalidator_keys=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
..
cmake --build . --parallel $(nproc) --target rippled --target validator-keys
popd
mkdir -p bld.rippled-reporting
pushd bld.rippled-reporting
conan install .. \
--settings build_type=Release \
--output-folder . \
--build missing \
--settings compiler.cppstd=17 \
--options reporting=True
cmake -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_INSTALL_PREFIX=%{_prefix} \
-DCMAKE_BUILD_TYPE=Release \
-Dunity=OFF \
-Dstatic=ON \
-Dvalidator_keys=ON \
-Dreporting=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
..
cmake --build . --parallel $(nproc) --target rippled
%pre
test -e /etc/pki/tls || { mkdir -p /etc/pki; ln -s /usr/lib/ssl /etc/pki/tls; }
%install
rm -rf $RPM_BUILD_ROOT
DESTDIR=$RPM_BUILD_ROOT cmake --build rippled/bld.rippled --target install #-- -v
mkdir -p $RPM_BUILD_ROOT
rm -rf ${RPM_BUILD_ROOT}/%{_prefix}/lib64/
install -d ${RPM_BUILD_ROOT}/etc/opt/ripple
install -d ${RPM_BUILD_ROOT}/usr/local/bin
install -D ./rippled/cfg/rippled-example.cfg ${RPM_BUILD_ROOT}/%{_prefix}/etc/rippled.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}/%{_prefix}/etc/validators.txt
ln -sf %{_prefix}/etc/rippled.cfg ${RPM_BUILD_ROOT}/etc/opt/ripple/rippled.cfg
ln -sf %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/ripple/validators.txt
ln -sf %{_prefix}/bin/rippled ${RPM_BUILD_ROOT}/usr/local/bin/rippled
install -D rippled/bld.rippled/validator-keys/validator-keys ${RPM_BUILD_ROOT}%{_bindir}/validator-keys
install -D ./rippled/Builds/containers/shared/rippled.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled.service
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled.preset
install -D ./rippled/Builds/containers/shared/update-rippled.sh ${RPM_BUILD_ROOT}%{_bindir}/update-rippled.sh
install -D ./rippled/bin/getRippledInfo ${RPM_BUILD_ROOT}%{_bindir}/getRippledInfo
install -D ./rippled/Builds/containers/shared/update-rippled-cron ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-cron
install -D ./rippled/Builds/containers/shared/rippled-logrotate ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled
install -d $RPM_BUILD_ROOT/var/log/rippled
install -d $RPM_BUILD_ROOT/var/lib/rippled
# reporting mode
%define _prefix /opt/rippled-reporting
mkdir -p ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/
install -D rippled/bld.rippled-reporting/rippled-reporting ${RPM_BUILD_ROOT}%{_bindir}/rippled-reporting
install -D ./rippled/cfg/rippled-reporting.cfg ${RPM_BUILD_ROOT}%{_prefix}/etc/rippled-reporting.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}%{_prefix}/etc/validators.txt
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled-reporting.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled-reporting.preset
ln -s %{_prefix}/bin/rippled-reporting ${RPM_BUILD_ROOT}/usr/local/bin/rippled-reporting
ln -s %{_prefix}/etc/rippled-reporting.cfg ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/rippled-reporting.cfg
ln -s %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/validators.txt
install -d $RPM_BUILD_ROOT/var/log/rippled-reporting
install -d $RPM_BUILD_ROOT/var/lib/rippled-reporting
install -D ./rippled/Builds/containers/shared/rippled-reporting.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled-reporting.service
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled.sh > ${RPM_BUILD_ROOT}%{_bindir}/update-rippled-reporting.sh
sed -E 's/rippled?/rippled-reporting/g' ./rippled/bin/getRippledInfo > ${RPM_BUILD_ROOT}%{_bindir}/getRippledReportingInfo
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled-cron > ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-reporting-cron
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/rippled-logrotate > ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled-reporting
%post
%define _prefix /opt/ripple
USER_NAME=rippled
GROUP_NAME=rippled
getent passwd $USER_NAME &>/dev/null || useradd $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 %{_prefix}/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME %{_prefix}/etc/update-rippled-cron
%post reporting
%define _prefix /opt/rippled-reporting
USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting
getent passwd $USER_NAME &>/dev/null || useradd -r $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/
chmod 755 /var/log/rippled-reporting/
chmod 755 /var/lib/rippled-reporting/
chmod -x /usr/lib/systemd/system/rippled-reporting.service
%files
%define _prefix /opt/ripple
%doc rippled/README.md rippled/LICENSE.md
%{_bindir}/rippled
/usr/local/bin/rippled
%{_bindir}/update-rippled.sh
%{_bindir}/getRippledInfo
%{_prefix}/etc/update-rippled-cron
%{_bindir}/validator-keys
%config(noreplace) %{_prefix}/etc/rippled.cfg
%config(noreplace) /etc/opt/ripple/rippled.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/ripple/validators.txt
%config(noreplace) /etc/logrotate.d/rippled
%config(noreplace) /usr/lib/systemd/system/rippled.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled.preset
%dir /var/log/rippled/
%dir /var/lib/rippled/
%files devel
%{_prefix}/include
%{_prefix}/lib/*.a
%{_prefix}/lib/cmake/ripple
%files reporting
%define _prefix /opt/rippled-reporting
%doc rippled/README.md rippled/LICENSE.md
%{_bindir}/rippled-reporting
/usr/local/bin/rippled-reporting
%config(noreplace) /etc/opt/rippled-reporting/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/rippled-reporting/validators.txt
%config(noreplace) /usr/lib/systemd/system/rippled-reporting.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled-reporting.preset
%dir /var/log/rippled-reporting/
%dir /var/lib/rippled-reporting/
%{_bindir}/update-rippled-reporting.sh
%{_bindir}/getRippledReportingInfo
%{_prefix}/etc/update-rippled-reporting-cron
%config(noreplace) /etc/logrotate.d/rippled-reporting
%changelog
* Wed Aug 28 2019 Mike Ellery <mellery451@gmail.com>
- Switch to subproject build for validator-keys
* Wed May 15 2019 Mike Ellery <mellery451@gmail.com>
- Make validator-keys use local rippled build for core lib
* Wed Aug 01 2018 Mike Ellery <mellery451@gmail.com>
- add devel package for signing library
* Thu Jun 02 2016 Brandon Wilson <bwilson@ripple.com>
- Install validators.txt

@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -e
IFS=. read cm_maj cm_min cm_rel <<<"$1"
: ${cm_rel:-0}
CMAKE_ROOT=${2:-"${HOME}/cmake"}
function cmake_version ()
{
if [[ -d ${CMAKE_ROOT} ]] ; then
local perms=$(test $(uname) = "Linux" && echo "/111" || echo "+111")
local installed=$(find ${CMAKE_ROOT} -perm ${perms} -type f -name cmake)
if [[ "${installed}" != "" ]] ; then
echo "$(${installed} --version | head -1)"
fi
fi
}
installed=$(cmake_version)
if [[ "${installed}" != "" && ${installed} =~ ${cm_maj}.${cm_min}.${cm_rel} ]] ; then
echo "cmake already installed: ${installed}"
exit
fi
# From CMake 3.20+ "Linux" is lowercase, so using `uname` won't produce the correct path
if [ ${cm_min} -gt 19 ]; then
linux="linux"
else
linux=$(uname)
fi
pkgname="cmake-${cm_maj}.${cm_min}.${cm_rel}-${linux}-x86_64.tar.gz"
tmppkg="/tmp/cmake.tar.gz"
wget --quiet https://cmake.org/files/v${cm_maj}.${cm_min}/${pkgname} -O ${tmppkg}
mkdir -p ${CMAKE_ROOT}
cd ${CMAKE_ROOT}
tar --strip-components 1 -xf ${tmppkg}
rm -f ${tmppkg}
echo "installed: $(cmake_version)"

@@ -1,15 +0,0 @@
/var/log/rippled/*.log {
daily
minsize 200M
rotate 7
nocreate
missingok
notifempty
compress
compresscmd /usr/bin/nice
compressoptions -n19 ionice -c3 gzip
compressext .gz
postrotate
/opt/ripple/bin/rippled --conf /opt/ripple/etc/rippled.cfg logrotate
endscript
}

@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/opt/rippled-reporting/bin/rippled-reporting --silent --conf /etc/opt/rippled-reporting/rippled-reporting.cfg
Restart=on-failure
User=rippled-reporting
Group=rippled-reporting
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/opt/ripple/bin/rippled --net --silent --conf /etc/opt/ripple/rippled.cfg
Restart=on-failure
User=rippled
Group=rippled
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

@@ -1,10 +0,0 @@
# For automatic updates, symlink this file to /etc/cron.d/
# Do not remove the newline at the end of this cron script
# bash required for use of RANDOM below.
SHELL=/bin/bash
PATH=/sbin;/bin;/usr/sbin;/usr/bin
# invoke check/update script with random delay up to 59 mins
0 * * * * root sleep $((RANDOM*3540/32768)) && /opt/ripple/bin/update-rippled.sh

@@ -1,65 +0,0 @@
#!/usr/bin/env bash
# auto-update script for rippled daemon
# Check for sudo/root permissions
if [[ $(id -u) -ne 0 ]] ; then
echo "This update script must be run as root or sudo"
exit 1
fi
LOCKDIR=/tmp/rippleupdate.lock
UPDATELOG=/var/log/rippled/update.log
function cleanup {
# If this directory isn't removed, future updates will fail.
rmdir $LOCKDIR
}
# Use mkdir to check if the process is already running. mkdir is atomic, unlike file creation.
if ! mkdir $LOCKDIR 2>/dev/null; then
echo $(date -u) "lockdir exists - won't proceed." >> $UPDATELOG
exit 1
fi
trap cleanup EXIT
source /etc/os-release
can_update=false
if [[ "$ID" == "ubuntu" || "$ID" == "debian" ]] ; then
# Silent update
apt-get update -qq
# The next line is an "awk"ward way to check if the package needs to be updated.
RIPPLE=$(apt-get install -s --only-upgrade rippled | awk '/^Inst/ { print $2 }')
test "$RIPPLE" == "rippled" && can_update=true
function apply_update {
apt-get install rippled -qq
}
elif [[ "$ID" == "fedora" || "$ID" == "centos" || "$ID" == "rhel" || "$ID" == "scientific" ]] ; then
RIPPLE_REPO=${RIPPLE_REPO-stable}
yum --disablerepo=* --enablerepo=ripple-$RIPPLE_REPO clean expire-cache
yum check-update -q --enablerepo=ripple-$RIPPLE_REPO rippled || can_update=true
function apply_update {
yum update -y --enablerepo=ripple-$RIPPLE_REPO rippled
}
else
echo "unrecognized distro!"
exit 1
fi
# Do the actual update and restart the service after reloading systemctl daemon.
if [ "$can_update" = true ] ; then
exec 3>&1 1>>${UPDATELOG} 2>&1
set -e
apply_update
systemctl daemon-reload
systemctl restart rippled.service
echo $(date -u) "rippled daemon updated."
else
echo $(date -u) "no updates available" >> $UPDATELOG
fi

@@ -1,20 +0,0 @@
#!/usr/bin/env bash
function error {
echo $1
exit 1
}
cd /opt/rippled_bld/pkg/rippled
export RIPPLED_VERSION=$(egrep -i -o "\b(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-[0-9a-z\-]+(\.[0-9a-z\-]+)*)?(\+[0-9a-z\-]+(\.[0-9a-z\-]+)*)?\b" src/ripple/protocol/impl/BuildInfo.cpp)
: ${PKG_OUTDIR:=/opt/rippled_bld/pkg/out}
export PKG_OUTDIR
if [ ! -d ${PKG_OUTDIR} ]; then
error "${PKG_OUTDIR} is not mounted"
fi
if [ -x ${OPENSSL_ROOT}/bin/openssl ]; then
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${OPENSSL_ROOT}/lib ${OPENSSL_ROOT}/bin/openssl version -a
fi

View File

@@ -1,15 +0,0 @@
ARG DIST_TAG=18.04
FROM ubuntu:$DIST_TAG
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
WORKDIR /root
COPY ubuntu-builder/ubuntu_setup.sh .
RUN ./ubuntu_setup.sh && rm ubuntu_setup.sh
RUN mkdir -m 777 -p /opt/rippled_bld/pkg/
WORKDIR /opt/rippled_bld/pkg
COPY packaging/dpkg/build_dpkg.sh ./
CMD ./build_dpkg.sh

View File

@@ -1,76 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o xtrace
# Parameters
gcc_version=${GCC_VERSION:-10}
cmake_version=${CMAKE_VERSION:-3.25.1}
conan_version=${CONAN_VERSION:-1.59}
apt update
# Iteratively build the list of packages to install so that we can interleave
# the lines with comments explaining their inclusion.
dependencies=''
# - to identify the Ubuntu version
dependencies+=' lsb-release'
# - for add-apt-repository
dependencies+=' software-properties-common'
# - to download CMake
dependencies+=' curl'
# - to build CMake
dependencies+=' libssl-dev'
# - Python headers for Boost.Python
dependencies+=' python3-dev'
# - to install Conan
dependencies+=' python3-pip'
# - to download rippled
dependencies+=' git'
# - CMake generators (but not CMake itself)
dependencies+=' make ninja-build'
apt install --yes ${dependencies}
add-apt-repository --yes ppa:ubuntu-toolchain-r/test
apt install --yes gcc-${gcc_version} g++-${gcc_version} \
debhelper debmake debsums gnupg dh-buildinfo dh-make dh-systemd cmake \
ninja-build zlib1g-dev make cmake ninja-build autoconf automake \
pkg-config apt-transport-https
# Give us nice unversioned aliases for gcc and company.
update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-${gcc_version} 100 \
--slave /usr/bin/g++ g++ /usr/bin/g++-${gcc_version} \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-${gcc_version} \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-${gcc_version} \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-${gcc_version} \
--slave /usr/bin/gcov gcov /usr/bin/gcov-${gcc_version} \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${gcc_version} \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${gcc_version}
update-alternatives --auto gcc
# Download and unpack CMake.
cmake_slug="cmake-${cmake_version}"
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v${cmake_version}/${cmake_slug}.tar.gz"
tar xzf ${cmake_slug}.tar.gz
rm ${cmake_slug}.tar.gz
# Build and install CMake.
cd ${cmake_slug}
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
rm --recursive --force ${cmake_slug}
# Install Conan.
pip3 install conan==${conan_version}
conan profile new --detect gcc
conan profile update settings.compiler=gcc gcc
conan profile update settings.compiler.version=${gcc_version} gcc
conan profile update settings.compiler.libcxx=libstdc++11 gcc
conan profile update env.CC=/usr/bin/gcc gcc
conan profile update env.CXX=/usr/bin/g++ gcc

View File

@@ -13,12 +13,15 @@ then
git clean -ix
fi
# Ensure all sorting is ASCII-order consistently across platforms.
export LANG=C
rm -rfv results
mkdir results
includes="$( pwd )/results/rawincludes.txt"
pushd ../..
echo Raw includes:
grep -r '#include.*/.*\.h' src/ripple/ src/test/ | \
grep -r '^[ ]*#include.*/.*\.h' include src | \
grep -v boost | tee ${includes}
popd
pushd results

View File

@@ -1,50 +1,8 @@
Loop: ripple.app ripple.core
ripple.app > ripple.core
Loop: test.app test.jtx
test.app > test.jtx
Loop: ripple.app ripple.ledger
ripple.app > ripple.ledger
Loop: ripple.app ripple.net
ripple.app > ripple.net
Loop: ripple.app ripple.nodestore
ripple.app > ripple.nodestore
Loop: ripple.app ripple.overlay
ripple.overlay ~= ripple.app
Loop: ripple.app ripple.peerfinder
ripple.app > ripple.peerfinder
Loop: ripple.app ripple.rpc
ripple.rpc > ripple.app
Loop: ripple.app ripple.shamap
ripple.app > ripple.shamap
Loop: ripple.basics ripple.core
ripple.core > ripple.basics
Loop: ripple.basics ripple.json
ripple.json ~= ripple.basics
Loop: ripple.basics ripple.protocol
ripple.protocol > ripple.basics
Loop: ripple.core ripple.net
ripple.net > ripple.core
Loop: ripple.ledger ripple.protocol
ripple.ledger > ripple.protocol
Loop: ripple.net ripple.rpc
ripple.rpc > ripple.net
Loop: ripple.nodestore ripple.overlay
ripple.overlay ~= ripple.nodestore
Loop: ripple.overlay ripple.rpc
ripple.rpc ~= ripple.overlay
Loop: test.app test.rpc
test.rpc ~= test.app
Loop: test.jtx test.toplevel
test.toplevel > test.jtx
@@ -52,3 +10,45 @@ Loop: test.jtx test.toplevel
Loop: test.jtx test.unit_test
test.unit_test == test.jtx
Loop: xrpl.protocol xrpld.app
xrpld.app > xrpl.protocol
Loop: xrpld.app xrpld.core
xrpld.app > xrpld.core
Loop: xrpld.app xrpld.ledger
xrpld.app > xrpld.ledger
Loop: xrpld.app xrpld.net
xrpld.app > xrpld.net
Loop: xrpld.app xrpld.nodestore
xrpld.app > xrpld.nodestore
Loop: xrpld.app xrpld.overlay
xrpld.overlay ~= xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.app > xrpld.peerfinder
Loop: xrpld.app xrpld.rpc
xrpld.rpc > xrpld.app
Loop: xrpld.app xrpld.shamap
xrpld.app > xrpld.shamap
Loop: xrpld.core xrpld.net
xrpld.net > xrpld.core
Loop: xrpld.core xrpld.perflog
xrpld.perflog == xrpld.core
Loop: xrpld.net xrpld.rpc
xrpld.rpc > xrpld.net
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay
Loop: xrpld.perflog xrpld.rpc
xrpld.rpc ~= xrpld.perflog

View File

@@ -1,228 +1,202 @@
ripple.app > ripple.basics
ripple.app > ripple.beast
ripple.app > ripple.conditions
ripple.app > ripple.consensus
ripple.app > ripple.crypto
ripple.app > ripple.json
ripple.app > ripple.protocol
ripple.app > ripple.resource
ripple.app > test.unit_test
ripple.basics > ripple.beast
ripple.conditions > ripple.basics
ripple.conditions > ripple.protocol
ripple.consensus > ripple.basics
ripple.consensus > ripple.beast
ripple.consensus > ripple.json
ripple.consensus > ripple.protocol
ripple.core > ripple.beast
ripple.core > ripple.json
ripple.core > ripple.protocol
ripple.crypto > ripple.basics
ripple.json > ripple.beast
ripple.ledger > ripple.basics
ripple.ledger > ripple.beast
ripple.ledger > ripple.core
ripple.ledger > ripple.json
ripple.net > ripple.basics
ripple.net > ripple.beast
ripple.net > ripple.json
ripple.net > ripple.protocol
ripple.net > ripple.resource
ripple.nodestore > ripple.basics
ripple.nodestore > ripple.beast
ripple.nodestore > ripple.core
ripple.nodestore > ripple.json
ripple.nodestore > ripple.protocol
ripple.nodestore > ripple.unity
ripple.overlay > ripple.basics
ripple.overlay > ripple.beast
ripple.overlay > ripple.core
ripple.overlay > ripple.json
ripple.overlay > ripple.peerfinder
ripple.overlay > ripple.protocol
ripple.overlay > ripple.resource
ripple.overlay > ripple.server
ripple.peerfinder > ripple.basics
ripple.peerfinder > ripple.beast
ripple.peerfinder > ripple.core
ripple.peerfinder > ripple.protocol
ripple.perflog > ripple.basics
ripple.perflog > ripple.beast
ripple.perflog > ripple.core
ripple.perflog > ripple.json
ripple.perflog > ripple.nodestore
ripple.perflog > ripple.protocol
ripple.perflog > ripple.rpc
ripple.protocol > ripple.beast
ripple.protocol > ripple.crypto
ripple.protocol > ripple.json
ripple.resource > ripple.basics
ripple.resource > ripple.beast
ripple.resource > ripple.json
ripple.resource > ripple.protocol
ripple.rpc > ripple.basics
ripple.rpc > ripple.beast
ripple.rpc > ripple.core
ripple.rpc > ripple.crypto
ripple.rpc > ripple.json
ripple.rpc > ripple.ledger
ripple.rpc > ripple.nodestore
ripple.rpc > ripple.protocol
ripple.rpc > ripple.resource
ripple.rpc > ripple.server
ripple.rpc > ripple.shamap
ripple.server > ripple.basics
ripple.server > ripple.beast
ripple.server > ripple.crypto
ripple.server > ripple.json
ripple.server > ripple.protocol
ripple.shamap > ripple.basics
ripple.shamap > ripple.beast
ripple.shamap > ripple.crypto
ripple.shamap > ripple.nodestore
ripple.shamap > ripple.protocol
test.app > ripple.app
test.app > ripple.basics
test.app > ripple.beast
test.app > ripple.core
test.app > ripple.json
test.app > ripple.ledger
test.app > ripple.overlay
test.app > ripple.protocol
test.app > ripple.resource
test.app > ripple.rpc
test.app > test.jtx
test.app > test.rpc
libxrpl.basics > xrpl.basics
libxrpl.crypto > xrpl.basics
libxrpl.json > xrpl.basics
libxrpl.json > xrpl.json
libxrpl.protocol > xrpl.basics
libxrpl.protocol > xrpl.hook
libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json
libxrpl.server > xrpl.protocol
libxrpl.server > xrpl.server
test.app > test.toplevel
test.app > test.unit_test
test.basics > ripple.basics
test.basics > ripple.beast
test.basics > ripple.json
test.basics > ripple.protocol
test.basics > ripple.rpc
test.app > xrpl.basics
test.app > xrpld.app
test.app > xrpld.core
test.app > xrpld.ledger
test.app > xrpld.nodestore
test.app > xrpld.overlay
test.app > xrpld.rpc
test.app > xrpl.hook
test.app > xrpl.json
test.app > xrpl.protocol
test.app > xrpl.resource
test.basics > test.jtx
test.basics > test.unit_test
test.beast > ripple.basics
test.beast > ripple.beast
test.conditions > ripple.basics
test.conditions > ripple.beast
test.conditions > ripple.conditions
test.consensus > ripple.app
test.consensus > ripple.basics
test.consensus > ripple.beast
test.consensus > ripple.consensus
test.consensus > ripple.ledger
test.basics > xrpl.basics
test.basics > xrpld.perflog
test.basics > xrpld.rpc
test.basics > xrpl.json
test.basics > xrpl.protocol
test.beast > xrpl.basics
test.conditions > xrpl.basics
test.conditions > xrpld.conditions
test.consensus > test.csf
test.consensus > test.toplevel
test.consensus > test.unit_test
test.core > ripple.basics
test.core > ripple.beast
test.core > ripple.core
test.core > ripple.crypto
test.core > ripple.json
test.core > ripple.server
test.consensus > xrpl.basics
test.consensus > xrpld.app
test.consensus > xrpld.consensus
test.consensus > xrpld.core
test.consensus > xrpld.ledger
test.consensus > xrpl.protocol
test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
test.csf > ripple.basics
test.csf > ripple.beast
test.csf > ripple.consensus
test.csf > ripple.json
test.csf > ripple.protocol
test.json > ripple.beast
test.json > ripple.json
test.core > xrpl.basics
test.core > xrpld.core
test.core > xrpld.perflog
test.core > xrpl.json
test.core > xrpl.server
test.csf > xrpl.basics
test.csf > xrpld.consensus
test.csf > xrpl.json
test.csf > xrpl.protocol
test.json > test.jtx
test.jtx > ripple.app
test.jtx > ripple.basics
test.jtx > ripple.beast
test.jtx > ripple.consensus
test.jtx > ripple.core
test.jtx > ripple.json
test.jtx > ripple.ledger
test.jtx > ripple.net
test.jtx > ripple.protocol
test.jtx > ripple.server
test.ledger > ripple.app
test.ledger > ripple.basics
test.ledger > ripple.beast
test.ledger > ripple.core
test.ledger > ripple.ledger
test.ledger > ripple.protocol
test.json > xrpl.json
test.jtx > xrpl.basics
test.jtx > xrpld.app
test.jtx > xrpld.consensus
test.jtx > xrpld.core
test.jtx > xrpld.ledger
test.jtx > xrpld.net
test.jtx > xrpld.rpc
test.jtx > xrpl.hook
test.jtx > xrpl.json
test.jtx > xrpl.protocol
test.jtx > xrpl.resource
test.jtx > xrpl.server
test.ledger > test.jtx
test.ledger > test.toplevel
test.net > ripple.net
test.net > test.jtx
test.net > test.toplevel
test.net > test.unit_test
test.nodestore > ripple.app
test.nodestore > ripple.basics
test.nodestore > ripple.beast
test.nodestore > ripple.core
test.nodestore > ripple.nodestore
test.nodestore > ripple.protocol
test.nodestore > ripple.unity
test.ledger > xrpl.basics
test.ledger > xrpld.app
test.ledger > xrpld.core
test.ledger > xrpld.ledger
test.ledger > xrpl.protocol
test.nodestore > test.jtx
test.nodestore > test.toplevel
test.nodestore > test.unit_test
test.overlay > ripple.app
test.overlay > ripple.basics
test.overlay > ripple.beast
test.overlay > ripple.core
test.overlay > ripple.overlay
test.overlay > ripple.peerfinder
test.overlay > ripple.protocol
test.overlay > ripple.shamap
test.nodestore > xrpl.basics
test.nodestore > xrpld.core
test.nodestore > xrpld.nodestore
test.nodestore > xrpld.unity
test.overlay > test.jtx
test.overlay > test.toplevel
test.overlay > test.unit_test
test.peerfinder > ripple.basics
test.peerfinder > ripple.beast
test.peerfinder > ripple.core
test.peerfinder > ripple.peerfinder
test.peerfinder > ripple.protocol
test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpld.shamap
test.overlay > xrpl.protocol
test.peerfinder > test.beast
test.peerfinder > test.unit_test
test.protocol > ripple.basics
test.protocol > ripple.beast
test.protocol > ripple.crypto
test.protocol > ripple.json
test.protocol > ripple.ledger
test.protocol > ripple.protocol
test.peerfinder > xrpl.basics
test.peerfinder > xrpld.core
test.peerfinder > xrpld.peerfinder
test.peerfinder > xrpl.protocol
test.protocol > test.toplevel
test.resource > ripple.basics
test.resource > ripple.beast
test.resource > ripple.resource
test.protocol > xrpl.basics
test.protocol > xrpl.json
test.protocol > xrpl.protocol
test.resource > test.unit_test
test.rpc > ripple.app
test.rpc > ripple.basics
test.rpc > ripple.beast
test.rpc > ripple.core
test.rpc > ripple.json
test.rpc > ripple.net
test.rpc > ripple.nodestore
test.rpc > ripple.overlay
test.rpc > ripple.protocol
test.rpc > ripple.resource
test.rpc > ripple.rpc
test.resource > xrpl.basics
test.resource > xrpl.resource
test.rpc > test.jtx
test.rpc > test.nodestore
test.rpc > test.toplevel
test.server > ripple.app
test.server > ripple.basics
test.server > ripple.beast
test.server > ripple.core
test.server > ripple.json
test.server > ripple.rpc
test.server > ripple.server
test.rpc > xrpl.basics
test.rpc > xrpld.app
test.rpc > xrpld.core
test.rpc > xrpld.net
test.rpc > xrpld.overlay
test.rpc > xrpld.rpc
test.rpc > xrpl.hook
test.rpc > xrpl.json
test.rpc > xrpl.protocol
test.rpc > xrpl.resource
test.server > test.jtx
test.server > test.toplevel
test.server > test.unit_test
test.shamap > ripple.basics
test.shamap > ripple.beast
test.shamap > ripple.nodestore
test.shamap > ripple.protocol
test.shamap > ripple.shamap
test.server > xrpl.basics
test.server > xrpld.app
test.server > xrpld.core
test.server > xrpld.rpc
test.server > xrpl.json
test.server > xrpl.server
test.shamap > test.unit_test
test.toplevel > ripple.json
test.shamap > xrpl.basics
test.shamap > xrpld.nodestore
test.shamap > xrpld.shamap
test.shamap > xrpl.protocol
test.toplevel > test.csf
test.unit_test > ripple.basics
test.unit_test > ripple.beast
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
xrpl.hook > xrpl.basics
xrpl.json > xrpl.basics
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json
xrpl.resource > xrpl.basics
xrpl.resource > xrpl.json
xrpl.resource > xrpl.protocol
xrpl.server > xrpl.basics
xrpl.server > xrpl.json
xrpl.server > xrpl.protocol
xrpld.app > test.unit_test
xrpld.app > xrpl.basics
xrpld.app > xrpld.conditions
xrpld.app > xrpld.consensus
xrpld.app > xrpld.perflog
xrpld.app > xrpl.hook
xrpld.app > xrpl.json
xrpld.app > xrpl.resource
xrpld.conditions > xrpl.basics
xrpld.conditions > xrpl.protocol
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.protocol
xrpld.core > xrpl.basics
xrpld.core > xrpl.json
xrpld.core > xrpl.protocol
xrpld.ledger > xrpl.basics
xrpld.ledger > xrpld.core
xrpld.ledger > xrpl.json
xrpld.ledger > xrpl.protocol
xrpld.net > xrpl.basics
xrpld.net > xrpl.json
xrpld.net > xrpl.protocol
xrpld.net > xrpl.resource
xrpld.nodestore > xrpl.basics
xrpld.nodestore > xrpld.core
xrpld.nodestore > xrpld.unity
xrpld.nodestore > xrpl.json
xrpld.nodestore > xrpl.protocol
xrpld.overlay > xrpl.basics
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpld.perflog
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.resource
xrpld.overlay > xrpl.server
xrpld.peerfinder > xrpl.basics
xrpld.peerfinder > xrpld.core
xrpld.peerfinder > xrpl.protocol
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.json
xrpld.perflog > xrpl.protocol
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.ledger
xrpld.rpc > xrpld.nodestore
xrpld.rpc > xrpld.shamap
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.protocol
xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.shamap > xrpl.basics
xrpld.shamap > xrpld.nodestore
xrpld.shamap > xrpl.protocol

View File

@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)

View File

@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)

View File

@@ -1,42 +1,72 @@
cmake_minimum_required (VERSION 3.16)
if (POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif ()
project (rippled)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif()
if(POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
if(POLICY CMP0144)
cmake_policy(SET CMP0144 NEW)
endif()
project (xrpl)
set(Boost_NO_BOOST_CMAKE ON)
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} describe --always --abbrev=40
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if(gch)
set(GIT_COMMIT_HASH "${gch}")
message(STATUS gch: ${GIT_COMMIT_HASH})
add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
endif()
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gb)
if(gb)
set(GIT_BRANCH "${gb}")
message(STATUS gb: ${GIT_BRANCH})
add_definitions(-DGIT_BRANCH="${GIT_BRANCH}")
endif()
endif() #git
if (thread_safety_analysis)
if(thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/deps")
option(USE_CONAN "Use Conan package manager for dependencies" OFF)
# Then, auto-detect if conan_toolchain.cmake is being used
if(CMAKE_TOOLCHAIN_FILE)
# Check if the toolchain file path contains "conan_toolchain"
if(CMAKE_TOOLCHAIN_FILE MATCHES "conan_toolchain")
set(USE_CONAN ON CACHE BOOL "Using Conan detected from toolchain file" FORCE)
message(STATUS "Conan toolchain detected: ${CMAKE_TOOLCHAIN_FILE}")
message(STATUS "Building with Conan dependencies")
endif()
endif()
if (NOT USE_CONAN)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake/deps")
endif()
include (CheckCXXCompilerFlag)
include (FetchContent)
include (ExternalProject)
include (CMakeFuncs) # must come *after* ExternalProject b/c it overrides one function in EP
include (ProcessorCount)
if (target)
message (FATAL_ERROR "The target option has been removed - use native cmake options to control build")
endif ()
@@ -44,8 +74,9 @@ endif ()
include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
include(RippledNIH)
include(RippledRelease)
if (NOT USE_CONAN)
include(RippledNIH)
endif()
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)
@@ -58,30 +89,91 @@ include(RippledCompiler)
include(RippledInterface)
###
if (NOT USE_CONAN)
set(SECP256K1_INSTALL TRUE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
add_subdirectory(external/antithesis-sdk)
include(deps/Boost)
include(deps/OpenSSL)
# include(deps/Secp256k1)
# include(deps/Ed25519-donna)
include(deps/Lz4)
include(deps/Libarchive)
include(deps/Sqlite)
include(deps/Soci)
include(deps/Rocksdb)
include(deps/Nudb)
include(deps/date)
# include(deps/Protobuf)
# include(deps/gRPC)
include(deps/cassandra)
include(deps/Postgres)
include(deps/WasmEdge)
else()
include(conan/Boost)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
add_subdirectory(external/antithesis-sdk)
find_package(gRPC REQUIRED)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(wasmedge REQUIRED)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
find_package(xxHash REQUIRED)
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
include(deps/Boost)
include(deps/OpenSSL)
include(deps/Secp256k1)
include(deps/Ed25519-donna)
include(deps/Lz4)
include(deps/Libarchive)
include(deps/Sqlite)
include(deps/Soci)
include(deps/Snappy)
include(deps/Rocksdb)
include(deps/Nudb)
include(deps/date)
include(deps/Protobuf)
include(deps/gRPC)
include(deps/cassandra)
include(deps/Postgres)
include(deps/WasmEdge)
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
# Ripple::grpc_pbufs
# Ripple::pbufs
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
endif()
###
if(coverage)
include(RippledCov)
endif()
set(PROJECT_EXPORT_SET RippleExports)
include(RippledCore)
if (NOT USE_CONAN)
include(deps/Protobuf)
include(deps/gRPC)
endif()
include(RippledInstall)
include(RippledCov)
include(RippledMultiConfig)
include(RippledDocs)
include(RippledValidatorKeys)

View File

@@ -1,67 +1,497 @@
# Contributing
The XRP Ledger has many and diverse stakeholders, and everyone deserves a chance to contribute meaningful changes to the code that runs the XRPL.
To contribute, please:
1. Fork the repository under your own user.
2. Create a new branch on which to write your changes. Please note that changes which alter transaction processing must be composed via and guarded using [Amendments](https://xrpl.org/amendments.html). Changes which are _read only_ i.e. RPC, or changes which are only refactors and maintain the existing behaviour do not need to be made through an Amendment.
3. Write and test your code.
4. Ensure that your code compiles with the provided build engine and update the provided build engine as part of your PR where needed and where appropriate.
5. Write test cases for your code and include those in `src/test` such that they are runnable from the command line using `./rippled -u`. (Some changes will not be able to be tested this way.)
6. Ensure your code passes automated checks (e.g. clang-format and levelization.)
7. Squash your commits (i.e. rebase) into as few commits as is reasonable to describe your changes at a high level (typically a single commit for a small change.)
8. Open a PR to the main repository onto the _develop_ branch, and follow the provided template.
Xahau has many and diverse stakeholders, and everyone deserves
a chance to contribute meaningful changes to the code that runs Xahau.
# Contributing
We assume you are familiar with the general practice of [making
contributions on GitHub][1]. This file includes only special
instructions specific to this project.
## Before you start
In general, contributions should be developed in your personal
[fork](https://github.com/xahau/xahaud/fork).
The following branches exist in the main project repository:
- `dev`: The latest set of unreleased features, and the most common
starting point for contributions.
- `candidate`: The latest beta release or release candidate.
- `release`: The latest stable release.
The tip of each branch must be signed. In order for GitHub to sign a
squashed commit that it builds from your pull request, GitHub must know
your verifying key. Please set up [signature verification][signing].
[rippled]: https://github.com/xahau/xahaud
[signing]:
https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification
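If you have not set up commit signing before, a minimal configuration might look like the following (this assumes git 2.34 or later and an existing SSH key; GPG signing works just as well):
```
# Use an SSH key for signing (GPG is the other common option).
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
# Sign all commits by default.
git config --global commit.gpgsign true
# Or sign a single commit explicitly.
git commit -S -m "fix: Example signed commit"
```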
## Major contributions
If your contribution is a major feature or breaking change, then you
must first write a Xahau Standard (XLS) describing it. Go to
[Standards](https://github.com/XRPLF/XRPL-Standards/discussions),
choose the next available standard number, and open a discussion with an
appropriate title to propose your draft standard.
When you submit a pull request, please link the corresponding XLS in the
description. An XLS still in draft status is considered a
work-in-progress and open for discussion. Please allow time for
questions, suggestions, and changes to the XLS draft. It is the
responsibility of the XLS author to update the draft to match the final
implementation when its corresponding pull request is merged, unless the
author delegates that responsibility to others.
## Before making a pull request
Changes that alter transaction processing must be guarded by an
[Amendment](https://docs.xahau.network/features/amendments).
All other changes that maintain the existing behavior do not need an
Amendment.
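As a rough, hypothetical sketch of what "guarded by an Amendment" usually means in a transactor (`MyTransactor` and `featureMyAmendment` are placeholder names, not code from this repository):
```
NotTEC
MyTransactor::preflight(PreflightContext const& ctx)
{
    // New transaction-processing behaviour only applies once the
    // amendment has been enabled by validator voting.
    if (!ctx.rules.enabled(featureMyAmendment))
        return temDISABLED;

    // ... remaining validation ...
    return preflight2(ctx);
}
```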
Ensure that your code compiles according to the build instructions in the
[`documentation`](https://docs.xahau.network/infrastructure/building-xahau).
If you create new source files, they must go under `src/ripple`.
You will need to add them to one of the
[source lists](./cmake/RippledCore.cmake) in CMake.
Please write tests for your code.
If you create new test source files, they must go under `src/test`.
You will need to add them to one of the
[source lists](./cmake/RippledCore.cmake) in CMake.
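For example, registering a new source file and its test might look roughly like this (the exact list and target names in `cmake/RippledCore.cmake` may differ; the paths below are illustrative):
```
# Illustrative excerpt, not verbatim from RippledCore.cmake
target_sources (rippled PRIVATE
  # ... existing sources ...
  src/ripple/app/misc/MyNewFeature.cpp)

target_sources (rippled PRIVATE
  # ... existing test sources ...
  src/test/app/MyNewFeature_test.cpp)
```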
If your test can be run offline, in under 60 seconds, then it can be an
automatic test run by `rippled --unittest`.
Otherwise, it must be a manual test.
The source must be formatted according to the style guide below.
Header includes must be [levelized](./Builds/levelization).
Changes should usually be squashed down into a single commit.
Some larger or more complicated change sets make more sense,
and are easier to review, if organized into multiple logical commits.
Either way, all commits should fit the following criteria:
* Changes should be presented in a single commit or a logical
sequence of commits.
Specifically, chronological commits that simply
reflect the history of how the author implemented
the change, "warts and all", are not useful to
reviewers.
* Every commit should have a [good message](#good-commit-messages)
to explain a specific aspect of the change.
* Every commit should be signed.
* Every commit should be well-formed (builds successfully,
unit tests passing), as this helps to resolve merge
conflicts, and makes it easier to use `git bisect`
to find bugs.
### Good commit messages
Refer to
["How to Write a Git Commit Message"](https://cbea.ms/git-commit/)
for general rules on writing a good commit message.
tl;dr
> 1. Separate subject from body with a blank line.
> 2. Limit the subject line to 50 characters.
> * [...]shoot for 50 characters, but consider 72 the hard limit.
> 3. Capitalize the subject line.
> 4. Do not end the subject line with a period.
> 5. Use the imperative mood in the subject line.
> * A properly formed Git commit subject line should always be able
> to complete the following sentence: "If applied, this commit will
> _your subject line here_".
> 6. Wrap the body at 72 characters.
> 7. Use the body to explain what and why vs. how.
In addition to those guidelines, please add one of the following
prefixes to the subject line if appropriate.
* `fix:` - The primary purpose is to fix an existing bug.
* `perf:` - The primary purpose is performance improvements.
* `refactor:` - The changes refactor code without affecting
functionality.
* `test:` - The changes _only_ affect unit tests.
* `docs:` - The changes _only_ affect documentation. This can
include code comments in addition to `.md` files like this one.
* `build:` - The changes _only_ affect the build process,
including CMake and/or Conan settings.
* `chore:` - Other tasks that don't affect the binary, but don't fit
any of the other cases. e.g. formatting, git settings, updating
Github Actions jobs.
Whenever possible, when updating commits after the PR is open, please
add the PR number to the end of the subject line. e.g. `test: Add
unit tests for Feature X (#1234)`.
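Putting those rules together, an entirely hypothetical commit message might read:
```
fix: Reject zero-amount offers in the Example transactor (#1234)

Previously the Example transactor accepted offers with a zero amount,
which could never be consumed. Add a preflight check and a unit test
covering the malformed case.
```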
## Pull requests
In general, pull requests use `develop` as the base branch.
(Hotfixes are an exception.)
If your changes are not quite ready, but you want to make them easily available
for preliminary examination or review, you can create a "Draft" pull request.
While a pull request is marked as a "Draft", you can rebase or reorganize the
commits in the pull request as desired.
Github pull requests are created as "Ready" by default, or you can mark
a "Draft" pull request as "Ready".
Once a pull request is marked as "Ready",
any changes must be added as new commits. Do not
force-push to a branch in a pull request under review.
(This includes rebasing your branch onto the updated base branch.
Use a merge operation instead, or hit the "Update branch" button
at the bottom of the Github PR page.)
This preserves the ability for reviewers to filter changes since their last
review.
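For reference, a minimal way to bring an in-review branch up to date without force-pushing (assuming `develop` is the base branch and the main repository is configured as the `upstream` remote):
```
git fetch upstream
git checkout my-feature-branch
# Merge rather than rebase once the PR is under review.
git merge upstream/develop
git push origin my-feature-branch
```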
A pull request must obtain **approvals from at least two reviewers**
before it can be considered for merge by a Maintainer.
Maintainers retain discretion to require more approvals if they feel the
credibility of the existing approvals is insufficient.
Pull requests must be merged by [squash-and-merge][2]
to preserve a linear history for the `develop` branch.
### When and how to merge pull requests
#### "Passed"
A pull request should only have the "Passed" label added when it
meets a few criteria:
1. It must have two approving reviews [as described
above](#pull-requests). (Exception: PRs that are deemed "trivial"
only need one approval.)
2. All CI checks must be complete and passed. (One-off failures may
be acceptable if they are related to a known issue.)
3. The PR must have a [good commit message](#good-commit-messages).
* If the PR started with a good commit message, and it doesn't
need to be updated, the author can indicate that in a comment.
* Any contributor, preferably the author, can leave a comment
suggesting a commit message.
* If the author squashes and rebases the code in preparation for
merge, they should also ensure the commit message(s) are updated
as well.
4. The PR branch must be up to date with the base branch (usually
`develop`). This is usually accomplished by merging the base branch
into the feature branch, but if the other criteria are met, the
changes can be squashed and rebased on top of the base branch.
5. Finally, and most importantly, the author of the PR must
positively indicate that the PR is ready to merge. That can be
accomplished by adding the "Passed" label if their role allows,
or by leaving a comment to the effect that the PR is ready to
merge.
Once the "Passed" label is added, a maintainer may merge the PR at
any time, so don't use it lightly.
#### Instructions for maintainers
The maintainer should double-check that the PR has met all the
necessary criteria, and can request additional information from the
owner, or additional reviews, and can always feel free to remove the
"Passed" label if appropriate. The maintainer has final say on
whether a PR gets merged, and are encouraged to communicate and
issues or concerns to other maintainers.
##### Most pull requests: "Squash and merge"
Most pull requests don't need special handling, and can simply be
merged using the "Squash and merge" button on the Github UI. Update
the suggested commit message if necessary.
##### Slightly more complicated pull requests
Some pull requests need to be pushed to `develop` as more than one
commit. There are multiple ways to accomplish this. If the author
describes a process, and it is reasonable, follow it. Otherwise, do
a fast-forward-only merge (`--ff-only`) on the command line and push.
Either way, check that:
* The commits are based on the current tip of `develop`.
* The commits are clean: No merge commits (except when reverse
merging), no "[FOLD]" or "fixup!" messages.
* All commits are signed. If the commits are not signed by the author, use
`git commit --amend -S` to sign them yourself.
* At least one (but preferably all) of the commits has the PR number
in the commit message.
**Never use the "Create a merge commit" or "Rebase and merge"
functions!**
##### Releases, release candidates, and betas
All releases, including release candidates and betas, are handled
differently from typical PRs. Most importantly, never use
the Github UI to merge a release.
1. There are two possible conditions that the `develop` branch will
be in when preparing a release.
1. Ready or almost ready to go: There may be one or two PRs that
need to be merged, but otherwise, the only change needed is to
update the version number in `BuildInfo.cpp`. In this case,
merge those PRs as appropriate, updating the second one, and
waiting for CI to finish in between. Then update
`BuildInfo.cpp`.
2. Several pending PRs: In this case, do not use the Github UI,
because the delays waiting for CI in between each merge will be
unnecessarily onerous. Instead, create a working branch (e.g.
`develop-next`) based off of `develop`. Squash the changes
from each PR onto the branch, one commit each (unless
more are needed), being sure to sign each commit and update
the commit message to include the PR number. You may be able
to use a fast-forward merge for the first PR. The workflow may
look something like:
```
git fetch upstream
git checkout upstream/develop
git checkout -b develop-next
# Use -S on the ff-only merge if prbranch1 isn't signed.
# Or do another branch first.
git merge --ff-only user1/prbranch1
git merge --squash user2/prbranch2
git commit -S
git merge --squash user3/prbranch3
git commit -S
[...]
git push --set-upstream origin develop-next
```
2. Create the Pull Request with `release` as the base branch. If any
of the included PRs are still open,
[use closing keywords](https://docs.github.com/articles/closing-issues-using-keywords)
in the description to ensure they are closed when the code is
released. e.g. "Closes #1234"
3. Instead of the default template, reuse and update the message from
the previous release. Include the following verbiage somewhere in
the description:
```
The base branch is release. All releases (including betas) go in
release. This PR will be merged with --ff-only (not squashed or
rebased, and not using the GitHub UI) to both release and develop.
```
4. Sign-offs for the three platforms usually occur offline, but at
least one approval will be needed on the PR.
5. Once everything is ready to go, open a terminal, and do the
fast-forward merges manually. Do not push any branches until you
verify that all of them update correctly.
```
git fetch upstream
git checkout -b upstream--develop -t upstream/develop || git checkout upstream--develop
git reset --hard upstream/develop
# develop-next must be signed already!
git merge --ff-only origin/develop-next
git checkout -b upstream--release -t upstream/release || git checkout upstream--release
git reset --hard upstream/release
git merge --ff-only origin/develop-next
# Only do these 3 steps if pushing a release. No betas or RCs
git checkout -b upstream--master -t upstream/master || git checkout upstream--master
git reset --hard upstream/master
git merge --ff-only origin/develop-next
# Check that all of the branches are updated
git log -1 --oneline
# The output should look like:
# 02ec8b7962 (HEAD -> upstream--master, origin/develop-next, upstream--release, upstream--develop, develop-next) Set version to 2.2.0-rc1
# Note that all of the upstream--develop/release/master are on this commit.
# (Master will be missing for betas, etc.)
# Just to be safe, do a dry run first:
git push --dry-run upstream-push HEAD:develop
git push --dry-run upstream-push HEAD:release
# git push --dry-run upstream-push HEAD:master
# Now push
git push upstream-push HEAD:develop
git push upstream-push HEAD:release
# git push upstream-push HEAD:master
# Don't forget to tag the release, too.
git tag <version number>
git push upstream-push <version number>
```
6. Finally
[create a new release on Github](https://github.com/XRPLF/rippled/releases).
# Major Changes
If your code change is a major feature, a breaking change or in some other way makes a significant alteration to the way the XRPL will operate, then you must first write an XLS document (XRP Ledger Standard) describing your change.
To do this:
1. Go to [XLS Standards](https://github.com/XRPLF/XRPL-Standards/discussions).
2. Choose the next available standard number.
3. Open a discussion with the appropriate title to propose your draft standard.
4. Link your XLS in your PR.
# Style guide
This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent rather than a set of _thou shalt not_ commandments.
This is a non-exhaustive list of recommended style guidelines. These are
not always strictly enforced and serve as a way to keep the codebase
coherent rather than a set of _thou shalt not_ commandments.
## Formatting
All code must conform to `clang-format` version 10, unless the result would be unreasonably difficult to read or maintain.
To change your code to conform use `clang-format -i <your changed files>`.
All code must conform to `clang-format` version 10,
according to the settings in [`.clang-format`](./.clang-format),
unless the result would be unreasonably difficult to read or maintain.
To demarcate lines that should be left as-is, surround them with comments like
this:
```
// clang-format off
...
// clang-format on
```
You can format individual files in place by running `clang-format -i <file>...`
from any directory within this project.
There is a Continuous Integration job that runs clang-format on pull requests. If the code doesn't comply, a patch file that corrects auto-fixable formatting issues is generated.
To download the patch file:
1. Next to `clang-format / check (pull_request) Failing after #s` -> click **Details** to open the details page.
2. Left menu -> click **Summary**
3. Scroll down to near the bottom-right under `Artifacts` -> click **clang-format.patch**
4. Download the zip file and extract it to your local git repository. Run `git apply [patch-file-name]`.
5. Commit and push.
You can install a pre-commit hook to automatically run `clang-format` before every commit:
```
pip3 install pre-commit
pre-commit install
```
## Contracts and instrumentation
We are using [Antithesis](https://antithesis.com/) for continuous fuzzing,
and keep a copy of [Antithesis C++ SDK](https://github.com/antithesishq/antithesis-sdk-cpp/)
in `external/antithesis-sdk`. One of the aims of fuzzing is to identify bugs
by finding external conditions which cause contract violations inside `rippled`.
The contracts are expressed as `XRPL_ASSERT` or `UNREACHABLE` (defined in
`include/xrpl/beast/utility/instrumentation.h`), which are effectively (outside
of Antithesis) wrappers for `assert(...)` with an added name. The purpose of the
name is to give contracts a stable identity that does not rely on line numbers.
When `rippled` is built with the Antithesis instrumentation enabled
(using the `voidstar` CMake option) and run on the Antithesis platform, the
contracts become
[test properties](https://antithesis.com/docs/using_antithesis/properties.html);
otherwise they are just like a regular `assert`.
To learn more about Antithesis, see
[How Antithesis Works](https://antithesis.com/docs/introduction/how_antithesis_works.html)
and [C++ SDK](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html#)
We continue to use the old style `assert` or `assert(false)` in certain
locations, where the reporting of contract violations on the Antithesis
platform is either not possible or not useful.
For this reason:
* The locations where `assert` or `assert(false)` contracts should continue to be used:
* `constexpr` functions
* unit tests i.e. files under `src/test`
* unit tests-related modules (files under `beast/test` and `beast/unit_test`)
* Outside of the listed locations, do not use `assert`; use `XRPL_ASSERT` instead,
giving it a unique name with a short description of the contract (see the
sketch after this list).
* Outside of the listed locations, do not use `assert(false)`; use
`UNREACHABLE` instead, giving it a unique name with a description of the
condition being violated.
* The contract name should start with the full name (including scope) of the
function, optionally a named lambda, followed by a colon ` : ` and a brief
(typically at most five words) description. `UNREACHABLE` contracts
can use slightly longer descriptions. If there are multiple overloads of the
function, use common sense to balance both brevity and unambiguity of the
function name. NOTE: the purpose of the name is to provide a stable means of
uniquely identifying every contract; for this reason, try to avoid elements
which can change in obvious refactors or when the condition is reinforced.
* The contract description (except for `UNREACHABLE`) should typically describe
the _expected_ condition, as in "I assert that _expected_ is true".
* The contract description for `UNREACHABLE` should describe the _unexpected_
situation which caused the line to be reached.
* Example good name for an
`UNREACHABLE` macro `"Json::operator==(Value, Value) : invalid type"`; example
good name for an `XRPL_ASSERT` macro `"Json::Value::asCString : valid type"`.
* Example **bad** name
`"RFC1751::insert(char* s, int x, int start, int length) : length is greater than or equal zero"`
(missing namespace, unnecessary full function signature, description too verbose).
Good name: `"ripple::RFC1751::insert : minimum length"`.
* In a **few** well-justified cases a non-standard name can be used, in which case
a comment should be placed to explain the rationale (example in `contract.cpp`).
* Do **not** rename a contract without a good reason (e.g. the name no longer
reflects the location or the condition being checked).
* Do not use `std::unreachable`.
* Do not put contracts where they can be violated by an external condition
(e.g. timing, data payload before mandatory validation, etc.), as this creates
bogus bug reports (and causes crashes of Debug builds).
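A minimal sketch of how these macros are typically called, using placeholder class and member names and the include path given above (not code taken from this repository):
```
#include <xrpl/beast/utility/instrumentation.h>

namespace ripple {

// Hedged sketch; MyClass and its members exist only to show the call
// sites and the naming convention described above.
class MyClass
{
public:
    enum class Kind { payment, offer };

    void
    setAmount(int amount)
    {
        XRPL_ASSERT(amount >= 0, "ripple::MyClass::setAmount : non-negative amount");
        // ...
    }

    void
    dispatch(Kind k)
    {
        switch (k)
        {
            case Kind::payment:
            case Kind::offer:
                break;
            default:
                // Only reached if an unexpected enum value slips through.
                UNREACHABLE("ripple::MyClass::dispatch : invalid kind");
        }
    }
};

}  // namespace ripple
```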
## Unit Tests
To execute all unit tests:
```
rippled --unittest --unittest-jobs=<number of cores>
```
(Note: Using multiple cores on a Mac M1 can cause spurious test failures. The
cause is still under investigation. If you observe this problem, try specifying fewer jobs.)
To run a specific set of test suites:
```
rippled --unittest TestSuiteName
```
Note: In this example, all tests whose names begin with `TestSuiteName` will be
run, so if `TestSuiteName1` and `TestSuiteName2` both exist, both will run.
However, if the given name exactly matches a test suite, partial matching stops:
if a suite titled `TestSuiteName` exists, only that suite will be executed.
## Avoid
1. Proliferation of nearly identical code.
2. Proliferation of new files and classes.
3. Complex inheritance and complex OOP patterns.
4. Unmanaged memory allocation and raw pointers.
5. Macros and non-trivial templates (unless they add significant value.)
6. Lambda patterns (unless these add significant value.)
7. CPU or architecture-specific code unless there is a good reason to include it, and where it is used guard it with macros and provide explanatory comments.
5. Macros and non-trivial templates (unless they add significant value).
6. Lambda patterns (unless these add significant value).
7. CPU or architecture-specific code unless there is a good reason to
include it, and where it is used, guard it with macros and provide
explanatory comments.
8. Importing new libraries unless there is a very good reason to do so.
## Seek to
9. Extend functionality of existing code rather than creating new code.
10. Prefer readability over terseness where important logic is concerned.
11. Inline functions that are not used or are not likely to be used elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables, structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer would need to understand what your code does.
10. Prefer readability over terseness where important logic is
concerned.
11. Inline functions that are not used or are not likely to be used
elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables,
structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for
function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer
would need to understand what your code does.
# Maintainers
Maintainers are ecosystem participants with elevated access to the repository. They are able to push new code, make decisions on when a release should be made, etc.
## Code Review
New contributors' PRs must be reviewed by at least two of the maintainers. PRs from well-established prior contributors can be reviewed by a single maintainer.
Maintainers are ecosystem participants with elevated access to the repository.
They are able to push new code, make decisions on when a release should be
made, etc.
## Adding and Removing
New maintainers can be proposed by two existing maintainers, subject to a vote by a quorum of the existing maintainers. A minimum of 50% support and a 50% participation is required. In the event of a tie vote, the addition of the new maintainer will be rejected.
Existing maintainers can resign, or be subject to a vote for removal at the behest of two existing maintainers. A minimum of 60% agreement and 50% participation are required. The XRP Ledger Foundation will have the ability, for cause, to remove an existing maintainer without a vote.
## Adding and removing
## Existing Maintainers
* [JoelKatz](https://github.com/JoelKatz) (Ripple)
* [Manojsdoshi](https://github.com/manojsdoshi) (Ripple)
* [N3tc4t](https://github.com/n3tc4t) (XRPL Labs)
* [Nikolaos D Bougalis](https://github.com/nbougalis)
* [Nixer89](https://github.com/nixer89) (XRP Ledger Foundation)
* [RichardAH](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Seelabs](https://github.com/seelabs) (Ripple)
* [Silkjaer](https://github.com/Silkjaer) (XRP Ledger Foundation)
* [WietseWind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
* [Ximinez](https://github.com/ximinez) (Ripple)
New maintainers can be proposed by two existing maintainers, subject to a vote
by a quorum of the existing maintainers.
A minimum of 50% support and a 50% participation is required.
In the event of a tie vote, the addition of the new maintainer will be
rejected.
Existing maintainers can resign, or be subject to a vote for removal at the
behest of two existing maintainers.
A minimum of 60% agreement and 50% participation are required.
The XRP Ledger Foundation will have the ability, for cause, to remove an
existing maintainer without a vote.
## Current Maintainers
* [Richard Holland](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Denis Angell](https://github.com/dangell7) (XRPL Labs + XRP Ledger Foundation)
* [Wietse Wind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
[1]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects
[2]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits

View File

@@ -2,6 +2,7 @@ ISC License
Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2020, the XRP Ledger developers.
Copyright (c) 2020-2024, XRPL Labs.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above

View File

@@ -1,3 +1,71 @@
# The Xahau Ledger
# Xahau
TODO: Doco
**Note:** Throughout this README, references to "we" or "our" pertain to the community and contributors involved in the Xahau network. It does not imply a legal entity or a specific collection of individuals.
[Xahau](https://xahau.network/) is a decentralized cryptographic ledger that builds upon the robust foundation of the XRP Ledger. It inherits the XRP Ledger's Byzantine Fault Tolerant consensus algorithm and enhances it with additional features and functionalities. Developers and users familiar with the XRP Ledger will find that most documentation and tutorials available on [xrpl.org](https://xrpl.org) are relevant and applicable to Xahau, including those related to running validators and managing validator keys. For Xahau-specific documentation, visit our [documentation](https://docs.xahau.network/).
## XAH
XAH is the public, counterparty-free asset native to Xahau and functions primarily as network gas. Transactions submitted to the Xahau network must supply an appropriate amount of XAH, to be burnt by the network as a fee, in order to be successfully included in a validated ledger. In addition, XAH also acts as a bridge currency within the Xahau DEX. XAH is traded on the open-market and is available for anyone to access. Xahau was created in 2023 with a supply of 600 million units of XAH.
## xahaud
The server software that powers Xahau is called `xahaud` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `xahaud` server software is written primarily in C++ and runs on a variety of platforms. The `xahaud` server software can run in several modes depending on its configuration.
### Build from Source
* [Read the build instructions in our documentation](https://docs.xahau.network/infrastructure/building-xahau)
* If you encounter any issues, please [open an issue](https://github.com/xahau/xahaud/issues)
## Highlights of Xahau
1. **Hooks**: Hooks are small, efficient WebAssembly modules designed specifically for Xahau. They add robust smart contract functionality to Xahau, allowing you to construct and deploy applications with bespoke functionality. Hooks can block or allow transactions to and from the account, change and keep track of the hook's internal state and logic, and autonomously initiate new transactions on the account's behalf. They can be written in any language that can be compiled into WebAssembly. (A minimal example appears after this list.)
2. **Balance Rewards**: Xahau offers a Balance Rewards feature that provides a 4% per annum reward. This feature encourages users to maintain a balance in their accounts and rewards them for doing so.
3. **URIToken**: The URIToken is a feature in Xahau that allows for the creation and management of non-fungible tokens within the network. This feature can be used for a variety of purposes and specific use cases.
4. **Import/B2M**: The Import/B2M feature in Xahau allows for the importation of assets into the network. This feature can be used to bring external assets into the Xahau network, expanding the range of assets that can be managed and traded within the network.
5. **Governance Game**: The Governance Game is a feature in Xahau that allows for the decentralized governance of the network. This feature allows users to participate in the decision-making process of the network, ensuring that the network remains democratic and responsive to the needs of its users.
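To give a flavour of the first highlight, here is a hedged sketch of a minimal hook, modelled on the canonical "accept everything" example from the Hooks documentation (header and helper names may vary between SDK versions):
```
#include "hookapi.h"

// Minimal hook: accept every transaction involving the account.
int64_t hook(uint32_t reserved)
{
    TRACESTR("minimal_hook: called");
    accept(SBUF("minimal_hook: accepted"), 0);
    _g(1, 1);  // every hook must invoke the guard function at least once
    // unreachable: accept() terminates hook execution
    return 0;
}
```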
## Binary Releases and Versioning System
Xahau provides pre-compiled binary releases of its software, which are ready-to-run versions that users can download and execute without compiling the source code themselves. These binaries are built automatically using GitHub Actions whenever a new commit is pushed or a pull request is merged.
The versioning system for Xahau binaries is based on the date of the build, the branch name, and a build number, following the format `YYYY.MM.DD-branch+buildnumber`. For example, `2023.10.30-release+443` indicates a binary built on October 30, 2023, from the `release` branch, and it is the 443rd build from that branch.
Users can access these binaries on the [Xahau Build Server](https://build.xahau.tech/), which provides an organized list of releases along with release notes for each version. This system simplifies the deployment process for users and ensures they can easily identify and download the appropriate version for their needs.
## Source Code
Here are some good places to start learning the source code:
- Read the markdown files in the source tree: `src/ripple/**/*.md`.
- Read [the levelization document](./Builds/levelization) to get an idea of the internal dependency graph.
- In the big picture, the `main` function constructs an `ApplicationImp` object, which implements the `Application` virtual interface. Almost every component in the application takes an `Application&` parameter in its constructor, typically named `app` and stored as a member variable `app_`. This allows most components to depend on any other component.
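A rough sketch of that wiring pattern (the class and member names below are illustrative, not lifted from the codebase):
```
// Hedged sketch of the constructor-injection pattern described above.
class MyComponent
{
public:
    explicit MyComponent(Application& app) : app_(app)
    {
    }

    void
    doWork()
    {
        // Reach any other subsystem through the shared Application
        // reference, e.g. configuration, ledger state, or logging.
        // (Exact accessor names vary; see Application.h.)
    }

private:
    Application& app_;
};
```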
### Repository Contents
| Folder | Contents |
|:-----------|:-------------------------------------------------|
| `./Builds` | Platform-specific guides for building `xahaud`. |
| `./cfg` | Example configuration files. |
| `./src` | Source code. |
Some of the directories under `src` are external repositories included using
git-subtree. See those directories' README files for more details.
## Resources
- **Documentation**: Documentation for XRPL, Xahau and Hooks.
- [Xrpl Documentation](https://xrpl.org)
- [Xahau Documentation](https://docs.xahau.network/)
- [Hooks Technical Documentation](https://xrpl-hooks.readme.io/)
- **Explorers**: Explore the Xahau ledger using various explorers:
- [xahauexplorer.com](https://xahauexplorer.com)
- [xahscan.com](https://xahscan.com)
- [xahau.xrpl.org](https://xahau.xrpl.org)
- [explorer.xahau.network](https://explorer.xahau.network)
- **Testnet & Faucet**: Test applications and obtain test XAH at [xahau-test.net](https://xahau-test.net) and use the testnet explorer at [explorer.xahau.network](https://explorer.xahau.network).
- **Supporting Wallets**: A list of wallets that support XAH and Xahau-based assets.
- [Xaman](https://xaman.app)
- [Crossmark](https://crossmark.io)

74
RELEASENOTES.XAHAUD.md Normal file
View File

@@ -0,0 +1,74 @@
# Release Notes
This document contains the release notes for `xahaud`, the reference server implementation of the Xahau protocol. To learn more about how to build, run or update a `xahaud` server, visit https://docs.xahau.network/infrastructure/peering/connect-to-xahau-mainnet
Have new ideas? Need help with setting up your node? [Please open an issue here](https://github.com/xahau/xahaud/issues/new/choose).
# Introducing Xahau version 2023.10.30-release+443
Version 2023.10.30-release+443 of `xahaud`, the reference server implementation of the Xahau protocol, is now available at [Build Server](https://build.xahau.tech/).
[Download Release Binary](https://build.xahau.tech/2023.10.30-release%2B443)
[Sign Up for Future Release Announcements](https://groups.google.com/g/xahau-server)
<!-- BREAK -->
## Action Required
New amendments are now open for voting according to Xahau's [amendment process](https://docs.xahau.network/features/amendments), which enables protocol changes following five days of >80% support from trusted validators.
If you operate a Xahau server, upgrade to version 2023.10.30-release+443 by October 31 to ensure service continuity. The exact time that protocol changes take effect depends on the voting decisions of the decentralized network.
## Install / Upgrade
On supported platforms, see the [instructions on installing or updating `xahaud`](https://docs.xahau.network/infrastructure/peering/connect-to-xahau-mainnet).
## New Amendments
- **`Hooks`**: This amendment activates hooks and the hook API in the Xahau network, allowing custom logic to be executed on the ledger in response to transactions.
- **`BalanceRewards`**: This amendment enables `ClaimReward` and `GenesisMint` transactions, facilitating balance rewards to be paid in XAH, the Xahau network's native currency.
- **`PaychanAndEscrowForTokens`**: This amendment allows the use of IOU tokens for PaymentChannels and Escrow transactions, enhancing flexibility and functionality.
- **`URIToken`**: This amendment activates URITokens, which are non-fungible, hook-friendly tokens in the Xahau network.
- **`Import`**: This amendment enables the Import transaction for B2M xpop processing, allowing transactions to be imported from another network or system.
- **`XahauGenesis`**: This amendment activates the genesis amendment for the initial distribution of XAH and the establishment of the governance game.
- **`HooksUpdate1`**: This amendment extends the hooks API to include Xpop functionality, enabling more complex transactions involving Xpop.
## Changelog
### New Features and Improvements
- **Server Definitions**: A new `server_definitions` endpoint returns a JSON object containing the definitions of the types, fields, and transaction results used by the Xahau protocol, giving clients an efficient way to fetch and verify the definitions currently in use on the network.
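For illustration only, the sketch below queries the endpoint over the JSON-RPC interface using libcurl; the local URL and port (5005) are assumptions taken from a typical example node configuration and are not mandated by this release.

```cpp
#include <curl/curl.h>

#include <iostream>
#include <string>

// Append each chunk of the HTTP response body to a std::string.
static size_t
collect(char* data, size_t size, size_t nmemb, void* out)
{
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int
main()
{
    // Assumed local JSON-RPC endpoint; adjust to match your node's .cfg.
    std::string const url = "http://127.0.0.1:5005/";
    std::string const body =
        R"({"method": "server_definitions", "params": [{}]})";
    std::string response;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    if (CURL* curl = curl_easy_init())
    {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
        if (curl_easy_perform(curl) == CURLE_OK)
            std::cout << response << std::endl;  // JSON definitions object
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```

Build with `-lcurl`; the same request body can be sent through any HTTP client or over the WebSocket interface.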
### GitHub
The public source code repository for `xahaud` is hosted on GitHub at <https://github.com/xahau/xahaud>.
We welcome all contributions and invite everyone to join the community of Xahau developers to help build the Internet of Value.
### Credits
The following people contributed directly to this release:
- Nikolaos D. Bougalis <nikb@bougalis.net>
- Wietse Wind <wietse@xrpl-labs.com>
- Richard Holland <richard@xrpl-labs.com>
- Denis Angell <denis@xrpl-labs.com>
Bug Bounties and Responsible Disclosures:
We welcome reviews of the `xahaud` code and urge researchers to responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xahau.network

File diff suppressed because it is too large.


@@ -1,149 +1,73 @@
### Operating an XRP Ledger server securely
### Operating the Xahau server securely
For more details on operating an XRP Ledger server securely, please visit https://xrpl.org/manage-the-rippled-server.html.
For more details on operating the Xahau server securely, please visit https://docs.xahau.network/infrastructure/building-xahau.
# Security Policy
## Supported Versions
Software constantly evolves. In order to focus resources, we only generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **master**, **release** or **develop** branches, and with proposed, [open pull requests](https://github.com/ripple/rippled/pulls).
Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **release**, **candidate** or **dev** branches, and with proposed, [open pull requests](https://github.com/xahau/xahaud/pulls).
## Identifying and Reporting Vulnerabilities
# Responsible Disclosure
We take security seriously and we do our best to ensure that all our releases are bug free. But we aren't perfect and sometimes things will slip through.
## Responsible disclosure policy
### Responsible Investigation
At [Xahau](https://xahau.network) we believe that the security of our systems is extremely important.
We urge you to examine our code carefully and responsibly, and to disclose any issues that you identify in a responsible fashion.
Despite our concern for the security of our systems during product development and maintenance, there's always the possibility of someone finding something we need to improve / update / change / fix / ...
Responsible investigation includes, but isn't limited to, the following:
We appreciate you notifying us as soon as possible if you find a weak point in one of our systems, so that we can take measures immediately to protect our customers and their data.
- Not performing tests on the main network. If testing is necessary, use the [Testnet or Devnet](https://xrpl.org/xrp-testnet-faucet.html).
- Not targeting physical security measures, or attempting to use social engineering, spam, distributed denial of service (DDOS) attacks, etc.
- Investigating bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to the XRP Ledger and the broader ecosystem.
## How to report
### Responsible Disclosure
If you believe you found a security issue in one of our systems, please notify us as soon as possible by [sending an email to bugs@xahau.network](mailto:bugs@xahau.network).
If you discover a vulnerability or potential threat, or if you _think_
you have, please reach out by dropping an email using the contact
information below.
## Rules
Your report should include the following:
This responsible disclosure policy is not an open invitation to actively scan our network and applications for vulnerabilities. Our continuous monitoring will likely detect your scan, and any such activity will be investigated.
- Your contact information (typically, an email address);
- The description of the vulnerability;
- The attack scenario (if any);
- The steps to reproduce the vulnerability;
- Any other relevant details or artifacts, including code, scripts or patches.
### We ask you to:
In your mail, please describe the issue or the potential threat; if possible, please include a "repro" (code that can reproduce the issue) or describe the best way to reproduce the issue. Please make your report as extensive as possible.
- Not share information about the security issue with others until the problem is resolved and to immediately delete any confidential data acquired
- Not further abuse the problem, for example, by downloading more data than is necessary in order to demonstrate the leak or to view, delete or amend the data of third parties
- Provide detailed information in order for us to reproduce, validate and resolve the problem as quickly as possible. Include your test data, timestamps and URL(s) of the system(s) involved
- Leave your contact details (e-mail address and/or phone number) so that we may contact you about the progress of the solution. We do accept anonymous reports
- Do not use attacks on physical security, social engineering, distributed denial of service, spam or applications of third parties
For more information on responsible disclosure, please read this [Wikipedia article](https://en.wikipedia.org/wiki/Responsible_disclosure).
## Responsible Disclosure procedure(s)
## Report Handling Process
### When you report a security issue, we will act according to the following:
Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic precommitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
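As a rough sketch of one way to produce such a precommitment (this assumes OpenSSL's EVP API; any widely available SHA-256 tool gives an equivalent result):

```cpp
#include <openssl/evp.h>

#include <cstdio>
#include <string>

// Hex-encoded SHA-256 of the report text, suitable for publishing as a
// precommitment before the full report is disclosed.
static std::string
sha256_hex(std::string const& text)
{
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;

    EVP_MD_CTX* ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);
    EVP_DigestUpdate(ctx, text.data(), text.size());
    EVP_DigestFinal_ex(ctx, digest, &len);
    EVP_MD_CTX_free(ctx);

    std::string hex;
    char byte[3];
    for (unsigned int i = 0; i < len; ++i)
    {
        std::snprintf(byte, sizeof(byte), "%02x", static_cast<unsigned>(digest[i]));
        hex += byte;
    }
    return hex;
}

int
main()
{
    // Placeholder text; hash the exact bytes you intend to disclose later.
    std::string const report = "Full text of the vulnerability report.";
    std::printf("precommitment: %s\n", sha256_hex(report).c_str());
    return 0;
}
```

Publishing only the resulting hex digest reveals nothing about the report itself, but lets you later prove you held the exact text at that time (build with `-lcrypto`).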
- You will receive a confirmation of receipt from us within 4 working days after the report was made
- You will receive a response with the assessment of the security issue and an expected date of resolution within 4 working days after the confirmation of receipt was sent
- We will take no legal steps against you in relation to the report if you have kept to the conditions as set out above
- We will handle your report confidentially and we will not share your details with third parties without your permission, unless that is necessary in order to fulfil a legal obligation
Once we receive a report, we:
### This responsible disclosure scheme is not intended for:
1. Assign two people to independently evaluate the report;
2. Consider their recommendations;
3. If action is necessary, formulate a plan to address the issue;
4. Communicate privately with the reporter to explain our plan;
5. Prepare, test and release a version which fixes the issue; and
6. Announce the vulnerability publicly.
- Complaints
- Website unavailable reports
- Phishing reports
- Fraud reports
We will triage and respond to your disclosure within 24 hours. Beyond that, we will work to analyze the issue in more detail, formulate, develop and test a fix.
For these complaints or reports, please [contact our support team](mailto:bugs@xahau.network).
While we commit to responding within 24 hours of your initial report with our triage assessment, we cannot guarantee a response time for the remaining steps. We will communicate with you throughout this process, letting you know where we are and keeping you updated on the timeframe.
## Bug bounty program
## Bug Bounty Program
[Xahau](https://xahau.network) encourages the reporting of security issues or vulnerabilities. We may offer an appropriate reward for confidential disclosure of any design or implementation issue, not yet known to us, that could be used to compromise the confidentiality or integrity of our users' data. We decide whether a report is eligible and the amount of the reward.
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/ripple/rippled) (and other related projects, like [`ripple-lib`](https://github.com/ripple/ripple-lib)).
## Exclusions
This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:
### The following type of security problems are excluded
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled` and `ripple-lib`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy or the operation of the XRP Ledger.
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `xahaud` and `xahau-lib`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy or the operation of the Xahau Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other peoples software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromises the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other people's software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the Xahau Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit, will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please don't hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for the full bounty, you must have been the first to report each of the partial exploits.
Please note: reports that lack any proof (such as screenshots or other data), detailed information, or details on how to reproduce any unexpected result will be investigated but will not be eligible for any reward.
### Contacting Us
To report a qualifying bug, please send a detailed report to:
|Email Address|bugs@ripple.com |
|:-----------:|:----------------------------------------------------|
|Short Key ID | `0xC57929BE` |
|Long Key ID | `0xCD49A0AFC57929BE` |
|Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keys.gnupg.net](https://keys.gnupg.net)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
-----END PGP PUBLIC KEY BLOCK-----
```
This policy is based on the National Cyber Security Centre's Responsible Disclosure Guidelines and an [example by Floor Terra](https://responsibledisclosure.nl).


@@ -1,470 +0,0 @@
#!/usr/bin/node
//
// ledger?l=L
// transaction?h=H
// ledger_entry?l=L&h=H
// account?l=L&a=A
// directory?l=L&dir_root=H&i=I
// directory?l=L&o=A&i=I // owner directory
// offer?l=L&offer=H
// offer?l=L&account=A&i=I
// ripple_state=l=L&a=A&b=A&c=C
// account_lines?l=L&a=A
//
// A=address
// C=currency 3 letter code
// H=hash
// I=index
// L=current | closed | validated | index | hash
//
var async = require("async");
var extend = require("extend");
var http = require("http");
var url = require("url");
var Remote = require("ripple-lib").Remote;
var program = process.argv[1];
var httpd_response = function (res, opts) {
var self=this;
res.statusCode = opts.statusCode;
res.end(
"<HTML>"
+ "<HEAD><TITLE>Title</TITLE></HEAD>"
+ "<BODY BACKGROUND=\"#FFFFFF\">"
+ "State:" + self.state
+ "<UL>"
+ "<LI><A HREF=\"/\">home</A>"
+ "<LI>" + html_link('r4EM4gBQfr1QgQLXSPF4r7h84qE9mb6iCC')
// + "<LI><A HREF=\""+test+"\">rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh</A>"
+ "<LI><A HREF=\"/ledger\">ledger</A>"
+ "</UL>"
+ (opts.body || '')
+ '<HR><PRE>'
+ (opts.url || '')
+ '</PRE>'
+ "</BODY>"
+ "</HTML>"
);
};
var html_link = function (generic) {
return '<A HREF="' + build_uri({ type: 'account', account: generic}) + '">' + generic + '</A>';
};
// Build a link to a type.
var build_uri = function (params, opts) {
var c;
if (params.type === 'account') {
c = {
pathname: 'account',
query: {
l: params.ledger,
a: params.account,
},
};
} else if (params.type === 'ledger') {
c = {
pathname: 'ledger',
query: {
l: params.ledger,
},
};
} else if (params.type === 'transaction') {
c = {
pathname: 'transaction',
query: {
h: params.hash,
},
};
} else {
c = {};
}
opts = opts || {};
c.protocol = "http";
c.hostname = opts.hostname || self.base.hostname;
c.port = opts.port || self.base.port;
return url.format(c);
};
var build_link = function (item, link) {
console.log(link);
return "<A HREF=" + link + ">" + item + "</A>";
};
var rewrite_field = function (type, obj, field, opts) {
if (field in obj) {
obj[field] = rewrite_type(type, obj[field], opts);
}
};
var rewrite_type = function (type, obj, opts) {
if ('amount' === type) {
if ('string' === typeof obj) {
// XRP.
return '<B>' + obj + '</B>';
} else {
rewrite_field('address', obj, 'issuer', opts);
return obj;
}
return build_link(
obj,
build_uri({
type: 'account',
account: obj
}, opts)
);
}
if ('address' === type) {
return build_link(
obj,
build_uri({
type: 'account',
account: obj
}, opts)
);
}
else if ('ledger' === type) {
return build_link(
obj,
build_uri({
type: 'ledger',
ledger: obj,
}, opts)
);
}
else if ('node' === type) {
// A node
if ('PreviousTxnID' in obj)
obj.PreviousTxnID = rewrite_type('transaction', obj.PreviousTxnID, opts);
if ('Offer' === obj.LedgerEntryType) {
if ('NewFields' in obj) {
if ('TakerGets' in obj.NewFields)
obj.NewFields.TakerGets = rewrite_type('amount', obj.NewFields.TakerGets, opts);
if ('TakerPays' in obj.NewFields)
obj.NewFields.TakerPays = rewrite_type('amount', obj.NewFields.TakerPays, opts);
}
}
obj.LedgerEntryType = '<B>' + obj.LedgerEntryType + '</B>';
return obj;
}
else if ('transaction' === type) {
// Reference to a transaction.
return build_link(
obj,
build_uri({
type: 'transaction',
hash: obj
}, opts)
);
}
return 'ERROR: ' + type;
};
var rewrite_object = function (obj, opts) {
var out = extend({}, obj);
rewrite_field('address', out, 'Account', opts);
rewrite_field('ledger', out, 'parent_hash', opts);
rewrite_field('ledger', out, 'ledger_index', opts);
rewrite_field('ledger', out, 'ledger_current_index', opts);
rewrite_field('ledger', out, 'ledger_hash', opts);
if ('ledger' in obj) {
// It's a ledger header.
out.ledger = rewrite_object(out.ledger, opts);
if ('ledger_hash' in out.ledger)
out.ledger.ledger_hash = '<B>' + out.ledger.ledger_hash + '</B>';
delete out.ledger.hash;
delete out.ledger.totalCoins;
}
if ('TransactionType' in obj) {
// It's a transaction.
out.TransactionType = '<B>' + obj.TransactionType + '</B>';
rewrite_field('amount', out, 'TakerGets', opts);
rewrite_field('amount', out, 'TakerPays', opts);
rewrite_field('ledger', out, 'inLedger', opts);
out.meta.AffectedNodes = out.meta.AffectedNodes.map(function (node) {
var kind = 'CreatedNode' in node
? 'CreatedNode'
: 'ModifiedNode' in node
? 'ModifiedNode'
: 'DeletedNode' in node
? 'DeletedNode'
: undefined;
if (kind) {
node[kind] = rewrite_type('node', node[kind], opts);
}
return node;
});
}
else if ('node' in obj && 'LedgerEntryType' in obj.node) {
// It's a ledger entry.
if (obj.node.LedgerEntryType === 'AccountRoot') {
rewrite_field('address', out.node, 'Account', opts);
rewrite_field('transaction', out.node, 'PreviousTxnID', opts);
rewrite_field('ledger', out.node, 'PreviousTxnLgrSeq', opts);
}
out.node.LedgerEntryType = '<B>' + out.node.LedgerEntryType + '</B>';
}
return out;
};
var augment_object = function (obj, opts, done) {
if (obj.node.LedgerEntryType == 'AccountRoot') {
var tx_hash = obj.node.PreviousTxnID;
var tx_ledger = obj.node.PreviousTxnLgrSeq;
obj.history = [];
async.whilst(
function () { return tx_hash; },
function (callback) {
// console.log("augment_object: request: %s %s", tx_hash, tx_ledger);
opts.remote.request_tx(tx_hash)
.on('success', function (m) {
tx_hash = undefined;
tx_ledger = undefined;
//console.log("augment_object: ", JSON.stringify(m));
m.meta.AffectedNodes.filter(function(n) {
// console.log("augment_object: ", JSON.stringify(n));
// if (n.ModifiedNode)
// console.log("augment_object: %s %s %s %s %s %s/%s", 'ModifiedNode' in n, n.ModifiedNode && (n.ModifiedNode.LedgerEntryType === 'AccountRoot'), n.ModifiedNode && n.ModifiedNode.FinalFields && (n.ModifiedNode.FinalFields.Account === obj.node.Account), Object.keys(n)[0], n.ModifiedNode && (n.ModifiedNode.LedgerEntryType), obj.node.Account, n.ModifiedNode && n.ModifiedNode.FinalFields && n.ModifiedNode.FinalFields.Account);
// if ('ModifiedNode' in n && n.ModifiedNode.LedgerEntryType === 'AccountRoot')
// {
// console.log("***: ", JSON.stringify(m));
// console.log("***: ", JSON.stringify(n));
// }
return 'ModifiedNode' in n
&& n.ModifiedNode.LedgerEntryType === 'AccountRoot'
&& n.ModifiedNode.FinalFields
&& n.ModifiedNode.FinalFields.Account === obj.node.Account;
})
.forEach(function (n) {
tx_hash = n.ModifiedNode.PreviousTxnID;
tx_ledger = n.ModifiedNode.PreviousTxnLgrSeq;
obj.history.push({
tx_hash: tx_hash,
tx_ledger: tx_ledger
});
console.log("augment_object: next: %s %s", tx_hash, tx_ledger);
});
callback();
})
.on('error', function (m) {
callback(m);
})
.request();
},
function (err) {
if (err) {
done();
}
else {
async.forEach(obj.history, function (o, callback) {
opts.remote.request_account_info(obj.node.Account)
.ledger_index(o.tx_ledger)
.on('success', function (m) {
//console.log("augment_object: ", JSON.stringify(m));
o.Balance = m.account_data.Balance;
// o.account_data = m.account_data;
callback();
})
.on('error', function (m) {
o.error = m;
callback();
})
.request();
},
function (err) {
done(err);
});
}
});
}
else {
done();
}
};
if (process.argv.length < 4 || process.argv.length > 7) {
console.log("Usage: %s ws_ip ws_port [<ip> [<port> [<start>]]]", program);
}
else {
var ws_ip = process.argv[2];
var ws_port = process.argv[3];
var ip = process.argv.length > 4 ? process.argv[4] : "127.0.0.1";
var port = process.argv.length > 5 ? process.argv[5] : "8080";
// console.log("START");
var self = this;
var remote = (new Remote({
websocket_ip: ws_ip,
websocket_port: ws_port,
trace: false
}))
.on('state', function (m) {
console.log("STATE: %s", m);
self.state = m;
})
// .once('ledger_closed', callback)
.connect()
;
self.base = {
hostname: ip,
port: port,
remote: remote,
};
// console.log("SERVE");
var server = http.createServer(function (req, res) {
var input = "";
req.setEncoding();
req.on('data', function (buffer) {
// console.log("DATA: %s", buffer);
input = input + buffer;
});
req.on('end', function () {
// console.log("URL: %s", req.url);
// console.log("HEADERS: %s", JSON.stringify(req.headers, undefined, 2));
var _parsed = url.parse(req.url, true);
var _url = JSON.stringify(_parsed, undefined, 2);
// console.log("HEADERS: %s", JSON.stringify(_parsed, undefined, 2));
if (_parsed.pathname === "/account") {
var request = remote
.request_ledger_entry('account_root')
.ledger_index(-1)
.account_root(_parsed.query.a)
.on('success', function (m) {
// console.log("account_root: %s", JSON.stringify(m, undefined, 2));
augment_object(m, self.base, function() {
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+ "</PRE>"
});
});
})
.request();
} else if (_parsed.pathname === "/ledger") {
var request = remote
.request_ledger(undefined, { expand: true, transactions: true })
.on('success', function (m) {
// console.log("Ledger: %s", JSON.stringify(m, undefined, 2));
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+"</PRE>"
});
})
if (_parsed.query.l && _parsed.query.l.length === 64) {
request.ledger_hash(_parsed.query.l);
}
else if (_parsed.query.l) {
request.ledger_index(Number(_parsed.query.l));
}
else {
request.ledger_index(-1);
}
request.request();
} else if (_parsed.pathname === "/transaction") {
var request = remote
.request_tx(_parsed.query.h)
// .request_transaction_entry(_parsed.query.h)
// .ledger_select(_parsed.query.l)
.on('success', function (m) {
// console.log("transaction: %s", JSON.stringify(m, undefined, 2));
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+"</PRE>"
});
})
.on('error', function (m) {
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ 'ERROR: ' + JSON.stringify(m, undefined, 2)
+"</PRE>"
});
})
.request();
} else {
var test = build_uri({
type: 'account',
ledger: 'closed',
account: 'rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
}, self.base);
httpd_response(res,
{
statusCode: req.url === "/" ? 200 : 404,
url: _url,
});
}
});
});
server.listen(port, ip, undefined,
function () {
console.log("Listening at: http://%s:%s", ip, port);
});
}
// vim:sw=2:sts=2:ts=8:et


@@ -1,24 +0,0 @@
In this directory are two scripts, `build.sh` and `test.sh` used for building
and testing rippled.
(For now, they assume Bash and Linux. Once I get Windows containers for
testing, I'll try them there, but if Bash is not available, then they will
soon be joined by PowerShell scripts `build.ps` and `test.ps`.)
We don't want these scripts to require arcane invocations that can only be
pieced together from within a CI configuration. We want something that humans
can easily invoke, read, and understand, for when we eventually have to test
and debug them interactively. That means:
(1) They should work with no arguments.
(2) They should document their arguments.
(3) They should expand short arguments into long arguments.
While we want to provide options for common use cases, we don't need to offer
the kitchen sink. We can rightfully expect users with esoteric, complicated
needs to write their own scripts.
To make argument-handling easy for us, the implementers, we can just take all
arguments from environment variables. They have the nice advantage that every
command-line uses named arguments. For the benefit of us and our users, we
document those variables at the top of each script.

Some files were not shown because too many files have changed in this diff.