Compare commits

...

395 Commits

Author SHA1 Message Date
Ed Hennis
838978b869 Set version to 2.3.0-rc2 2024-11-12 18:40:22 -05:00
Mayukha Vadari
8186253707 fix: include index in server_definitions RPC (#5190) 2024-11-12 18:37:15 -05:00
Bronek Kozicki
2316d843d7 Fix ledger_entry crash on invalid credentials request (#5189) 2024-11-12 18:24:52 -05:00
Ed Hennis
9d58f11a60 Set version to 2.3.0-rc1 2024-11-06 17:37:59 -05:00
Shawn Xie
7b18006193 Replace Uint192 with Hash192 in server_definitions response (#5177) 2024-11-06 17:33:16 -05:00
Bronek Kozicki
9e48fc0c83 Fix potential deadlock (#5124)
* 2.2.2 changed the functions acquireAsync and NetworkOPsImp::recvValidation to add an item to a collection under lock, unlock, do some work, then lock again to remove the item. This will deadlock if an exception is thrown while adding the item, before unlocking.
* Replace ScopedUnlock with scope_unlock.
2024-11-06 17:22:42 -05:00
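A minimal sketch of the unlock/relock RAII pattern this commit refers to, assuming a plain std::mutex; `scope_unlock_sketch` is an illustrative stand-in for the actual scope_unlock helper and omits the surrounding restructuring the fix also makes:

```cpp
#include <iostream>
#include <mutex>

// Hypothetical RAII helper: unlocks on construction, relocks on destruction.
// Because relocking happens in a destructor, the mutex is re-acquired even if
// the unlocked "work" throws.
template <class Mutex>
class scope_unlock_sketch
{
    std::unique_lock<Mutex>& lock_;

public:
    explicit scope_unlock_sketch(std::unique_lock<Mutex>& lock) : lock_(lock)
    {
        lock_.unlock();
    }
    ~scope_unlock_sketch()
    {
        lock_.lock();
    }
};

std::mutex mtx;

void processItem()
{
    std::unique_lock<std::mutex> lock(mtx);
    // ... add an item to a collection while holding the lock ...
    {
        scope_unlock_sketch<std::mutex> unlocked(lock);
        // ... do slow work without holding the lock; this may throw ...
    }   // the lock is re-acquired here, whether or not an exception escaped
    // ... remove the item while holding the lock again ...
    std::cout << "done\n";
}

int main()
{
    processItem();
}
```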
Olek
8e827e32ac Introduce Credentials support (XLS-70d): (#5103)
Amendment:
    - Credentials
    
    New Transactions:
    - CredentialCreate
    - CredentialAccept
    - CredentialDelete
    
    Modified Transactions:
    - DepositPreauth
    - Payment
    - EscrowFinish
    - PaymentChannelClaim
    - AccountDelete
    
    New Object:
    - Credential

    Modified Object:
    - DepositPreauth
    
    API updates:
    - ledger_entry
    - account_objects
    - ledger_data
    - deposit_authorized
    
    Read full spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0070d-credentials
2024-11-06 17:05:03 -05:00
Gregory Tsipenyuk
c5c0e70e23 Fix token comparison in Payment (#5172)
* Checks only the Currency or MPT Issuance ID part of the Asset object.
* Resolves temREDUNDANT regression detected in testing.
2024-11-06 11:20:30 -05:00
Gregory Tsipenyuk
ec61f5e9d3 Add fixAMMv1_2 amendment (#5176)
* Add reserve check on AMM Withdraw
* Try AMM max offer if changeSpotPriceQuality() fails
2024-11-05 15:06:16 -05:00
Gregory Tsipenyuk
d57cced17b Fix unity build (#5179) 2024-11-05 10:48:02 -05:00
yinyiqian1
54a350be79 Add AMMClawback Transaction (XLS-0073d) (#5142)
Amendment:
- AMMClawback

New Transactions:
- AMMClawback

Modified Transactions:
- AMMCreate
- AMMDeposit
2024-11-04 15:27:57 -05:00
Alloy Networks
d6dbf0e0a6 Add hubs.xrpkuwait.com to bootstrap (#5169) 2024-10-31 18:14:55 -04:00
Valentin Balaschenko
0d887ad815 docs: Add protobuf dependencies to linux setup instructions (#5156) 2024-10-29 16:26:20 -04:00
yinyiqian1
d4a5f8390e fix: reject invalid markers in account_objects RPC calls (#5046) 2024-10-29 16:13:01 -04:00
Bob Conan
ab5d450d3c Update RELEASENOTES.md (#5154)
fix the typo "concensus" -> "consensus"
2024-10-29 15:43:56 -04:00
Gregory Tsipenyuk
23c37fa506 Introduce MPT support (XLS-33d): (#5143)
Amendment:
- MPTokensV1

New Transactions:
- MPTokenIssuanceCreate
- MPTokenIssuanceDestroy
- MPTokenIssuanceSet
- MPTokenAuthorize

Modified Transactions:
- Payment
- Clawback

New Objects:
- MPTokenIssuance
- MPToken

API updates:
- ledger_entry
- account_objects
- ledger_data

Other:
- Add += and -= operators to ValueProxy

Read full spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0033d-multi-purpose-tokens

---------
Co-authored-by: Shawn Xie <shawnxie920@gmail.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: John Freeman <jfreeman08@gmail.com>
2024-10-29 15:19:28 -04:00
John Freeman
63209c2646 Consolidate definitions of fields, objects, transactions, and features (#5122) 2024-10-16 14:02:29 -05:00
John Freeman
f0dabd1446 Ignore reformat when blaming 2024-10-15 18:28:43 -05:00
John Freeman
552377c76f Reformat code with clang-format-18 2024-10-15 18:27:56 -05:00
John Freeman
e7cd03325b Update pre-commit hook 2024-10-15 18:25:14 -05:00
John Freeman
decb3c178e Update clang-format settings 2024-10-15 18:24:22 -05:00
John Freeman
f6d647d6c3 Update clang-format workflow 2024-10-15 18:22:57 -05:00
Chenna Keshava B S
bf4a7b6ce8 Expand Error Message for rpcInternal (#4959)
Validator operators have been confused by the rpcInternal error, which can occur if the server is not running in another process.
2024-10-01 14:09:42 -07:00
Elliot Lee
8e2c85d14d docs: clean up API-CHANGELOG.md (#5064)
Move the newest information to the top, i.e., use reverse chronological order within each of the two sections ("API Versions" and "XRP Ledger server versions")
2024-10-01 14:07:31 -07:00
Elliot Lee
1fbf8da79f Set version to 2.3.0-b4 2024-09-20 14:27:37 -07:00
Denis Angell
a75309919e feat(SQLite): allow configurable database pragma values (#5135)
Make page_size and journal_size_limit configurable values in rippled.cfg
2024-09-20 14:26:21 -07:00
Vlad
0ece395c24 refactor: re-order PRAGMA statements (#5140)
The page_size will soon be made configurable with #5135, making this
re-ordering necessary.

When opening a SQLite connection, specific pragmas are set via
commonPragmas.

In particular, PRAGMA journal_mode creates the journal file and locks the
page_size; as of this commit, this sets the page size to the default
value of 4096. Coincidentally, the hardcoded page_size was also 4096, so
no issue was noticed.
2024-09-20 11:04:10 -07:00
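A standalone sketch of why the ordering matters, using the SQLite C API directly; the pragma list and values below are illustrative, not rippled's actual commonPragmas:

```cpp
#include <sqlite3.h>

#include <cstdio>

// Order matters: PRAGMA page_size must run before PRAGMA journal_mode,
// because setting the journal mode (e.g. WAL) fixes the page size for the
// lifetime of the database file.
int main()
{
    sqlite3* db = nullptr;
    if (sqlite3_open("example.db", &db) != SQLITE_OK)
        return 1;

    char const* pragmas[] = {
        "PRAGMA page_size=4096;",         // must come first (illustrative value)
        "PRAGMA journal_mode=WAL;",       // locks in the page size above
        "PRAGMA journal_size_limit=1582080;",
        "PRAGMA synchronous=NORMAL;",
    };

    for (auto sql : pragmas)
    {
        char* err = nullptr;
        if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK)
        {
            std::fprintf(stderr, "%s: %s\n", sql, err ? err : "error");
            sqlite3_free(err);
        }
    }
    sqlite3_close(db);
    return 0;
}
```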
Chenna Keshava B S
b6391fe011 fix(book_changes): add "validated" field and reduce RPC latency (#5096)
Update book_changes RPC to reduce latency, add "validated" field, and accept shortcut strings (current, closed, validated) for ledger_index.

`"validated": true` indicates that the transaction has been included in a validated ledger so the result of the transaction is immutable.

Fix #5033

Fix #5034

Fix #5035

Fix #5036

---------

Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
2024-09-19 08:39:10 -07:00
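An illustrative request/response pair showing the new ledger_index shortcut; the JSON bodies are sketches, not copied from the PR:

```cpp
#include <iostream>

// Illustrative fragments only: ledger_index now accepts the shortcut strings
// "current", "closed", and "validated", and the response carries a top-level
// "validated" flag.
int main()
{
    char const* request = R"({
        "method": "book_changes",
        "params": [{"ledger_index": "validated"}]
    })";

    char const* responseFragment = R"({
        "result": {"ledger_index": 12345678, "validated": true, "changes": []}
    })";

    std::cout << request << '\n' << responseFragment << '\n';
}
```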
luozexuan
9a6af9c431 chore: fix typos in comments (#5094)
Signed-off-by: luozexuan <fetchcode@139.com>
2024-09-16 13:53:19 -07:00
Elliot Lee
fa1cbb0746 Merge remote-tracking branch 'origin/master' into develop-next 2024-09-14 18:36:46 -07:00
Elliot Lee
68e1be3cf5 Set version to 2.2.3 2024-09-14 13:21:23 -07:00
J. Scott Branson
9abc4868d6 Update SQLite3 max_page_count to match current defaults (#5114)
When rippled initiates a connection to SQLite3, rippled sends a "PRAGMA"
statement defining the maximum number of pages allowed in the database.
Update the max_page_count so it is consistent with the default for newer
versions of SQLite3. Increasing max_page_count is critical for keeping
full history servers online.

Fix #5102
2024-09-14 11:38:25 -07:00
Ed Hennis
23991c99c3 test: Retry RPC commands to try to fix MacOS CI jobs (#5120)
* Retry some failed RPC connections / commands in unit tests
* Remove orphaned `getAccounts` function

Co-authored-by: John Freeman <jfreeman08@gmail.com>
2024-09-11 11:29:06 +01:00
Ed Hennis
cc0177be87 Update Release Notes for 2.2.1 and 2.2.2 2024-09-03 18:37:47 -04:00
Ed Hennis
37b3e96b04 Merge remote-tracking branch 'upstream/master' into upstream--develop
* upstream/master:
  Set version to 2.2.2
  Allow only 1 job queue slot for each validation ledger check
  Allow only 1 job queue slot for acquiring inbound ledger.
  Track latencies of certain code blocks, and log if they take too long
2024-09-03 17:58:54 -04:00
Ed Hennis
85214bdf81 Set version to 2.2.2 2024-08-31 15:12:59 -04:00
Mark Travis
fbbea9e6e2 Allow only 1 job queue slot for each validation ledger check
* refactor filtering of validations to specifically avoid
 concurrent checkAccept() calls for the same validation ledger hash.
* Log when duplicate concurrent validation requests are filtered.
* RAII for containers that track concurrent validation requests.
2024-08-31 15:12:59 -04:00
Mark Travis
7741483894 Allow only 1 job queue slot for acquiring inbound ledger.
* Log when duplicate concurrent inbound ledger acquisitions are filtered.
* RAII for containers that track concurrent inbound ledger acquisitions.
* Comment on when to asynchronously acquire inbound ledgers; this may
   always be OK, but should have further review.
* Other small logging changes

Co-authored-by: Ed Hennis <ed@ripple.com>
2024-08-31 15:12:22 -04:00
John Freeman
2f432e812c docs: Update options documentation (#5083)
Co-authored-by: Elliot Lee <github.public@intelliot.com>
2024-08-28 17:31:33 -05:00
John Freeman
cad8970a57 refactor: Remove dead headers (#5081) 2024-08-28 14:23:38 -05:00
John Freeman
4d7aed84ec refactor: Remove reporting mode (#5092) 2024-08-28 13:00:50 -05:00
Valentin Balaschenko
00ed7c9424 Track latencies of certain code blocks, and log if they take too long 2024-08-26 19:03:56 -04:00
Ed Hennis
d9bd75e683 chore: Fix documentation generation job: (#5091)
* Add "doxygen" to list of supported branches to allow for testing and
  development.
* Add titles / H1 to some .md files that don't have them.
2024-08-15 17:03:50 -04:00
Ed Hennis
93d8bafb24 chore: libxrpl verification on CI (#5028)
Implements a CI workflow that detects when a new version of libxrpl is
proposed, uploads it to artifactory under the `clio` channel and
notifies Clio's CI to check this newly proposed version.
2024-08-15 12:51:50 -04:00
Scott Schurr
c19a88fee9 Address rare corruption of NFTokenPage linked list (#4945)
* Add fixNFTokenPageLinks amendment:

It was discovered that under rare circumstances the links between
NFTokenPages could be removed.  If this happens, then the
account_objects and account_nfts RPC commands under-report the
NFTokens owned by an account.

The fixNFTokenPageLinks amendment does the following to address
the problem:

- It fixes the underlying problem so no further broken links
  should be created.
- It adds Invariants so, if such damage were introduced in the
  future, an invariant would stop it.
- It adds a new FixLedgerState transaction that repairs
  directories that were damaged in this fashion.
- It adds unit tests for all of it.
2024-08-07 18:14:19 -04:00
Bronek Kozicki
0a331ea72e Factor out Transactor::trapTransaction (#5087) 2024-08-05 12:05:12 -04:00
John Freeman
7d27b11190 Remove shards (#5066) 2024-08-02 20:03:05 -04:00
Bronek Kozicki
eedfec015e Update gcovr EXCLUDE (#5084) 2024-08-02 17:25:44 -04:00
Bronek Kozicki
ffc343a2bc Fix crash inside OverlayImpl loops over ids_ (#5071) 2024-08-02 16:58:05 -04:00
Ed Hennis
e5aa605742 Set version to 2.3.0-b2 2024-07-31 17:14:04 -04:00
Bronek Kozicki
8b181ed818 Merge branch 'master' into develop 2024-07-31 15:12:15 +01:00
Ed Hennis
f5a349558e docs: Document the process for merging pull requests (#5010) 2024-07-30 20:19:03 -04:00
Scott Schurr
b9b75ddcf5 Remove unused constants from resource/Fees.h (#4856) 2024-07-30 12:47:04 -04:00
Mayukha Vadari
a39720e94a fix: change error for invalid feature param in feature RPC (#5063)
* Returns an "Invalid parameters" error if the `feature` parameter is provided and is not a string.
2024-07-30 11:18:25 -04:00
Ed Hennis
2820feb02a Ensure levelization sorting is ASCII-order across platforms (#5072) 2024-07-29 19:02:12 -04:00
Ed Hennis
8fc805d2e2 fix: Fix NuDB build error via Conan patch (#5061)
* Includes updated instructions in BUILD.md.
2024-07-29 18:14:41 -04:00
yinyiqian1
d54151e7c4 Disallow filtering account_objects by unsupported types (#5056)
* `account_objects` returns an invalid field error if `type` is not supported.
  This includes objects an account can't own, or which are unsupported by `account_objects`
* Includes:
  * Amendments
  * Directory Node
  * Fee Settings
  * Ledger Hashes
  * Negative UNL
2024-07-29 16:30:02 -04:00
Scott Schurr
21a0a64648 chore: Add comments to SignerEntries.h (#5059) 2024-07-25 17:12:59 -04:00
Scott Schurr
20707fac4a chore: Rename two files from Directory* to Dir*: (#5058)
The names of the files should reflect the name of the Dir class.

Co-authored-by: Zack Brunson <Zshooter@gmail.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
2024-07-25 14:50:27 -04:00
Ed Hennis
e6ef0fc26c Set version to 2.2.1 2024-07-24 19:37:40 -04:00
John Freeman
c157816017 Use error codes throughout fast Base58 implementation 2024-07-24 19:37:34 -04:00
Mayukha Vadari
eba5d19377 Improve error handling in some RPC commands 2024-07-24 19:25:40 -04:00
Denis Angell
ad14d09a2b Update BUILD.md after PR #5052 (#5067)
* Document the need to specify "xrpld" and "tests" to build and test rippled.
2024-07-23 16:49:16 -04:00
John Freeman
f3bcc651c7 Add xrpld build option and Conan package test (#5052)
* Make xrpld target optional

* Add job to test Conan recipe

* [fold] address review comments

* [fold] Enable tests in workflows

* [fold] Rename with_xrpld option

* [fold] Fix grep expression
2024-07-11 15:04:30 -07:00
dashangcun
e8602b81fa chore: remove repeat words (#5053)
Signed-off-by: dashangcun <jchaodaohang@foxmail.com>
Co-authored-by: dashangcun <jchaodaohang@foxmail.com>
Co-authored-by: Zack Brunson <Zshooter@gmail.com>
2024-07-09 14:05:14 -07:00
yinyiqian1
0f32109993 fix CTID in tx command returns invalidParams on lowercase hex (#5049)
* fix CTID in tx command returns invalidParams on lowercase hex

* test mixed case and change auto to explicit type

* add header cctype because std::tolower is called

* remove unused local variable

* change test case comment from 'lowercase' to 'mixed case'

---------

Co-authored-by: Zack Brunson <Zshooter@gmail.com>
2024-07-05 11:10:54 -07:00
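A small sketch of the case-normalization idea behind this fix, with hypothetical helper names rather than the code from the PR:

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

// Normalize a hex CTID to a single case before validating/decoding, so that
// uppercase, lowercase, and mixed-case input are all treated the same.
bool isHex(std::string const& s)
{
    return !s.empty() &&
        std::all_of(s.begin(), s.end(), [](unsigned char c) {
            return std::isxdigit(c) != 0;
        });
}

std::string toLowerHex(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(), [](unsigned char c) {
        return static_cast<char>(std::tolower(c));
    });
    return s;
}

int main()
{
    std::string const ctid = "C005523E00000000";  // illustrative value
    if (isHex(ctid))
        std::cout << toLowerHex(ctid) << '\n';    // c005523e00000000
}
```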
Ed Hennis
a17ccca615 Invariant: prevent a deleted account from leaving (most) artifacts on the ledger. (#4663)
* Add feature / amendment "InvariantsV1_1"

* Adds invariant AccountRootsDeletedClean:

* Checks that a deleted account doesn't leave any directly
  accessible artifacts behind.
* Always tests, but only changes the transaction result if
  featureInvariantsV1_1 is enabled.
* Unit tests.

* Resolves #4638

* [FOLD] Review feedback from @gregtatcam:

* Fix unused variable warning
* Improve Invariant test const correctness

* [FOLD] Review feedback from @mvadari:

* Centralize the account keylet function list, and some optimization

* [FOLD] Some structured binding doesn't work in clang

* [FOLD] Review feedback 2 from @mvadari:

* Clean up and clarify some comments.

* [FOLD] Change InvariantsV1_1 to unsupported

* Will allow multiple PRs to be merged over time using the same amendment.

* fixup! [FOLD] Change InvariantsV1_1 to unsupported

* [FOLD] Update and clarify some comments. No code changes.

* Move CMake directory

* Rearrange sources

* Rewrite includes

* Recompute loops

* Fix merge issue and formatting

---------

Co-authored-by: Pretty Printer <cpp@ripple.com>
2024-07-05 10:27:15 -07:00
Bronek Kozicki
7a1b238035 Bump codecov plugin version to version 4.5.0 (#5055)
This version includes fix https://github.com/codecov/codecov-action/pull/1471
which should end the codecov upload errors due to throttling.
2024-07-02 12:42:56 -07:00
yinyiqian1
e1534a3200 fix "account_nfts" with unassociated marker returning issue (#5045)
* fix "account_nfts" with unassociated marker returning issue

* create unit test for fixing nft page invalid marker not returning error

add more test

change test name

create unit test

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* fix "account_nfts" with unassociated marker returning issue

* [FOLD] accumulated review suggestions

* move BEAST check out of lambda function

---------

Authored-by: Scott Schurr <scott@ripple.com>
2024-07-02 11:58:03 -07:00
Scott Schurr
9fec615dca fixInnerObjTemplate2 amendment (#5047)
* fixInnerObjTemplate2 amendment:

Apply inner object templates to all remaining (non-AMM)
inner objects.

Adds a unit test for applying the template to sfMajorities.
Other remaining inner objects showed no problems having
templates applied.

* Move CMake directory

* Rearrange sources

* Rewrite includes

* Recompute loops

---------

Co-authored-by: Pretty Printer <cpp@ripple.com>
2024-06-27 11:52:02 -07:00
seelabs
ef02893f2f Set version to 2.3.0-b1 2024-06-20 15:30:07 -04:00
John Freeman
7cf4611d7c Ignore restructuring commits (#4997) 2024-06-20 13:57:22 -05:00
Pretty Printer
d028005aa6 Recompute loops (#4997) 2024-06-20 13:57:18 -05:00
Pretty Printer
1d23148e6d Rewrite includes (#4997) 2024-06-20 13:57:16 -05:00
Pretty Printer
e416ee72ca Rearrange sources (#4997) 2024-06-20 13:57:14 -05:00
Pretty Printer
2e902dee53 Move CMake directory (#4997) 2024-06-20 13:57:12 -05:00
John Freeman
f6879da6c9 Add bin/physical.sh (#4997) 2024-06-20 13:57:10 -05:00
John Freeman
ae20a3ad3f Prepare to rearrange sources: (#4997)
- Remove CMake module "MultiConfig".
- Update clang-format configuration, CodeCov configuration,
  levelization script.
- Replace source lists in CMake with globs.
2024-06-20 13:57:03 -05:00
Bronek Kozicki
c706926ee3 Change order of checks in amm_info: (#4924)
* Change order of checks in amm_info

* Change amm_info error message in API version 3

* Change amm_info error tests
2024-06-18 12:55:40 -04:00
Scott Schurr
223e6c7590 Add the fixEnforceNFTokenTrustline amendment: (#4946)
Fix interactions between NFTokenOffers and trust lines.

Because the NFTokenAcceptOffer does not check the trust line on which
the issuer receives the transfer fee, if the issuer deletes the trust
line after the NFTokenCreateOffer, the trust line is re-created for the
issuer by the NFTokenAcceptOffer.  That's fixed.

Resolves #4925.
2024-06-18 02:47:54 -04:00
Chenna Keshava B S
825864032a Replaces the usage of boost::string_view with std::string_view (#4509) 2024-06-17 16:41:03 -04:00
Elliot Lee
06733ec21a docs: explain how to find a clang-format patch generated by CI (#4521) 2024-06-17 15:32:08 -04:00
tequ
9f7c619e4f XLS-52d: NFTokenMintOffer (#4845) 2024-06-14 19:32:25 -04:00
todaymoon
3f5e3212fe chore: remove repeat words (#5041) 2024-06-14 15:36:59 -04:00
Alex Kremer
20d05492d2 Expose all amendments known by libxrpl (#5026) 2024-06-14 14:00:57 -04:00
Scott Schurr
ae7ea33b75 fixReducedOffersV2: prevent offers from blocking order books: (#5032)
Fixes issue #4937.

The fixReducedOffersV1 amendment fixed certain forms of offer
modification that could lead to blocked order books.  Reduced
offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer
(from the perspective of the taker). It turns out that, for
small values, the quality of the reduced offer can be
significantly affected by the rounding mode used during
scaling computations.

Issue #4937 identified an additional code path that modified
offers in a way that could lead to blocked order books.  This
commit changes the rounding in that newly located code path so
the quality of the modified offer is never worse than the
quality of the offer as it was originally placed.

It is possible that additional ways of producing blocking
offers will come to light.  Therefore there may be a future
need for a V3 amendment.
2024-06-13 17:57:12 -04:00
Chenna Keshava B S
263e984bf4 Additional unit tests for testing deletion of trust lines (#4886) 2024-06-13 14:44:39 -04:00
Olek
58f3abe3c6 Fix conan typo: (#5044)
Add missing comma in 'exports_sources'
2024-06-12 13:16:54 -04:00
Bronek Kozicki
d576416953 Add new command line option to make replaying transactions easier: (#5027)
* Add trap_tx_hash command line option

This new option can be used only if replay is also enabled. It takes a transaction hash from the ledger loaded for replay, and will cause a specific line to be hit in Transactor.cpp, right before the selected transaction is applied.
2024-06-11 14:34:02 -04:00
John Freeman
e3d1bb271f Fix compatibility with Conan 2.x: (#5001)
Closes #4926, #4990
2024-06-11 13:26:01 -04:00
seelabs
2df635693d Set version to 2.2.0 2024-06-04 17:56:09 -04:00
seelabs
40b4adc9cc Set version to 2.2.0-rc3 2024-05-20 17:46:43 -04:00
Alex Kremer
0c971b4415 Add xrpl.libpp as an exported lib in conan (#5022) 2024-05-20 17:43:31 -04:00
Gregory Tsipenyuk
f2d37da4ca Fix Oracle's token pair deterministic order: (#5021)
Price Oracle data-series logic uses `unordered_map` to update the Oracle object.
This results in different servers disagreeing on the order of that hash table.
Consequently, the generated ledgers will have different hashes.
The fix uses `map` instead to guarantee the order of the token pairs
in the data-series.
2024-05-20 16:33:01 -04:00
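A toy illustration of the determinism argument, with simplified key and value types standing in for the oracle's token pairs:

```cpp
#include <iostream>
#include <map>
#include <string>

// std::map iterates keys in a well-defined (sorted) order, so every server
// walking the same token-pair data produces the same sequence; iteration
// order of std::unordered_map is unspecified and can differ between nodes,
// which is what led to divergent ledger hashes.
int main()
{
    std::map<std::string, int> series{
        {"XRP/USD", 1}, {"BTC/USD", 2}, {"ETH/USD", 3}};

    for (auto const& [pair, price] : series)
        std::cout << pair << " -> " << price << '\n';
    // Always prints BTC/USD, ETH/USD, XRP/USD in that order on every node.
}
```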
seelabs
d5e5c3c220 Set version to 2.2.0-rc2 2024-05-16 15:36:21 -04:00
Gregory Tsipenyuk
15390bedd5 Fix last Liquidity Provider withdrawal:
Due to rounding, the LPTokenBalance of the last
Liquidity Provider (LP) might not match that LP's
trustline balance. This fix sets LPTokenBalance on
the last LP withdrawal to that LP's LPToken trustline
balance.
2024-05-16 13:36:24 -04:00
Gregory Tsipenyuk
7f6a079aa4 Fix offer crossing via single path AMM with transfer fee:
A single-path AMM offer has to factor in the transfer-in rate
when calculating the upper bound quality and the quality function,
because a single-path AMM offer's quality is not constant.
This fix factors the transfer fee into
BookStep::adjustQualityWithFees().
2024-05-16 13:36:24 -04:00
Gregory Tsipenyuk
2a25f58d40 Fix adjustAmountsByLPTokens():
The function now returns the actual adjusted LP tokens and amounts.
2024-05-16 13:32:05 -04:00
Gregory Tsipenyuk
2705109592 Add the fixAMMOfferRounding amendment: (#4983)
* Fix AMM offer rounding and low-quality LOB offers blocking AMM:

A single-path AMM offer crossing an account offer on the DEX is always
generated starting with the takerPays side, which is rounded up, and then
the takerGets side, which is rounded down. This rounding ensures that the
pool's product invariant is maintained. However, when one side of the offer
is XRP, this rounding can result in the AMM offer having a lower quality,
potentially causing offer generation to fail if that quality is lower than
the account offer's quality.

To address this issue, the proposed fix adjusts the offer generation process
to start with the XRP side and always round it down. This results in a
smaller offer size, improving the offer's quality. Whether or not the offer
has an XRP side, the rounding is done so that the offer size is minimized.
This change still preserves the product invariant, because the other
generated side is the exact result of the swap-in or swap-out equations.

If liquidity can be provided by both the AMM and a LOB offer during offer
crossing, the AMM offer is generated to match the LOB offer's quality. If
the LOB offer's quality is less than the limit quality, then the generated
AMM offer's quality is also less than the limit quality and the offer
doesn't cross. To address this, if the LOB quality is better than the limit
quality, the LOB quality is used to generate the AMM offer; otherwise no
quality constraint is used. In that case, the limitOut() function in
StrandFlow limits the out amount so that the strand's quality matches the
limit quality while consuming the maximum AMM liquidity.
2024-05-14 15:28:38 -04:00
Denis Angell
244ac5e024 Update CONTRIBUTING.md (#4904) 2024-05-13 10:54:34 -04:00
Gregory Tsipenyuk
f4da2e31d9 Price Oracle: validate input parameters and extend test coverage: (#5013)
* Price Oracle: validate input parameters and extend test coverage:

Validate trim, time_threshold, document_id are valid
Int, UInt, or string convertible to UInt. Validate base_asset
and quote_asset are valid currency. Update error codes.
Extend Oracle and GetAggregatePrice unit-tests.
Denote unreachable coverage code.

* Set one-line LCOV_EXCL_LINE

* Move ledger_entry tests to LedgerRPC_test.cpp

* Add constants for "None"

* Fix LedgerRPC test

---------

Co-authored-by: Scott Determan <scott.determan@yahoo.com>
2024-05-09 15:17:16 -04:00
Michael Legleux
f650949573 Add external directory to Conan recipe's exports (#5006) 2024-05-02 15:44:42 -04:00
John Freeman
76128051c0 Add missing includes (#5011) 2024-05-02 11:14:59 -04:00
seelabs
5aa1106ba1 Remove flow assert: (#5009)
Rounding in the payment engine is causing an assert to sometimes fire
with "dust" amounts. This is causing issues when running debug builds of
rippled. This issue will be addressed, but the assert is no longer
serving its purpose.
2024-05-01 13:25:31 -04:00
Nik Bougalis
dccf3f49ef Update list of maintainers: (#4984)
I am resigning from my role as maintainer of the `rippled` codebase.

Please update repository permissions accordingly, prior to merging this pull request.

Thanks to everyone who has contributed, especially those whom I had the opportunity to closely collaborate with.
2024-04-29 18:44:20 -04:00
Ed Hennis
02ec8b7962 Set version to 2.2.0-rc1 2024-04-26 10:56:09 -04:00
seelabs
3f7ce939c8 fix amendment: AMM swap should honor invariants: (#5002)
The AMM has an invariant for swaps where:
new_balance_1*new_balance_2 >= old_balance_1*old_balance_2

Due to rounding, this invariant could sometimes be violated (although by
very small amounts).

This patch introduces an amendment `fixAMMRounding` that changes the
rounding to always favor the AMM. Doing this should maintain the
invariant.

Co-authored-by: Bronek Kozicki
Co-authored-by: thejohnfreeman
2024-04-25 21:15:19 -04:00
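A toy numeric check of the invariant stated above, with plain doubles standing in for the ledger's amount types:

```cpp
#include <iostream>

// After a swap, the product of the pool balances must not decrease, so any
// rounding of the output amount must round down (in the AMM's favor).
bool
swapKeepsInvariant(double oldBal1, double oldBal2, double newBal1, double newBal2)
{
    return newBal1 * newBal2 >= oldBal1 * oldBal2;
}

int main()
{
    double x = 1000, y = 1000;            // pool balances
    double dx = 10;                       // amount swapped in
    double dy = y - (x * y) / (x + dx);   // ideal (unrounded) amount out
    double dyRoundedDown = 9.90;          // rounding that favors the AMM

    std::cout << "ideal out: " << dy << '\n'
              << std::boolalpha
              << swapKeepsInvariant(x, y, x + dx, y - dyRoundedDown) << '\n';
}
```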
seelabs
b65cea1984 Add global access to the current ledger rules:
It can be difficult to make transaction-breaking changes to low-level
code because the low-level code does not have access to a ledger and the
currently activated amendments in that ledger (the "rules"). This patch
adds global access to the current ledger rules as a `std::optional`. If
the optional is not seated, then there is no active transaction.
2024-04-25 21:15:19 -04:00
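A hypothetical sketch of the idea, with placeholder names (`currentRules`, a stub `Rules` type) showing the shape of a globally accessible `std::optional`, not rippled's actual interface:

```cpp
#include <iostream>
#include <optional>

// Low-level code consults a globally accessible optional holding the rules of
// the ledger whose transaction is currently being applied; an empty optional
// means no transaction is active.
struct Rules
{
    bool someAmendmentEnabled = false;
};

std::optional<Rules>&
currentRules()
{
    thread_local std::optional<Rules> rules;
    return rules;
}

void lowLevelNumberCode()
{
    if (auto const& r = currentRules(); r && r->someAmendmentEnabled)
        std::cout << "use new behavior\n";
    else
        std::cout << "use legacy behavior\n";
}

int main()
{
    lowLevelNumberCode();          // no active transaction
    currentRules() = Rules{true};  // a transactor would set the rules
    lowLevelNumberCode();
    currentRules().reset();        // cleared when the transaction completes
}
```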
Snoppy
b422e71eed chore: fix typos (#4958) 2024-04-25 12:05:12 -05:00
Ed Hennis
e9859ac1b1 test: Add RPC error checking support to unit tests (#4987) 2024-04-24 13:54:46 -04:00
John Freeman
b84f7e7c10 Ignore more commits 2024-04-19 11:45:51 -05:00
John Freeman
513842b23f Address compiler warnings 2024-04-19 11:32:16 -05:00
John Freeman
985c80fbc6 Add markers around source lists 2024-04-19 11:32:16 -05:00
John Freeman
35fe957020 Fix source lists 2024-04-19 11:32:16 -05:00
Pretty Printer
0eebe6a5f4 Rewrite includes
$ find src/ripple/ src/test/ -type f -exec sed -i 's:include\s*["<]ripple/\(.*\)\.h\(pp\)\?[">]:include <ripple/\1.h>:' {} +
2024-04-19 11:32:15 -05:00
Pretty Printer
760f16f568 Format formerly .hpp files 2024-04-19 11:32:15 -05:00
Pretty Printer
241b9ddde9 Rename .hpp to .h 2024-04-19 11:32:15 -05:00
John Freeman
3fcfb5cd49 Simplify protobuf generation 2024-04-19 11:32:14 -05:00
Pretty Printer
e2384885f5 Consolidate external libraries 2024-04-19 11:32:14 -05:00
John Freeman
dd312c3cc5 Remove packaging scripts 2024-04-19 11:32:14 -05:00
John Freeman
80379927e8 Remove unused files 2024-04-19 11:32:13 -05:00
Ed Hennis
676aae2755 Set version to 2.2.0-b3 2024-04-18 21:09:31 -04:00
Ed Hennis
f20e66e6f9 fix: Remove redundant STAmount conversion in test (#4996) 2024-04-18 21:09:25 -04:00
Scott Determan
cd737ad7d3 fix: resolve database deadlock: (#4989)
The `rotateWithLock` function holds a lock while it calls a callback
function that's passed in by the caller. This is a problematic design
that needs to be used very carefully. In this case, at least one caller
passed in a callback that eventually relocks the mutex on the same
thread, causing UB (a deadlock was observed). The caller was from
SHAMapStoreImpl, and it called `clearCaches`. This `clearCaches` can
potentially call `fetchNodeObject`, which tried to relock the mutex.

This patch resolves the issue by changing the mutex type to a
`recursive_mutex`. Ideally, the code should be rewritten so it doesn't
hold the mutex during the callback and the mutex should be changed back
to a regular mutex.

Co-authored-by: Ed Hennis <ed@ripple.com>
2024-04-18 16:44:59 -04:00
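A simplified sketch of the hazard and the chosen mitigation, with illustrative names rather than the actual SHAMapStore/NodeStore interfaces:

```cpp
#include <functional>
#include <iostream>
#include <mutex>

// rotateWithLock() holds the mutex while invoking a caller-supplied callback,
// and the callback re-enters code that locks the same mutex on the same
// thread. With std::mutex that is undefined behavior (observed as a
// deadlock); std::recursive_mutex tolerates the re-entrant lock.
class Store
{
    std::recursive_mutex mutex_;  // was a regular mutex before the fix

public:
    void rotateWithLock(std::function<void()> const& callback)
    {
        std::lock_guard lock(mutex_);
        callback();  // may call back into fetch() below
    }

    void fetch()
    {
        std::lock_guard lock(mutex_);  // re-locks on the same thread
        std::cout << "fetched under lock\n";
    }
};

int main()
{
    Store store;
    store.rotateWithLock([&store] { store.fetch(); });
}
```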
Chenna Keshava B S
df3aa84523 test: verify the rounding behavior of equal-asset AMM deposits (#4982)
* Specifically, test using tfLPToken flag
2024-04-18 16:15:31 -04:00
John Freeman
24a275ba25 test: Add tests to raise coverage of AMM (#4971)
---------

Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Co-authored-by: Mark Travis <mtravis@ripple.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Mayukha Vadari <mvadari@gmail.com>
Co-authored-by: Chenna Keshava <ckeshavabs@gmail.com>
2024-04-18 15:25:22 -04:00
Bronek Kozicki
aae438315f chore: Improve codecov coverage reporting (#4977)
* Amend `.codecov.yml` to disable coverage reporting of test sources
  and explicitly set most parameters
* Increase codecov upload retry time to 210s (from 35s)
* Upgrade gcovr adding support for more coverage formats (lcov, clover, jacoco)
* Upgrade github actions in coverage workflow
* Explicitly disable codecov plugins (also removing `gcov` coverage, which is not
  correctly handled by codecov https://github.com/codecov/feedback/issues/334)
2024-04-18 13:21:33 -04:00
Bronek Kozicki
8b0d049b9f test: Unit test for AMM offer overflow (#4986) 2024-04-18 12:30:18 -04:00
Mayukha Vadari
659bd99a67 fix amendment to add PreviousTxnID/PreviousTxnLgrSequence (#4751)
This amendment, `fixPreviousTxnID`, adds `PreviousTxnID` and
`PreviousTxnLgrSequence` as fields to all ledger objects that did
not already have them included (`DirectoryNode`, `Amendments`,
`FeeSettings`, `NegativeUNL`, and `AMM`). This makes it much easier
to go through the history of these ledger objects.
2024-04-18 10:41:25 -04:00
Ed Hennis
c88166e055 Set version to 2.2.0-b2 2024-04-04 15:42:39 -04:00
Michael Legleux
099c0bcd34 fix Conan component reference typo 2024-04-04 15:42:37 -04:00
Bronek Kozicki
d992e63075 Remove unused lambdas from MultiApiJson_test 2024-04-04 18:22:23 +01:00
Ed Hennis
c187f750fe chore: Default validator-keys-tool to master branch: (#4943)
* master is the default branch for that project. There's no point in
  using develop.
2024-04-04 10:40:35 -04:00
Ed Hennis
bcbf6c1973 Merge pull request #4968 from XRPLF/master
Merging changes for 2.1.1 from master into develop
2024-03-28 13:51:17 -04:00
Ed Hennis
4bcbf70cae chore: change Github Action triggers for build/test jobs (#4956)
Github Actions for the build/test jobs (nix.yml, mac.yml, windows.yml) will only run on branches that build packages (develop, release, master), and branches with names starting with "ci/". This is intended as a compromise between disabling CI jobs on personal forks entirely, and having the jobs run as a free-for-all. Note that it will not affect PR jobs at all.
2024-03-28 13:29:23 -04:00
seelabs
2d1854f354 Set version to 2.1.1 2024-03-27 13:46:59 +01:00
Gregory Tsipenyuk
a7c4a47723 fix: improper handling of large synthetic AMM offers:
A large synthetic offer was not handled correctly in the payment engine.
This patch fixes that issue and introduces a new invariant check while
processing synthetic offers.
2024-03-27 13:46:59 +01:00
Scott Determan
61672ad3ff fixXChainRewardRounding: round reward shares down: (#4933)
When calculating reward shares, the amount should always be rounded
down. If the `fixUniversalNumber` amendment is not active, this works
correctly. If it is active, then the amount is incorrectly rounded
up. This patch introduces an amendment so it will be rounded down.
2024-03-22 17:02:17 -04:00
Mark Travis
cea43099d2 Don't reach consensus as quickly if no other proposals seen: (#4763)
This fixes a case where a peer can desync under a certain timing
circumstance: if it reaches a certain point in consensus before it receives
proposals.

This was noticed under high transaction volumes. Namely, when we arrive at the
point of deciding whether consensus is reached after minimum establish phase
duration but before having received any proposals. This could be caused by
finishing the previous round slightly faster and/or having some delay in
receiving proposals. Existing behavior arrives at consensus immediately after
the minimum establish duration with no proposals. This causes us to desync
because we then close a non-validated ledger. The change in this PR causes us to
wait for a configured threshold before making the decision to arrive at
consensus with no proposals. This allows validators to catch up and for brief
delays in receiving proposals to be absorbed. There should be no drawback since,
with no proposals coming in, we needn't be in a huge rush to jump ahead.
2024-03-22 16:22:29 -04:00
Bronek Kozicki
6edf03c152 Write improved forAllApiVersions used in NetworkOPs (#4833) 2024-03-22 15:28:16 -04:00
Alloy Networks
47c8cc24f4 Remove zaphod.alloy.ee hub from default server list: (#4903)
Remove the zaphod.alloy.ee hubs from the bootstrap and default configuration after 5 years. It has been an honor to run these servers, but it is now time for another entity to step into this role.

The zaphod servers will be taken offline in a phased manner keeping all those who have peering arrangements informed.

These would be the preferred attributes of a bootstrap set of hubs:

    1. Commitment to run the hubs for a minimum of 2 years
    2. Highly available
    3. Geographically dispersed
    4. Secure and up to date
    5. Committed to ensure that peering information is kept private
2024-03-22 14:52:45 -04:00
Bronek Kozicki
64e46878e0 Enforce no duplicate slots from incoming connections: (#4944)
We do not currently enforce that an incoming peer connection does not have a
remote_endpoint which is already in use (either by an incoming or outgoing
connection) and hence already stored in slots_. If we happen to receive a
connection from such a duplicate remote_endpoint, it will eventually result in a
crash (when disconnecting) or weird behavior (when updating slot state), as a
result of an apparently matching remote_endpoint in slots_ being used by a
different connection.
2024-03-22 14:08:16 -04:00
Mayukha Vadari
ea9b1e3503 fixEmptyDID: fix amendment to handle empty DID edge case: (#4950)
This amendment fixes an edge case where an empty DID object can be
created. It adds an additional check to ensure that DIDs are
non-empty when created, and returns a `tecEMPTY_DID` error if the DID
would be empty.
2024-03-22 11:09:54 -04:00
Olek
2e9261cb26 perf: improve account_tx SQL query: (#4955)
The witness server makes heavy use of the `account_tx` RPC command. Perf
testing showed that the SQL query used by `account_tx` became unacceptably slow
when the DB was large and there was a `marker` parameter. The plan for the query
showed only indexed reads. This appears to be an issue with the internal SQLite
optimizer. This patch rewrote the query to use `UNION` instead of `OR` and
significantly improves performance. See RXI-896 and RIPD-1847 for more details.
2024-03-21 18:42:29 -04:00
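Schematic before/after query shapes, held in C++ raw strings; the table and column names are assumptions, not the exact SQL from the patch:

```cpp
#include <iostream>

// The OR form was unacceptably slow when a marker was present, despite an
// indexed plan; rewriting it as a UNION of two range scans restored
// acceptable performance.
int main()
{
    char const* withOr = R"(
        SELECT * FROM AccountTransactions
        WHERE Account = :account
          AND (LedgerSeq > :marker_ledger
               OR (LedgerSeq = :marker_ledger AND TxnSeq >= :marker_txn))
        ORDER BY LedgerSeq, TxnSeq LIMIT :limit;)";

    char const* withUnion = R"(
        SELECT * FROM (
            SELECT * FROM AccountTransactions
            WHERE Account = :account AND LedgerSeq > :marker_ledger
            UNION
            SELECT * FROM AccountTransactions
            WHERE Account = :account
              AND LedgerSeq = :marker_ledger AND TxnSeq >= :marker_txn)
        ORDER BY LedgerSeq, TxnSeq LIMIT :limit;)";

    std::cout << withOr << "\n\n" << withUnion << '\n';
}
```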
John Freeman
69143d71f8 Fix workflows (#4951)
- Update container for Doxygen workflow. Matches Linux workflow, with newer GLIBC version required by newer actions.
- Fixes macOS workflow to install and configure Conan correctly. Still fails on tests, but that does not seem attributable to the workflow.
2024-03-20 09:19:48 -05:00
Ed Hennis
0c32fc5f2a test: Env unit test RPC errors return a unique result: (#4877)
* telENV_RPC_FAILED is a new code, reserved exclusively
  for unit tests when RPC fails. This will
  make those types of errors distinct and easier to test
  for when expected and/or diagnose when not.
* Output RPC command result when result is not expected.
2024-03-19 12:13:52 -04:00
Michael Legleux
af9cabe100 Install more public headers (#4940)
Fixes some mistakes in #4885
2024-03-13 21:14:43 -05:00
John Freeman
2ecb851926 Update remaining actions (#4949)
Downgrade {upload,download}-artifact action to v3 because of unreliability with v4.
2024-03-13 17:42:41 -05:00
Bronek Kozicki
2ffead76c1 Upgrade to xxhash 0.8.2 as a Conan requirement, enable SIMD hashing (#4893)
We are currently using old version 0.6.2 of `xxhash`, as a verbatim copy and paste of its header file `xxhash.h`. Switch to the more recent version 0.8.2. Since this version is in Conan Center (and properly protects its ABI by keeping the state object incomplete), add it as a Conan requirement. Switch to the SIMD instructions (in the new `XXH3` family) supported by the new version.
2024-03-13 16:12:22 -05:00
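A one-shot usage sketch of the XXH3 API provided by xxhash 0.8.x; it shows the library call in isolation, not how the hasher is wired into rippled:

```cpp
#include <xxhash.h>  // xxhash >= 0.8 provides the XXH3 family

#include <cstdio>
#include <string>

int main()
{
    std::string const data = "hello, ledger";
    XXH64_hash_t const h = XXH3_64bits(data.data(), data.size());
    std::printf("%016llx\n", static_cast<unsigned long long>(h));
    return 0;
}
```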
John Freeman
5cc377751a Fix workflows (#4948)
The problem was `CONAN_USERNAME` environment variable, which Conan 1.x uses as the default user in package references.
2024-03-13 13:23:43 -05:00
Scott Determan
ad8e9764e6 fix: order book update variable swap: (#4890)
This is likely the result of a typo when the code was simplified.
2024-03-12 17:52:07 -04:00
John Freeman
4ce426d8f6 Embed patched recipe for RocksDB 6.29.5 (#4947) 2024-03-12 17:01:15 -04:00
Gregory Tsipenyuk
c28e005087 build: add STCurrency.h to xrpl_core to fix clio build (#4939) 2024-03-06 16:24:07 -05:00
Mayukha Vadari
22b751834f feat: add user version of feature RPC (#4781)
* uses same formatting as admin RPC
* hides potentially sensitive data
2024-03-05 16:43:31 -05:00
Scott Determan
cce09b717e Fast base58 codec: (#4327)
This algorithm is about an order of magnitude faster than the existing
algorithm (about 10x faster for encoding and about 15x faster for
decoding, including the double hash for the checksum). The algorithms
use gcc's int128 (a fast MSVC version will have to wait; in the meantime
MSVC falls back to the slow code).
2024-03-05 15:23:27 -05:00
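A toy sketch of the wide-integer arithmetic the commit refers to, assuming gcc/clang's unsigned __int128 and the XRPL base58 alphabet; the production codec (checksums, leading-zero handling, chunked division) is far more involved:

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// Repeatedly divide a 128-bit value by 58 and map the remainders onto the
// base58 dictionary. The wide type gives the headroom that lets the real
// codec work on large limbs instead of one byte at a time.
static char const* alphabet =
    "rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz";

std::string
encodeBase58(unsigned __int128 value)
{
    std::string out;
    do
    {
        out.push_back(alphabet[static_cast<unsigned>(value % 58)]);
        value /= 58;
    } while (value != 0);
    std::reverse(out.begin(), out.end());
    return out;
}

int main()
{
    unsigned __int128 v =
        (static_cast<unsigned __int128>(0x0123456789abcdefULL) << 64) |
        0xfedcba9876543210ULL;
    std::cout << encodeBase58(v) << '\n';
}
```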
Chenna Keshava B S
62dae3c6c6 Remove default ctors from SecretKey and PublicKey: (#4607)
* It is now an invariant that all constructed Public Keys are valid,
  non-empty and contain 33 bytes of data.
* Additionally, the memory footprint of the PublicKey class is reduced.
  The size_ data member is declared as static.
* Distinguish and identify the PublisherList retrieved from the local
  config file, versus the ones obtained from other validators.
* Fixes #2942
2024-03-05 12:02:53 -05:00
seelabs
97863e0b62 Set version to 2.2.0-b1 2024-02-29 11:07:24 -05:00
Gregory Tsipenyuk
8a2f6bec33 fix compile error on gcc 13: (#4932)
The compilation fails due to an issue in the initializer list
of an optional argument, which holds a vector of pairs.
The code compiles correctly on earlier gcc versions, but fails on gcc 13.
2024-02-28 09:34:22 -05:00
Gregory Tsipenyuk
e718378bdb Price Oracle (XLS-47d): (#4789)
Implement native support for Price Oracles.

 A Price Oracle is used to bring real-world data, such as market prices,
 onto the blockchain, enabling dApps to access and utilize information
 that resides outside the blockchain.

 Add Price Oracle functionality:
 - OracleSet: create or update the Oracle object
 - OracleDelete: delete the Oracle object

 To support this functionality add:
 - New RPC method, `get_aggregate_price`, to calculate aggregate price for a token pair of the specified oracles
 - `ltOracle` object

 The `ltOracle` object maintains:
 - Oracle Owner's account
 - Oracle's metadata
 - Up to ten token pairs with the scaled price
 - The last update time the token pairs were updated

 Add Oracle unit-tests
2024-02-26 06:28:26 -05:00
seelabs
d7d15a922a Set version to 2.1.0 2024-02-20 19:56:19 -05:00
Ed Hennis
e74cb35aa4 test: guarantee proper lifetime for temporary Rules object: (#4917)
* Commit 01c37fe introduced a change to the STTx unit test where a local
  "defaultRules" object was created with a temporary inline "presets"
  value provided to the ctor. Rules::Impl stores a const ref to the
  presets provided to the ctor.  This particular call provided an inline
  temp variable, which goes out of scope as soon as the object is
  created. On Windows, attempting to use the presets (e.g. via the
  enabled() function) causes an access violation, which crashes the test
  run.
* An audit of the code indicates that all other instances of Rules use
  the Application's config.features list, which will have a sufficient
  lifetime.
2024-02-16 13:31:03 -08:00
Elliot Lee
da68651f61 Set version to 2.1.0-rc1 2024-02-07 13:59:44 -08:00
Gregory Tsipenyuk
be12136b8a fixInnerObjTemplate: set inner object template (#4906)
Add `STObject` constructor to explicitly set the inner object template.
This allows certain AMM transactions to apply in the same ledger:

There is no issue if the trading fee is greater than or equal to 0.01%.
If the trading fee is less than 0.01%, then:
- After AMM create, AMM transactions must wait for one ledger to close
  (3-5 seconds).
- After one ledger is validated, all AMM transactions succeed, as
  appropriate, except for AMMVote.
- The first AMMVote which votes for a 0 trading fee in a ledger will
  succeed. Subsequent AMMVote transactions which vote for a 0 trading
  fee will wait for the next ledger (3-5 seconds). This behavior repeats
  for each ledger.

This has no effect on the ultimate correctness of AMM. This amendment
will allow the transactions described above to succeed as expected, even
if the trading fee is 0 and the transactions are applied within one
ledger (block).
2024-02-07 13:58:12 -08:00
Chenna Keshava B S
6d3c21e369 feat: allow port_grpc to be specified in [server] stanza (#4728)
Prior to this commit, `port_grpc` could not be added to the [server]
stanza. Instead of validating gRPC IP/Port/Protocol information in
ServerHandler, validate grpc port info in GRPCServer constructor. This
should not break backwards compatibility.

gRPC-related config info must be in a section (stanza) called
[port_grpc].

* Close #4015 - That was an alternate solution. It was decided that with
  relaxed validation, it is not necessary to rename port_grpc.
* Fix #4557
2024-02-06 20:14:40 -08:00
Michael Legleux
1e96a1d6eb build: add headers needed in Conan package for libxrpl (#4885)
These headers are required in the xrpl Conan package in order for
xbridge witness server (xbwd) to build. This change to libxrpl may help
any dependents of libxrpl. This addition does not change any C++ code.
2024-02-05 08:17:32 -08:00
Shawn Xie
828bb64ebc fixNFTokenReserve: ensure NFT tx fails when reserve is not met (#4767)
Without this amendment, an NFTokenAcceptOffer transaction can succeed
even when the NFToken recipient does not have sufficient reserves for
the new NFTokenPage. This allowed accounts to accept NFT sell offers
without having a sufficient reserve. (However, there was no issue in
brokered mode or when a buy offer is involved.)

Instead, the transaction should fail with `tecINSUFFICIENT_RESERVE` as
appropriate. The `fixNFTokenReserve` amendment adds checks in the
NFTokenAcceptOffer transactor to check if the OwnerCount changed. If it
did, then it checks the new reserve requirement.

Fix #4679
2024-02-02 14:27:21 -08:00
John Freeman
6f00d32f7e fix(libxrpl): change library names in Conan recipe (#4831)
Use consistent platform-agnostic library names on all platforms.

Fix an issue that prevents dependents like validator-keys-tool from
linking to libxrpl on Windows.

It is bad practice to change the binary base name depending on the
platform. CMake already manipulates the base name into a final name that
fits the conventions of the platform. Linkers accept base names on the
command line and then look for conventional names on disk.
2024-02-01 16:57:29 -08:00
Ed Hennis
f9e365828a test: check for success/failure of Windows CI unit tests (#4871)
* Disable the Windows CI unit tests "allowed to fail" workaround which
  was previously introduced in #4596.
* The runner hardware was upgraded, and the unit tests have been passing
  since then.
2024-01-31 22:29:29 -08:00
Bronek Kozicki
90d463b925 test: add unit test for redundant payment (#4860)
If the payee and payer are the same account, then the transaction fails
in preflight with temREDUNDANT.
2024-01-30 22:14:57 -08:00
Elliot Lee
22cdb5728b Set version to 2.0.1 2024-01-29 08:36:10 -08:00
Bronek Kozicki
901152bd93 chore: retry codecov upload (#4896)
Update to #4849, using a workaround for spurious codecov upload errors.

Spurious codecov upload errors are expected in public repos which rely
on PRs via forks. Retrying uploads is a decent and easy workaround.
2024-01-24 16:46:24 -08:00
John Freeman
d9a5bca625 docs: fix broken links in docs/build/conan.md (#4699) 2024-01-23 20:35:22 -08:00
Elliot Lee
1676e9fe21 Set version to 2.0.1-rc1 2024-01-22 14:48:05 -08:00
Bronek Kozicki
fad9d639bf test: improve code coverage reporting (#4849)
* Speed up the generation of coverage reports by using multiple cores.

* Add codecov step to coverage workflow.
2024-01-22 13:09:18 -08:00
Chenna Keshava B S
efe6722bf8 docs: update help message about unit test-suite pattern matching (#4846)
Update the "rippled --help" message for the "-u" parameter. This
documents the unit test name pattern matching rule implemented by #4634.

Fix #4800
2024-01-19 21:55:57 -08:00
Mark Travis
a41f38547b Revert "Asynchronously write batches to NuDB. (#4503)" (#4882)
This reverts commit 1d9db1bfdd.

This improves the stability of online deletion.
2024-01-19 13:26:26 -08:00
Elliot Lee
87ee7868ea Set version to 2.0.1-b1 2024-01-17 16:40:20 -08:00
Elliot Lee
861bd1a96e docs: add Performance type to PR template (#4875) 2024-01-17 16:37:28 -08:00
Bronek Kozicki
6ac2b705dd test: add DeliverMax to more JSONRPC tests (#4826)
Minor change in unit tests to improve testing scope.
2024-01-16 20:38:35 -08:00
John Freeman
fe4d6c68b5 fix: change default send_queue_limit to 500 (#4867)
Clients subscribed to `transactions` over WebSocket are being
disconnected because the traffic exceeds the default `send_queue_limit`
of 100.

This commit changes the default configuration, not the default in code.

Fix #4866
2024-01-16 11:06:46 -08:00
Chenna Keshava B S
5a7af5bb77 fix: clang warning about deprecated sprintf usage (#4747)
Resolves a warning that was emitted from the clang compiler. Switches
usage of the sprintf function to the recommended snprintf function.

Warning was observed in Apple clang version 15.0.0 (clang-1500.0.40.1).

Fix #4569
2024-01-15 22:32:52 -08:00
Ed Hennis
d9f90c84c0 Improve lifetime management of ledger objects (SLEs) to prevent runaway memory usage: (#4822)
* Add logging for Application.cpp sweep()
* Improve lifetime management of ledger objects (`SLE`s)
* Only store SLE digest in CachedView; get SLEs from CachedSLEs
* Also force release of last ledger used for path finding if there are
  no path finding requests to process
* Count more ST objects (derive from `CountedObject`)
* Track CachedView stats in CountedObjects
* Rename the CachedView counters
* Fix the scope of the digest lookup lock

Before this patch, if you asked "is it caching?", it was always caching.
2024-01-12 12:42:33 -05:00
Ed Hennis
4308407dc1 WebSocket should only call async_close once (#4848)
Prevent WebSocket connections from trying to close twice.

The issue only occurs in debug builds (assertions are disabled in
release builds, including published packages), and when the WebSocket
connections are unprivileged. The assert (and WRN log) occurs when a
client drives up the resource balance enough to be forcibly disconnected
while there are still messages pending to be sent.

Thanks to @lathanbritz for discovering this issue in #4822.
2024-01-11 20:11:09 -05:00
Elliot Lee
2b0313d60c Set version to 2.0.0 2024-01-08 13:22:59 -08:00
Michael Legleux
350d213ee8 Set version to 2.0.0-rc7
* Ignore python error about modifying system python (#4863)
2024-01-05 12:55:45 -08:00
Elliot Lee
ca3198164c Set version to 2.0.0-rc6 2023-12-20 09:33:33 -08:00
Scott Schurr
c53a5e7a72 Revert "Apply transaction batches in periodic intervals (#4504)" (#4852)
This reverts commit 002893f280.

There were two files with conflicts in the automated revert:

- src/ripple/rpc/impl/RPCHelpers.h and
- src/test/rpc/JSONRPC_test.cpp

Those files were manually resolved.
2023-12-20 09:30:12 -08:00
Bronek Kozicki
ffb53f2085 Revert "Add ProtocolStart and GracefulClose P2P protocol messages (#3839)" (#4850)
This reverts commit 8f89694fae.
2023-12-19 12:52:25 -08:00
Elliot Lee
3b191a3097 docs(API-CHANGELOG): clarify changes for V2 (#4773) 2023-12-06 14:27:33 -08:00
Hussein Badakhchani
656948cd0f fix typo: 'of' instead of 'on' (#4821)
Co-authored-by: Hussein Badakhchani <hoos@alsoug.com>
2023-12-06 14:26:18 -08:00
Elliot Lee
46f3d3ef61 Set version to 2.0.0-rc5 2023-11-30 21:46:37 -08:00
Bronek Kozicki
06251aa76f Workarounds for gcc-13 compatibility (#4817)
Workaround for compilation errors with gcc-13 and other compilers
relying on `libstdc++` version 13. This is temporary until the actual fix is
available for us to use: https://github.com/boostorg/beast/pull/2682

Some boost.beast files (which we do use) rely on an old gcc-12 behaviour
where `#include <cstdint>` was not needed even though types from this
header were used. This was broken by a change in libstdc++ version 13:
https://gcc.gnu.org/gcc-13/porting_to.html#header-dep-changes

The necessary fix was implemented in boost.beast, however it is not yet
available. Until it is available, we can use this workaround to enable
compilation of `rippled` with gcc-13, clang-16, etc.
2023-11-30 21:46:06 -08:00
Sophia Xie
5aef102f4f Revert #4505, #4760 (#4842)
* Revert "Optimize calculation of close time to avoid impasse and minimize gratuitous proposal changes (#4760)"

This reverts commit 8ce85a9750.

* Revert "Several changes to improve Consensus stability: (#4505)"

This reverts commit f259cc1ab6.

* Add missing include

---------

Co-authored-by: seelabs <scott.determan@yahoo.com>
2023-11-30 21:00:50 -08:00
Elliot Lee
431646437e Set version to 2.0.0-rc4 2023-11-29 12:20:43 -08:00
Bronek Kozicki
fe8621b00f APIv2: show DeliverMax in submit, submit_multisigned (#4827)
Show `DeliverMax` instead of `Amount` in output from `submit`,
`submit_multisigned`, `sign`, and `sign_for`.

Fix #4829
2023-11-29 12:19:32 -08:00
Bronek Kozicki
c045060560 APIv2: consistently return ledger_index as integer (#4820)
For api_version 2, always return ledger_index as integer in JSON output.

api_version 1 retains prior behavior.
2023-11-29 12:12:38 -08:00
Manoj Doshi
8ec475b1fd Set version to 2.0.0-rc3 2023-11-27 15:24:25 -08:00
Elliot Lee
e33a6d5c2b docs(API-CHANGELOG): add extra bullet about DeliverMax (#4784) 2023-11-27 13:29:47 -08:00
Bronek Kozicki
923e1cef99 Update API-CHANGELOG.md for release 2.0 (#4828) 2023-11-27 13:28:34 -08:00
Michael Legleux
2e93dd57eb Add Debian 12 Bookworm; ignore core-utils in almalinux (#4836)
Add codename `bookworm` to the distro matrix during Artifactory uploads
allowing Debian 12 clients to install `rippled` packages.

Ignore installing conflicting `core-utils` packages during almalinux
package smoke tests.
2023-11-27 13:26:41 -08:00
Elliot Lee
92957d685d Merge pull request #4823 from XRPLF/release
Merging release into develop
2023-11-27 13:16:34 -08:00
Elliot Lee
f05acbd782 Merge pull request #4824 from manojsdoshi/develop
Merging rc2 to develop
2023-11-27 13:12:18 -08:00
Manoj Doshi
44a9266c9e Set version to 2.0.0-rc2 2023-11-20 16:30:17 -08:00
Mark Travis
c1710900b4 Optimize calculation of close time to avoid impasse and minimize gratuitous proposal changes (#4760)
* Optimize the calculation of close time to avoid
impasse and minimize gratuitous proposal changes.

* git apply clang-format.patch

* Review (Howard) fixes.

* Review fix for impasse discovered by John.

* Review fixes (comments) from John.

* Scott S review fixes. Also clang-format.
2023-11-20 16:29:11 -08:00
Bronek Kozicki
d5059b11b9 Fix 2.0 regression in tx method with binary output (#4812)
* Fix binary output from tx method

* Formatting fix

* Minor test improvement

* Minor test improvements
2023-11-20 16:28:52 -08:00
Michael Legleux
f95fa338c9 Update Linux smoketest distros (#4813)
Co-authored-by: manoj <mdoshi@ripple.com>
2023-11-20 16:28:28 -08:00
Bronek Kozicki
96c926c71e Promote API version 2 to supported (#4803)
* Promote API version 2 to supported

* Switch command line to API version 1

* Fix LedgerRequestRPC test

* Remove obsolete tx_account method

This method is not implemented, the only parts which are removed are related to command-line parsing

* Fix RPCCall test

* Reduce diff size, small test improvements

* Minor fixes

* Support for the mold linker

* [fold] handle case where both mold and gold are installed

* [fold] Use first non-default linker

* Fix TransactionEntry_test

* Fix AccountTx_test

---------

Co-authored-by: seelabs <scott.determan@yahoo.com>
2023-11-20 16:28:17 -08:00
Scott Determan
4dff203787 Support for the mold linker (#4807) 2023-11-20 16:28:00 -08:00
manoj
4977a5d43c Proposed 2.0.0-rc2 (#4818)
* Support for the mold linker (#4807)

* Promote API version 2 to supported (#4803)

* Promote API version 2 to be supported

* Switch the command line to API version 1

* Fix LedgerRequestRPC test

* Remove obsolete tx_account method

This method is not implemented, the only parts which are removed are related to command-line parsing

* Fix RPCCall test

* Reduce diff size, small test improvements

* Minor fixes

* Support for the mold linker

* Fix TransactionEntry_test

* Fix AccountTx_test

---------

Co-authored-by: seelabs <scott.determan@yahoo.com>

* Update Linux smoketest distros (#4813)

* Fix 2.0 regression in tx method with binary output (#4812)

* Fix binary output from tx method

* Formatting fix

* Minor test improvement

* Minor test improvements

* Optimize calculation of close time to avoid impasse and minimize gratuitous proposal changes (#4760)

* Optimize the calculation of close time to avoid
impasse and minimize gratuitous proposal changes.

* git apply clang-format.patch

* Scott S review fixes. Also clang-format.

* Set version to 2.0.0-rc2

---------

Co-authored-by: manoj <mdoshi@ripple.com>
Co-authored-by: Scott Determan <scott.determan@yahoo.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Michael Legleux <legleux@users.noreply.github.com>
Co-authored-by: Mark Travis <mtrippled@users.noreply.github.com>
2023-11-20 13:34:59 -08:00
Mark Travis
8ce85a9750 Optimize calculation of close time to avoid impasse and minimize gratuitous proposal changes (#4760)
* Optimize the calculation of close time to avoid
impasse and minimize gratuitous proposal changes.

* git apply clang-format.patch

* Review (Howard) fixes.

* Review fix for impasse discovered by John.

* Review fixes (comments) from John.

* Scott S review fixes. Also clang-format.
2023-11-16 20:37:53 -08:00
Bronek Kozicki
d5939727e0 Fix 2.0 regression in tx method with binary output (#4812)
* Fix binary output from tx method

* Formatting fix

* Minor test improvement

* Minor test improvements
2023-11-15 16:40:10 -08:00
Michael Legleux
7b49f1e1de Update Linux smoketest distros (#4813)
Co-authored-by: manoj <mdoshi@ripple.com>
2023-11-14 16:56:48 -08:00
Bronek Kozicki
ac27089c69 Promote API version 2 to supported (#4803)
* Promote API version 2 to supported

* Switch command line to API version 1

* Fix LedgerRequestRPC test

* Remove obsolete tx_account method

This method is not implemented, the only parts which are removed are related to command-line parsing

* Fix RPCCall test

* Reduce diff size, small test improvements

* Minor fixes

* Support for the mold linker

* [fold] handle case where both mold and gold are installed

* [fold] Use first non-default linker

* Fix TransactionEntry_test

* Fix AccountTx_test

---------

Co-authored-by: seelabs <scott.determan@yahoo.com>
2023-11-13 15:04:27 -08:00
Scott Determan
4cb0bcb003 Support for the mold linker (#4807) 2023-11-13 12:28:25 -08:00
Manoj Doshi
cf4e9e5578 Set version to 2.0.0-rc1 2023-11-09 11:28:45 -08:00
Bronek Kozicki
32ced493de Unify JSON serialization format of transactions (#4775)
* Remove include <ranges>

* Formatting fix

* Output for subscriptions

* Output from sign, submit etc.

* Output from ledger

* Output from account_tx

* Output from transaction_entry

* Output from tx

* Store close_time_iso in API v2 output

* Add small APIv2 unit test for subscribe

* Add unit test for transaction_entry

* Add unit test for tx

* Remove inLedger from API version 2

* Set ledger_hash and ledger_index

* Move isValidated from RPCHelpers to LedgerMaster

* Store closeTime in LedgerFill

* Time formatting fix

* additional tests for Subscribe unit tests

* Improved comments

* Rename mInLedger to mLedgerIndex

* Minor fixes

* Set ledger_hash on closed ledger, even if not validated

* Update API-CHANGELOG.md

* Add ledger_hash, ledger_index to transaction_entry

* Fix validated and close_time_iso in account_tx

* Fix typos

* Improve getJson for Transaction and STTx

* Minor improvements

* Replace class enum JsonOptions with struct

We may consider turning this into a general-purpose template and using it elsewhere

* simplify the extraction of transactionID from Transaction object

* Remove obsolete comments

* Unconditionally set validated in account_tx output

* Minor improvements

* Minor fixes

---------

Co-authored-by: Chenna Keshava <ckeshavabs@gmail.com>
2023-11-08 10:36:24 -08:00
Scott Determan
09e0f103f4 fix: check for valid public key in attestations (#4798) 2023-11-02 15:19:21 -07:00
pwang200
056255e396 Fix unit test api_version to enable api_version 2 (#4785)
The command line API still uses `apiMaximumSupportedVersion`.
The unit test RPCs use `apiMinimumSupportedVersion` if unspecified.

Context:
- #4568
- #4552
2023-11-02 09:59:19 -07:00
Elliot Lee
85342b21c8 chore: delete unused Dockerfile (#4791) 2023-10-31 23:00:11 -07:00
Stefan van Kessel
26b0322aa5 fix: remove unused variable (#4677)
With clang 15, an unused-but-set-variable warning was emitted:

PostgresDatabase.cpp:178:14: warning: variable 'expNumResults' set but
not used [-Wunused-but-set-variable]
    uint32_t expNumResults = 1;
2023-10-30 20:35:28 -07:00
Gregory Tsipenyuk
3b624d8bf8 fixFillOrKill: fix offer crossing with tfFillOrKill (#4694)
Introduce the `fixFillOrKill` amendment.

Fix an edge case occurring when an offer with `tfFillOrKill` set (but
without `tfSell` set) fails to cross an offer with a better rate. If
`tfFillOrKill` is set, then the owner must receive the full TakerPays.
Without this amendment, an offer fails if the entire `TakerGets` is not
spent. With this amendment, when `tfSell` is not set, the entire
`TakerGets` does not have to be spent.

For details about OfferCreate, see: https://xrpl.org/offercreate.html

Fix #4684

---------

Co-authored-by: Scott Schurr <scott@ripple.com>
2023-10-30 14:05:46 -07:00
Bronek Kozicki
ac02e56519 fix: remove include <ranges> (#4788)
Remove dependency on `<ranges>` header, since it is not implemented by
all compilers which we want to support.

This code change only affects unit tests.

Resolve https://github.com/XRPLF/rippled/issues/4787
2023-10-30 11:13:25 -07:00
Bronek Kozicki
1eac4d2c07 APIv2: remove tx_history and ledger_header (#4759)
Remove `tx_history` and `ledger_header` methods from API version 2.

Update `RPC::Handler` to allow for methods (or method implementations)
to be API version specific. This partially resolves #4727. We can now
store multiple handlers with the same name, as long as they belong to
different (non-overlapping) API versions. This necessarily impacts the
handler lookup algorithm and its complexity; however, there is no
performance loss on x86_64 architecture, and only minimal performance
loss on arm64 (around 10ns). This design change gives us extra
flexibility evolving the API in the future, including other parts of
#4727.

In API version 2, `tx_history` and `ledger_header` are no longer
recognised; if they are called, `rippled` returns the `unknownCmd`
error.

Resolve #3638

Resolve #3539
2023-10-24 15:57:49 -07:00
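A minimal sketch of the version-ranged handler lookup described in #4759 above. The type and field names are illustrative, not rippled's actual `RPC::Handler` interface; the point is only that several handlers can share a method name as long as their API-version ranges do not overlap.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

// Illustrative only: a handler valid for a contiguous range of API versions.
struct HandlerEntry
{
    std::uint32_t minApiVer;
    std::uint32_t maxApiVer;
    std::string impl;  // stand-in for the real handler function
};

using HandlerMap = std::multimap<std::string, HandlerEntry>;

std::optional<HandlerEntry>
lookup(
    HandlerMap const& handlers,
    std::string const& method,
    std::uint32_t apiVersion)
{
    auto [first, last] = handlers.equal_range(method);
    for (auto it = first; it != last; ++it)
        if (apiVersion >= it->second.minApiVer &&
            apiVersion <= it->second.maxApiVer)
            return it->second;
    // e.g. tx_history under api_version 2 falls through -> unknownCmd
    return std::nullopt;
}
```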
Mark Travis
3e5f770a38 docs: clarify definition of network health (#4729)
Update the documentation to describe network health with more nuance as
well as context about related factors.
2023-10-24 14:23:44 -07:00
Elliot Lee
2a66bb3fc7 Set version to 2.0.0-b4 2023-10-23 11:51:45 -07:00
Bronek Kozicki
397268394b APIv2(DeliverMax): add alias for Amount in Payment transactions (#4733)
Using the "Amount" field in Payment transactions can cause incorrect
interpretation. There continue to be problems from the use of this
field. "Amount" is rarely the correct field to use; instead,
"delivered_amount" (or "DeliveredAmount") should be used.

Rename the "Amount" field to "DeliverMax", a less misleading name. With
api_version: 2, remove the "Amount" field from Payment transactions.

- Input: "DeliverMax" in `tx_json` is an alias for "Amount"
  - sign
  - submit (in sign-and-submit mode)
  - submit_multisigned
  - sign_for
- Output: Add "DeliverMax" where transactions are provided by the API
  - ledger
  - tx
  - tx_history
  - account_tx
  - transaction_entry
  - subscribe (transactions stream)
- Output: Remove "Amount" from API version 2

Fix #3484

Fix #3902
2023-10-23 11:26:16 -07:00
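A rough sketch of the request-side aliasing described in #4733 above, using jsoncpp's `Json::Value` (rippled bundles its own variant of this type). This is illustrative only; the real handlers also have to deal with requests where both fields are present, and they strip "Amount" from API v2 output.

```cpp
#include <json/json.h>  // jsoncpp; rippled bundles its own Json::Value

// Illustrative only: treat "DeliverMax" in a submitted tx_json as an alias
// for "Amount" on a Payment transaction.
void
applyDeliverMaxAlias(Json::Value& txJson)
{
    if (txJson.isMember("DeliverMax"))
    {
        // Real code must also handle the case where both fields are present.
        txJson["Amount"] = txJson["DeliverMax"];
        txJson.removeMember("DeliverMax");
    }
}
```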
Ed Hennis
5026cbdaf3 perf(CI): use unity builds to speed up Windows CI (#4780)
The unity build speeds up compilation by bundling multiple source files
into one larger file. This reduces Windows CI build time by up to 50%.

As described in #4596, the automatic Windows builds take a very long
time. Unity builds are significantly faster - currently about 45 min,
much closer to the typical MacOS (35-40 minutes) and nix (~30 minutes)
run times.

This is intended as a stopgap solution until a more resourced and
reliable runner is available.

No C++ code was changed. This only affects CI.
2023-10-23 08:20:48 -07:00
shichengsg002
5af9dc5abd fix(CI): set permission for doxygen workflow (#4756) 2023-10-19 10:53:38 -07:00
Denis Angell
8d86c5e17d fix: allow pseudo-transactions to omit NetworkID (#4737)
The Network ID logic should not be applied to pseudo-transactions.

This allows amendments to enable on a network with an ID > 1024.

Context:
- NetworkID: https://github.com/XRPLF/rippled/pull/4370
- Pseudo-transactions: https://xrpl.org/pseudo-transaction-types.html

Fix #4736

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2023-10-19 10:00:49 -07:00
Mayukha Vadari
078bd606c7 feat(rpc): add server_definitions method (#4703)
Add a new RPC / WS call for `server_definitions`, which returns an
SDK-compatible `definitions.json` (binary enum definitions) generated by
the server. This enables clients/libraries to dynamically work with new
fields and features, such as ones that may become available on side
chains. Clients query `server_definitions` on a node from the network
they want to work with, and immediately know how to speak that node's
binary "language", even if new features are added to it in the future
(as long as there are no new serialized types that the software doesn't
know how to serialize/deserialize).

Example:

```js
> {"command": "server_definitions"}
< {
    "result": {
        "FIELDS": [
            [
                "Generic",
                {
                    "isSerialized": false,
                    "isSigningField": false,
                    "isVLEncoded": false,
                    "nth": 0,
                    "type": "Unknown"
                }
            ],
            [
                "Invalid",
                {
                    "isSerialized": false,
                    "isSigningField": false,
                    "isVLEncoded": false,
                    "nth": -1,
                    "type": "Unknown"
                }
            ],
            [
                "ObjectEndMarker",
                {
                    "isSerialized": false,
                    "isSigningField": true,
                    "isVLEncoded": false,
                    "nth": 1,
                    "type": "STObject"
                }
            ],
        ...
```

Close #3657

---------

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2023-10-18 16:16:40 -07:00
Mayukha Vadari
b421945e71 DID: Decentralized identifiers (DIDs) (XLS-40): (#4636)
Implement native support for W3C DIDs.

Add a new ledger object: `DID`.

Add two new transactions:
1. `DIDSet`: create or update the `DID` object.
2. `DIDDelete`: delete the `DID` object.

This meets the requirements specified in the DID v1.0 specification
currently recommended by the W3C Credentials Community Group.

The DID format for the XRP Ledger conforms to W3C DID standards.
The objects can be created and owned by any XRPL account holder.
The transactions can be integrated by any service, wallet, or application.
2023-10-18 13:01:12 -07:00
Denis Angell
41cd337506 Update the reserved hook error code name to tecHOOK_REJECTED (#4559)
The old name was `tecHOOK_ERROR`
2023-10-17 22:56:22 -07:00
Scott Schurr
b69156ac01 refactor(peerfinder): use LogicError in PeerFinder::Logic (#4562)
It might be possible for the server code to indirect through certain
`end()` iterators. While a debug build would catch this problem with
`assert()`s, a release build would crash. If there are problems in this
area in the future, it is best to get a definitive indication of the
nature of the error regardless of whether it's a debug or release build.
To accomplish this, these `assert`s are converted into `LogicError`s
that will produce a reasonable error message when they fire.
2023-10-17 22:52:33 -07:00
Elliot Lee
be6ac7e7a1 docs(pull_request_template): add API Impact section (#4757) 2023-10-17 21:27:03 -07:00
Elliot Lee
1fc1eb9f68 Set version to 2.0.0-b3 2023-10-16 17:59:19 -07:00
Ed Hennis
1fde585003 fix(CI): Call python to upgrade pip on Windows (#4768)
On Windows, we need to call `python` for the `pip` upgrade command to
work.

This changes the GitHub Actions Windows CI job to use the correct
command to upgrade PIP, fixing this error:

```
ERROR: To modify pip, please run the following command:
C:\hostedtoolcache\windows\Python\3.9.13\x64\python.exe -m pip install --upgrade pip
```

A future task is to make the job run on heavy Windows runners so that
it doesn't take so long.

Context: #4596
2023-10-16 14:26:40 -07:00
Jackson Mills
c915984340 docs(conan.md): fix broken link to conanfile.py (#4740) 2023-10-12 10:12:17 -07:00
Ed Hennis
50cc1cf0c9 fix(PathRequest): remove incorrect assert (#4743)
The assert is saying that the only reason `pathFinder` would be null is
if the request was aborted (connection dropped, etc.). That's what
`continueCallback()` checks. But that is very clearly not true if you
look at `getPathFinder`, which calls `findPaths`, which can return false
for many reasons.

Fix #4744
2023-10-12 10:10:35 -07:00
Chenna Keshava B S
1151fba415 fix(CI): update workflow for uploading binaries to artifactory (#4746)
Update the nix CI runner. This commit does not modify any source code
files. The unix builds were successful, but the binaries were not
uploaded to the internal artifactory. This PR borrows an idea from
@ximinez to attempt to fix this issue.

After successful authentication, the `outcome` variable contains a
string. In the upload step, we are checking if outcome == 'success' as a
prerequisite for uploading the binary.

This commit updates the contents of the `outcome` variable.
2023-10-11 08:45:45 -07:00
Ed Hennis
3e08c390f5 ci: reenable Windows CI build with Artifactory support (#4596)
Artifactory support was added to the `nix` builds with #4556. This
extends that support to the Windows build. Now the Windows build works;
CI will build and test a Windows release build. This only affects CI and
does not change any C++ code.

* Copy the remote setup step outcome fix from #4716 discussion
* Allow the Windows job to succeed if tests fail:
  * Currently the tests do not always pass, even on a single threaded
    run on the GitHub runners. So we are using parallel runs and mark
    the test step as allowed to fail (continue-on-error).
  * At this point, it's more important that the build succeeds than that
    the tests succeed, because:
  * We've got plenty of test coverage on the other jobs.
  * Test failures are much rarer than build failures because of
    cross-platform issues.
  * Having a test failure locally doesn't interrupt a workflow nearly as
    much as a build failure.

Note that Conan Center cannot hold the binaries we need. They do not
build the configurations we need, and they will not add them.

## Future Tasks

This introduces a new bottleneck since the build and test takes over an
hour. Speed up the job by:

* Making this job run on heavy Windows runners.
* Increasing the number of hardware threads.
2023-10-09 15:51:22 -07:00
Ed Hennis
053b69c63f docs(API-CHANGELOG): add XRPFees change (#4741)
* Add a new API Changelog section for release 1.10.
* Mark `jss::fee_ref` as deprecated.
* Fix a copy-paste error in one of the unit tests.
2023-10-06 11:11:39 -07:00
Florent
ced14ec1ab docs(rippled-example.cfg): add P2P link compression (#4753)
P2P link compression is a feature added in 1.6.0 by #3287.

https://xrpl.org/enable-link-compression.html

If the default changes in the future - for example, as currently
proposed by #4387 - the comment will be updated at that time.

Fix #4656
2023-10-06 08:23:30 -07:00
Denis Angell
6ba9450c89 fixDisallowIncomingV1: allow issuers to authorize trust lines (#4721)
Context: The `DisallowIncoming` amendment provides an option to block
incoming trust lines from reaching your account. The
asfDisallowIncomingTrustline AccountSet Flag, when enabled, prevents any
incoming trust line from being created. However, it was too restrictive:
it would block an issuer from authorizing a trust line, even if the
trust line already exists. Consider:

1. Issuer sets asfRequireAuth on their account.
2. User sets asfDisallowIncomingTrustline on their account.
3. User submits tx to SetTrust to Issuer.

At this point, without `fixDisallowIncomingV1` active, the issuer would
not be able to authorize the trust line.

The `fixDisallowIncomingV1` amendment, once activated, allows an issuer
to authorize a trust line even after the user sets the
asfDisallowIncomingTrustline flag, as long as the trust line already
exists.
2023-10-05 16:25:16 -07:00
Scott Determan
ec8626046b refactor: reduce boilerplate in applySteps: (#4710)
When a new transactor is added, there are several places in applySteps
that need to be modified. This patch refactors the code so only one
function needs to be modified.
2023-10-05 15:29:23 -07:00
Rome Reginelli
4e84ad6cd7 refactor: reunify transaction common fields: (#4715)
Make transactions and pseudo-transactions share the same commonFields
again. This regularizes the code in a nice way.

While this technically allows pseudo-transactions to have a
TicketSequence field, pseudo-transactions are only ever constructed by
code paths that don't add such a field, so this is not a transaction
processing change. It may be possible to add a separate check to ensure
TicketSequence (and other fields that don't make sense on
pseudo-transactions) are never added to pseudo-transactions, but that
should not be necessary. (TicketSequence is not the only common field
that cannot and does not appear in pseudo-transactions.) Note:
TicketSequence is already documented as a common field.

Related: #4637

Fix #4714
2023-10-04 17:03:14 -07:00
sokkaofthewatertribe
40ebbecac8 fix typo in SECURITY.md (#4662) 2023-10-03 21:32:16 -07:00
Elliot Lee
0c43eb3973 docs(API-CHANGELOG): clarify account_info response (#4724) 2023-10-03 21:31:24 -07:00
Stefan van Kessel
3dea78d34b fix: asan stack-use-after-scope in soci::use with rvalues (#4676)
Address a stack-use-after-scope issue when using rvalues with
`soci::use`. Replace rvalues with lvalues to ensure the scope extends
beyond the end of the expression.

The issue arises from `soci` taking a reference to the rvalue without
copying its value or extending its lifetime. `soci` references rvalues
in `soci::use_container` and then the address in `soci_use_type`. For
types like `int`, memory access post-lifetime is unlikely to cause
issues. However, for `std::string`, the backing heap memory can be freed
and potentially reused, leading to a potential segmentation fault.

This was detected on x86_64 using clang-15 with asan. asan confirms
resolution of the issue.

Fix #4675
2023-10-03 21:21:36 -07:00
Chenna Keshava B S
e27d24ba00 docs(BUILD.md): require GCC 11 or higher (#4700)
Update minimum compiler requirement for building the codebase. The
feature "using enum" is required. This feature was introduced in C++20.

Updating the C++ compiler to version 11 or later fixes this error:

```
Building CXX object CMakeFiles/xrpl_core.dir/src/ripple/protocol/impl/STAmount.cpp.o
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp: In lambda function:
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp:1577:15: error: expected nested-name-specifier before 'enum'
 1577 |         using enum Number::rounding_mode;
      |               ^~~~
```

Fix #4693
2023-10-02 16:43:29 -07:00
Scott Determan
925aca764b fix(XLS-38): disallow the same bridge on one chain: (#4720)
Modify the `XChainBridge` amendment.

Before this patch, two door accounts on the same chain could could own
the same bridge spec (of course, one would have to be the issuer and one
would have to be the locker). While this is silly, it does not violate
any bridge invariants. However, on further review, if we allow this then
the `claim` transactions would need to change. Since it's hard to see a
use case for two doors to own the same bridge, this patch disallows
it. (The transaction will return tecDUPLICATE).
2023-10-02 07:49:33 -07:00
Scott Schurr
2bb8de030f fix: stabilize voting threshold for amendment majority mechanism (#4410)
Amendment "flapping" (an amendment repeatedly gaining and losing
majority) usually occurs when an amendment is on the verge of gaining
majority, and a validator not in favor of the amendment goes offline or
loses sync. This fix makes two changes:

1. The number of validators in the UNL determines the threshold required
   for an amendment to gain majority.
2. The AmendmentTable keeps a record of the most recent Amendment vote
   received from each trusted validator (and, with `trustChanged`, stays
   up-to-date when the set of trusted validators changes). If no
   validation arrives from a given validator, then the AmendmentTable
   assumes that the previously-received vote has not changed.

In other words, when missing an `STValidation` from a remote validator,
each server now uses the last vote seen. There is a 24 hour timeout for
recorded validator votes.

These changes do not require an amendment because they do not impact
transaction processing, but only the threshold at which each individual
validator decides to propose an EnableAmendment pseudo-transaction.

Fix #4350
2023-09-28 11:47:48 -07:00
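A toy illustration of the first change above: deriving the amendment-majority threshold from the size of the UNL rather than from the number of validations that happened to arrive. The 80% figure is the XRP Ledger's amendment majority requirement; the exact constants and rounding live in the AmendmentTable implementation.

```cpp
#include <cstddef>

// Illustrative only: the smallest number of trusted validators that
// satisfies at least 80% of the UNL, independent of how many validations
// actually arrived this round.
std::size_t
amendmentMajorityThreshold(std::size_t unlSize)
{
    return (unlSize * 4 + 4) / 5;  // ceil(0.8 * unlSize)
}
```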
Ed Hennis
b92d511558 fix(build): uint is not defined on Windows platform (#4731)
Fix the Windows build by using `unsigned int` (instead of `uint`).

The error, introduced by #4618, looks something like:
  rpc\impl\RPCHelpers.h(299,5): error C2061: syntax error: identifier
  'uint' (compiling source file app\ledger\Ledger.cpp)
2023-09-27 13:12:04 -07:00
Nik Bougalis
548c91ebb6 Eliminate the built-in SNTP support (fixes #4207): (#4628) 2023-09-26 17:35:31 -07:00
Elliot Lee
2c56d9fc3e Set version to 2.0.0-b2 2023-09-22 16:17:35 -07:00
John Freeman
6b61505ac2 fix: accept all valid currency codes in API (#4566)
A few methods, including `book_offers`, take currency codes as
parameters. The XRPL doesn't care if the letters in those codes are
lowercase or uppercase, as long as they come from an alphabet defined
internally. rippled doesn't care either, when they are submitted in a
hex representation. When they are submitted in an ASCII string
representation, rippled, but not XRPL, is more restrictive, preventing
clients from interacting with some currencies already in the XRPL.

This change gets rippled out of the way and lets clients submit currency
codes in ASCII using the full alphabet.

Fixes #4112
2023-09-22 16:07:43 -07:00
Bronek Kozicki
e4db0fba2b chore: add .build to .gitignore (#4722)
Currently, the `BUILD.md` instructions suggest using `.build` as the
build directory, so this change helps to reduce confusion.

An alternative would be to instruct developers to add `/.build/` to
`.git/info/exclude` or to user-level `.gitignore` (although the latter
is very intrusive). However, it is being added here because it is a good
practice to have a sensible default that's consistent with the build
instructions.
2023-09-22 16:01:00 -07:00
Gregory Tsipenyuk
8f89694fae Add ProtocolStart and GracefulClose P2P protocol messages (#3839)
Clean up the peer-to-peer protocol start/close sequences by introducing
START_PROTOCOL and GRACEFUL_CLOSE messages, which synchronize
send/receive between inbound and outbound peers. The GRACEFUL_CLOSE
message differentiates application-layer and link-layer failures.

* Introduce the `InboundHandoff` class to manage inbound peer
  instantiation and synchronize the send/receive protocol messages
  between peers.
* Update `OverlayImpl` to utilize the `InboundHandoff` class to manage
  inbound handshakes.
* Update `PeerImp` for improved handling of protocol messages.
* Modify the `Message` class for better maintainability.
* Introduce P2P protocol version `2.3`.
2023-09-22 15:56:44 -07:00
ForwardSlashBack
5433e133d5 Fix typo in BUILD.md (#4718)
Co-authored-by: Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
2023-09-21 15:16:22 -07:00
Peter Chen
2487dab194 APIv2(gateway_balances, channel_authorize): update errors (#4618)
gateway_balances
* When `account` does not exist in the ledger, return `actNotFound`
  * (Previously, a normal response was returned)
  * Fix #4290
* When required field(s) are missing, return `invalidParams`
  * (Previously, `invalidHotWallet` was incorrectly returned)
  * Fix #4548

channel_authorize
* When the specified `key_type` is invalid, return `badKeyType`
  * (Previously, `invalidParams` was returned)
  * Fix #4289

Since these are breaking changes, they apply only to API version 2.

Supersedes #4577
2023-09-21 13:33:24 -07:00
John Freeman
77e0912a0e build: use Boost 1.82 and link Boost.Json (#4632)
Add Boost::json to the list of linked Boost libraries.

This seems to be required for macOS.
2023-09-21 08:58:39 -07:00
Chenna Keshava B S
a948203dae docs(overlay): add URL of blog post and clarify wording (#4635) 2023-09-18 22:00:59 -07:00
Elliot Lee
8f65bc24a0 docs(API-CHANGELOG): api_version 2 is expected with 2.0 (#4633)
* Also update for #4585
2023-09-18 21:59:47 -07:00
Elliot Lee
9f102fc99d docs(RELEASENOTES): update 1.12.0 notes to match dev blog (#4691)
* Reorganize some changelog entries
* Add note about portable binaries
* Dev blog: https://xrpl.org/blog
2023-09-18 21:57:25 -07:00
John Freeman
7bff9dc7f0 Update secp256k1 to 0.3.2 (#4653)
Copy the new code to `src/secp256k1` without changes:
`src/secp256k1` is identical to bitcoin-core/secp256k1@acf5c55 (v0.3.2).

We could consider changing to a Git submodule, though that would require
changes to the build instructions because we are not using submodules
anywhere else.
2023-09-18 13:26:14 -07:00
Chenna Keshava B S
e86181c096 docs: fix comment for LedgerHistory::fixIndex return value (#4574)
`LedgerHistory::fixIndex` returns `false` if a repair was performed.

Fix #4572
2023-09-18 10:01:43 -07:00
Ed Hennis
65df4bceaa fix: remove unused variable causing clang 14 build errors (#4672)
Removed the unused variable `none` from `Writer.cpp` which was causing
build errors on clang version 14.
2023-09-18 08:38:51 -07:00
Elliot Lee
3397922989 docs(BUILD): make it easier to find environment.md (#4507)
Make the instructions a bit easier to follow. Users on different
platforms can look for their platform name to find relevant information.
2023-09-15 21:54:25 -07:00
Elliot Lee
046d0c2f0a Set version to 2.0.0-b1 2023-09-15 08:55:09 -07:00
Michael Legleux
01fc4b0203 Revert CMake changes (#4707)
This was likely put back when #4292 was rebased.
2023-09-15 08:54:46 -07:00
Scott Determan
5b7d2c133a Change XChainBridge amendment to Supported::yes (#4709) 2023-09-15 08:54:39 -07:00
Scott Determan
237b406e8c Fix Windows build by removing two unused declarations (#4708)
Remove the `verify` and `message` function declarations. The explicit
instantiation requests could not be completed because there were no
implementations for those two member functions. It is helpful that the
Microsoft (MSVC) compiler on Windows appears to be strict when it comes
to template instantiation.

This resolves the warning:

  XChainAttestations.h(450): warning C4661: 'bool
  ripple::XChainAttestationsBase<ripple::XChainClaimAttestation>::verify(void)
  const': no suitable definition provided for explicit template
  instantiation request
2023-09-14 21:02:31 -07:00
Ed Hennis
5427321260 Match unit tests on start of test name (#4634)
* For example, without this change, to run the TxQ tests, you must
  specify `--unittest=TxQ1,TxQ2` on the command line. With this change,
  you can use `--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
  name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
  might be useful. If not, the shorter-named test(s) can be renamed. For
  example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
  up parallel tests by a factor of 5.
2023-09-14 13:19:15 -07:00
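A minimal sketch (not the actual test framework code) of prefix matching with exact-match priority, as described in #4634 above.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Illustrative only: select tests whose names start with the pattern,
// unless an exact match exists, in which case only that test is selected.
std::vector<std::string>
selectTests(std::vector<std::string> const& all, std::string const& pattern)
{
    for (auto const& name : all)
        if (name == pattern)
            return {name};  // exact match suppresses partial matches

    std::vector<std::string> out;
    for (auto const& name : all)
        if (name.rfind(pattern, 0) == 0)  // name starts with pattern
            out.push_back(name);
    return out;
}

int main()
{
    std::vector<std::string> const tests{"TxQ1", "TxQ2", "NFToken", "NFTokenBurn"};
    for (auto const& t : selectTests(tests, "TxQ"))
        std::cout << t << '\n';  // prints TxQ1 and TxQ2
}
```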
Howard Hinnant
ce570c166d Revert ThreadName due to problems on Windows (#4702)
* Revert "Remove CurrentThreadName.h from RippledCore.cmake (#4697)"

This reverts commit 3b5fcd5873.

* Revert "Introduce replacement for getting and setting thread name: (#4312)"

This reverts commit 36cb5f90e2.
2023-09-14 13:16:50 -07:00
Scott Determan
649c11a78e XChainBridge: Introduce sidechain support (XLS-38): (#4292)
A bridge connects two blockchains: a locking chain and an issuing
chain (also called a mainchain and a sidechain). Both are independent
ledgers, with their own validators and potentially their own custom
transactions. Importantly, there is a way to move assets from the
locking chain to the issuing chain and a way to return those assets from
the issuing chain back to the locking chain: the bridge. This key
operation is called a cross-chain transfer. A cross-chain transfer is
not a single transaction. It happens on two chains, requires multiple
transactions, and involves an additional server type called a "witness".

A bridge does not exchange assets between two ledgers. Instead, it locks
assets on one ledger (the "locking chain") and represents those assets
with wrapped assets on another chain (the "issuing chain"). A good model
to keep in mind is a box with an infinite supply of wrapped assets.
Putting an asset from the locking chain into the box will release a
wrapped asset onto the issuing chain. Putting a wrapped asset from the
issuing chain back into the box will release one of the existing locking
chain assets back onto the locking chain. There is no other way to get
assets into or out of the box. Note that there is no way for the box to
"run out of" wrapped assets - it has an infinite supply.

Co-authored-by: Gregory Popovitch <greg7mdp@gmail.com>
2023-09-14 13:08:41 -07:00
Peter Chen
7fae1c1262 APIv2(account_tx, noripple_check): return error on invalid input (#4620)
For the `account_tx` and `noripple_check` methods, perform input
validation for optional parameters such as "binary", "forward",
"strict", "transactions". Previously, when these parameters had invalid
values (e.g. not a bool), no error would be returned. Now, it returns an
`invalidParams` error.

* This updates the behavior to match Clio
  (https://github.com/XRPLF/clio).
* Since this is potentially a breaking change, it only applies to
  requests specifying api_version: 2.
* Fix #4543.
2023-09-13 15:01:37 -07:00
Mark Travis
f259cc1ab6 Several changes to improve Consensus stability: (#4505)
* Verify accepted ledger becomes validated, and retry
   with a new consensus transaction set if not.
 * Always store proposals.
 * Track proposals by ledger sequence. This helps slow peers catch
   up with the rest of the network.
 * Acquire transaction sets for proposals with future ledger sequences.
   This also helps slow peers catch up.
 * Optimize timer delay for establish phase to wait based on how
   long validators have been sending proposals. This also helps slow
   peers to catch up.
 * Fix impasse achieving close time consensus.
 * Don't wait between open and establish phases.
2023-09-11 15:48:32 -07:00
Mark Travis
002893f280 Apply transaction batches in periodic intervals (#4504)
Add a new transaction submission API field, "sync", which determines
the behavior of the server while submitting transactions:
1) sync (default): Process transactions in a batch immediately,
   and return only once the transaction has been processed.
2) async: Put transaction into the batch for the next processing
   interval and return immediately.
3) wait: Put transaction into the batch for the next processing
   interval and return only after it is processed.
2023-09-11 15:47:40 -07:00
Mark Travis
1d9db1bfdd Asynchronously write batches to NuDB. (#4503) 2023-09-11 15:46:17 -07:00
Howard Hinnant
3b5fcd5873 Remove CurrentThreadName.h from RippledCore.cmake (#4697)
(File was already removed from the source)
2023-09-11 15:41:54 -07:00
Elliot Lee
a95505739d docs: update SECURITY.md (#4338) 2023-09-08 13:51:23 -07:00
Mayukha Vadari
31c8281922 refactor: simplify TxFormats common fields logic (#4637)
Minor refactor to `TxFormats.cpp`:
- Rename `commonFields` to `pseudoCommonFields` (since it is the common fields
  that all pseudo-transactions need)
- Add a new static variable, `commonFields`, which represents all the common
  fields that non-pseudo transactions need (essentially everything that
  `pseudoCommonFields` contains, plus `sfTicketSequence`)

This makes it harder to accidentally leave out `sfTicketSequence` in a new
transaction.
2023-09-08 10:24:51 -07:00
Peter Chen
c6f6375015 APIv2(ledger_entry): return invalidParams for bad parameters (#4630)
- Verify "check", used to retrieve a Check object, is a string.
- Verify "nft_page", used to retrieve an NFT Page, is a string.
- Verify "index", used to retrieve any type of ledger object by its
  unique ID, is a string.
- Verify "directory", used to retrieve a DirectoryNode, is a string or
  an object.

This change only impacts api_version 2 since it is a breaking change.

https://xrpl.org/ledger_entry.html

Fix #4550
2023-09-07 22:47:04 -07:00
Mark Pevec
6f74a745db docs(rippled-example.cfg): clarify ssl_cert vs ssl_chain (#4667)
Clarify usage of ssl_cert vs ssl_chain
2023-09-07 15:19:05 -07:00
Howard Hinnant
36cb5f90e2 Introduce replacement for getting and setting thread name: (#4312)
* In namespace ripple, introduces get_name function that takes a
  std::thread::native_handle_type and returns a std::string.
* In namespace ripple, introduces get_name function that takes a
  std::thread or std::jthread and returns a std::string.
* In namespace ripple::this_thread, introduces get_name function
  that takes no parameters and returns the name of the current
  thread as a std::string.
* In namespace ripple::this_thread, introduces set_name function
  that takes a std::string_view and sets the name of the current
  thread.
* Intended to replace the beast utilities setCurrentThreadName
  and getCurrentThreadName.
2023-09-07 11:44:36 -07:00
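The bullet points above describe the interface. As a rough, Linux-only illustration of what such a wrapper can look like under the hood (the actual implementation is cross-platform and lives in the ripple namespaces), here is a sketch built on the glibc pthread naming calls:

```cpp
#include <pthread.h>  // pthread_setname_np/getname_np are GNU extensions

#include <string>
#include <string_view>

namespace this_thread_sketch {

// Illustrative only: Linux limits thread names to 15 characters plus NUL.
void
set_name(std::string_view name)
{
    pthread_setname_np(
        pthread_self(), std::string(name).substr(0, 15).c_str());
}

std::string
get_name()
{
    char buf[16]{};
    pthread_getname_np(pthread_self(), buf, sizeof(buf));
    return buf;
}

}  // namespace this_thread_sketch
```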
Manoj Doshi
89780c8e4f Set version to 1.12.0 2023-09-06 13:58:34 -07:00
Elliot Lee
9d4d8c22d9 Set version to 1.12.0-rc4 2023-09-01 15:53:10 -07:00
Gregory Tsipenyuk
b014b79d88 amm_info: fetch by amm account id; add AMM object entry (#4682)
- Update amm_info to fetch AMM by amm account id.
  - This is an additional way to retrieve an AMM object.
  - Alternatively, AMM can still be fetched by the asset pair as well.
- Add owner directory entry for AMM object.

Context:

- Add back the AMM object directory entry, which was deleted by #4626.
  - This fixes `account_objects` for `amm` type.
2023-09-01 15:50:58 -07:00
Rome Reginelli
a61a88ea81 AMMBid: use tecINTERNAL for 'impossible' errors (#4674)
Modify two error cases in AMMBid transactor to return `tecINTERNAL` to
more clearly indicate that these errors should not be possible unless
operating in unforeseen circumstances. It likely indicates a bug.

The log level has been updated to `fatal()` since it indicates a
(potentially network-wide) unexpected condition when either of these
errors occurs.

Details:

The two specific transaction error cases changed are:

- `tecAMM_BALANCE` - In this case, this error (total LP Tokens
  outstanding is lower than the amount to be burned for the bid) is a
  subset of the case where the user doesn't have enough LP Tokens to pay
  for the bid. When this case is reached, the bidder's LP Tokens balance
  has already been checked first. The user's LP Tokens should always be
  a subset of total LP Tokens issued, so this should be impossible.
- `tecINSUFFICIENT_PAYMENT` - In this case, the amount to be refunded as
  a result of the bid is greater than the price paid for the auction
  slot. This should never occur unless something is wrong with the math
  for calculating the refund amount.

Both error cases in question are "defense in depth" measures meant to
protect against making things worse if the code has already reached a
state that is supposed to be impossible, likely due to a bug elsewhere.

Such "shouldn't ever occur" checks should use an error code that
categorically indicates a larger problem. This is similar to how
`tecINVARIANT_FAILED` is a warning sign that something went wrong and
likely could've been worse, but since there isn't an Invariant Check
applying here, `tecINTERNAL` is the appropriate error code.

This is "debatably" a transaction processing change since it could
hypothetically change how transactions are processed if there's a bug we
don't know about.
2023-09-01 13:44:48 -07:00
Elliot Lee
f31c50d04d Set version to 1.12.0-rc3 2023-08-30 15:47:07 -07:00
Elliot Lee
8ceb0f3a79 Revert "Asynchronously write batches to NuDB (#4503)"
This reverts commit 21c4aaf993.
2023-08-30 15:46:24 -07:00
Elliot Lee
d943c58b3d Revert "Apply transaction batches in periodic intervals (#4504)"
This reverts commit b580049ec0.
2023-08-30 15:46:12 -07:00
Elliot Lee
7ca1c644d1 Revert "Several changes to improve Consensus stability: (#4505)"
This reverts commit e8a7b2a1fc.
2023-08-29 16:52:31 -07:00
Manoj Doshi
300b7e078a Set version to 1.12.0-rc1 2023-08-21 16:23:06 -07:00
manoj
3574956e5e Set version to 1.12-b3
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2023-08-21 16:15:38 -07:00
Mark Travis
e8a7b2a1fc Several changes to improve Consensus stability: (#4505)
* Verify accepted ledger becomes validated, and retry
   with a new consensus transaction set if not.
 * Always store proposals.
 * Track proposals by ledger sequence. This helps slow peers catch
   up with the rest of the network.
 * Acquire transaction sets for proposals with future ledger sequences.
   This also helps slow peers catch up.
 * Optimize timer delay for establish phase to wait based on how
   long validators have been sending proposals. This also helps slow
   peers to catch up.
 * Fix impasse achieving close time consensus.
 * Don't wait between open and establish phases.
2023-08-21 16:15:31 -07:00
Mark Travis
b580049ec0 Apply transaction batches in periodic intervals (#4504)
Add a new transaction submission API field, "sync", which determines
the behavior of the server while submitting transactions:
1) sync (default): Process transactions in a batch immediately,
   and return only once the transaction has been processed.
2) async: Put transaction into the batch for the next processing
   interval and return immediately.
3) wait: Put transaction into the batch for the next processing
   interval and return only after it is processed.
2023-08-21 16:15:31 -07:00
Mark Travis
21c4aaf993 Asynchronously write batches to NuDB (#4503) 2023-08-21 16:15:31 -07:00
Ikko Eltociear Ashimine
213491d94b refactor: fix typo in FeeUnits.h (#4644)
covert -> convert
2023-08-21 16:15:31 -07:00
Michael Legleux
da203b241b Update Ubuntu build image (#4650) 2023-08-21 16:15:31 -07:00
Arihant Kothari
5d88b726c2 test: add forAllApiVersions helper function (#4611)
Introduce a new variadic template helper function, `forAllApiVersions`,
that accepts callables to execute a set of functions over a range of
versions - from RPC::apiMinimumSupportedVersion to RPC::apiBetaVersion.
This avoids the duplication of code.

Context: #4552
2023-08-21 16:15:31 -07:00
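A minimal sketch of a variadic helper in the spirit of `forAllApiVersions` (#4611). The version bounds are placeholders rather than rippled's `RPC::apiMinimumSupportedVersion` / `RPC::apiBetaVersion` constants, and passing the version into each callable is an assumption based on the description above.

```cpp
#include <cstdint>
#include <iostream>

constexpr std::uint32_t apiMinimumSupportedVersion = 1;  // placeholder
constexpr std::uint32_t apiBetaVersion = 3;              // placeholder

template <class... Fs>
void
forAllApiVersions(Fs&&... fs)
{
    for (auto ver = apiMinimumSupportedVersion; ver <= apiBetaVersion; ++ver)
        (fs(ver), ...);  // invoke every callable for every version
}

int main()
{
    forAllApiVersions(
        [](std::uint32_t v) { std::cout << "check response, v" << v << '\n'; },
        [](std::uint32_t v) { std::cout << "check errors, v" << v << '\n'; });
}
```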
Mayukha Vadari
dcb7dd8d27 add view updates for account SLEs (#4629)
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2023-08-21 16:15:31 -07:00
John Freeman
91e9658217 Fix the package recipe for consumers of libxrpl (#4631)
- "Rename" the type `LedgerInfo` to `LedgerHeader` (but leave an alias
  for `LedgerInfo` to not yet disturb existing uses). Put it in its own
  public header, named after itself, so that it is more easily found.
- Move the type `Fees` and NFT serialization functions into public
  (installed) headers.
- Compile the XRPL and gRPC protocol buffers directly into `libxrpl` and
  install their headers. Fix the Conan recipe to correctly export these
  types.

Addresses change (2) in
https://github.com/XRPLF/XRPL-Standards/discussions/121.

For context: This work supports Clio's dependence on libxrpl. Clio is
just an example consumer. These changes should benefit all current and
future consumers.

---------

Co-authored-by: cyan317 <120398799+cindyyan317@users.noreply.github.com>
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2023-08-21 16:15:31 -07:00
Alphonse Noni Mousse
b28f62bbd9 refactor: improve checking of path lengths (#4519)
Improve the checking of path lengths during Payments. Previously, the
code that checked payment path lengths was sometimes executed but had
no effect. This change performs the check only when it matters and
avoids unnecessary copies of the path vectors.

Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2023-08-21 16:15:30 -07:00
Alphonse N. Mousse
4f4951022c refactor: use C++20 function std::popcount (#4389)
- Replace custom popcnt16 implementation with std::popcount from C++20
- Maintain compatibility with older compilers and MacOS by providing a
  conditional compilation fallback to __builtin_popcount and a lookup
  table method
- Move and inline related functions within SHAMapInnerNode for
  performance and readability

Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2023-08-21 16:15:30 -07:00
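A minimal sketch of the pattern described above: prefer `std::popcount` where the C++20 bit library is available, and fall back to the compiler builtin (or a portable loop) otherwise. rippled's actual fallback also includes a lookup-table path.

```cpp
#include <cstdint>

#if defined(__has_include)
#if __has_include(<bit>)
#include <bit>
#endif
#endif

inline int
popcount16(std::uint16_t x)
{
#if defined(__cpp_lib_bitops)
    return std::popcount(x);        // C++20
#elif defined(__GNUC__) || defined(__clang__)
    return __builtin_popcount(x);   // compiler intrinsic fallback
#else
    int n = 0;
    for (; x != 0; x &= static_cast<std::uint16_t>(x - 1))
        ++n;                        // Kernighan's bit-clearing loop
    return n;
#endif
}
```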
Gregory Tsipenyuk
58f7aecb0b fix(AMM): prevent orphaned objects, inconsistent ledger state: (#4626)
When an AMM account is deleted, the owner directory entries must be
deleted in order to ensure consistent ledger state.

* When deleting AMM account:
  * Clean up AMM owner dir, linking AMM account and AMM object
  * Delete trust lines to AMM
* Disallow `CheckCreate` to AMM accounts
  * AMM cannot cash a check
* Constrain entries in AuthAccounts array to be accounts
  * AuthAccounts is an array of objects for the AMMBid transaction
* SetTrust (TrustSet): Allow on AMM only for LP tokens
  * If the destination is an AMM account and the trust line doesn't
    exist, then:
    * If the asset is not the AMM LP token, then fail the tx with
      `tecNO_PERMISSION`
    * If the AMM is in empty state, then fail the tx with `tecAMM_EMPTY`
      * This disallows trustlines to AMM in empty state
* Add AMMID to AMM root account
  * Remove lsfAMM flag and use sfAMMID instead
* Remove owner dir entry for ltAMM
* Add `AMMDelete` transaction type to handle amortized deletion
  * Limit number of trust lines to delete on final withdraw + AMMDelete
  * Put AMM in empty state when LPTokens is 0 upon final withdraw
  * Add `tfTwoAssetIfEmpty` deposit option in AMM empty state
  * Fail all AMM transactions in AMM empty state except special deposit
  * Add `tecINCOMPLETE` to indicate that not all AMM trust lines are
    deleted (i.e. partial deletion)
    * This is handled in Transactor similar to deleted offers
  * Fail AMMDelete with `tecINTERNAL` if AMM root account is nullptr
  * Don't validate for invalid asset pair in AMMDelete
* AMMWithdraw deletes AMM trust lines and AMM account/object only if the
  number of trust lines is less than max
  * Current `maxDeletableAMMTrustLines` = 512
  * Check no directory left after AMM trust lines are deleted
  * Enable partial trustline deletion in AMMWithdraw
* Add `tecAMM_NOT_EMPTY` to fail any transaction that expects an AMM in
  empty state
* Clawback considerations
  * Disallow clawback out of AMM account
  * Disallow AMM create if issuer can claw back

This patch applies to the AMM implementation in #4294.

Acknowledgements:
Richard Holland and Nik Bougalis for responsibly disclosing this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the project code and urge researchers to
responsibly disclose any issues they may find.

To report a bug, please send a detailed report to:

    bugs@xrpl.org

Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
2023-08-21 16:15:08 -07:00
RichardAH
5530a0b665 feat: support Concise Transaction Identifier (CTID) (XLS-37) (#4418)
* add CTID to tx rpc


---------

Co-authored-by: Rome Reginelli <mduo13@gmail.com>
Co-authored-by: Elliot Lee <github.public@intelliot.com>
Co-authored-by: Denis Angell <dangell@transia.co>
2023-08-17 18:43:47 -07:00
Elliot Lee
aded4a7a92 docs: add API Changelog (#4612)
Introduce an API Changelog, which logs the changes that have been made
to the API.

Without this changelog, it is difficult to keep track of the changes and
additions that are made to the API. While all changes are surfaced in
PRs and Release Notes, these are mixed in with other non-API-affecting
changes. PRs that affect the API have the `API Change` label applied,
but it is hard to identify which PRs have been included in each release.
Furthermore, some API changes will take effect based on `api_version`
(starting with rippled version 1.12, which will introduce `api_version:
2`), while others are based on the `rippled` version.

The API Changelog clarifies the details of the changes in a way that is
easily understood by API consumers, and breaks down the changes to be
clear which ones are gated by api_version (versus `rippled` version).

From now on, all PR authors are responsible for updating the API
Changelog according to the additions/changes that their PR makes to the
APIs.
2023-07-19 14:03:26 -07:00
Elliot Lee
01df19c08b Set version to 1.12.0-b2 2023-07-18 11:57:56 -07:00
Elliot Lee
c197c4cdd9 BUILD: list steps after dependencies update (#4623)
After dependencies update, developers may need to effectively "start
over" with the build steps.
2023-07-17 22:16:42 -07:00
Shawn Xie
5ba1f984df Rename allowClawback flag to allowTrustLineClawback (#4617)
The reason for this change is discussed in XRPLF/XRPL-Standards#119.

We want to be explicit that this flag applies exclusively to trust lines. New token types (e.g., CFT) will not use this flag for clawback; instead, they will turn clawback on/off at the token level, which is more versatile.
2023-07-13 18:00:32 -07:00
John Freeman
cb09e61d2f Update dependencies (#4595)
Use the most recent versions in ConanCenter.

* Due to a bug in Clang 16, you may get a compile error:
  "call to 'async_teardown' is ambiguous"
  * A compiler flag workaround is documented in `BUILD.md`.
* At this time, building this with gcc 13 may require editing some files
  in `.conan/data`
  * A patch to support gcc13 may be added in a later PR.

---------

Co-authored-by: Scott Schurr <scott@ripple.com>
2023-07-13 13:25:08 -04:00
Gregory Tsipenyuk
3c9db4b69e Introduce AMM support (XLS-30d): (#4294)
Add AMM functionality:
- InstanceCreate
- Deposit
- Withdraw
- Governance
- Auctioning
- payment engine integration

To support this functionality, add:
- New RPC method, `amm_info`, to fetch pool and LPT balances
- AMM Root Account
- trust line for each IOU AMM token
- trust line to track Liquidity Provider Tokens (LPT)
- `ltAMM` object

The `ltAMM` object tracks:
- fee votes
- auction slot bids
- AMM tokens pair
- total outstanding tokens balance
- `AMMID` to AMM `RootAccountID` mapping

Add new classes to facilitate AMM integration into the payment engine.
`BookStep` uses these classes to infer if AMM liquidity can be consumed.

The AMM formula implementation uses the new Number class added in #4192.
IOUAmount and STAmount use Number arithmetic.

Add AMM unit tests for all features.

AMM requires the following amendments:
- featureAMM
- fixUniversalNumber
- featureFlowCross

Notes:
- Current trading fee threshold is 1%
- AMM currency is generated by: 0x03 + 152 bits of sha256{cur1, cur2}
- Current max AMM Offers is 30

---------

Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
2023-07-12 13:52:50 -04:00
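The note above about AMM currency generation ("0x03 + 152 bits of sha256{cur1, cur2}") can be sketched roughly as follows. The ordering and byte layout of the two currency codes fed into the hash are assumptions here (see the AMM sources for the real rule); the sketch uses OpenSSL's `SHA256`.

```cpp
#include <openssl/sha.h>

#include <array>
#include <cstring>

// Rough sketch only; not the actual AMM currency derivation code.
std::array<unsigned char, 20>
ammCurrencySketch(
    std::array<unsigned char, 20> const& cur1,
    std::array<unsigned char, 20> const& cur2)
{
    unsigned char buf[40];
    std::memcpy(buf, cur1.data(), 20);
    std::memcpy(buf + 20, cur2.data(), 20);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(buf, sizeof(buf), digest);

    std::array<unsigned char, 20> out{};
    out[0] = 0x03;                            // fixed prefix byte
    std::memcpy(out.data() + 1, digest, 19);  // 152 bits of the hash
    return out;
}
```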
John Freeman
eeb8b41889 Adapt to change in Conan recipe for NuDB (#4615)
The recipe was updated a few days ago and the exported library target was renamed.
2023-07-11 16:26:15 -05:00
Elliot Lee
f7dd37e355 docs(CONTRIBUTING): push beta releases to release (#4589)
Sections that were rewrapped were wrapped to 72 characters, the same as
the recommendation for commit messages.
2023-07-07 14:31:09 -07:00
Arihant Kothari
a45a95e5ea APIv2(ledger_entry): return "invalidParams" when fields missing (#4552)
Improve error handling for ledger_entry by returning an "invalidParams"
error when one or more request fields are specified incorrectly, or one
or more required fields are missing.

For example, if none of the following fields is provided, then the
API should return an invalidParams error:
* index, account_root, directory, offer, ripple_state, check, escrow,
  payment_channel, deposit_preauth, ticket

Prior to this commit, the API returned an "unknownOption" error instead.
Since the error was actually due to invalid parameters, rather than
unknown options, this error was misleading.

Since this is an API breaking change, the "invalidParams" error is only
returned for requests using api_version: 2 and above. To maintain
backward compatibility, the "unknownOption" error is still returned for
api_version: 1.

Related: #4573

Fix #4303
2023-07-06 11:58:53 -07:00
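As a tiny illustration of the version gating described above (the names here are invented, not rippled's), the same validation failure maps to different error codes depending on the requested `api_version`, preserving backward compatibility for v1 clients.

```cpp
#include <cstdint>
#include <string>

// Illustrative only.
std::string
missingLedgerEntryFieldError(std::uint32_t apiVersion)
{
    return apiVersion >= 2 ? "invalidParams" : "unknownOption";
}
```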
Chenna Keshava B S
c6fee28b92 refactor: change the return type of mulDiv to std::optional (#4243)
- Previously, mulDiv had `std::pair<bool, uint64_t>` as the output type.
  - This is an error-prone interface as it is easy to ignore when
    overflow occurs.
- Using a return type of `std::optional` should decrease the likelihood
  of ignoring overflow.
  - It also allows for the use of optional::value_or() as a way to
    explicitly recover from overflow.
- Include limits.h header file preprocessing directive in order to
  satisfy gcc's numeric_limits incomplete_type requirement.

Fix #3495

---------

Co-authored-by: John Freeman <jfreeman08@gmail.com>
2023-07-05 20:11:19 -07:00
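A minimal sketch of the `std::optional` interface described above (not rippled's exact implementation, which is more careful about cases where the final result still fits despite an intermediate overflow):

```cpp
#include <cstdint>
#include <limits>
#include <optional>

// Illustrative only: return std::nullopt instead of a (bool, value) pair.
std::optional<std::uint64_t>
mulDiv(std::uint64_t value, std::uint64_t mul, std::uint64_t div)
{
    if (div == 0)
        return std::nullopt;
    // Reject cases where the intermediate product would overflow.
    if (mul != 0 &&
        value > std::numeric_limits<std::uint64_t>::max() / mul)
        return std::nullopt;
    return value * mul / div;
}

// Callers can no longer silently ignore overflow, e.g.:
//   auto scaled = mulDiv(base, num, den)
//                     .value_or(std::numeric_limits<std::uint64_t>::max());
```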
Shawn Xie
77dc63b549 fix: add allowClawback flag for account_info (#4590)
* Update the `account_info` API so that the `allowClawback` flag is
  included in the response.
  * The proposed `Clawback` amendment added an `allowClawback` flag in
    the `AccountRoot` object.
  * In the API response, under `account_flags`, there is now an
    `allowClawback` field with a boolean (`true` or `false`) value.
  * For reference, the XLS-39 Clawback implementation can be found in
    #4553

Fix #4588
2023-07-05 08:46:23 -07:00
John Freeman
66bfe909e6 build: add binary hardening compile and link flags (#4603)
Enhance security during the build process:

* The '-fstack-protector' flag enables stack protection for preventing
  buffer overflow vulnerabilities. If an attempt is made to overflow the
  buffer, the program will terminate, thus protecting the integrity of
  the stack.
* The '-Wl,-z,relro,-z,now' linker flag enables Read-only Relocations
  (RELRO), a feature that helps harden the binary against certain types
  of exploits, particularly those that involve overwriting the Global
  Offset Table (GOT).
  * This flag is only set for Linux builds, due to compatibility issues
    with apple-clang.
  * The `relro` option makes certain sections of memory read-only after
    initialization to prevent them from being overwritten, while `now`
    ensures that all dynamic symbols are resolved immediately on program
    start, reducing the window of opportunity for attacks.
2023-07-03 07:41:12 -07:00
Mayukha Vadari
9c50415ebe add clang-format pre-commit hook (#4599)
* Add a new YAML file (.pre-commit-config.yaml) to set up pre-commit
  hook for clang-format
  * The pre-commit hook is opt-in and needs to be installed in order to
    run automatically
* Update CONTRIBUTING.md with instructions on how to set up and use the
  clang-format linter

Automating the process of running clang-format before committing code
helps to save time by removing the need to fix formatting issues later.

This is a tooling improvement and doesn't change C++ code.
2023-07-01 15:23:57 -07:00
Ed Hennis
516ffb2147 chore: update checkout action version to v3: (#4598)
* Update the version of the checkout action (for GitHub Actions) in
  `clang-format.yml` and `levelization.yml`.
  * The previous version, v2, was raising deprecation warnings due to
    its reliance on Node.js 12.
  * The latest checkout action version, v3, uses Node.js 16.
2023-07-01 00:13:37 -07:00
Peter Chen
f18c6dfea7 APIv2(account_info): handle invalid "signer_lists" value (#4585)
When requesting `account_info` with an invalid `signer_lists` value, the
API should return an "invalidParams" error.

`signer_lists` should have a value of type boolean. If it is not a
boolean, then it is invalid input. The response now indicates that.

* This is an API breaking change, so the change is only reflected for
  requests containing `"api_version": 2`
* Fix #4539
2023-06-29 23:05:21 -07:00
Michael Legleux
1cb67fbd20 fix: deb package build (#4591)
The debug packages were named with the extension ".ddeb", but due to a
bug in Artifactory, they need to have the ".deb" extension. Debug symbol
packages with ".ddeb" extensions are not indexed, and thus are not
visible in apt clients.

* Fix the issue by renaming the debug packages in the build script.
* Use GCC-11 and update GCC Conan profile.
  * This software requires GCC 11 and C++20. However, reporting mode is
    built with C++17.

This is a quick band-aid to fix the build. Later, it will be better to
remove this package-building code.

For context, a Debian (deb) package contains bundled software and
resources necessary for installing and managing software on a
Debian-based system, including Ubuntu and derivatives.
2023-06-29 20:15:11 -07:00
John Freeman
54afdaa101 ci: cancel overridden workflows (#4597)
Small quality-of-life improvements to workflows using new concurrency
control features:

https://docs.github.com/en/actions/using-jobs/using-concurrency

At time of this commit, macOS runners are oversubscribed. This may help.
2023-06-29 15:31:36 -07:00
Chenna Keshava B S
1c2ae10dc0 fix: Update Handler::Condition enum values #3417 (#4239)
- Change the enum values to powers of two to clearly indicate the
  bitmask (fix #3417)
- Replace the bitmask test (the bitwise `&` operator) with explicit
  if-conditions to better indicate the predicates
- Implements the second solution proposed in Nik Bougalis's comment on
  #3417 ("Software does not distinguish between different Conditions
  (Version: 1.5)")
- Tested by performing RPC calls with the commands server_info,
  server_state, peers, and validation_info; these commands worked as
  expected.
2023-06-29 09:12:15 -07:00
Peter Chen
d34e8be316 APIv2: add error messages for account_tx (#4571)
Certain inputs for the AccountTx method should return an error. In other
words, an invalid request from a user or client now results in an error
message.

Since this can change the response from the API, it is an API breaking
change. This commit maintains backward compatibility by keeping the
existing behavior for existing requests. When clients specify
"api_version": 2, they will be able to get the updated error messages.

Update unit tests to check the error based on the API version.

* Fix #4288
* Fix #4545
2023-06-29 08:41:13 -07:00
Ed Hennis
4111382a31 Fix build references to deleted ServerHandlerImp: (#4592)
* Commits 0b812cd (#4427) and 11e914f (#4516) conflict. The first added
  references to `ServerHandlerImp` in files outside of that class's
  organizational unit (which is technically incorrect). The second
  removed `ServerHandlerImp`, but was not up to date with develop. This
  results in the build failing.
* Fixes the build by changing references to `ServerHandlerImp` to
  the more correct `ServerHandler`.
2023-06-28 13:23:12 -07:00
Scott Schurr
11e914fbe9 refactor: rename ServerHandlerImp to ServerHandler (#4516)
Rename `ServerHandlerImp` to `ServerHandler`. There was no other
ServerHandler definition despite the existence of a header suggesting
that there was.

This resolves a piece of historical confusion in the code, which was
identified during a code review.

The changes in the diff may look more extensive than they actually are.
The contents of `impl/ServerHandlerImp.h` were merged into
`ServerHandler.h`, making the latter file appear to have undergone
significant modifications. However, this is a non-breaking refactor that
only restructures code.
2023-06-27 17:57:52 -07:00
Chenna Keshava B S
0e983528e1 fix: remove deprecated fields in ledger method (#4244)
Remove deprecated fields from the ledger command:
* accepted
* hash (use ledger_hash instead)
* seqNum (use ledger_index instead)
* totalCoins (use total_coins instead)

Update SHAMapStore unit tests to use `jss:ledger_hash` instead of the
deprecated `hash` field.

Fix #3214
2023-06-27 17:52:15 -07:00
John Freeman
6b4437db39 Fix package definition for Conan (#4485)
Fix the libxrpl library target for consumers using Conan.

* Fix installation issues and update includes.
* Update requirements in the Conan package info.
  * libxrpl requires openssl::crypto.

(Conan is a software package manager for C++.)
2023-06-27 01:23:52 -07:00
Denis Angell
534a36536c refactor: replace hand-rolled lexicalCast (#4473)
Replace hand-rolled code with std::from_chars for better
maintainability.

The C++ std::from_chars function is intended to be as fast as possible,
so it is unlikely to be slower than the code it replaces. This change is
a net gain because it reduces the amount of hand-rolled code.
2023-06-26 23:50:03 -07:00
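A minimal sketch of the `std::from_chars`-based approach; the helper name and optional-returning signature here are illustrative, not rippled's `lexicalCast` interface.

```cpp
#include <charconv>
#include <cstdint>
#include <optional>
#include <string_view>

// Illustrative only: parse an unsigned 32-bit integer, rejecting partial
// parses and out-of-range values.
std::optional<std::uint32_t>
parseUint32(std::string_view s)
{
    std::uint32_t out{};
    auto const [ptr, ec] =
        std::from_chars(s.data(), s.data() + s.size(), out);
    if (ec != std::errc{} || ptr != s.data() + s.size())
        return std::nullopt;
    return out;
}
```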
Elliot Lee
beba87129e Set version to 1.12.0-b1 2023-06-26 14:24:00 -07:00
Shawn Xie
b7e902dccc XLS-39 Clawback: (#4553)
Introduces:
* AccountRoot flag: lsfAllowClawback
* New Clawback transaction
* More info on clawback spec: https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-39d-clawback
2023-06-26 14:07:20 -07:00
Howard Hinnant
9eb30d4316 refactor: remove TypedField's move constructor (#4567)
Apply a minor cleanup in `TypedField`:
* Remove a non-working and unused move constructor.
* Constrain the remaining constructor so it is not generic enough to be
  used as a copy or move constructor.
2023-06-26 12:32:10 -07:00
John Freeman
8fdad0d7fd ci: use Artifactory remote in nix workflow (#4556)
There is now an Artifactory (thanks @shichengripple001 and team!) to
hold dependency binaries for the builds.

* Rewrite the `nix` workflow to use it and cut the time down to a mere
  21 minutes.
  * This workflow should continue to work (just more slowly) for forks
    that do not have access to the Artifactory.
2023-06-23 14:20:20 -07:00
drlongle
0b812cdece Add RPC/WS ports to server_info (#4427)
Enhance the /crawl endpoint by publishing WebSocket/RPC ports in the
server_info response. The function processing requests to the /crawl
endpoint actually calls server_info internally, so this change enables a
server to advertise its WebSocket/RPC port(s) to peers via the /crawl
endpoint. `grpc` and `peer` ports are included as well.

The new `ports` array contains objects, each containing a `port` for the
listening port (number string), and an array `protocol` listing the
supported protocol(s).

This allows crawlers to build a richer topology without needing to
port-scan nodes. For non-admin users (including peers), the info about
*admin* ports is excluded.

Also increase test coverage for RPC ServerInfo.

Fix #2837.
2023-06-23 10:19:26 -07:00
Scott Schurr
724a301599 fixReducedOffersV1: prevent offers from blocking order books: (#4512)
Curtail the occurrence of order books that are blocked by reduced offers
with the implementation of the fixReducedOffersV1 amendment.

This commit identifies three ways in which offers can be reduced:

1. A new offer can be partially crossed by existing offers, so the new
   offer is reduced when placed in the ledger.

2. An in-ledger offer can be partially crossed by a new offer in a
   transaction. So the in-ledger offer is reduced by the new offer.

3. An in-ledger offer may be under-funded. In this case the in-ledger
   offer is scaled down to match the available funds.

Reduced offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer (from the
perspective of the taker). It turns out that, for small values, the
quality of the reduced offer can be significantly affected by the
rounding mode used during scaling computations.

This commit adjusts some rounding modes so that the quality of a reduced
offer is always at least as good (from the taker's perspective) as the
original offer.

The amendment is titled fixReducedOffersV1 because additional ways of
producing reduced offers may come to light. Therefore, there may be a
future need for a V2 amendment.
2023-06-22 22:20:25 -07:00
Ed Hennis
71d7d67fa3 Enable the Beta RPC API (v2) for all unit tests: (#4573)
* Enable api_version 2, which is currently in beta. It is expected to be
  marked stable by the next stable release.
* This does not change any defaults.
* The only existing tests changed were one that set the same flag, which
  was now redundant, and a couple that tested versioning explicitly.
2023-06-21 11:51:37 -07:00
Elliot Lee
264280edd7 Set version to 1.11.0
* Add release notes
2023-06-20 11:40:11 -07:00
Elliot Lee
beb0904a32 Set version to 1.11.0-rc3 2023-06-09 17:34:40 -07:00
Chenna Keshava B S
77c0a62a74 fix: remove redundant moves (#4565)
- Resolve gcc compiler warning:
      AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
  - The std::move() operation on trivially copyable types may generate a
    compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.
2023-06-09 17:33:28 -07:00
Denis Angell
5d011c7e6b fix node size estimation (#4536)
Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation of the total amount
of RAM available, which was previously returned in bytes but is now
returned in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.
2023-06-09 09:37:18 -07:00
Scott Schurr
5644c8704f Trivial: add comments for NFToken-related invariants (#4558) 2023-06-08 17:31:19 -07:00
Scott Determan
c9a586c243 Add missing includes for gcc 13.1: (#4555)
gcc 13.1 failed to compile due to missing headers. This patch adds the
needed headers.
2023-06-05 15:50:03 -07:00
Scott Determan
f709311762 Fix unaligned load and stores: (#4528) (#4531)
Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.

For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).
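
For reference, the well-defined pattern looks roughly like this (a generic sketch, not the exact code that was changed):

```
#include <cstdint>
#include <cstring>

// Reading a 64-bit value from a possibly misaligned buffer without UB.
// Modern compilers typically lower this memcpy to a single load.
std::uint64_t
readUnaligned64(unsigned char const* p)
{
    std::uint64_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}
```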
2023-05-31 13:28:33 -07:00
oeggert
adde0c2d11 docs(BUILD): restructure content for better readability (#4514)
Follow-up to discussion #4433
2023-05-31 11:55:47 -07:00
Elliot Lee
adf672ff83 docs(README): add link to Clio (#4535) 2023-05-30 09:48:12 -07:00
Elliot Lee
029580886e Set version to 1.11.0-rc2 2023-05-23 14:29:51 -07:00
Ed Hennis
32f8ae1af1 Move faulty assert (#4533)
This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.

The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.

Note that asserts are normally checked only in debug builds, so release
packages should not be affected.

Introduced in: #4319 (66627b26cf)
2023-05-23 14:25:18 -07:00
Scott Determan
ce997a6de8 Ensure that switchover vars are initialized before use: (#4527)
Global variables in different TUs are initialized in an undefined order.
At least one global variable was accessing a global switchover variable.
This caused the switchover variable to be accessed in an uninitialized
state.

Since the switchover is always explicitly set before transaction
processing, this bug cannot affect transaction processing, but it could
affect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.
2023-05-22 19:36:46 -07:00
Shawn Xie
3620ac287e Add nftoken_id, nftoken_ids, offer_id fields for NFTokens (#4447)
Three new fields are added to the `Tx` responses for NFTs:

1. `nftoken_id`: This field is included in the `Tx` responses for
   `NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
   `NFTokenID` for the `NFToken` that was modified on the ledger by the
   transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
   `NFTokenCancelOffer`. This field provides a list of all the
   `NFTokenID`s for the `NFToken`s that were modified on the ledger by
   the transaction.
3. `offer_id`: This field is included in the `Tx` response for
   `NFTokenCreateOffer` transactions and shows the OfferID of the
   `NFTokenOffer` created.

The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.
2023-05-18 16:38:18 -07:00
John Freeman
629ed5c691 Switch to self-hosted runners for macOS (#4511) 2023-05-17 14:51:42 -07:00
drlongle
78076a6903 fix!: Prevent API from accepting seed or public key for account (#4404)
The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.

Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.

Resolves: #3329, #3330, #4337

BREAKING CHANGE: Remove non-strict account parsing (#3330)
2023-05-16 17:22:10 -07:00
David Fuelling
67238b9fa6 Update environment.md build doc to install lzma: (#4498)
On macOS, if you have not installed something that depends on `xz`, then your
system may lack `lzma`, resulting in a build error similar to:

```
Downloading libarchive-3.6.0.tar.xz completed [6250.61k]
libarchive/3.6.0: 
ERROR: libarchive/3.6.0: Error in source() method, line 120
        get(self, **self.conan_data["sources"][self.version], strip_root=True)
        ReadError: file could not be opened successfully:
- method gz: ReadError('not a gzip file')
- method bz2: ReadError('not a bzip2 file')
- method xz: CompressionError('lzma module is not available')
- method tar: ReadError('invalid header')
```

The solution is to ensure that `lzma` support is available by installing `xz` (for example, with Homebrew: `brew install xz`).
2023-04-27 10:18:59 -07:00
John Freeman
c7ef4c9783 Add patched recipe for SOCI: (#4510)
SOCI is the C++ database access library. The SOCI recipe was updated in
Conan Center Index (CCI), and it breaks for our choice of options. This
breakage occurs when you build with a fresh Conan cache (e.g. when you
submit a PR, or delete `~/.conan/data`).

* Add a custom Conan recipe for SOCI v4.0.3
* Update dependency building to handle exporting and installing Snappy
  and SOCI
  * Fix workflows to use custom SOCI recipe
* Update BUILD.md to include instruction for exporting the SOCI Conan
  recipe:
  * `conan export external/soci soci/4.0.3@`

This solution has been verified on Ubuntu 20.04 and macOS.

Context:

* There is a compiler error because the `sqlite3.h` header is not
  available when building soci.
* When package B depends on package A, it finds the pieces it needs by
  importing the Package Configuration File (PCF) that Conan generates
  for package A.
  * Read the CMake written by package B to check that it is importing
    the PCF correctly and linking its exports correctly.
  * Since this can be difficult, it is often more efficient to check
    https://github.com/conan-io/conan-center-index/issues for package B
    to see if anyone else has seen a similar problem.
  * One of the issues points to a problem area in soci's CMake. To
    confirm the diagnosis, review soci's CMake (after any patches are
    applied) in the Conan build directory `build/$buildId/src/`.
  * Review the Conan-generated PCF in
    `build/$buildId/build/$buildType/generators/`.
  * In this case, the problem was likely (re)introduced by
    https://github.com/conan-io/conan-center-index/pull/17026
* If there is a problem in the source or in the Conan recipe, the
  fastest fix is to copy the recipe and either:
  * Add a source patch to fix any problems in the source.
  * Change the recipe to fix any problems in the recipe.
* In this case, this can be done by finding soci's Conan recipe at
  https://github.com/conan-io/conan-center-index/tree/master/recipes/soci
  and then copying the `all` directory as `external/$packageName` in our
  project. Then, make any changes.
  * Test packages can be removed from the recipe folder as they are not
    needed.
  * If adding a patch in the `patches` directory, add a description for
    it to `conandata.yml`.
  * Since `conanfile.py` has no `version` property on the recipe class,
    builders need to pass a version on the command line (like they do
    for our `snappy` recipe).
* Add an example command to `BUILD.md`.

Future work: It may make sense to refer to recipes by revision, by
checking in a lockfile.
2023-04-25 22:24:41 -07:00
solmsted
b21a05d465 Fix typo (#4508) 2023-04-25 14:11:08 -07:00
John Freeman
436de0e03a Expand Linux test matrix: (#4454)
This change makes progress on the plan in #4371. It does not replicate
the full [matrix] implemented in #3851, but it does replicate the 1.ii
section of the Linux matrix. It leverages "heavy" self-hosted runners,
and demonstrates a repeatable pattern for future matrices.

[matrix]: d794a0f3f1/.github/README.md (continuous-integration)
2023-04-24 16:17:51 -07:00
John Freeman
8d482d3557 Fix errors for Clang 16: (#4501)
Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).

- `std::{u,bi}nary_function` were removed in C++17. They were empty
  classes with a few associated types. We already have conditional code
  to define the types. Just make it unconditional (see the sketch below).
- libc++ checks a cast in an unevaluated context to see if a type
  inherits from a binary function class in the standard library, e.g.
  `std::equal_to`, and this causes an error when the type privately
  inherits from such a class. Change these instances to public
  inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
  inherit directly from an empty class than from
  `beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
  compilation of assertions. Add a `[[maybe_unused]]` annotation to
  suppress it.
- As a drive-by clean-up, remove commented code.

See related work in #4486.
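
A small sketch of the first bullet (an illustrative type, not from the codebase): instead of inheriting from the removed `std::unary_function`, the associated types are declared directly.

```
// Pre-C++17 definition (no longer compiles with newer standard libraries):
//     struct IsPositive : std::unary_function<int, bool>
//     {
//         bool operator()(int v) const { return v > 0; }
//     };
//
// Portable replacement: declare the associated types directly.
struct IsPositive
{
    using argument_type = int;
    using result_type = bool;

    bool
    operator()(int v) const
    {
        return v > 0;
    }
};
```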
2023-04-21 12:20:35 -07:00
Mark Travis
c5003969de Use quorum specified via command line: (#4489)
If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.

Fix #4488.

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2023-04-20 11:36:18 -07:00
John Freeman
1f417764c3 Add install instructions for package managers: (#4472)
Add instructions for installing rippled using the package managers APT
and YUM. Some steps were adapted from xrpl.org.

---------

Co-authored-by: Michael Legleux <mlegleux@ripple.com>
2023-04-13 10:41:16 -07:00
John Freeman
e75cd49313 Fix the fix for std::result_of (#4496)
Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.

This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
  forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
  * Define `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
    1.77 with a newer compiler.
2023-04-12 16:32:37 -07:00
RichardAH
4f95b9d7a6 Prevent replay attacks with NetworkID field: (#4370)
Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.

The new field must be used when the server is using a network id > 1024.

To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.

Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.

The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.

This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:

- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`

To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html

Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html

In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.

A `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.
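
The rules above can be summarized in pseudologic like the following (a simplified sketch; the names, return type, and structure are illustrative, not the actual preflight code):

```
#include <cstdint>
#include <optional>

enum class Result { Ok, NonCanonical, RequiresNetworkID, WrongNetwork };

Result
checkNetworkID(
    std::uint32_t nodeNetworkID,
    std::optional<std::uint32_t> txNetworkID)
{
    if (nodeNetworkID <= 1024)
    {
        // Legacy chains (Mainnet, Testnet, Devnet, ...): the field must be absent.
        if (txNetworkID)
            return Result::NonCanonical;  // telNETWORK_ID_MAKES_TX_NON_CANONICAL
        return Result::Ok;
    }
    if (!txNetworkID)
        return Result::RequiresNetworkID;  // telREQUIRES_NETWORK_ID
    if (*txNetworkID != nodeNetworkID)
        return Result::WrongNetwork;       // telWRONG_NETWORK
    return Result::Ok;
}
```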
2023-04-11 17:11:17 -07:00
Nik Bougalis
066f91ca07 Avoid using std::shared_ptr when not necessary: (#4218)
The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.

The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocations were avoided by using stack-based
alternatives.

Commit 3 of 3 in #4218.
2023-04-11 15:50:25 -07:00
Nik Bougalis
c3acbce82d Optimize SHAMapItem and leverage new slab allocator: (#4218)
The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.

Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped in a `std::shared_ptr`, this meant
that a single instantiation might result in up to three separate
memory allocations.

This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.

In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.

Commit 2 of 3 in #4218.
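
A minimal illustration of the `boost::intrusive_ptr` pattern (a generic example; `Item` is hypothetical and unrelated to the real `SHAMapItem` interface):

```
#include <boost/smart_ptr/intrusive_ptr.hpp>
#include <boost/smart_ptr/intrusive_ref_counter.hpp>
#include <cstdint>
#include <vector>

// The reference count lives inside the object, so handing out a new handle
// needs no separately allocated control block.
struct Item : boost::intrusive_ref_counter<Item>
{
    std::vector<std::uint8_t> data;
};

int main()
{
    boost::intrusive_ptr<Item> p(new Item());  // one allocation for the object
    boost::intrusive_ptr<Item> q = p;          // bumps the embedded count
}
```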
2023-04-10 17:13:03 -07:00
Nik Bougalis
b7f588b789 Introduce support for a slabbed allocator: (#4218)
When instantiating a large number of fixed-size objects on the heap,
the overhead that dynamic memory allocation APIs impose will quickly
become significant.

In some cases, allocating a large block of memory at once and using
a slab allocator to carve it into fixed-sized units that service
individual requests for memory will help to reduce memory
fragmentation significantly and, potentially, improve overall
performance.

This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator`, but it is
not meant to be a general-purpose allocator.

It should not be used unless profiling and analysis of specific memory
allocation patterns indicate that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.

A helper class, `SlabAllocatorSet<>`, simplifies handling of variably
sized objects that benefit from slab allocations.

This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).

Commit 1 of 3 in #4218.
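
The idea can be illustrated with a toy free-list slab (only a sketch of the concept; the real `SlabAllocator<>`/`SlabAllocatorSet<>` API and thread-safety guarantees differ):

```
#include <cstddef>
#include <vector>

// Carve one large block into fixed-size chunks up front and hand them out
// from a free list; fall back to the regular allocator when exhausted.
class ToySlab
{
    std::vector<std::byte> block_;
    std::vector<void*> free_;

public:
    ToySlab(std::size_t chunkSize, std::size_t count) : block_(chunkSize * count)
    {
        for (std::size_t i = 0; i != count; ++i)
            free_.push_back(block_.data() + i * chunkSize);
    }

    void*
    allocate()
    {
        if (free_.empty())
            return nullptr;  // caller falls back to operator new
        void* p = free_.back();
        free_.pop_back();
        return p;
    }

    void
    deallocate(void* p)
    {
        free_.push_back(p);
    }
};
```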
2023-04-10 14:22:59 -07:00
ledhed2222
9346842eed Add jss fields used by Clio nft_info: (#4320)
Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.

Clio can be found here: https://github.com/XRPLF/clio

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
2023-04-06 11:33:20 -07:00
RichardAH
f191c911d4 Add NFTokenPages to account_objects RPC: (#4352)
- Include NFTokenPages in account_objects to make it easier to
  understand an account's Owner Reserve and simplify app development.
- Update related tests and documentation.
- Fix #4347.

For info about the Owner Reserve, see https://xrpl.org/reserves.html

---------

Co-authored-by: Scott Schurr <scott@ripple.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
2023-04-05 13:58:55 -07:00
drlongle
e6f49040f5 Fix unit test app.LedgerData (#4484) 2023-03-31 11:18:42 -07:00
drlongle
2f3f6dcb03 Fix ledger_data to return an empty list: (#4398)
Change `ledger_data` to return an empty list when all entries are
filtered out.

When the `type` field is specified for the `ledger_data` method, it is
possible that no objects of the specified type are found. This can even
occur if those objects exist, but not in the section that the server
checked while serving your request. Previously, the `state` field of the
response has the value `null`, instead of an empty array `[]`. By
changing this to an empty array, the response is the same data type so
that clients can handle it consistently.

For example, in Python, `for entry in state` should now work correctly.
It would raise an exception if `state` is `null` (or `None`). 

This could break client code that explicitly checks for null. However,
this fix aligns the response with the documentation, where the `state`
field is an array.

Fix #4392.
2023-03-30 11:59:10 -07:00
drlongle
5ebcaf0a6c Add account flags to account_info response: (#4459)
Previously, the object `account_data` in the `account_info` response
contained a single field `Flags` that contains flags of an account. API
consumers must perform bitwise operations on this field to retrieve the
account flags.

This change adds a new object, `account_flags`, at the top level of the
`account_info` response `result`. The object contains relevant flags of
the account. This makes it easier to write simple code to check a flag's
value.

The flags included may depend on the amendments that are enabled.

Fix #2457.
2023-03-30 11:46:18 -07:00
drlongle
8bfdbcbab5 Add logging for exceptions: (#4400)
Log exception messages at several locations.

Previously, these were locations where an exception was caught, but the
exception message was not logged. Logging the exception messages can be
useful for analysis or debugging. The additional logging could have a
small negative performance impact.

Fix #3213.
2023-03-30 10:13:30 -07:00
Brandon Wilson
135b63dbe0 Update example [validator_list_sites] (#4448) 2023-03-29 23:01:41 -07:00
Elliot Lee
46167d1c46 Add link to BUILD.md: (#4450)
In the release notes (current and historical), there is a link to the
`Builds` directory. By creating `Builds/README.md` with a link to
`BUILD.md`, it is easier to find the build instructions.
2023-03-28 15:55:53 -07:00
Alloy Networks
79e621d96c Update README.md (#4463) 2023-03-28 12:04:06 -07:00
Ed Hennis
66627b26cf Refactor fee initialization and configuration: (#4319)
* Create the FeeSettings object in genesis ledger.
* Initialize with default values from the config. Removes the need to
  pass a Config down into the Ledger initialization functions, including
  setup().
* Drop the undocumented fee config settings in favor of the [voting]
  section.
  * Fix #3734.
  * If you previously used fee_account_reserve and/or fee_owner_reserve,
    you should change to using the [voting] section instead. Example:

```
[voting]
account_reserve=10000000
owner_reserve=2000000
```

* Because old Mainnet ledgers (prior to 562177 - yes, I looked it up)
  don't have FeeSettings, some of the other ctors will default them to
  the config values before setup() tries to load the object.
* Update default Config fee values to match Mainnet.
* Fix unit tests:
  * Updated fees: Some tests are converted to use computed values of fee
    object, but the default Env config was also updated to fix the rest.
  * Unit tests that check the structure of the ledger have updated
    hashes and counts.
2023-03-28 09:03:25 -07:00
Ed Hennis
7aad6e5127 feat: mark 4 amendments as obsolete: (#4291)
Add the ability to mark amendments as obsolete. There are some known
amendments that should not be voted for because they are broken (or
similar reasons).

This commit marks four amendments as obsolete:

1. `CryptoConditionsSuite`
2. `NonFungibleTokensV1`
3. `fixNFTokenDirV1`
4. `fixNFTokenNegOffer`

When an amendment is `Obsolete`, voting for the amendment is prevented.
A determined operator can still vote for the amendment by changing the
source, and doing so does not break any protocol rules.

The "feature" command now does not modify the vote for obsolete
amendments.

Before this change, there were two options for an amendment's
`DefaultVote` behavior: yes and no.

After this change, there are three options for an amendment's
`VoteBehavior`: DefaultYes, DefaultNo, and Obsolete.

To be clear, if an obsolete amendment were to (somehow) be activated by
consensus, the server still has the code to process transactions
according to that amendment, and would not be amendment blocked. It
would function the same as if it had been voting "no" on the amendment.

Resolves #4014.

Incorporates review feedback from @scottschurr.
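
In sketch form, the vote setting moves from a boolean default to a three-valued enum (names taken from the text above; the exact declaration and underlying values are assumptions):

```
// Before: two effective defaults (yes / no).
// After: a third state that also prevents voting for the amendment.
enum class VoteBehavior { DefaultNo, DefaultYes, Obsolete };
```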
2023-03-23 22:28:53 -07:00
Scott Schurr
dffcdea12b fix: Expected to return a value: (#4401)
Fix a case where `ripple::Expected` returned a json array, not a value.

The problem was that `Expected` invoked the wrong constructor for the
expected type, which resulted in a constructor that took multiple
arguments being interpreted as an array.

A proposed fix was provided by @godexsoft, which involved a minor
adjustment to three constructors that replaces the use of curly braces
with parentheses. This makes `Expected` usable for
[Clio](https://github.com/XRPLF/clio).

A unit test is also included to ensure that the issue doesn't occur
again in the future.
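
The brace-versus-parenthesis pitfall is the classic one illustrated below (a generic example with `std::vector`, not the `Expected` code itself):

```
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> braces{3};  // one element with value 3
    std::vector<int> parens(3);  // three value-initialized elements
    std::cout << braces.size() << ' ' << parens.size() << '\n';  // prints "1 3"
}
```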
2023-03-23 17:32:17 -07:00
John Freeman
d7725837f5 build: add interface library libxrpl: (#4449)
Make it easy for projects to depend on libxrpl by adding an `ALIAS`
target named `xrpl::libxrpl` for projects to link.

The name was chosen because:

* The current library target is named `xrpl_core`. There is no other
  "non-core" library target against which we need to distinguish the
  "core" library. We only export one library target, and it should just
  be named after the project to keep things simple and predictable.
* Underscores in target or library names are generally discouraged.
* Every target exported in CMake should be prefixed with the project
  name.

By adding an `ALIAS` target, existing consumers who use the `xrpl_core`
target will not be affected.

* In the future, there can be a migration plan to make `xrpl_core` the
  `ALIAS` target (and `libxrpl` the "real" target, which will affect the
  filename of the compiled binary), and eventually remove it entirely.

Also:

* Fix the Conan recipe so that consumers using Conan import a target
  named `xrpl::libxrpl`. This way, every consumer can use the same
  instructions.
* Document the two easiest methods to depend on libxrpl. Both have been
  tested.
* See #4443.
2023-03-22 17:21:03 -07:00
John Freeman
7745c72b2c docs: update build instructions: (#4381)
* Remove obsolete build instructions.
* By using Conan, builders can choose which dependencies specifically to
  build and link as shared objects.
* Refactor the build instructions based on the plan in #4433.
2023-03-22 12:02:42 -07:00
Elliot Lee
acb373280b Merge branch 'master' (1.10.1) into develop 2023-03-22 11:13:36 -07:00
Elliot Lee
4f506599f6 Set version to 1.10.1
* Add release notes
2023-03-22 09:27:56 -07:00
Elliot Lee
383f1b6ab3 Set version to 1.10.1-rc1 2023-03-21 11:14:20 -07:00
Michael Legleux
da18c86cbf Build packages with Ubuntu 18.04
Restores Ubuntu 18.04 packages
Update docker images to use Conan
2023-03-21 11:13:03 -07:00
Elliot Lee
9fcb28acad docs: update protocol README (#4457) 2023-03-21 08:01:47 -07:00
Shawn Xie
305c9a8d61 fixNFTokenRemint: prevent NFT re-mint: (#4406)
Without the protocol amendment introduced by this commit, an NFT ID can
be reminted in this manner:

1. Alice creates an account and mints an NFT.
2. Alice burns the NFT with an `NFTokenBurn` transaction.
3. Alice deletes her account with an `AccountDelete` transaction.
4. Alice re-creates her account.
5. Alice mints an NFT with an `NFTokenMint` transaction with params:
   `NFTokenTaxon` = 0, `Flags` = 9.

This will mint an NFT with the same `NFTokenID` as the one minted in step
1. The params that construct the NFT ID will cause a collision in
`NFTokenID` if their values are equal before and after the remint.

With the `fixNFTokenRemint` amendment, there is a new sequence number
construct which avoids this scenario:

- A new `AccountRoot` field, `FirstNFTSequence`, stays constant over
  time.
  - This field is set to the current account sequence when the account
    issues their first NFT.
  - Otherwise, it is not set.
- The sequence of a newly-minted NFT is computed by: `FirstNFTSequence +
  MintedNFTokens`.
  - `MintedNFTokens` is then incremented by 1 for each mint.

Furthermore, there is a new account deletion restriction:

- An account can only be deleted if `FirstNFTSequence + MintedNFTokens +
  256` is less than the current ledger sequence.
  - 256 was chosen because it already exists in the current account
    deletion constraint.

Without this restriction, an NFT may still be remintable. Example
scenario:

1. Alice's account sequence is at 1.
2. Bob is Alice's authorized minter.
3. Bob mints 500 NFTs for Alice. The NFTs will have sequences 1-501, as
   NFT sequence is computed by `FirstNFTokenSequence + MintedNFTokens`.
4. Alice deletes her account at ledger 257 (as required by the existing
   `AccountDelete` amendment).
5. Alice re-creates her account at ledger 258.
6. Alice mints an NFT. `FirstNFTokenSequence` initializes to her account
   sequence (258), and `MintedNFTokens` initializes as 0. This
   newly-minted NFT would have a sequence number of 258, which is a
   duplicate of what she issued through authorized minting before she
   deleted her account.
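
To make the arithmetic concrete, the two rules above reduce to the following (plain functions mirroring the text; not the ledger implementation):

```
#include <cstdint>

std::uint32_t
nextNFTSequence(std::uint32_t firstNFTSequence, std::uint32_t mintedNFTokens)
{
    return firstNFTSequence + mintedNFTokens;
}

bool
accountDeletable(
    std::uint32_t firstNFTSequence,
    std::uint32_t mintedNFTokens,
    std::uint32_t currentLedgerSeq)
{
    // The account can only be deleted once this margin has passed.
    return firstNFTSequence + mintedNFTokens + 256 < currentLedgerSeq;
}
```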

---------

Signed-off-by: Shawn Xie <shawnxie920@gmail.com>
2023-03-20 14:47:46 -07:00
Ed Hennis
9b2d563dec fix: support RPC markers for any ledger object: (#4361)
There were situations where `marker`s returned by `account_lines` did
not work on subsequent requests, returning "Invalid Parameters".

This was caused by the optimization implemented in "Enforce account RPC
limits by account objects traversed":

e28989638d

Previously, the ledger traversal would find up to `limit` account lines,
and if there were more, the marker would be derived from the key of the
next account line. After the change, ledger traversal would _consider_
up to `limit` account objects of any kind found in the account's
directory structure. If there were more, the marker would be derived
from the key of the next object, regardless of type.

With this optimization, it is expected that `account_lines` may return
fewer than `limit` account lines - even 0 - along with a marker
indicating that there may be more available.

The problem is that this optimization did not update the
`RPC::isOwnedByAccount` helper function to handle those other object
types. Additionally, XLS-20 added `ltNFTOKEN_OFFER` ledger objects to
the set of objects that can appear in an account's directory structure,
but did not update `RPC::isOwnedByAccount` to handle those
objects. The `marker` provided in the example for #4354 includes the key
for an `ltNFTOKEN_OFFER`. When that `marker` is used on subsequent
calls, it is not recognized as valid, and so the request fails.

* Add unit test that walks all the object types and verifies that all of
  their indexes can work as a marker.
* Fix #4340
* Fix #4354
2023-03-20 10:22:15 -07:00
Nik Bougalis
150d4a47e4 refactor: optimize NodeStore object conversion: (#4353)
When writing objects to the NodeStore, we need to convert them from
the in-memory format to the binary format used by the node store.

The conversion is handled by the `EncodedBlob` class, which is only
instantiated on the stack. Coupled with the fact that most objects
are under 1024 bytes in size, this presents an opportunity to elide
a memory allocation in a critical path.

This commit also simplifies the interface of `EncodedBlob` and
eliminates a subtle corner case that could result in dangling
pointers.

These changes are not expected to cause a significant reduction in
memory usage. The change avoids the use of a `std::shared_ptr` when
unnecessary and tries to use stack-based memory allocation instead
of the heap whenever possible.

This is a net gain both in terms of memory usage (lower
fragmentation) and performance (less work to do at runtime).
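
The stack-first idea can be sketched like this (a generic small-buffer illustration; `SmallBlob` is hypothetical and not the `EncodedBlob` interface):

```
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <memory>

// Small payloads live in an inline buffer; larger ones fall back to a single
// heap allocation.
class SmallBlob
{
    std::array<std::uint8_t, 1024> small_;
    std::unique_ptr<std::uint8_t[]> big_;
    std::size_t size_ = 0;

public:
    void
    assign(std::uint8_t const* data, std::size_t size)
    {
        size_ = size;
        if (size <= small_.size())
        {
            std::memcpy(small_.data(), data, size);
            big_.reset();
        }
        else
        {
            big_ = std::make_unique<std::uint8_t[]>(size);
            std::memcpy(big_.get(), data, size);
        }
    }

    std::uint8_t const*
    data() const
    {
        return big_ ? big_.get() : small_.data();
    }

    std::size_t
    size() const
    {
        return size_;
    }
};
```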
2023-03-16 15:00:07 -07:00
Ed Hennis
1c9df69b33 fix(ValidatorSite): handle rare null pointer dereference in timeout: (#4420)
In rare circumstances, both `onRequestTimeout` and the response handler
(`onSiteFetch` or `onTextFetch`) can get queued and processed. In all
observed cases, the response handler processes a network error.
`onRequestTimeout` usually runs first, but on rare occasions, the
response handler runs first, which leaves `activeResource` empty.
2023-03-16 10:32:22 -07:00
RichardAH
10555faa92 fix(gateway_balances): handle overflow exception: (#4355)
* Prevent internal error by catching overflow exception in `gateway_balances`.
* Treat `gateway_balances` obligations overflow as max (largest valid) `STAmount`.
  * Note that very large sums of STAmount are approximations regardless.

---------

Co-authored-by: Scott Schurr <scott@ripple.com>
2023-03-16 10:25:40 -07:00
Elliot Lee
0f1ffff068 Set version to 1.10.1-b1 2023-03-14 21:21:50 -07:00
Chenna Keshava B S
9309b57364 Rectify the import paths of boost/iterator: (#4293)
- MSVC 19.x reported a warning about import paths in boost for
  function_output_iterator class (boost::function_output_iterator).
- Eliminate that warning by updating the import paths, as suggested by
  the compiler warnings.
2023-03-14 21:10:56 -07:00
RichardAH
cb08f2b6ec Allow port numbers to be specified with a colon: (#4328)
Port numbers can now be specified using either a colon or a space.

Examples:

1.2.3.4:51235

1.2.3.4 51235

- In the configuration file, an annoying "gotcha" for node operators is
  accidentally specifying IP:PORT combinations using a colon. The code
  previously expected a space, not a colon, and did not provide
  good feedback when this operator error was made.
- This change simply allows this mistake (using a colon) to be fixed
  automatically, preserving the intention of the operator.
- Add unit tests, which test the functionality when specifying IP:PORT
  in the configuration file.
- The RPCCall test regime is not specific enough to test this
  functionality, so it has been tested by hand.
- Ensure IPv6 addresses are not confused for ip:port
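
A rough sketch of the colon rule above (illustrative parsing only; the real config parser differs): split on a single colon, but leave likely IPv6 literals alone.

```
#include <optional>
#include <string>

struct Endpoint
{
    std::string host;
    std::optional<std::string> port;
};

// Split "host:port" on a single colon; strings with more than one colon are
// treated as plain addresses (likely IPv6 literals) and left untouched.
Endpoint
parseEndpoint(std::string const& s)
{
    auto const first = s.find(':');
    if (first == std::string::npos || s.find(':', first + 1) != std::string::npos)
        return {s, std::nullopt};
    return {s.substr(0, first), s.substr(first + 1)};
}
```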

---------

Co-authored-by: Elliot Lee <github.public@intelliot.com>
2023-03-14 21:06:30 -07:00
drlongle
84cde3ce0b Use <=> operator for base_uint, Issue, and Book: (#4411)
- Implement the `operator==` and the `operator<=>` (aka the spaceship
  operator) in `base_uint`, `Issue`, and `Book`. 
- C++20-compliant compilers automatically provide the remaining
  comparison operators (e.g. `operator<`, `operator<=`, ...).
- Remove the function compare() because it is no longer needed.
- Maintain the same semantics as the existing code.
- Add some unit tests to gain further confidence.
- Fix #2525.
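
A minimal sketch of the pattern (a toy aggregate; the actual `base_uint`, `Issue`, and `Book` implementations differ):

```
#include <compare>
#include <cstdint>

struct Pair
{
    std::uint64_t hi = 0;
    std::uint64_t lo = 0;

    friend bool
    operator==(Pair const&, Pair const&) = default;

    friend std::strong_ordering
    operator<=>(Pair const&, Pair const&) = default;
};

// <, <=, >, and >= are synthesized from operator<=> by the compiler.
static_assert(Pair{1, 0} > Pair{0, 5});
```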
2023-03-14 20:54:54 -07:00
Mark Travis
f7b3ddd87b Reporting Mode: Do not attempt to acquire missing data from peer network (#4458)
In Reporting Mode, a server would core dump when it was not able to read
from Cassandra. This patch prevents the core dump when Cassandra is down
for Reporting Mode servers. It does not fix the root cause, but it
cuts down on some of the resulting noise.
2023-03-14 20:49:40 -07:00
Elliot Lee
1e7710eee2 docs: security bug bounty acknowledgements (#4460) 2023-03-14 13:08:56 -07:00
Elliot Lee
07f047b1e2 Set version to 1.10.0
Merge #4451
2023-03-14 09:30:22 -07:00
1753 changed files with 156972 additions and 71287 deletions

View File

@@ -45,9 +45,11 @@ DisableFormat: false
ExperimentalAutoDetectBinPacking: false
ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
IncludeCategories:
- Regex: '^<(BeastConfig)'
- Regex: '^<(test)/'
Priority: 0
- Regex: '^<(ripple)/'
- Regex: '^<(xrpld)/'
Priority: 1
- Regex: '^<(xrpl)/'
Priority: 2
- Regex: '^<(boost)/'
Priority: 3
@@ -56,6 +58,7 @@ IncludeCategories:
IncludeIsMainRegex: '$'
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
IndentRequiresClause: true
IndentWidth: 4
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: false
@@ -71,6 +74,7 @@ PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
ReflowComments: true
RequiresClausePosition: OwnLine
SortIncludes: true
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true

View File

@@ -1,5 +1,37 @@
codecov:
ci:
- !appveyor
- travis
require_ci_to_pass: true
comment:
behavior: default
layout: reach,diff,flags,tree,reach
show_carryforward_flags: false
coverage:
range: "60..80"
precision: 1
round: nearest
status:
project:
default:
target: 60%
threshold: 2%
patch:
default:
target: auto
threshold: 2%
changes: false
github_checks:
annotations: true
parsers:
cobertura:
partials_as_hits: true
handle_missing_conditions : true
slack_app: false
ignore:
- "src/test/"
- "include/xrpl/beast/test/"
- "include/xrpl/beast/unit_test/"

View File

@@ -2,3 +2,12 @@
# To use it by default in git blame:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
50760c693510894ca368e90369b0cc2dabfd07f3
e2384885f5f630c8f0ffe4bf21a169b433a16858
241b9ddde9e11beb7480600fd5ed90e1ef109b21
760f16f56835663d9286bd29294d074de26a7ba6
0eebe6a5f4246fced516d52b83ec4e7f47373edd
2189cc950c0cebb89e4e2fa3b2d8817205bf7cef
b9d007813378ad0ff45660dc07285b823c7e9855
fe9a5365b8a52d4acc42eb27369247e6f238a4f9
9a93577314e6a8d4b4a8368cc9d2b15a5d8303e8
552377c76f55b403a1c876df873a23d780fcc81c

34
.github/actions/build/action.yml vendored Normal file
View File

@@ -0,0 +1,34 @@
name: build
inputs:
generator:
default: null
configuration:
required: true
cmake-args:
default: null
cmake-target:
default: all
# An implicit input is the environment variable `build_dir`.
runs:
using: composite
steps:
- name: configure
shell: bash
run: |
cd ${build_dir}
cmake \
${{ inputs.generator && format('-G "{0}"', inputs.generator) || '' }} \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${{ inputs.configuration }} \
-Dtests=TRUE \
-Dxrpld=TRUE \
${{ inputs.cmake-args }} \
..
- name: build
shell: bash
run: |
cmake \
--build ${build_dir} \
--config ${{ inputs.configuration }} \
--parallel ${NUM_PROCESSORS:-$(nproc)} \
--target ${{ inputs.cmake-target }}

60
.github/actions/dependencies/action.yml vendored Normal file
View File

@@ -0,0 +1,60 @@
name: dependencies
inputs:
configuration:
required: true
# An implicit input is the environment variable `build_dir`.
runs:
using: composite
steps:
- name: unlock Conan
shell: bash
run: conan remove --locks
- name: export custom recipes
shell: bash
run: |
conan config set general.revisions_enabled=1
conan export external/snappy snappy/1.1.10@
conan export external/rocksdb rocksdb/6.29.5@
conan export external/soci soci/4.0.3@
- name: add Ripple Conan remote
shell: bash
run: |
conan remote list
conan remote remove ripple || true
# Do not quote the URL. An empty string will be accepted (with
# a non-fatal warning), but a missing argument will not.
conan remote add ripple ${{ env.CONAN_URL }} --insert 0
- name: try to authenticate to Ripple Conan remote
id: remote
shell: bash
run: |
# `conan user` implicitly uses the environment variables
# CONAN_LOGIN_USERNAME_<REMOTE> and CONAN_PASSWORD_<REMOTE>.
# https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
# https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
# https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
echo outcome=$(conan user --remote ripple --password >&2 \
&& echo success || echo failure) | tee ${GITHUB_OUTPUT}
- name: list missing binaries
id: binaries
shell: bash
# Print the list of dependencies that would need to be built locally.
# A non-empty list means we have "failed" to cache binaries remotely.
run: |
echo missing=$(conan info . --build missing --settings build_type=${{ inputs.configuration }} --json 2>/dev/null | grep '^\[') | tee ${GITHUB_OUTPUT}
- name: install dependencies
shell: bash
run: |
mkdir ${build_dir}
cd ${build_dir}
conan install \
--output-folder . \
--build missing \
--options tests=True \
--options xrpld=True \
--settings build_type=${{ inputs.configuration }} \
..
- name: upload dependencies to remote
if: (steps.binaries.outputs.missing != '[]') && (steps.remote.outputs.outcome == 'success')
shell: bash
run: conan upload --remote ripple '*' --all --parallel --confirm

View File

@@ -1,6 +1,12 @@
<!--
This PR template helps you to write a good pull request description.
Please feel free to include additional useful information even beyond what is requested below.
If your branch is on a personal fork and has a name that allows it to
run CI build/test jobs (e.g. "ci/foo"), remember to rename it BEFORE
opening the PR. This avoids unnecessary redundant test runs. Renaming
the branch after opening the PR will close the PR.
https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/renaming-a-branch
-->
## High Level Overview of Change
@@ -33,14 +39,38 @@ Please check [x] relevant options, delete irrelevant ones.
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (non-breaking change that only restructures code)
- [ ] Tests (You added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation Updates
- [ ] Performance (increase or change in throughput and/or latency)
- [ ] Tests (you added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation update
- [ ] Chore (no impact to binary, e.g. `.gitignore`, formatting, dropping support for older tooling)
- [ ] Release
### API Impact
<!--
Please check [x] relevant options, delete irrelevant ones.
* If there is any impact to the public API methods (HTTP / WebSocket), please update https://github.com/xrplf/rippled/blob/develop/API-CHANGELOG.md
* Update API-CHANGELOG.md and add the change directly in this PR by pushing to your PR branch.
* libxrpl: See https://github.com/XRPLF/rippled/blob/develop/docs/build/depend.md
* Peer Protocol: See https://xrpl.org/peer-protocol.html
-->
- [ ] Public API: New feature (new methods and/or new fields)
- [ ] Public API: Breaking change (in general, breaking changes should only impact the next api_version)
- [ ] `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)
- [ ] Peer protocol change (must be backward compatible or bump the peer protocol version)
<!--
## Before / After
If relevant, use this section for an English description of the change at a technical level.
If this change affects an API, examples should be included here.
For performance-impacting changes, please provide these details:
1. Is this a new feature, bug fix, or improvement to existing functionality?
2. What behavior/functionality does the change impact?
3. In what processing can the impact be measured? Be as specific as possible - e.g. RPC client call, payment transaction that involves LOB, AMM, caching, DB operations, etc.
4. Does this change affect concurrent processing - e.g. does it involve acquiring locks, multi-threaded processing, or async processing?
-->
<!--

View File

@@ -4,11 +4,11 @@ on: [push, pull_request]
jobs:
check:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
env:
CLANG_VERSION: 10
CLANG_VERSION: 18
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Install clang-format
run: |
codename=$( lsb_release --codename --short )
@@ -19,10 +19,8 @@ jobs:
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
sudo apt-get update
sudo apt-get install clang-format-${CLANG_VERSION}
- name: Format src/ripple
run: find src/ripple -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
- name: Format src/test
run: find src/test -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
- name: Format first-party sources
run: find include src -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-${CLANG_VERSION} -i {} +
- name: Check for differences
id: assert
run: |
@@ -30,7 +28,7 @@ jobs:
git diff --exit-code | tee "clang-format.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: clang-format.patch
@@ -52,7 +50,7 @@ jobs:
To fix it, you can do one of two things:
1. Download and apply the patch generated as an artifact of this
job to your repo, commit, and push.
2. Run 'git-clang-format --extensions c,cpp,h,cxx,ipp develop'
2. Run 'git-clang-format --extensions cpp,h,hpp,ipp develop'
in your repo, commit, and push.
run: |
echo "${PREAMBLE}"

View File

@@ -4,21 +4,26 @@ on:
push:
branches:
- develop
- doxygen
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
job:
runs-on: ubuntu-latest
container:
image: docker://rippleci/rippled-ci-builder:2944b78d22db
permissions:
contents: write
container: rippleci/rippled-build-ubuntu:aaf5e3e
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: check environment
run: |
echo ${PATH} | tr ':' '\n'
cmake --version
doxygen --version
env
env | sort
- name: build
run: |
mkdir build

View File

@@ -8,7 +8,7 @@ jobs:
env:
CLANG_VERSION: 10
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Check levelization
run: Builds/levelization/levelization.sh
- name: Check for differences
@@ -18,7 +18,7 @@ jobs:
git diff --exit-code | tee "levelization.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: levelization.patch

88
.github/workflows/libxrpl.yml vendored Normal file
View File

@@ -0,0 +1,88 @@
name: Check libXRPL compatibility with Clio
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
on:
pull_request:
paths:
- 'src/libxrpl/protocol/BuildInfo.cpp'
- '.github/workflows/libxrpl.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
publish:
name: Publish libXRPL
outputs:
outcome: ${{ steps.upload.outputs.outcome }}
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.channel.outputs.channel }}
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
steps:
- name: Wait for essential checks to succeed
uses: lewagon/wait-on-check-action@v1.3.4
with:
ref: ${{ github.event.pull_request.head.sha || github.sha }}
running-workflow-name: wait-for-check-regexp
check-regexp: '(dependencies|test).*linux.*' # Ignore windows and mac tests but make sure linux passes
repo-token: ${{ secrets.GITHUB_TOKEN }}
wait-interval: 10
- name: Checkout
uses: actions/checkout@v4
- name: Generate channel
id: channel
shell: bash
run: |
echo channel="clio/pr_${{ github.event.pull_request.number }}" | tee ${GITHUB_OUTPUT}
- name: Export new package
shell: bash
run: |
conan export . ${{ steps.channel.outputs.channel }}
- name: Add Ripple Conan remote
shell: bash
run: |
conan remote list
conan remote remove ripple || true
# Do not quote the URL. An empty string will be accepted (with a non-fatal warning), but a missing argument will not.
conan remote add ripple ${{ env.CONAN_URL }} --insert 0
- name: Parse new version
id: version
shell: bash
run: |
echo version="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" \
| awk -F '"' '{print $2}')" | tee ${GITHUB_OUTPUT}
- name: Try to authenticate to Ripple Conan remote
id: remote
shell: bash
run: |
# `conan user` implicitly uses the environment variables CONAN_LOGIN_USERNAME_<REMOTE> and CONAN_PASSWORD_<REMOTE>.
# https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
# https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
# https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
echo outcome=$(conan user --remote ripple --password >&2 \
&& echo success || echo failure) | tee ${GITHUB_OUTPUT}
- name: Upload new package
id: upload
if: (steps.remote.outputs.outcome == 'success')
shell: bash
run: |
echo "conan upload version ${{ steps.version.outputs.version }} on channel ${{ steps.channel.outputs.channel }}"
echo outcome=$(conan upload xrpl/${{ steps.version.outputs.version }}@${{ steps.channel.outputs.channel }} --remote ripple --confirm >&2 \
&& echo success || echo failure) | tee ${GITHUB_OUTPUT}
notify_clio:
name: Notify Clio
runs-on: ubuntu-latest
needs: publish
env:
GH_TOKEN: ${{ secrets.CLIO_NOTIFY_TOKEN }}
steps:
- name: Notify Clio about new version
if: (needs.publish.outputs.outcome == 'success')
shell: bash
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[version]=${{ needs.publish.outputs.version }}@${{ needs.publish.outputs.channel }}"

71
.github/workflows/macos.yml vendored Normal file
View File

@@ -0,0 +1,71 @@
name: macos
on:
pull_request:
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows)
branches:
# Always build the package branches
- develop
- release
- master
# Branches that opt-in to running
- 'ci/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
strategy:
matrix:
platform:
- macos
generator:
- Ninja
configuration:
- Release
runs-on: [self-hosted, macOS]
env:
# The `build` action requires these variables.
build_dir: .build
NUM_PROCESSORS: 12
steps:
- name: checkout
uses: actions/checkout@v4
- name: install Conan
run: |
brew install conan@1
echo '/opt/homebrew/opt/conan@1/bin' >> $GITHUB_PATH
- name: install Ninja
if: matrix.generator == 'Ninja'
run: brew install ninja
- name: check environment
run: |
env | sort
echo ${PATH} | tr ':' '\n'
python --version
conan --version
cmake --version
- name: configure Conan
run : |
conan profile new default --detect || true
conan profile update settings.compiler.cppstd=20 default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
- name: build dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
with:
configuration: ${{ matrix.configuration }}
- name: build
uses: ./.github/actions/build
with:
generator: ${{ matrix.generator }}
configuration: ${{ matrix.configuration }}
- name: test
run: |
${build_dir}/rippled --unittest

View File

@@ -1,96 +1,284 @@
name: nix
on: [push, pull_request]
on:
pull_request:
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows)
branches:
# Always build the package branches
- develop
- release
- master
# Branches that opt-in to running
- 'ci/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# This workflow has two job matrixes.
# They can be considered phases because the second matrix ("test")
# depends on the first ("dependencies").
#
# The first phase has a job in the matrix for each combination of
# variables that affects dependency ABI:
# platform, compiler, and configuration.
# It creates a GitHub artifact holding the Conan profile,
# and builds and caches binaries for all the dependencies.
# If an Artifactory remote is configured, they are cached there.
# If not, they are added to the GitHub artifact.
# GitHub's "cache" action has a size limit (10 GB) that is too small
# to hold the binaries if they are built locally.
# We must use the "{upload,download}-artifact" actions instead.
#
# The second phase has a job in the matrix for each test configuration.
# It installs dependency binaries from the cache, whichever was used,
# and builds and tests rippled.
jobs:
dependencies:
strategy:
fail-fast: false
matrix:
platform:
- linux
compiler:
- gcc
- clang
configuration:
- Debug
- Release
include:
- compiler: gcc
profile:
version: 11
cc: /usr/bin/gcc
cxx: /usr/bin/g++
- compiler: clang
profile:
version: 14
cc: /usr/bin/clang-14
cxx: /usr/bin/clang++-14
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
- name: checkout
uses: actions/checkout@v4
- name: check environment
run: |
echo ${PATH} | tr ':' '\n'
conan --version
cmake --version
env | sort
- name: configure Conan
run: |
conan profile new default --detect
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler=${{ matrix.compiler }} default
conan profile update settings.compiler.version=${{ matrix.profile.version }} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update env.CC=${{ matrix.profile.cc }} default
conan profile update env.CXX=${{ matrix.profile.cxx }} default
conan profile update conf.tools.build:compiler_executables='{"c": "${{ matrix.profile.cc }}", "cpp": "${{ matrix.profile.cxx }}"}' default
- name: archive profile
# Create this archive before dependencies are added to the local cache.
run: tar -czf conan.tar -C ~/.conan .
- name: build dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
with:
configuration: ${{ matrix.configuration }}
- name: upload archive
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
path: conan.tar
if-no-files-found: error
test:
strategy:
fail-fast: false
matrix:
platform:
- ubuntu-latest
- macos-12
generator:
- Ninja
- linux
compiler:
- gcc
- clang
configuration:
- Debug
- Release
runs-on: ${{ matrix.platform }}
cmake-args:
-
- "-Dunity=ON"
needs: dependencies
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
- name: checkout
uses: actions/checkout@v3
- name: install Ninja on Linux
if: matrix.generator == 'Ninja' && runner.os == 'Linux'
run: sudo apt install ninja-build
- name: install Ninja on OSX
if: matrix.generator == 'Ninja' && runner.os == 'macOS'
run: brew install ninja
- name: install nproc on OSX
if: runner.os == 'macOS'
run: brew install coreutils
- name: choose Python
uses: actions/setup-python@v3
- name: download cache
uses: actions/download-artifact@v3
with:
python-version: 3.9
- name: learn Python cache directory
id: pip-cache
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
- name: extract cache
run: |
sudo pip install --upgrade pip
echo "::set-output name=dir::$(pip cache dir)"
- name: restore Python cache directory
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-${{ hashFiles('.github/workflows/nix.yml') }}
- name: install Conan
run: pip install wheel 'conan~=1.52'
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: check environment
run: |
env | sort
echo ${PATH} | tr ':' '\n'
python --version
conan --version
cmake --version
env
- name: configure Conan
run: |
conan profile new default --detect
conan profile update settings.compiler.cppstd=20 default
- name: configure Conan on Linux
if: runner.os == 'Linux'
run: |
conan profile update settings.compiler.libcxx=libstdc++11 default
- name: learn Conan cache directory
id: conan-cache
run: |
echo "::set-output name=dir::$(conan config get storage.path)"
- name: restore Conan cache directory
uses: actions/cache@v2
- name: checkout
uses: actions/checkout@v4
- name: dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
with:
path: ${{ steps.conan-cache.outputs.dir }}
key: ${{ hashFiles('~/.conan/profiles/default', 'conanfile.py', 'external/rocksdb/*', '.github/workflows/nix.yml') }}
- name: export Snappy
run: conan export external/snappy snappy/1.1.9@
- name: install dependencies
run: |
mkdir ${build_dir}
cd ${build_dir}
conan install .. --build missing --settings build_type=${{ matrix.configuration }}
- name: configure
run: |
cd ${build_dir}
cmake \
-G ${{ matrix.generator }} \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${{ matrix.configuration }} \
-Dassert=ON \
-Dcoverage=OFF \
-Dreporting=OFF \
-Dunity=OFF \
..
configuration: ${{ matrix.configuration }}
- name: build
run: |
cmake --build ${build_dir} --target rippled --parallel $(nproc)
uses: ./.github/actions/build
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
cmake-args: ${{ matrix.cmake-args }}
- name: test
run: |
${build_dir}/rippled --unittest --unittest-jobs $(nproc)
coverage:
strategy:
fail-fast: false
matrix:
platform:
- linux
compiler:
- gcc
configuration:
- Debug
needs: dependencies
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
- name: download cache
uses: actions/download-artifact@v3
with:
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
- name: extract cache
run: |
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: install gcovr
run: pip install "gcovr>=7,<8"
- name: check environment
run: |
echo ${PATH} | tr ':' '\n'
conan --version
cmake --version
gcovr --version
env | sort
ls ~/.conan
- name: checkout
uses: actions/checkout@v4
- name: dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
with:
configuration: ${{ matrix.configuration }}
- name: build
uses: ./.github/actions/build
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
cmake-args: >-
-Dcoverage=ON
-Dcoverage_format=xml
-DCODE_COVERAGE_VERBOSE=ON
-DCMAKE_CXX_FLAGS="-O0"
-DCMAKE_C_FLAGS="-O0"
cmake-target: coverage
- name: move coverage report
shell: bash
run: |
mv "${build_dir}/coverage.xml" ./
- name: archive coverage report
uses: actions/upload-artifact@v3
with:
name: coverage.xml
path: coverage.xml
retention-days: 30
- name: upload coverage report
uses: wandalen/wretry.action@v1.4.10
with:
action: codecov/codecov-action@v4.5.0
with: |
files: coverage.xml
fail_ci_if_error: true
disable_search: true
verbose: true
plugin: noop
token: ${{ secrets.CODECOV_TOKEN }}
attempt_limit: 5
attempt_delay: 210000 # in milliseconds
conan:
needs: dependencies
runs-on: [self-hosted, heavy]
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
configuration: Release
steps:
- name: download cache
uses: actions/download-artifact@v3
with:
name: linux-gcc-${{ env.configuration }}
- name: extract cache
run: |
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: check environment
run: |
env | sort
echo ${PATH} | tr ':' '\n'
conan --version
cmake --version
- name: checkout
uses: actions/checkout@v4
- name: dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
with:
configuration: ${{ env.configuration }}
- name: export
run: |
version=$(conan inspect --raw version .)
reference="xrpl/${version}@local/test"
conan remove -f ${reference} || true
conan export . local/test
echo "reference=${reference}" >> "${GITHUB_ENV}"
- name: build
run: |
cd examples/example
mkdir ${build_dir}
cd ${build_dir}
conan install .. --output-folder . \
--require-override ${reference} --build missing
cmake .. \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=./build/${configuration}/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${configuration}
cmake --build .
./example | grep '^[[:digit:]]\+\.[[:digit:]]\+\.[[:digit:]]\+'


@@ -1,19 +1,22 @@
name: windows
# We have disabled this workflow because it fails in our CI Windows
# environment, but we cannot replicate the failure in our personal Windows
# test environments, nor have we gone through the trouble of setting up an
# interactive CI Windows environment.
# We welcome contributions to diagnose or debug the problems on Windows. Until
# then, we leave this tombstone as a reminder that we have tried (but failed)
# to write a reliable test for Windows.
# on: [push, pull_request]
on:
workflow_dispatch:
pull_request:
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows)
branches:
- 'action'
paths:
- '.github/workflow/windows.yml'
# Always build the package branches
- develop
- release
- master
# Branches that opt-in to running
- 'ci/**'
# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
@@ -25,73 +28,64 @@ jobs:
- Visual Studio 16 2019
configuration:
- Release
# GitHub-hosted runners tend to hang when running Debug unit tests.
# Instead of trying to work around it, disable the Debug job until
# something beefier (i.e. a heavy self-hosted runner) becomes
# available.
# - Debug
runs-on: windows-2019
env:
build_dir: .build
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: choose Python
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: 3.9
- name: learn Python cache directory
id: pip-cache
shell: bash
run: |
pip install --upgrade pip
echo "::set-output name=dir::$(pip cache dir)"
python -m pip install --upgrade pip
echo "dir=$(pip cache dir)" | tee ${GITHUB_OUTPUT}
- name: restore Python cache directory
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-${{ hashFiles('.github/workflows/windows.yml') }}
- name: install Conan
run: pip install wheel 'conan~=1.52'
run: pip install wheel 'conan<2'
- name: check environment
run: |
dir env:
$env:PATH -split ';'
python --version
conan --version
cmake --version
dir env:
- name: configure Conan
shell: bash
run: |
conan profile new default --detect
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler.runtime=MT default
conan profile update settings.compiler.toolset=v141 default
- name: learn Conan cache directory
id: conan-cache
run: |
echo "::set-output name=dir::$(conan config get storage.path)"
- name: restore Conan cache directory
uses: actions/cache@v2
conan profile update settings.compiler.runtime=MT${{ matrix.configuration == 'Debug' && 'd' || '' }} default
- name: build dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
with:
path: ${{ steps.conan-cache.outputs.dir }}
key: ${{ hashFiles('~/.conan/profiles/default', 'conanfile.py', 'external/rocksdb/*', '.github/workflows/windows.yml') }}
- name: export Snappy
run: conan export external/snappy snappy/1.1.9@
- name: install dependencies
run: |
mkdir $env:build_dir
cd $env:build_dir
conan install .. --build missing --settings build_type=${{ matrix.configuration }}
- name: configure
run: |
$env:build_dir
cd $env:build_dir
pwd
ls
cmake `
-G "${{ matrix.generator }}" `
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake `
-Dassert=ON `
-Dreporting=OFF `
-Dunity=OFF `
..
configuration: ${{ matrix.configuration }}
- name: build
run: |
cmake --build $env:build_dir --target rippled --config ${{ matrix.configuration }} --parallel $env:NUMBER_OF_PROCESSORS
uses: ./.github/actions/build
with:
generator: '${{ matrix.generator }}'
configuration: ${{ matrix.configuration }}
# Hard code for now. Move to the matrix if varied options are needed
cmake-args: '-Dassert=ON -Dreporting=OFF -Dunity=ON'
cmake-target: install
- name: test
shell: bash
run: |
& "$env:build_dir\${{ matrix.configuration }}\rippled.exe" --unittest
${build_dir}/${{ matrix.configuration }}/rippled --unittest --unittest-jobs $(nproc)

.gitignore

@@ -21,7 +21,6 @@ bin/project-cache.jam
# Ignore object files.
*.o
build
.nih_c
tags
TAGS
@@ -65,7 +64,7 @@ docs/html_doc
# Xcode user-specific project settings
# Xcode
.DS_Store
*/build/*
/build/
*.pbxuser
!default.pbxuser
*.mode1v3
@@ -109,3 +108,7 @@ pkg_out
pkg
CMakeUserPresets.json
bld.rippled/
.vscode
# Suggested in-tree build directory
/.build/

.pre-commit-config.yaml

@@ -0,0 +1,6 @@
# .pre-commit-config.yaml
repos:
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v18.1.3
hooks:
- id: clang-format
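For contributors who want these hooks locally, here is a minimal sketch of the usual pre-commit workflow (this assumes the `pre-commit` tool is installed from PyPI; it is not part of the configuration above):
```
# Install the pre-commit tool (assumes pip is available).
pip install pre-commit
# Register the hooks from .pre-commit-config.yaml as a git pre-commit hook.
pre-commit install
# Optionally run all hooks (here: clang-format) once across the whole tree.
pre-commit run --all-files
```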

API-CHANGELOG.md

@@ -0,0 +1,193 @@
# API Changelog
This changelog is intended to list all updates to the [public API methods](https://xrpl.org/public-api-methods.html).
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
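For illustration only (not part of the changelog), a JSON-RPC request that pins API version 2 might look like the sketch below; the account address and the local port are placeholders, not values taken from this document:
```
# Hypothetical example: ask a local rippled for account_info using api_version 2.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{
        "method": "account_info",
        "params": [{
          "api_version": 2,
          "account": "<account address>",
          "ledger_index": "validated"
        }]
      }'
```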
#### Removed methods
In API version 2, the following deprecated methods are no longer available: (https://github.com/XRPLF/rippled/pull/4759)
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
#### Modifications to JSON transaction element in V2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. (https://github.com/XRPLF/rippled/pull/4775)
This helps to unify the JSON serialization format of transactions. (https://github.com/XRPLF/clio/issues/722, https://github.com/XRPLF/rippled/issues/4727)
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
#### Modification to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. (https://github.com/XRPLF/rippled/pull/4733)
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: (https://github.com/XRPLF/rippled/pull/4733)
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
#### Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) (https://github.com/XRPLF/rippled/pull/3770)
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. (https://github.com/XRPLF/rippled/pull/4585)
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
#### Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min`, `ledger_index_max`, and `ledger_index` returns `invalidParams` because if you use `ledger_index_min` or `ledger_index_max`, then it does not make sense to also specify `ledger_index`. In API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4571)
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. (https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579)
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/issues/4288)
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
#### Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
## API Version 1
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
### Inconsistency: server_info - network_id
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.2.0
The following is a non-breaking addition to the API.
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
### Breaking change in 2.3
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Addition in 2.3
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
The following additions are non-breaking (because they are purely additive).
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting).
## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive).
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. (https://github.com/XRPLF/rippled/pull/4427)
- `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port.
- This allows crawlers to build a more detailed topology without needing to port-scan nodes.
- (For peers and other non-admin clients, the info about admin ports is excluded.)
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). (https://github.com/XRPLF/rippled/pull/4553)
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback` (https://github.com/XRPLF/rippled/pull/4617)
- Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well.
- Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks.
- Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields:
- `Account`: The issuer of the asset being clawed back. Must also be the sender of the transaction.
- `Amount`: The amount being clawed back, with the `Amount.issuer` being the token holder's address.
- Adds [AMM](https://github.com/XRPLF/XRPL-Standards/discussions/78) ([#4294](https://github.com/XRPLF/rippled/pull/4294), [#4626](https://github.com/XRPLF/rippled/pull/4626)) feature:
- Adds `amm_info` API to retrieve AMM information for a given tokens pair.
- Adds `AMMCreate` transaction type to create `AMM` instance.
- Adds `AMMDeposit` transaction type to deposit funds into `AMM` instance.
- Adds `AMMWithdraw` transaction type to withdraw funds from `AMM` instance.
- Adds `AMMVote` transaction type to vote for the trading fee of `AMM` instance.
- Adds `AMMBid` transaction type to bid for the Auction Slot of `AMM` instance.
- Adds `AMMDelete` transaction type to delete `AMM` instance.
- Adds `sfAMMID` to `AccountRoot` to indicate that the account is `AMM`'s account. `AMMID` is used to fetch `ltAMM`.
- Adds `lsfAMMNode` `TrustLine` flag to indicate that one side of the `TrustLine` is `AMM` account.
- Adds `tfLPToken`, `tfSingleAsset`, `tfTwoAsset`, `tfOneAssetLPToken`, `tfLimitLPToken`, `tfTwoAssetIfEmpty`,
`tfWithdrawAll`, `tfOneAssetWithdrawAll` which allow a trader to specify different fields combination
for `AMMDeposit` and `AMMWithdraw` transactions.
- Adds new transaction result codes:
- tecUNFUNDED_AMM: insufficient balance to fund AMM. The account does not have funds for liquidity provision.
- tecAMM_BALANCE: AMM has invalid balance. Calculated balances greater than the current pool balances.
- tecAMM_FAILED: AMM transaction failed. Fails due to a processing failure.
- tecAMM_INVALID_TOKENS: AMM invalid LP tokens. Invalid input values, format, or calculated values.
- tecAMM_EMPTY: AMM is in empty state. Transaction requires AMM in non-empty state (LP tokens > 0).
- tecAMM_NOT_EMPTY: AMM is not in empty state. Transaction requires AMM in empty state (LP tokens == 0).
- tecAMM_ACCOUNT: AMM account. Clawback of AMM account.
- tecINCOMPLETE: Some work was completed, but more submissions required to finish. AMMDelete partially deletes the trustlines.
## XRP Ledger server version 1.11.0
[Version 1.11.0](https://github.com/XRPLF/rippled/releases/tag/1.11.0) was released on Jun 20, 2023.
### Breaking changes in 1.11
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. (https://github.com/XRPLF/rippled/pull/4291)
- The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`.
- Removed the acceptance of seeds or public keys in place of account addresses. (https://github.com/XRPLF/rippled/pull/4404)
- This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network).
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. (https://github.com/XRPLF/rippled/pull/4398)
- If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406))
- There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence.
- This is potentially a breaking change if clients have logic for determining whether an account can be deleted.
- NetworkID
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. (https://github.com/XRPLF/rippled/pull/4370)
- This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher.
- The field must be omitted for Mainnet, so there is no change for Mainnet users.
- There are three new local error codes:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`: a `NetworkID` is present but the chain's network ID is less than 1025. Remove the field from the transaction, and try again.
- `telREQUIRES_NETWORK_ID`: a `NetworkID` is required, but is not present. Add the field to the transaction, and try again.
- `telWRONG_NETWORK`: a `NetworkID` is specified, but it is for a different network. Submit the transaction to a different server which is connected to the correct network.
### Additions and bug fixes in 1.11
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. (https://github.com/XRPLF/rippled/pull/4447)
- Added an `account_flags` object to the `account_info` method response. (https://github.com/XRPLF/rippled/pull/4459)
- Added `NFTokenPages` to the `account_objects` RPC. (https://github.com/XRPLF/rippled/pull/4352)
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. (https://github.com/XRPLF/rippled/pull/4361)
## XRP Ledger server version 1.10.0
[Version 1.10.0](https://github.com/XRPLF/rippled/releases/tag/1.10.0)
was released on Mar 14, 2023.
### Breaking changes in 1.10
- If the `XRPFees` feature is enabled, the `fee_ref` field will be
removed from the [ledger subscription stream](https://xrpl.org/subscribe.html#ledger-stream), because it will no longer
have any meaning.
# Unit tests for API changes
The following information is useful to developers contributing to this project:
The purpose of unit tests is to catch bugs and prevent regressions. In general, it often makes sense to create a test function when there is a breaking change to the API. For APIs that have changed in a new API version, the tests should be modified so that both the prior version and the new version are properly tested.
To take one example: for `account_info` version 1, WebSocket and JSON-RPC behavior should be tested. The latest API version, i.e. API version 2, should be tested over WebSocket, JSON-RPC, and command line.

BUILD.md

@@ -1,121 +1,11 @@
## A crash course in CMake and Conan
To better understand how to use Conan,
we should first understand _why_ we use Conan,
and to understand that,
we need to understand how we use CMake.
### CMake
Technically, you don't need CMake to build this project.
You could manually compile every translation unit into an object file,
using the right compiler options,
and then manually link all those objects together,
using the right linker options.
However, that is very tedious and error-prone,
which is why we lean on tools like CMake.
We have written CMake configuration files
([`CMakeLists.txt`](./CMakeLists.txt) and friends)
for this project so that CMake can be used to correctly compile and link
all of the translation units in it.
Or rather, CMake will generate files for a separate build system
(e.g. Make, Ninja, Visual Studio, Xcode, etc.)
that compile and link all of the translation units.
Even then, CMake has parameters, some of which are platform-specific.
In CMake's parlance, parameters are specially-named **variables** like
[`CMAKE_BUILD_TYPE`][build_type] or
[`CMAKE_MSVC_RUNTIME_LIBRARY`][runtime].
Parameters include:
- what build system to generate files for
- where to find the compiler and linker
- where to find dependencies, e.g. libraries and headers
- how to link dependencies, e.g. any special compiler or linker flags that
need to be used with them, including preprocessor definitions
- how to compile translation units, e.g. with optimizations, debug symbols,
position-independent code, etc.
- on Windows, which runtime library to link with
For some of these parameters, like the build system and compiler,
CMake goes through a complicated search process to choose default values.
For others, like the dependencies,
_we_ had written in the CMake configuration files of this project
our own complicated process to choose defaults.
For most developers, things "just worked"... until they didn't, and then
you were left trying to debug one of these complicated processes, instead of
choosing and manually passing the parameter values yourself.
You can pass every parameter to CMake on the command line,
but writing out these parameters every time we want to configure CMake is
a pain.
Most humans prefer to put them into a configuration file, once, that
CMake can read every time it is configured.
For CMake, that file is a [toolchain file][toolchain].
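As a concrete sketch (using the Conan-generated toolchain path that appears later in this document), passing a toolchain file looks like:
```
# Configure with a toolchain file instead of passing every parameter by hand.
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
      -DCMAKE_BUILD_TYPE=Release ..
```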
### Conan
These next few paragraphs on Conan are going to read much like the ones above
for CMake.
Technically, you don't need Conan to build this project.
You could manually download, configure, build, and install all of the
dependencies yourself, and then pass all of the parameters necessary for
CMake to link to those dependencies.
To guarantee ABI compatibility, you must be sure to use the same set of
compiler and linker options for all dependencies _and_ this project.
However, that is very tedious and error-prone, which is why we lean on tools
like Conan.
We have written a Conan configuration file ([`conanfile.py`](./conanfile.py))
so that Conan can be used to correctly download, configure, build, and install
all of the dependencies for this project,
using a single set of compiler and linker options for all of them.
It generates files that contain almost all of the parameters that CMake
expects.
Those files include:
- A single toolchain file.
- For every dependency, a CMake [package configuration file][pcf],
[package version file][pvf], and for every build type, a package
targets file.
Together, these files implement version checking and define `IMPORTED`
targets for the dependencies.
The toolchain file itself amends the search path
([`CMAKE_PREFIX_PATH`][prefix_path]) so that [`find_package()`][find_package]
will [discover][search] the generated package configuration files.
**Nearly all we must do to properly configure CMake is pass the toolchain
file.**
What CMake parameters are left out?
You'll still need to pick a build system generator,
and if you choose a single-configuration generator,
you'll need to pass the `CMAKE_BUILD_TYPE`,
which should match the `build_type` setting you gave to Conan.
Even then, Conan has parameters, some of which are platform-specific.
In Conan's parlance, parameters are either settings or options.
**Settings** are shared by all packages, e.g. the build type.
**Options** are specific to a given package, e.g. whether to build and link
OpenSSL as a shared library.
For settings, Conan goes through a complicated search process to choose
defaults.
For options, each package recipe defines its own defaults.
You can pass every parameter to Conan on the command line,
but it is more convenient to put them in a [profile][profile].
**All we must do to properly configure Conan is edit and pass the profile.**
By default, Conan will use the profile named "default".
You can let Conan create the default profile with this command:
```
conan profile new default --detect
```
| :warning: **WARNING** :warning:
|---|
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
## Branches
@@ -132,303 +22,457 @@ For the latest release candidate, choose the `release` branch.
git checkout release
```
If you are contributing or want the latest set of untested features,
then use the `develop` branch.
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
git checkout develop
```
## Minimum Requirements
## Platforms
See [System Requirements](https://xrpl.org/system-requirements.html).
rippled is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] that can compile this dialect are given
below:
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
| Compiler | Minimum Version
|---|---
| GCC | 10
| Clang | 13
| Apple Clang | 13.1.6
| MSVC | 19.23
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.60](https://conan.io/downloads.html)[^1]
- [CMake 3.16](https://cmake.org/download/)
We do not recommend Windows for rippled production use at this time.
As of January 2023, the Ubuntu platform has received the highest level of
[^1]: It is possible to build with Conan 2.x,
but the instructions are significantly different,
which is why we are not recommending it yet.
Notably, the `conan profile update` command is removed in 2.x.
Profiles must be edited by hand.
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 11 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
### Linux
The Ubuntu operating system has received the highest level of
quality assurance, testing, and support.
Additionally, 32-bit Windows development is not supported.
Visual Studio 2022 is not yet supported.
This is because rippled is not compatible with [Boost][] versions 1.78 or 1.79,
but Conan cannot build Boost versions released earlier than them with VS 2022.
We expect that rippled will be compatible with Boost 1.80, which should be
released in August 2022.
Until then, we advise Windows developers to use Visual Studio 2019.
Here are [sample instructions for setting up a C++ development environment on Linux](./docs/build/environment.md#linux).
### Mac
Many rippled engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on macOS](./docs/build/environment.md#macos).
### Windows
Windows is not recommended for production use at this time.
- Additionally, 32-bit Windows development is not supported.
[Boost]: https://www.boost.org/
## Steps
## Prerequisites
### Set Up Conan
To build this package, you will need Python (>= 3.7),
[Conan][] (>= 1.55), and [CMake][] (>= 3.16).
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python, Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
> **Warning**
> The commands in this document are not meant to be blindly copied and pasted.
> This document is written for multiple audiences,
> meaning that your particular circumstances may require some commands and not
> others.
> You should never run any commands without understanding what they do
> and why you are running them.
>
> These instructions assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> then please read the [crash course](#a-crash-course-in-cmake-and-conan)
> at the beginning of this document,
> or the official [Getting Started][3] walkthrough.
These instructions assume a basic familiarity with Conan and CMake.
[Conan]: https://conan.io/downloads.html
[CMake]: https://cmake.org/download/
If you are unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official [Getting Started][3] walkthrough.
You'll need to compile in the C++20 dialect:
You'll need at least one Conan profile:
```
conan profile update settings.compiler.cppstd=20 default
```
```
conan profile new default --detect
```
Linux developers will commonly have a default Conan [profile][] that compiles
Update the compiler settings:
```
conan profile update settings.compiler.cppstd=20 default
```
Configure Conan (1.x only) to use recipe revisions:
```
conan config set general.revisions_enabled=1
```
**Linux** developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
```
Ensure inter-operability between `boost::string_view` and `std::string_view` types:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_BEAST_USE_STD_STRING_VIEW"]' default
conan profile update 'env.CXXFLAGS="-DBOOST_BEAST_USE_STD_STRING_VIEW"' default
```
We find it necessary to use the x64 native build tools on Windows.
If you have other flags in the `conf.tools.build` or `env.CXXFLAGS` sections, make sure to retain the existing flags and append the new ones. You can check them with:
```
conan profile show default
```
**Windows** developers may need to use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
Windows developers must build rippled and its dependencies for the x64
architecture:
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```
conan profile update settings.arch=x86_64 default
```
### Multiple compilers
When `/usr/bin/g++` exists on a platform, it is the default cpp compiler. This
default works for some users.
However, if this compiler cannot build rippled or its dependencies, then you can
install another compiler and set Conan and CMake to use it.
Update the `conf.tools.build:compiler_executables` setting in order to set the correct variables (`CMAKE_<LANG>_COMPILER`) in the
generated CMake toolchain file.
For example, on Ubuntu 20, you may have gcc at `/usr/bin/gcc` and g++ at `/usr/bin/g++`; if that is the case, you can select those compilers with:
```
conan profile update settings.arch=x86_64 default
conan profile update 'conf.tools.build:compiler_executables={"c": "/usr/bin/gcc", "cpp": "/usr/bin/g++"}' default
```
If you have multiple compilers installed on your platform,
then you'll need to make sure that Conan and CMake select the one you want to
use.
This setting will set the correct variables (`CMAKE_<LANG>_COMPILER`) in the
generated CMake toolchain file:
```
conan profile update 'conf.tools.build:compiler_executables={"c": "<path>", "cpp": "<path>"}' default
```
Replace `/usr/bin/gcc` and `/usr/bin/g++` with paths to the desired compilers.
It should choose the compiler for dependencies as well,
but not all of them have a Conan recipe that respects this setting (yet).
For the rest, you can set these environment variables:
For the rest, you can set these environment variables.
Replace `<path>` with paths to the desired compilers:
```
conan profile update env.CC=<path> default
conan profile update env.CXX=<path> default
```
- `conan profile update env.CC=<path> default`
- `conan profile update env.CXX=<path> default`
Export our [Conan recipe for Snappy](./external/snappy).
It does not explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
```
# Conan 1.x
conan export external/snappy snappy/1.1.10@
# Conan 2.x
conan export --version 1.1.10 external/snappy
```
Export our [Conan recipe for RocksDB](./external/rocksdb).
It does not override paths to dependencies when building with Visual Studio.
```
# Conan 1.x
conan export external/rocksdb rocksdb/6.29.5@
# Conan 2.x
conan export --version 6.29.5 external/rocksdb
```
Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
```
# Conan 1.x
conan export external/soci soci/4.0.3@
# Conan 2.x
conan export --version 4.0.3 external/soci
```
Export our [Conan recipe for NuDB](./external/nudb).
It fixes some source files to add missing `#include`s.
## How to build and test
```
# Conan 1.x
conan export external/nudb nudb/2.0.8@
# Conan 2.x
conan export --version 2.0.8 external/nudb
```
Let's start with a couple of examples of common workflows.
The first is for a single-configuration generator (e.g. Unix Makefiles) on
Linux or MacOS:
### Build and Test
```
conan export external/snappy snappy/1.1.9@
mkdir .build
cd .build
conan install .. --output-folder . --build missing --settings build_type=Release
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build .
./rippled --unittest
```
1. Create a build directory and move into it.
The second is for a multi-configuration generator (e.g. Visual Studio) on
Windows:
```
mkdir .build
cd .build
```
```
conan export external/snappy snappy/1.1.9@
mkdir .build
cd .build
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
cmake --build . --config Release
cmake --build . --config Debug
./Release/rippled --unittest
./Debug/rippled --unittest
```
You can use any directory name. Conan treats your working directory as an
install folder and generates files with implementation details.
You don't need to worry about these files, but make sure to change
your working directory to your build directory before calling Conan.
Now to explain the individual steps in each example:
**Note:** You can specify a directory for the installation files by adding
the `--install-folder` or `-if` option to every `conan install` command
in the next step.
1. Export our [Conan recipe for Snappy](./external/snappy).
2. Generate CMake files for every configuration you want to build.
It does not explicitly link the C++ standard library,
which allows us to statically link it.
1. Create a build directory (and move into it).
You can choose any name you want.
Conan will generate some files in what it calls the "install folder".
These files are implementation details that you don't need to worry about.
By default, the install folder is your current working directory.
If you don't move into your build directory before calling Conan,
then you may be annoyed to see it polluting your project root directory
with these files.
To make Conan put them in your build directory,
you'll have to add the option
`--install-folder` or `-if` to every `conan install` command.
1. Generate CMake files for every configuration you want to build.
```
conan install .. --output-folder . --build missing --settings build_type=Release
conan install .. --output-folder . --build missing --settings build_type=Debug
```
For a single-configuration generator, e.g. `Unix Makefiles` or `Ninja`,
you only need to run this command once.
For a multi-configuration generator, e.g. `Visual Studio`, you may want to
run it more than once.
Each of these commands should have a different `build_type` setting.
A second command with the same `build_type` setting will just overwrite
the files generated by the first.
You can pass the build type on the command line with `--settings
build_type=$BUILD_TYPE` or in the profile itself, under the section
`[settings]`, with the key `build_type`.
Each of these commands should also have a different `build_type` setting.
A second command with the same `build_type` setting will overwrite the files
generated by the first. You can pass the build type on the command line with
`--settings build_type=$BUILD_TYPE` or in the profile itself,
under the section `[settings]` with the key `build_type`.
If you are using a Microsoft Visual C++ compiler,
then you will need to ensure consistency between the `build_type` setting
and the `compiler.runtime` setting.
If you are using a Microsoft Visual C++ compiler, then you will need to
ensure consistency between the `build_type` setting and the
`compiler.runtime` setting.
When `build_type` is `Release`, `compiler.runtime` should be `MT`.
When `build_type` is `Debug`, `compiler.runtime` should be `MTd`.
1. Configure CMake once.
```
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
```
For all choices of generator, pass the toolchain file generated by Conan.
It will be located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
If you are using a single-configuration generator, then pass the CMake
variable [`CMAKE_BUILD_TYPE`][build_type] and make sure it matches the
`build_type` setting you chose in the previous step.
3. Configure CMake and pass the toolchain file generated by Conan, located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
This step is where you may pass build options for rippled.
Single-config generators:
1. Build rippled.
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
```
For a multi-configuration generator, you must pass the option `--config`
to select the build configuration.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`.
Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches the `build_type` setting you chose in the previous
step.
Multi-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -Dxrpld=ON -Dtests=ON ..
```
**Note:** You can pass build options for `rippled` in this step.
4. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
you must pass the option `--config` to select the build configuration.
Single-config generators:
```
cmake --build .
```
Multi-config generators:
```
cmake --build . --config Release
cmake --build . --config Debug
```
5. Test rippled.
The exact location of rippled in your build directory
depends on your choice of CMake generator.
You can run unit tests by passing `--unittest`.
Pass `--help` to see the rest of the command line options.
Single-config generators:
```
./rippled --unittest
```
Multi-config generators:
```
./Release/rippled --unittest
./Debug/rippled --unittest
```
The location of `rippled` in your build directory depends on your CMake
generator. Pass `--help` to see the rest of the command line options.
### Options
## Coverage report
The `unity` option allows you to select between [unity][5] and non-unity
builds.
Unity builds may be faster for the first build (at the cost of much
more memory) since they concatenate sources into fewer translation
units.
Non-unity builds may be faster for incremental builds, and can be helpful for
detecting `#include` omissions.
The coverage report is intended for developers using compilers GCC
or Clang (including Apple Clang). It is generated by the build target `coverage`,
which is only enabled when the `coverage` option is set, e.g. with
`--options coverage=True` in `conan` or `-Dcoverage=ON` variable in `cmake`
Below are the most commonly used options,
with their default values in parentheses.
Prerequisites for the coverage report:
- `assert` (OFF): Enable assertions.
- `reporting` (OFF): Build the reporting mode feature.
- `tests` (ON): Build tests.
- `unity` (ON): Configure a [unity build][5].
- `san` (): Enable a sanitizer with Clang. Choices are `thread` and `address`.
- [gcovr tool][gcovr] (can be installed e.g. with [pip][python-pip])
- `gcov` for GCC (installed with the compiler by default) or
- `llvm-cov` for Clang (installed with the compiler by default)
- `Debug` build type
A coverage report is created when the following steps are completed, in order:
### Troubleshooting
1. `rippled` binary built with instrumentation data, enabled by the `coverage`
option mentioned above
2. completed run of unit tests, which populates coverage capture data
3. completed run of the `gcovr` tool (which internally invokes either `gcov` or `llvm-cov`)
to assemble both instrumentation data and the coverage capture data into a coverage report
#### Conan
The above steps are automated into a single target `coverage`. The instrumented
`rippled` binary can also be used for regular development or testing work, at
the cost of extra disk space utilization and a small performance hit
(to store coverage capture). In case of a spurious failure of unit tests, it is
possible to re-run the `coverage` target without rebuilding the `rippled` binary
(since it is simply a dependency of the coverage report target). It is also possible
to select only specific tests for the purpose of the coverage report, by setting
the `coverage_test` variable in `cmake`
If you find trouble building dependencies after changing Conan settings,
then you should retry after removing the Conan cache:
The default coverage report format is `html-details`, but the user
can override it to any of the formats listed in `Builds/CMake/CodeCoverage.cmake`
by setting the `coverage_format` variable in `cmake`. It is also possible
to generate more than one format at a time by setting the `coverage_extra_args`
variable in `cmake`. The specific command line used to run the `gcovr` tool will be
displayed if the `CODE_COVERAGE_VERBOSE` variable is set.
By default, the code coverage tool runs parallel unit tests with `--unittest-jobs`
set to the number of available CPU cores. This may cause spurious test
errors on Apple. Developers can override the number of unit test jobs with
the `coverage_test_parallelism` variable in `cmake`.
Example use with some cmake variables set:
```
rm -rf ~/.conan/data
cd .build
conan install .. --output-folder . --build missing --settings build_type=Debug
cmake -DCMAKE_BUILD_TYPE=Debug -Dcoverage=ON -Dxrpld=ON -Dtests=ON -Dcoverage_test_parallelism=2 -Dcoverage_format=html-details -Dcoverage_extra_args="--json coverage.json" -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
cmake --build . --target coverage
```
After the `coverage` target is completed, the generated coverage report will be
stored inside the build directory, as either of:
#### no std::result_of
- file named `coverage.`_extension_ , with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
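One way to inspect an `html-details` report locally (an assumption, not something the build system provides) is to serve the generated directory over HTTP:
```
# Serve the generated coverage directory and browse http://localhost:8000
cd .build/coverage
python3 -m http.server 8000
```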
## Options
| Option | Default Value | Description |
| --- | ---| ---|
| `assert` | OFF | Enable assertions.
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | ON | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld (`rippled`) application, and not just the libxrpl library. |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
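For example, a configure step combining several of these options, following the single-configuration workflow shown earlier (the option values here are illustrative, not recommendations):
```
# Illustrative non-unity Debug configure with assertions and tests enabled.
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
      -DCMAKE_BUILD_TYPE=Debug \
      -Dassert=ON -Dunity=OFF -Dtests=ON -Dxrpld=ON \
      ..
cmake --build . --parallel
```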
## Troubleshooting
### Conan
After any updates or changes to dependencies, you may need to do the following:
1. Remove your build directory.
2. Remove the Conan cache:
```
rm -rf ~/.conan/data
```
3. Re-run [conan install](#build-and-test).
### no std::result_of
If your compiler version is recent enough to have removed `std::result_of` as
part of C++20, e.g. Apple Clang 15.0,
then you might need to add a preprocessor definition to your bulid:
part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor
definition to your build.
```
conan profile update 'options.boost:extra_b2_flags="define=BOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
```
#### recompile with -fPIC
### call to 'async_teardown' is ambiguous
If you are compiling with an early version of Clang 16, then you might hit
a [regression][6] when compiling C++20 that manifests as an [error in a Boost
header][7]. You can workaround it by adding this preprocessor definition:
```
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
```
### recompile with -fPIC
If you get a linker error suggesting that you recompile Boost with
position-independent code, such as:
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):
requires unsupported dynamic reloc 11; recompile with -fPIC
```
If you get a linker error like the one above suggesting that you recompile
Boost with position-independent code, the reason is most likely that Conan
downloaded a bad binary distribution of the dependency.
For now, this seems to be a [bug][1] in Conan just for Boost 1.77.0 compiled
with GCC for Linux.
The solution is to build the dependency locally by passing `--build boost`
when calling `conan install`:
Conan most likely downloaded a bad binary distribution of the dependency.
This seems to be a [bug][1] in Conan just for Boost 1.77.0 compiled with GCC
for Linux. The solution is to build the dependency locally by passing
`--build boost` when calling `conan install`.
```
conan install --build boost ...
```
## How to add a dependency
## Add a Dependency
If you want to experiment with a new package, here are the steps to get it
working:
If you want to experiment with a new package, follow these steps:
1. Search for the package on [Conan Center](https://conan.io/center/).
1. In [`conanfile.py`](./conanfile.py):
1. Add a version of the package to the `requires` property.
1. Change any default options for the package by adding them to the
`default_options` property (with syntax `'$package:$option': $value`)
1. In [`CMakeLists.txt`](./CMakeLists.txt):
1. Add a call to `find_package($package REQUIRED)`.
1. Link a library from the package to the target `ripple_libs` (search for
the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
1. Start coding! Don't forget to include whatever headers you need from the
package.
2. Modify [`conanfile.py`](./conanfile.py):
- Add a version of the package to the `requires` property.
- Change any default options for the package by adding them to the
`default_options` property (with syntax `'$package:$option': $value`).
3. Modify [`CMakeLists.txt`](./CMakeLists.txt):
- Add a call to `find_package($package REQUIRED)`.
- Link a library from the package to the target `ripple_libs`
(search for the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
4. Start coding! Don't forget to include whatever headers you need from the package.
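Before editing `conanfile.py`, it can help to check which versions of the package are available; a sketch, assuming the Conan 1.x client with the default `conancenter` remote configured (the package name is only an example):
```
# List available versions of a package on Conan Center.
conan search "zlib*" --remote conancenter
```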
[1]: https://github.com/conan-io/conan-center-index/issues/13168
[2]: https://en.cppreference.com/w/cpp/compiler_support/20
[3]: https://docs.conan.io/en/latest/getting_started.html
[5]: https://en.wikipedia.org/wiki/Unity_build
[6]: https://github.com/boostorg/beast/issues/2648
[7]: https://github.com/boostorg/beast/issues/2661
[gcovr]: https://gcovr.com/en/stable/getting-started.html
[python-pip]: https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/
[build_type]: https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
[runtime]: https://cmake.org/cmake/help/latest/variable/CMAKE_MSVC_RUNTIME_LIBRARY.html
[toolchain]: https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html
[pcf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-configuration-file
[pvf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-version-file
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[search]: https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure
[prefix_path]: https://cmake.org/cmake/help/latest/variable/CMAKE_PREFIX_PATH.html
[profile]: https://docs.conan.io/en/latest/reference/profiles.html


@@ -1,207 +0,0 @@
macro(group_sources_in source_dir curdir)
file(GLOB children RELATIVE ${source_dir}/${curdir}
${source_dir}/${curdir}/*)
foreach (child ${children})
if (IS_DIRECTORY ${source_dir}/${curdir}/${child})
group_sources_in(${source_dir} ${curdir}/${child})
else()
string(REPLACE "/" "\\" groupname ${curdir})
source_group(${groupname} FILES
${source_dir}/${curdir}/${child})
endif()
endforeach()
endmacro()
macro(group_sources curdir)
group_sources_in(${PROJECT_SOURCE_DIR} ${curdir})
endmacro()
macro (exclude_from_default target_)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endmacro ()
macro (exclude_if_included target_)
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
exclude_from_default (${target_})
endif ()
endmacro ()
function (print_ep_logs _target)
ExternalProject_Get_Property (${_target} STAMP_DIR)
add_custom_command(TARGET ${_target} POST_BUILD
COMMENT "${_target} BUILD OUTPUT"
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-out.log
-P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/echo_file.cmake
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-err.log
-P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/echo_file.cmake)
endfunction ()
#[=========================================================[
This is a function override for one function in the
standard ExternalProject module. We want to change
the generated build script slightly to include printing
the build logs in the case of failure. Those modifications
have been made here. This function override could break
in the future if the ExternalProject module changes internal
function names or changes the way it generates the build
scripts.
See:
https://gitlab.kitware.com/cmake/cmake/blob/df1ddeec128d68cc636f2dde6c2acd87af5658b6/Modules/ExternalProject.cmake#L1855-1952
#]=========================================================]
function(_ep_write_log_script name step cmd_var)
ExternalProject_Get_Property(${name} stamp_dir)
set(command "${${cmd_var}}")
set(make "")
set(code_cygpath_make "")
if(command MATCHES "^\\$\\(MAKE\\)")
# GNU make recognizes the string "$(MAKE)" as recursive make, so
# ensure that it appears directly in the makefile.
string(REGEX REPLACE "^\\$\\(MAKE\\)" "\${make}" command "${command}")
set(make "-Dmake=$(MAKE)")
if(WIN32 AND NOT CYGWIN)
set(code_cygpath_make "
if(\${make} MATCHES \"^/\")
execute_process(
COMMAND cygpath -w \${make}
OUTPUT_VARIABLE cygpath_make
ERROR_VARIABLE cygpath_make
RESULT_VARIABLE cygpath_error
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if(NOT cygpath_error)
set(make \${cygpath_make})
endif()
endif()
")
endif()
endif()
set(config "")
if("${CMAKE_CFG_INTDIR}" MATCHES "^\\$")
string(REPLACE "${CMAKE_CFG_INTDIR}" "\${config}" command "${command}")
set(config "-Dconfig=${CMAKE_CFG_INTDIR}")
endif()
# Wrap multiple 'COMMAND' lines up into a second-level wrapper
# script so all output can be sent to one log file.
if(command MATCHES "(^|;)COMMAND;")
set(code_execute_process "
${code_cygpath_make}
execute_process(COMMAND \${command} RESULT_VARIABLE result)
if(result)
set(msg \"Command failed (\${result}):\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
message(FATAL_ERROR \"\${msg}\")
endif()
")
set(code "")
set(cmd "")
set(sep "")
foreach(arg IN LISTS command)
if("x${arg}" STREQUAL "xCOMMAND")
if(NOT "x${cmd}" STREQUAL "x")
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
endif()
set(cmd "")
set(sep "")
else()
string(APPEND cmd "${sep}${arg}")
set(sep ";")
endif()
endforeach()
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
file(GENERATE OUTPUT "${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake" CONTENT "${code}")
set(command ${CMAKE_COMMAND} "-Dmake=\${make}" "-Dconfig=\${config}" -P ${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake)
endif()
# Wrap the command in a script to log output to files.
set(script ${stamp_dir}/${name}-${step}-$<CONFIG>.cmake)
set(logbase ${stamp_dir}/${name}-${step})
set(code "
${code_cygpath_make}
function (_echo_file _fil)
file (READ \${_fil} _cont)
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"\${_cont}\")
endfunction ()
set(command \"${command}\")
execute_process(
COMMAND \${command}
RESULT_VARIABLE result
OUTPUT_FILE \"${logbase}-out.log\"
ERROR_FILE \"${logbase}-err.log\"
)
if(result)
set(msg \"Command failed: \${result}\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"Build output for ${logbase} : \")
_echo_file (\"${logbase}-out.log\")
_echo_file (\"${logbase}-err.log\")
set(msg \"\${msg}\\nSee above\\n\")
message(FATAL_ERROR \"\${msg}\")
else()
set(msg \"${name} ${step} command succeeded. See also ${logbase}-*.log\")
message(STATUS \"\${msg}\")
endif()
")
file(GENERATE OUTPUT "${script}" CONTENT "${code}")
set(command ${CMAKE_COMMAND} ${make} ${config} -P ${script})
set(${cmd_var} "${command}" PARENT_SCOPE)
endfunction()
find_package(Git)
# function that calls git log to get current hash
function (git_hash hash_val)
# note: optional second extra string argument not in signature
if (NOT GIT_FOUND)
return ()
endif ()
set (_hash "")
set (_format "%H")
if (ARGC GREATER_EQUAL 2)
string (TOLOWER ${ARGV1} _short)
if (_short STREQUAL "short")
set (_format "%h")
endif ()
endif ()
execute_process (COMMAND ${GIT_EXECUTABLE} "log" "--pretty=${_format}" "-n1"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_hash
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_hash ${_temp_hash})
endif ()
set (${hash_val} "${_hash}" PARENT_SCOPE)
endfunction ()
function (git_branch branch_val)
if (NOT GIT_FOUND)
return ()
endif ()
set (_branch "")
execute_process (COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_branch ${_temp_branch})
endif ()
set (${branch_val} "${_branch}" PARENT_SCOPE)
endfunction ()

View File

@@ -1,60 +0,0 @@
#[=========================================================[
SQLite doesn't provide build files in the
standard source-only distribution, so we wrote
a simple cmake file and copy it into the
external project folder so that ExternalProject
can use it to build the lib.
#]=========================================================]
add_library (sqlite3 STATIC sqlite3.c)
#[=========================================================[
When compiled with SQLITE_THREADSAFE=1, SQLite operates
in serialized mode. In this mode, SQLite can be safely
used by multiple threads with no restriction.
NOTE: This implies a global mutex!
When compiled with SQLITE_THREADSAFE=2, SQLite can be
used in a multithreaded program so long as no two
threads attempt to use the same database connection at
the same time.
NOTE: This is the preferred threading model, but not
currently enabled because we need to investigate our
use-model and concurrency requirements.
TODO: consider whether any other options should be
used: https://www.sqlite.org/compile.html
#]=========================================================]
target_compile_definitions (sqlite3
PRIVATE
SQLITE_THREADSAFE=1
HAVE_USLEEP=1)
target_compile_options (sqlite3
PRIVATE
$<$<BOOL:${MSVC}>:
-wd4100
-wd4127
-wd4232
-wd4244
-wd4701
-wd4706
-wd4996
>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-array-bounds>)
install (
TARGETS
sqlite3
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
install (
FILES
sqlite3.h
sqlite3ext.h
DESTINATION include)

File diff suppressed because it is too large

View File

@@ -1,98 +0,0 @@
#[===================================================================[
coverage report target
#]===================================================================]
if (coverage)
if (is_clang)
if (APPLE)
execute_process (COMMAND xcrun -f llvm-profdata
OUTPUT_VARIABLE LLVM_PROFDATA
OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program (LLVM_PROFDATA llvm-profdata)
endif ()
if (NOT LLVM_PROFDATA)
message (WARNING "unable to find llvm-profdata - skipping coverage_report target")
endif ()
if (APPLE)
execute_process (COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVM_COV
OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program (LLVM_COV llvm-cov)
endif ()
if (NOT LLVM_COV)
message (WARNING "unable to find llvm-cov - skipping coverage_report target")
endif ()
set (extract_pattern "")
if (coverage_core_only)
set (extract_pattern "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/")
endif ()
if (LLVM_COV AND LLVM_PROFDATA)
add_custom_target (coverage_report
USES_TERMINAL
COMMAND ${CMAKE_COMMAND} -E echo "Generating coverage - results will be in ${CMAKE_BINARY_DIR}/coverage/index.html."
COMMAND ${CMAKE_COMMAND} -E echo "Running rippled tests."
COMMAND rippled --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --quiet --unittest-log
COMMAND ${LLVM_PROFDATA}
merge -sparse default.profraw -o rip.profdata
COMMAND ${CMAKE_COMMAND} -E echo "Summary of coverage:"
COMMAND ${LLVM_COV}
report -instr-profile=rip.profdata
$<TARGET_FILE:rippled> ${extract_pattern}
# generate html report
COMMAND ${LLVM_COV}
show -format=html -output-dir=${CMAKE_BINARY_DIR}/coverage
-instr-profile=rip.profdata
$<TARGET_FILE:rippled> ${extract_pattern}
BYPRODUCTS coverage/index.html)
endif ()
elseif (is_gcc)
find_program (LCOV lcov)
if (NOT LCOV)
message (WARNING "unable to find lcov - skipping coverage_report target")
endif ()
find_program (GENHTML genhtml)
if (NOT GENHTML)
message (WARNING "unable to find genhtml - skipping coverage_report target")
endif ()
set (extract_pattern "*")
if (coverage_core_only)
set (extract_pattern "*/src/ripple/*")
endif ()
if (LCOV AND GENHTML)
add_custom_target (coverage_report
USES_TERMINAL
COMMAND ${CMAKE_COMMAND} -E echo "Generating coverage- results will be in ${CMAKE_BINARY_DIR}/coverage/index.html."
# create baseline info file
COMMAND ${LCOV}
--no-external -d "${CMAKE_CURRENT_SOURCE_DIR}" -c -d . -i -o baseline.info
| grep -v "ignoring data for external file"
# run tests
COMMAND ${CMAKE_COMMAND} -E echo "Running rippled tests for coverage report."
COMMAND rippled --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --quiet --unittest-log
# Create test coverage data file
COMMAND ${LCOV}
--no-external -d "${CMAKE_CURRENT_SOURCE_DIR}" -c -d . -o tests.info
| grep -v "ignoring data for external file"
# Combine baseline and test coverage data
COMMAND ${LCOV}
-a baseline.info -a tests.info -o lcov-all.info
# extract our files
COMMAND ${LCOV}
-e lcov-all.info "${extract_pattern}" -o lcov.info
COMMAND ${CMAKE_COMMAND} -E echo "Summary of coverage:"
COMMAND ${LCOV} --summary lcov.info
# generate HTML report
COMMAND ${GENHTML}
-o ${CMAKE_BINARY_DIR}/coverage lcov.info
BYPRODUCTS coverage/index.html)
endif ()
endif ()
endif ()

View File

@@ -1,39 +0,0 @@
#[===================================================================[
multiconfig misc
#]===================================================================]
if (is_multiconfig)
# This code finds all source files in the src subdirectory for inclusion
# in the IDE file tree as non-compiled sources. Since this file list will
# have some overlap with files we have already added to our targets to
# be compiled, we explicitly remove any of these target source files from
# this list.
file (GLOB_RECURSE all_sources RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}
CONFIGURE_DEPENDS
src/*.* Builds/*.md docs/*.md src/*.md Builds/*.cmake)
file(GLOB md_files RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} CONFIGURE_DEPENDS
*.md)
LIST(APPEND all_sources ${md_files})
foreach (_target secp256k1::secp256k1 ed25519::ed25519 pbufs xrpl_core rippled)
get_target_property (_type ${_target} TYPE)
if(_type STREQUAL "INTERFACE_LIBRARY")
continue()
endif()
get_target_property (_src ${_target} SOURCES)
list (REMOVE_ITEM all_sources ${_src})
endforeach ()
target_sources (rippled PRIVATE ${all_sources})
set_property (
SOURCE ${all_sources}
APPEND
PROPERTY HEADER_FILE_ONLY true)
if (MSVC)
set_property(
DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
PROPERTY VS_STARTUP_PROJECT rippled)
endif ()
group_sources(src)
group_sources(docs)
group_sources(Builds)
endif ()

View File

@@ -1,180 +0,0 @@
#[===================================================================[
package/container targets - (optional)
#]===================================================================]
if (is_root_project)
if (NOT DOCKER)
find_program (DOCKER docker)
endif ()
if (DOCKER)
# if no container label is provided, use current git hash
git_hash (commit_hash)
if (NOT container_label)
set (container_label ${commit_hash})
endif ()
message (STATUS "using [${container_label}] as build container tag...")
file (MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/packages)
#[===================================================================[
rpm
#]===================================================================]
add_custom_target (rpm_container
docker build
--pull
--build-arg GIT_COMMIT=${commit_hash}
-t rippleci/rippled-rpm-builder:${container_label}
$<$<BOOL:${rpm_cache_from}>:--cache-from=${rpm_cache_from}>
-f centos-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/centos-builder/Dockerfile
Builds/containers/centos-builder/centos_setup.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/shared/update_sources.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/rippled-reporting.service
Builds/containers/packaging/rpm/rippled.spec
Builds/containers/packaging/rpm/build_rpm.sh
Builds/containers/packaging/rpm/50-rippled.preset
Builds/containers/packaging/rpm/50-rippled-reporting.preset
bin/getRippledInfo
)
exclude_from_default (rpm_container)
add_custom_target (rpm
docker run
-v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
-t rippleci/rippled-rpm-builder:${container_label}
/bin/bash -c "cp -fpu rippled/Builds/containers/packaging/rpm/build_rpm.sh . && ./build_rpm.sh"
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/rpm/rippled.spec
)
exclude_from_default (rpm)
if (NOT have_package_container)
add_dependencies(rpm rpm_container)
endif ()
#[===================================================================[
dpkg
#]===================================================================]
# currently use an ubuntu LTS image as the base (see DIST_TAG below)
# because it keeps the required libc version relatively low among
# ubuntu and debian releases. we could change this in the future and
# build with some other deb based system.
add_custom_target (dpkg_container
docker build
--pull
--build-arg DIST_TAG=20.04
--build-arg GIT_COMMIT=${commit_hash}
-t rippleci/rippled-dpkg-builder:${container_label}
$<$<BOOL:${dpkg_cache_from}>:--cache-from=${dpkg_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/dpkg/debian/rippled-reporting.links
Builds/containers/packaging/dpkg/debian/copyright
Builds/containers/packaging/dpkg/debian/rules
Builds/containers/packaging/dpkg/debian/rippled-reporting.install
Builds/containers/packaging/dpkg/debian/rippled-reporting.postinst
Builds/containers/packaging/dpkg/debian/rippled.links
Builds/containers/packaging/dpkg/debian/rippled.prerm
Builds/containers/packaging/dpkg/debian/rippled.postinst
Builds/containers/packaging/dpkg/debian/rippled-dev.install
Builds/containers/packaging/dpkg/debian/dirs
Builds/containers/packaging/dpkg/debian/rippled.postrm
Builds/containers/packaging/dpkg/debian/rippled.conffiles
Builds/containers/packaging/dpkg/debian/compat
Builds/containers/packaging/dpkg/debian/source/format
Builds/containers/packaging/dpkg/debian/source/local-options
Builds/containers/packaging/dpkg/debian/README.Debian
Builds/containers/packaging/dpkg/debian/rippled.install
Builds/containers/packaging/dpkg/debian/rippled.preinst
Builds/containers/packaging/dpkg/debian/docs
Builds/containers/packaging/dpkg/debian/control
Builds/containers/packaging/dpkg/debian/rippled-reporting.dirs
Builds/containers/packaging/dpkg/build_dpkg.sh
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
bin/getRippledInfo
Builds/containers/shared/install_cmake.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/shared/update_sources.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/rippled-reporting.service
Builds/containers/shared/rippled-logrotate
Builds/containers/shared/update-rippled-cron
)
exclude_from_default (dpkg_container)
add_custom_target (dpkg
docker run
-v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
-t rippleci/rippled-dpkg-builder:${container_label}
/bin/bash -c "cp -fpu rippled/Builds/containers/packaging/dpkg/build_dpkg.sh . && ./build_dpkg.sh"
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/dpkg/debian/control
)
exclude_from_default (dpkg)
if (NOT have_package_container)
add_dependencies(dpkg dpkg_container)
endif ()
#[===================================================================[
ci container
#]===================================================================]
# reuse the same ubuntu builder image (and DIST_TAG) for our CI
# docker images.
#
# the following steps assume the github pkg repo, but it's possible to
# adapt these for other docker hub repositories.
#
# steps for publishing a new CI image when you make changes:
#
# mkdir bld.ci && cd bld.ci && cmake -Dpackages_only=ON -Dcontainer_label=CI_LATEST
# cmake --build . --target ci_container --verbose
# docker tag rippled-ci-builder:CI_LATEST <HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD
# (NOTE: change YYYY-MM-DD to match current date, or use a different
# tag/version scheme if you prefer)
# docker push <HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD
# (NOTE: <HUB REPO PATH> is probably your user or org name if using
# docker hub, or it might be something like
# docker.pkg.github.com/ripple/rippled if using the github pkg
# registry. for any registry, you will need to be logged-in via
# docker and have push access.)
#
# ...then change the DOCKER_IMAGE line in .travis.yml :
# - DOCKER_IMAGE="<HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD"
add_custom_target (ci_container
docker build
--pull
--build-arg DIST_TAG=20.04
--build-arg GIT_COMMIT=${commit_hash}
--build-arg CI_USE=true
-t rippled-ci-builder:${container_label}
$<$<BOOL:${ci_cache_from}>:--cache-from=${ci_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
)
exclude_from_default (ci_container)
else ()
message (STATUS "docker NOT found -- won't be able to build containers for packaging")
endif ()
endif ()

View File

@@ -1,122 +0,0 @@
#[===================================================================[
declare user options/settings
#]===================================================================]
option (assert "Enables asserts, even in release builds" OFF)
option (reporting "Build rippled with reporting mode enabled" OFF)
option (tests "Build tests" ON)
option (unity "Creates a build using UNITY support in cmake. This is the default" ON)
if (unity)
if (NOT is_ci)
set (CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif ()
endif ()
if (is_gcc OR is_clang)
option (coverage "Generates coverage info." OFF)
option (profile "Add profiling flags" OFF)
set (coverage_test "" CACHE STRING
"On gcc & clang, the specific unit test(s) to run for coverage. Default is all tests.")
if (coverage_test AND NOT coverage)
set (coverage ON CACHE BOOL "gcc/clang only" FORCE)
endif ()
option (coverage_core_only
"Include only src/ripple files when generating coverage report. \
Set to OFF to include all sources in coverage report."
ON)
option (wextra "compile with extra gcc/clang warnings enabled" ON)
else ()
set (profile OFF CACHE BOOL "gcc/clang only" FORCE)
set (coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set (wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif ()
if (is_linux)
option (BUILD_SHARED_LIBS "build shared ripple libraries" OFF)
option (static "link protobuf, openssl, libc++, and boost statically" ON)
option (perf "Enables flags that assist with perf recording" OFF)
option (use_gold "enables detection of gold (binutils) linker" ON)
else ()
# we are not ready to allow shared-libs on windows because it would require
# export declarations. On macos it's more feasible, but static openssl
# produces odd linker errors, thus we disable shared lib builds for now.
set (BUILD_SHARED_LIBS OFF CACHE BOOL "build shared ripple libraries - OFF for win/macos" FORCE)
set (static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set (perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set (use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
endif ()
if (is_clang)
option (use_lld "enables detection of lld linker" ON)
else ()
set (use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif ()
option (jemalloc "Enables jemalloc for heap profiling" OFF)
option (werr "treat warnings as errors" OFF)
option (local_protobuf
"Force a local build of protobuf instead of looking for an installed version." OFF)
option (local_grpc
"Force a local build of gRPC instead of looking for an installed version." OFF)
# this one is a string and therefore can't be an option
set (san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property (CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if (san)
string (TOLOWER ${san} san)
set (SAN_FLAG "-fsanitize=${san}")
set (SAN_LIB "")
if (is_gcc)
if (san STREQUAL "address")
set (SAN_LIB "asan")
elseif (san STREQUAL "thread")
set (SAN_LIB "tsan")
elseif (san STREQUAL "memory")
set (SAN_LIB "msan")
elseif (san STREQUAL "undefined")
set (SAN_LIB "ubsan")
endif ()
endif ()
set (_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set (CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag (${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set (CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if (NOT COMPILER_SUPPORTS_SAN)
message (FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif ()
endif ()
set (container_label "" CACHE STRING "tag to use for package building containers")
option (packages_only
"ONLY generate package building targets. This is special use-case and almost \
certainly not what you want. Use with caution as you won't be able to build \
any compiled targets locally." OFF)
option (have_package_container
"Sometimes you already have the tagged container you want to use for package \
building and you don't want docker to rebuild it. This flag will detach the \
dependency of the package build from the container build. It's an advanced \
use case and most likely you should not be touching this flag." OFF)
# the remaining options are obscure and rarely used
option (beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF)
option (single_io_service_thread
"Restricts the number of threads calling io_service::run to one. \
This can be useful when debugging."
OFF)
option (boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF)
option (beast_hashers
"Use local implementations for sha/ripemd hashes (experimental, not recommended)"
OFF)
if (WIN32)
option (beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else ()
set (beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif ()
if (coverage)
message (STATUS "coverage build requested - forcing Debug build")
set (CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif ()

View File

@@ -1,15 +0,0 @@
#[===================================================================[
read version from source
#]===================================================================]
file (STRINGS src/ripple/protocol/impl/BuildInfo.cpp BUILD_INFO)
foreach (line_ ${BUILD_INFO})
if (line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set (rippled_version ${CMAKE_MATCH_1})
endif ()
endforeach ()
if (rippled_version)
message (STATUS "rippled version: ${rippled_version}")
else ()
message (FATAL_ERROR "unable to determine rippled version")
endif ()

View File

@@ -1,106 +0,0 @@
################################################################################
# SociConfig.cmake - CMake build configuration of SOCI library
################################################################################
# Copyright (C) 2010 Mateusz Loskot <mateusz@loskot.net>
#
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or copy at
# http://www.boost.org/LICENSE_1_0.txt)
################################################################################
include(CheckCXXSymbolExists)
if(WIN32)
check_cxx_symbol_exists("_M_AMD64" "" SOCI_TARGET_ARCH_X64)
if(NOT SOCI_TARGET_ARCH_X64)
check_cxx_symbol_exists("_M_IX86" "" SOCI_TARGET_ARCH_X86)
endif(NOT SOCI_TARGET_ARCH_X64)
# add check for arm here
# see http://msdn.microsoft.com/en-us/library/b0084kay.aspx
else(WIN32)
check_cxx_symbol_exists("__i386__" "" SOCI_TARGET_ARCH_X86)
check_cxx_symbol_exists("__x86_64__" "" SOCI_TARGET_ARCH_X64)
check_cxx_symbol_exists("__arm__" "" SOCI_TARGET_ARCH_ARM)
endif(WIN32)
if(NOT DEFINED LIB_SUFFIX)
if(SOCI_TARGET_ARCH_X64)
set(_lib_suffix "64")
else()
set(_lib_suffix "")
endif()
set(LIB_SUFFIX ${_lib_suffix} CACHE STRING "Specifies suffix for the lib directory")
endif()
#
# C++11 Option
#
if(NOT SOCI_CXX_C11)
set (SOCI_CXX_C11 OFF CACHE BOOL "Build to the C++11 standard")
endif()
#
# Force compilation flags and set desired warnings level
#
if (MSVC)
add_definitions(-D_CRT_SECURE_NO_DEPRECATE)
add_definitions(-D_CRT_SECURE_NO_WARNINGS)
add_definitions(-D_CRT_NONSTDC_NO_WARNING)
add_definitions(-D_SCL_SECURE_NO_WARNINGS)
if(CMAKE_CXX_FLAGS MATCHES "/W[0-4]")
string(REGEX REPLACE "/W[0-4]" "/W4" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /W4 /we4266")
endif()
else()
set(SOCI_GCC_CLANG_COMMON_FLAGS "")
# "-pedantic -Werror -Wno-error=parentheses -Wall -Wextra -Wpointer-arith -Wcast-align -Wcast-qual -Wfloat-equal -Woverloaded-virtual -Wredundant-decls -Wno-long-long")
if (SOCI_CXX_C11)
set(SOCI_CXX_VERSION_FLAGS "-std=c++11")
else()
set(SOCI_CXX_VERSION_FLAGS "-std=gnu++98")
endif()
if("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang" OR "${CMAKE_CXX_COMPILER}" MATCHES "clang")
if(NOT CMAKE_CXX_COMPILER_VERSION LESS 3.1 AND SOCI_ASAN)
set(SOCI_GCC_CLANG_COMMON_FLAGS "${SOCI_GCC_CLANG_COMMON_FLAGS} -fsanitize=address")
endif()
# enforce C++11 for Clang
set(SOCI_CXX_C11 ON)
set(SOCI_CXX_VERSION_FLAGS "-std=c++11")
add_definitions(-DCATCH_CONFIG_CPP11_NO_IS_ENUM)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SOCI_GCC_CLANG_COMMON_FLAGS} ${SOCI_CXX_VERSION_FLAGS}")
elseif(CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_GNUCXX)
if(NOT CMAKE_CXX_COMPILER_VERSION LESS 4.8 AND SOCI_ASAN)
set(SOCI_GCC_CLANG_COMMON_FLAGS "${SOCI_GCC_CLANG_COMMON_FLAGS} -fsanitize=address")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SOCI_GCC_CLANG_COMMON_FLAGS} ${SOCI_CXX_VERSION_FLAGS} ")
if (CMAKE_COMPILER_IS_GNUCXX)
if (CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-variadic-macros")
endif()
endif()
else()
message(WARNING "Unknown toolset - using default flags to build SOCI")
endif()
endif()
# Set SOCI_HAVE_* variables for soci-config.h generator
set(SOCI_HAVE_CXX_C11 ${SOCI_CXX_C11} CACHE INTERNAL "Enables C++11 support")

View File

@@ -1,22 +0,0 @@
find_package(Protobuf 3.8)
file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/proto_gen)
set(ccbd ${CMAKE_CURRENT_BINARY_DIR})
set(CMAKE_CURRENT_BINARY_DIR ${CMAKE_BINARY_DIR}/proto_gen)
protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS src/ripple/proto/ripple.proto)
set(CMAKE_CURRENT_BINARY_DIR ${ccbd})
add_library(pbufs STATIC ${PROTO_SRCS} ${PROTO_HDRS})
target_include_directories(pbufs SYSTEM PUBLIC
${CMAKE_BINARY_DIR}/proto_gen
${CMAKE_BINARY_DIR}/proto_gen/src/ripple/proto
)
target_link_libraries(pbufs protobuf::libprotobuf)
target_compile_options(pbufs
PUBLIC
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
)
add_library(Ripple::pbufs ALIAS pbufs)

View File

@@ -1,62 +0,0 @@
find_package(gRPC 1.23)
#[=================================[
generate protobuf sources for
grpc defs and bundle into a
static lib
#]=================================]
set(GRPC_GEN_DIR "${CMAKE_BINARY_DIR}/proto_gen_grpc")
file(MAKE_DIRECTORY ${GRPC_GEN_DIR})
set(GRPC_PROTO_SRCS)
set(GRPC_PROTO_HDRS)
set(GRPC_PROTO_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/proto/org")
file(GLOB_RECURSE GRPC_DEFINITION_FILES LIST_DIRECTORIES false "${GRPC_PROTO_ROOT}/*.proto")
foreach(file ${GRPC_DEFINITION_FILES})
get_filename_component(_abs_file ${file} ABSOLUTE)
get_filename_component(_abs_dir ${_abs_file} DIRECTORY)
get_filename_component(_basename ${file} NAME_WE)
get_filename_component(_proto_inc ${GRPC_PROTO_ROOT} DIRECTORY) # updir one level
file(RELATIVE_PATH _rel_root_file ${_proto_inc} ${_abs_file})
get_filename_component(_rel_root_dir ${_rel_root_file} DIRECTORY)
file(RELATIVE_PATH _rel_dir ${CMAKE_CURRENT_SOURCE_DIR} ${_abs_dir})
set(src_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.cc")
set(src_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.cc")
set(hdr_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.h")
set(hdr_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.h")
add_custom_command(
OUTPUT ${src_1} ${src_2} ${hdr_1} ${hdr_2}
COMMAND protobuf::protoc
ARGS --grpc_out=${GRPC_GEN_DIR}
--cpp_out=${GRPC_GEN_DIR}
--plugin=protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
-I ${_proto_inc} -I ${_rel_dir}
${_abs_file}
DEPENDS ${_abs_file} protobuf::protoc gRPC::grpc_cpp_plugin
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
COMMENT "Running gRPC C++ protocol buffer compiler on ${file}"
VERBATIM)
set_source_files_properties(${src_1} ${src_2} ${hdr_1} ${hdr_2} PROPERTIES GENERATED TRUE)
list(APPEND GRPC_PROTO_SRCS ${src_1} ${src_2})
list(APPEND GRPC_PROTO_HDRS ${hdr_1} ${hdr_2})
endforeach()
add_library(grpc_pbufs STATIC ${GRPC_PROTO_SRCS} ${GRPC_PROTO_HDRS})
#target_include_directories(grpc_pbufs PRIVATE src)
target_include_directories(grpc_pbufs SYSTEM PUBLIC ${GRPC_GEN_DIR})
target_link_libraries(grpc_pbufs
"gRPC::grpc++"
# libgrpc is missing references.
absl::random_random
)
target_compile_options(grpc_pbufs
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>)
add_library(Ripple::grpc_pbufs ALIAS grpc_pbufs)

View File

@@ -1,17 +0,0 @@
#[=========================================================[
This is a CMake script file that is used to write
the contents of a file to stdout (using the cmake
echo command). The input file is passed via the
IN_FILE variable.
#]=========================================================]
if (EXISTS ${IN_FILE})
file (READ ${IN_FILE} contents)
## only print files that actually have some text in them
if (contents MATCHES "[a-z0-9A-Z]+")
execute_process(
COMMAND
${CMAKE_COMMAND} -E echo "${contents}")
endif ()
endif ()

View File

@@ -1,405 +0,0 @@
#!/usr/bin/env python
# This file is part of rippled: https://github.com/ripple/rippled
# Copyright (c) 2012 - 2017 Ripple Labs Inc.
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""
Invocation:
./Builds/Test.py - builds and tests all configurations
The build must succeed without shell aliases for this to work.
To pass flags to cmake, put them at the very end of the command line, after
the -- flag - like this:
./Builds/Test.py -- -j4 # Pass -j4 to cmake --build
Common problems:
1) Boost not found. Solution: export BOOST_ROOT=[path to boost folder]
2) OpenSSL not found. Solution: export OPENSSL_ROOT=[path to OpenSSL folder]
3) cmake is not found. Solution: Be sure cmake directory is on your $PATH
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import itertools
import os
import platform
import re
import shutil
import sys
import subprocess
def powerset(iterable):
"""powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"""
s = list(iterable)
return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s) + 1))
IS_WINDOWS = platform.system().lower() == 'windows'
IS_OS_X = platform.system().lower() == 'darwin'
# CMake
if IS_WINDOWS:
CMAKE_UNITY_CONFIGS = ['Debug', 'Release']
CMAKE_NONUNITY_CONFIGS = ['Debug', 'Release']
else:
CMAKE_UNITY_CONFIGS = []
CMAKE_NONUNITY_CONFIGS = []
CMAKE_UNITY_COMBOS = { '' : [['rippled'], CMAKE_UNITY_CONFIGS],
'.nounity' : [['rippled'], CMAKE_NONUNITY_CONFIGS] }
if IS_WINDOWS:
CMAKE_DIR_TARGETS = { ('msvc' + unity,) : targets for unity, targets in
CMAKE_UNITY_COMBOS.items() }
elif IS_OS_X:
CMAKE_DIR_TARGETS = { (build + unity,) : targets
for build in ['debug', 'release']
for unity, targets in CMAKE_UNITY_COMBOS.items() }
else:
CMAKE_DIR_TARGETS = { (cc + "." + build + unity,) : targets
for cc in ['gcc', 'clang']
for build in ['debug', 'release', 'coverage', 'profile']
for unity, targets in CMAKE_UNITY_COMBOS.items() }
# list of tuples of all possible options
if IS_WINDOWS or IS_OS_X:
CMAKE_ALL_GENERATE_OPTIONS = [tuple(x) for x in powerset(['-GNinja', '-Dassert=true'])]
else:
CMAKE_ALL_GENERATE_OPTIONS = list(set(
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=address'])] +
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=thread'])]))
parser = argparse.ArgumentParser(
description='Test.py - run ripple tests'
)
parser.add_argument(
'--all', '-a',
action='store_true',
help='Build all configurations.',
)
parser.add_argument(
'--keep_going', '-k',
action='store_true',
help='Keep going after one configuration has failed.',
)
parser.add_argument(
'--silent', '-s',
action='store_true',
help='Silence all messages except errors',
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help=('Report more information about which commands are executed and the '
'results.'),
)
parser.add_argument(
'--test', '-t',
default='',
help='Add a prefix for unit tests',
)
parser.add_argument(
'--testjobs',
default='0',
type=int,
help='Run tests in parallel'
)
parser.add_argument(
'--ipv6',
action='store_true',
help='Use IPv6 localhost when running unit tests.',
)
parser.add_argument(
'--clean', '-c',
action='store_true',
help='delete all build artifacts after testing',
)
parser.add_argument(
'--quiet', '-q',
action='store_true',
help='Reduce output where possible (unit tests)',
)
parser.add_argument(
'--dir', '-d',
default=(),
nargs='*',
help='Specify one or more CMake dir names. '
'Will also be used as -Dtarget=<dir> running cmake.'
)
parser.add_argument(
'--target',
default=(),
nargs='*',
help='Specify one or more CMake build targets. '
'Will be used as --target <target> running cmake --build.'
)
parser.add_argument(
'--config',
default=(),
nargs='*',
help='Specify one or more CMake build configs. '
'Will be used as --config <config> running cmake --build.'
)
parser.add_argument(
'--generator_option',
action='append',
help='Specify a CMake generator option. Repeat for multiple options. '
'Will be passed to the cmake generator. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --generator_option=-GNinja.')
parser.add_argument(
'--build_option',
action='append',
help='Specify a build option. Repeat for multiple options. '
'Will be passed to the build tool via cmake --build. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --build_option=-j8.')
parser.add_argument(
'extra_args',
default=(),
nargs='*',
help='Extra arguments are passed through to the tools'
)
ARGS = parser.parse_args()
def decodeString(line):
# Python 2 vs. Python 3
if isinstance(line, str):
return line
else:
return line.decode()
def shell(cmd, args=(), silent=False, cust_env=None):
""""Execute a shell command and return the output."""
silent = ARGS.silent or silent
verbose = not silent and ARGS.verbose
if verbose:
print('$' + cmd, *args)
command = (cmd,) + args
# shell is needed in Windows to find executable in the path
process = subprocess.Popen(
command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env=cust_env,
shell=IS_WINDOWS)
lines = []
count = 0
# readline returns '' at EOF
for line in iter(process.stdout.readline, ''):
if process.poll() is None:
decoded = decodeString(line)
lines.append(decoded)
if verbose:
print(decoded, end='')
elif not silent:
count += 1
if count >= 80:
print()
count = 0
else:
print('.', end='')
else:
break
if not verbose and count:
print()
process.wait()
return process.returncode, lines
def get_cmake_dir(cmake_dir):
return os.path.join('build' , 'cmake' , cmake_dir)
def run_cmake(directory, cmake_dir, args):
print('Generating build in', directory, 'with', *args or ('default options',))
old_dir = os.getcwd()
if not os.path.exists(directory):
os.makedirs(directory)
os.chdir(directory)
if IS_WINDOWS and not any(arg.startswith("-G") for arg in args) and not os.path.exists("CMakeCache.txt"):
if '--ninja' in args:
args += ( '-GNinja', )
else:
args += ( '-GVisual Studio 14 2015 Win64', )
# hack to extract cmake options/args from the legacy target format
if re.search(r'\.unity', cmake_dir):
args += ( '-Dunity=ON', )
if re.search(r'\.nounity', cmake_dir):
args += ( '-Dunity=OFF', )
if re.search('coverage', cmake_dir):
args += ( '-Dcoverage=ON', )
if re.search('profile', cmake_dir):
args += ( '-Dprofile=ON', )
if re.search('debug', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Debug', )
if re.search('release', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Release', )
m = re.search('gcc(-[^.]*)', cmake_dir)
if m:
args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
'-DCMAKE_CXX_COMPILER=g++' + m.group(1), )
elif re.search('gcc', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=gcc', '-DCMAKE_CXX_COMPILER=g++', )
m = re.search('clang(-[^.]*)', cmake_dir)
if m:
args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
'-DCMAKE_CXX_COMPILER=clang++' + m.group(1), )
elif re.search('clang', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=clang', '-DCMAKE_CXX_COMPILER=clang++', )
args += ( os.path.join('..', '..', '..'), )
resultcode, lines = shell('cmake', args)
if resultcode:
print('Generating FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
os.chdir(old_dir)
def run_cmake_build(directory, target, config, args):
print('Building', target, config, 'in', directory, 'with', *args or ('default options',))
build_args=('--build', directory)
if target:
build_args += ('--target', target)
if config:
build_args += ('--config', config)
if args:
build_args += ('--',)
build_args += tuple(args)
resultcode, lines = shell('cmake', build_args)
if resultcode:
print('Build FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
def run_cmake_tests(directory, target, config):
failed = []
if IS_WINDOWS:
target += '.exe'
executable = os.path.join(directory, config if config else 'Debug', target)
if not os.path.exists(executable):
executable = os.path.join(directory, target)
print('Unit tests for', executable)
testflag = '--unittest'
quiet = ''
testjobs = ''
ipv6 = ''
if ARGS.test:
testflag += ('=' + ARGS.test)
if ARGS.quiet:
quiet = '-q'
if ARGS.ipv6:
ipv6 = '--unittest-ipv6'
if ARGS.testjobs:
testjobs = ('--unittest-jobs=' + str(ARGS.testjobs))
resultcode, lines = shell(executable, (testflag, quiet, testjobs, ipv6))
if resultcode:
if not ARGS.verbose:
print('ERROR:', *lines, sep='')
failed.append([target, 'unittest'])
return failed
def main():
all_failed = []
if ARGS.all:
build_dir_targets = CMAKE_DIR_TARGETS
generator_options = CMAKE_ALL_GENERATE_OPTIONS
else:
build_dir_targets = { tuple(ARGS.dir) : [ARGS.target, ARGS.config] }
if ARGS.generator_option:
generator_options = [tuple(ARGS.generator_option)]
else:
generator_options = [tuple()]
if not build_dir_targets:
# Let CMake choose the build tool.
build_dir_targets = { () : [] }
if ARGS.build_option:
ARGS.build_option = ARGS.build_option + list(ARGS.extra_args)
else:
ARGS.build_option = list(ARGS.extra_args)
for args in generator_options:
for build_dirs, (build_targets, build_configs) in build_dir_targets.items():
if not build_dirs:
build_dirs = ('default',)
if not build_targets:
build_targets = ('rippled',)
if not build_configs:
build_configs = ('',)
for cmake_dir in build_dirs:
cmake_full_dir = get_cmake_dir(cmake_dir)
run_cmake(cmake_full_dir, cmake_dir, args)
for target in build_targets:
for config in build_configs:
run_cmake_build(cmake_full_dir, target, config, ARGS.build_option)
failed = run_cmake_tests(cmake_full_dir, target, config)
if failed:
print('FAILED:', *(':'.join(f) for f in failed))
if not ARGS.keep_going:
sys.exit(1)
else:
all_failed.extend([decodeString(cmake_dir +
"." + target + "." + config), ':'.join(f)]
for f in failed)
else:
print('Success')
if ARGS.clean:
shutil.rmtree(cmake_full_dir)
if all_failed:
if len(all_failed) > 1:
print()
print('FAILED:', *(':'.join(f) for f in all_failed))
sys.exit(1)
if __name__ == '__main__':
main()
sys.exit(0)

View File

@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)

View File

@@ -1,45 +0,0 @@
{
// See https://go.microsoft.com//fwlink//?linkid=834763 for more information about this file.
"configurations": [
{
"name": "x64-Debug",
"generator": "Visual Studio 16 2019",
"configurationType": "Debug",
"inheritEnvironments": [ "msvc_x64_x64" ],
"buildRoot": "${thisFileDir}\\build\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v:minimal",
"ctestCommandArgs": "",
"variables": [
{
"name": "BOOST_ROOT",
"value": "C:\\lib\\boost"
},
{
"name": "OPENSSL_ROOT",
"value": "C:\\lib\\OpenSSL-Win64"
}
]
},
{
"name": "x64-Release",
"generator": "Visual Studio 16 2019",
"configurationType": "Release",
"inheritEnvironments": [ "msvc_x64_x64" ],
"buildRoot": "${thisFileDir}\\build\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v:minimal",
"ctestCommandArgs": "",
"variables": [
{
"name": "BOOST_ROOT",
"value": "C:\\lib\\boost"
},
{
"name": "OPENSSL_ROOT",
"value": "C:\\lib\\OpenSSL-Win64"
}
]
}
]
}

View File

@@ -1,263 +0,0 @@
# Visual Studio 2019 Build Instructions
## Important
We do not recommend Windows for rippled production use at this time. Currently,
the Ubuntu platform has received the highest level of quality assurance,
testing, and support. Additionally, 32-bit Windows versions are not supported.
## Prerequisites
To clone the source code repository, create branches for inspection or
modification, build rippled under Visual Studio, and run the unit tests you will
need these software components:
| Component | Minimum Recommended Version |
|-----------|-----------------------|
| [Visual Studio 2019](README.md#install-visual-studio-2019)| 15.5.4 |
| [Git for Windows](README.md#install-git-for-windows)| 2.16.1 |
| [OpenSSL Library](README.md#install-openssl) | 1.1.1L |
| [Boost library](README.md#build-boost) | 1.70.0 |
| [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.12 |
\* Only needed if you are not using the integrated CMake in VS 2019 and prefer to generate dedicated project/solution files.
## Install Software
### Install Visual Studio 2019
If not already installed on your system, download your choice of installer from
the [Visual Studio 2019
Download](https://www.visualstudio.com/downloads/download-visual-studio-vs)
page, run the installer, and follow the directions. **You may need to choose the
`Desktop development with C++` workload to install all necessary C++ features.**
Any version of Visual Studio 2019 may be used to build rippled. The **Visual
Studio 2019 Community** edition is available free of charge (see [the product
page](https://www.visualstudio.com/products/visual-studio-community-vs) for
licensing details), while paid editions may be used for an initial free-trial
period.
### Install Git for Windows
Git is a distributed revision control system. The Windows version also provides
the bash shell and many Windows versions of Unix commands. While there are other
varieties of Git (such as TortoiseGit, which has a native Windows interface and
integrates with the Explorer shell), we recommend installing [Git for
Windows](https://git-scm.com/) since it provides a Unix-like command line
environment useful for running shell scripts. Use of the bash shell under
Windows is mandatory for running the unit tests.
### Install OpenSSL
[Download the latest version of
OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html) There will be
several `Win64` variants available; you want the non-light
`v1.1` line. As of this writing, you **should** select
* Win64 OpenSSL v1.1.1q
and should **not** select
* Anything with "Win32" in the name
* Anything with "light" in the name
* Anything with "EXPERIMENTAL" in the name
* Anything in the 3.0 line - rippled won't currently build with this version.
Run the installer, and choose an appropriate location for your OpenSSL
installation. In this guide we use `C:\lib\OpenSSL-Win64` as the destination
location.
You may be informed on running the installer that "Visual C++ 2008
Redistributables" must first be installed first. If so, download it from the
[same page](http://slproweb.com/products/Win32OpenSSL.html), again making sure
to get the correct 32-/64-bit variant.
* NOTE: Since rippled links statically to OpenSSL, it does not matter where the
OpenSSL .DLL files are placed, or what version they are. rippled does not use
or require any external .DLL files to run other than the standard operating
system ones.
### Build Boost
Boost 1.70 or later is required.
[Download boost](http://www.boost.org/users/download/) and unpack it
to `c:\lib`. As of this writing, the most recent version of boost is 1.80.0,
which will unpack into a directory named `boost_1_80_0`. We recommend either
renaming this directory to `boost`, or creating a junction link `mklink /J boost
boost_1_80_0`, so that you can more easily switch between versions.
Next, open **Developer Command Prompt** and type the following commands
```powershell
cd C:\lib\boost
bootstrap
```
The rippled application is linked statically to the standard runtimes and
external dependencies on Windows, to ensure that the behavior of the executable
is not affected by changes in outside files. Therefore, it is necessary to build
the required boost static libraries using this command:
```powershell
b2 -j<Num Parallel> --toolset=msvc-14.2 address-model=64 architecture=x86 link=static threading=multi runtime-link=shared,static stage
```
where you should replace `<Num Parallel>` with the number of parallel
invocations to use for the build, e.g. `b2 -j8 ...` would use up to 8 concurrent
build shell commands.
Building the boost libraries may take considerable time. When the build process
is completed, take note of both the reported compiler include paths and linker
library paths as they will be required later.
### (Optional) Install CMake for Windows
[CMake](http://cmake.org) is a cross-platform build system generator. Visual
Studio 2019 includes an integrated version of CMake that avoids having to
manually run CMake, but it is undergoing continuous improvement. Users that
prefer to use standard Visual Studio project and solution files need to install
a dedicated version of CMake to generate them. The latest version can be found
at the [CMake download site](https://cmake.org/download/). It is recommended you
select the install option to add CMake to your path.
## Clone the rippled repository
If you are familiar with cloning github repositories, just follow your normal
process and clone `git@github.com:XRPLF/rippled.git`. Otherwise follow this
section for instructions.
1. If you don't have a github account, sign up for one at
[github.com](https://github.com/).
2. Make sure you have GitHub SSH keys. For help see
[generating-ssh-keys](https://help.github.com/articles/generating-ssh-keys).
Open the "Git Bash" shell that was installed with "Git for Windows" in the step
above. Navigate to the directory where you want to clone rippled (git bash uses
`/c` for Windows' `C:` drive and forward slashes where Windows uses backslashes, so
`C:\Users\joe\projs` would be `/c/Users/joe/projs` in git bash). Now clone the
repository and optionally switch to the *master* branch. Type the following at
the bash prompt:
```powershell
git clone git@github.com:XRPLF/rippled.git
cd rippled
```
If you receive an error about not having the "correct access rights" make sure
you have Github ssh keys, as described above.
For a stable release, choose the `master` branch or one of the tagged releases
listed on [rippled's GitHub page](https://github.com/ripple/rippled/releases).
```
git checkout master
```
To test the latest release candidate, choose the `release` branch.
```
git checkout release
```
If you are doing development work and want the latest set of beta features,
you can consider using the `develop` branch instead.
```
git checkout develop
```
# Build using Visual Studio integrated CMake
In Visual Studio 2017, Microsoft added [integrated IDE support for
cmake](https://blogs.msdn.microsoft.com/vcblog/2016/10/05/cmake-support-in-visual-studio/).
To begin, simply:
1. Launch Visual Studio and choose **File | Open | Folder**, navigating to the
cloned rippled folder.
2. Right-click on `CMakeLists.txt` in the **Solution Explorer - Folder View** to
generate a `CMakeSettings.json` file. A sample settings file is provided
[here](/Builds/VisualStudio2019/CMakeSettings-example.json). Customize the
settings for `BOOST_ROOT` and `OPENSSL_ROOT` to match the install paths if they
differ from those in the file.
3. Select either the `x64-Release` or `x64-Debug` configuration from the
**Project Settings** drop-down. This should invoke the built-in CMake project
generator. If not, you can right-click on the `CMakeLists.txt` file and
choose **Configure rippled**.
4. Select the `rippled.exe`
option in the **Select Startup Item** drop-down. This will be the target
built when you press F7. Alternatively, you can choose a target to build from
the top-level **CMake | Build** menu. Note that at this time, there are other
targets listed that come from third-party Visual Studio files embedded in the
rippled repo, e.g. `datagen.vcxproj`. Please ignore them.
For details on configuring debugging sessions or further customization of CMake,
please refer to the [CMake tools for VS
documentation](https://docs.microsoft.com/en-us/cpp/ide/cmake-tools-for-visual-cpp).
If using the provided `CMakeSettings.json` file, the executable will be in
```
.\build\x64-Release\Release\rippled.exe
```
or
```
.\build\x64-Debug\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Build using stand-alone CMake
This requires having installed [CMake for
Windows](README.md#optional-install-cmake-for-windows). We do not recommend
mixing this method with the integrated CMake method for the same repository
clone. Assuming you included the cmake executable folder in your path,
execute the following commands within your `rippled` cloned repository:
```
mkdir build\cmake
cd build\cmake
cmake ..\.. -G"Visual Studio 16 2019" -Ax64 -DBOOST_ROOT="C:\lib\boost" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64
```
Now launch Visual Studio 2019 and select **File | Open | Project/Solution**.
Navigate to the `build\cmake` folder created above and select the `rippled.sln`
file. You can then choose whether to build the `Debug` or `Release` solution
configuration.
The executable will be in
```
.\build\cmake\Release\rippled.exe
```
or
```
.\build\cmake\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Unity/No-Unity Builds
The rippled build system defaults to using
[unity source files](http://onqtam.com/programming/2018-07-07-unity-builds/)
to improve build times. In some cases it might be desirable to disable the
unity build and compile individual translation units. Here is how you can
switch to a "no-unity" build configuration:
## Visual Studio Integrated CMake
Edit your `CmakeSettings.json` (described above) by adding `-Dunity=OFF`
to the `cmakeCommandArgs` entry for each build configuration.
## Standalone CMake Builds
When running cmake to generate the Visual Studio project files, add
`-Dunity=OFF` to the command line options passed to cmake.
**Note:** you will need to re-run the cmake configuration step anytime you
want to switch between unity/no-unity builds.
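For example, a standalone no-unity configuration might look like the following. This is a sketch that reuses the generator and library paths from the stand-alone CMake example above; the `cmake.nounity` directory name is just an illustrative choice, so adjust it and the paths to your environment:
```
mkdir build\cmake.nounity
cd build\cmake.nounity
cmake ..\.. -G"Visual Studio 16 2019" -Ax64 -Dunity=OFF -DBOOST_ROOT="C:\lib\boost" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64
```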
# Unit Test (Recommended)
`rippled` builds a set of unit tests into the server executable. To run these
unit tests after building, pass the `--unittest` option to the compiled
`rippled` executable. The executable will exit with summary info after running
the unit tests.
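For example, using the Release executable produced by the stand-alone CMake build (path as shown earlier in this guide):
```
.\build\cmake\Release\rippled.exe --unittest
```
A specific suite or prefix can be selected with `--unittest=<name>`, as seen in the repository's test tooling.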

View File

@@ -1,7 +0,0 @@
#!/usr/bin/env bash
num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # number of physical cores
path=$(cd $(dirname $0) && pwd)
cd $(dirname $path)
${path}/Test.py -a -c --testjobs=${num_procs} -- -j${num_procs}

View File

@@ -1,31 +0,0 @@
# rippled Packaging and Containers
This folder contains docker container definitions and configuration
files to support building rpm and deb packages of rippled. The container
definitions include some additional software/packages that are used
for general build/test CI workflows of rippled but are not explicitly
needed for the package building workflow.
## CMake Targets
If you have docker installed on your local system, then the main
CMake file will enable several targets related to building packages:
`rpm_container`, `rpm`, `dpkg_container`, and `dpkg`. The package targets
depend on the container targets and will trigger a build of those first.
The container builds can take tens of minutes to complete (depending
on hardware specs), so quick build cycles are not currently possible. As
such, these targets are best suited to CI/automated build systems.
The package build can be invoked like any other cmake target from the
rippled root folder:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target rpm
```
Upon successful completion, the generated package files will be in
the `build/pkg/packages` directory. For deb packages, simply replace
`rpm` with `dpkg` in the build command above.
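For example, the equivalent deb package build is the same sequence of commands with the target swapped:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target dpkg
```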

View File

@@ -1,26 +0,0 @@
FROM rippleci/centos:7
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
COPY centos-builder/centos_setup.sh /tmp/
COPY shared/install_cmake.sh /tmp/
RUN chmod +x /tmp/centos_setup.sh && \
chmod +x /tmp/install_cmake.sh
RUN /tmp/centos_setup.sh
RUN /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16
RUN ln -s /opt/local/cmake-3.16 /opt/local/cmake
ENV PATH="/opt/local/cmake/bin:$PATH"
# TODO: Install latest CMake for testing
RUN if [ "${CI_USE}" = true ] ; then /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16; fi
RUN mkdir -m 777 -p /opt/rippled_bld/pkg
WORKDIR /opt/rippled_bld/pkg
RUN mkdir -m 777 ./rpmbuild
RUN mkdir -m 777 ./rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
COPY packaging/rpm/build_rpm.sh ./
CMD ./build_rpm.sh

View File

@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -ex
source /etc/os-release
yum -y upgrade
yum -y update
yum -y install epel-release centos-release-scl
yum -y install \
wget curl time gcc-c++ yum-utils autoconf automake pkgconfig libtool \
libstdc++-static rpm-build gnupg which make cmake \
devtoolset-11 devtoolset-11-gdb devtoolset-11-binutils devtoolset-11-libstdc++-devel \
devtoolset-11-libasan-devel devtoolset-11-libtsan-devel devtoolset-11-libubsan-devel devtoolset-11-liblsan-devel \
flex flex-devel bison bison-devel parallel \
ncurses ncurses-devel ncurses-libs graphviz graphviz-devel \
lzip p7zip bzip2 bzip2-devel lzma-sdk lzma-sdk-devel xz-devel \
zlib zlib-devel zlib-static texinfo openssl openssl-static \
jemalloc jemalloc-devel \
libicu-devel htop \
rh-python38 \
ninja-build git svn \
swig perl-Digest-MD5

View File

@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_NAME}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_NAME}"
else
echo "invalid package type"
exit 1
fi
if docker pull "${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}"; then
echo "found container for latest - using as cache."
docker tag \
"${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}" \
"${container_name}:latest_${CI_COMMIT_REF_SLUG}"
CMAKE_EXTRA="-D${pkgtype}_cache_from=${container_name}:latest_${CI_COMMIT_REF_SLUG}"
fi
cmake --version
test -d build && rm -rf build
mkdir -p build/container && cd build/container
eval time \
cmake -Dpackages_only=ON -DCMAKE_VERBOSE_MAKEFILE=ON ${CMAKE_EXTRA} \
-G Ninja ../..
time cmake --build . --target "${pkgtype}_container" -- -v

View File

@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_FULLNAME}"
container_tag="${RPM_CONTAINER_TAG}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_FULLNAME}"
container_tag="${DPKG_CONTAINER_TAG}"
else
echo "invalid package type"
exit 1
fi
time docker pull "${ARTIFACTORY_HUB}/${container_name}"
docker tag \
"${ARTIFACTORY_HUB}/${container_name}" \
"${container_name}"
docker images
test -d build && rm -rf build
mkdir -p build/${pkgtype} && cd build/${pkgtype}
time cmake \
-Dpackages_only=ON \
-Dcontainer_label="${container_tag}" \
-Dhave_package_container=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dunity=OFF \
-G Ninja ../..
time cmake --build . --target ${pkgtype} -- -v

View File

@@ -1,15 +0,0 @@
#!/usr/bin/env sh
set -e
# used as a before/setup script for docker steps in gitlab-ci
# expects to be run in standard alpine/dind image
echo $(nproc)
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} ${ARTIFACTORY_HUB}
apk add --update py-pip
apk add \
bash util-linux coreutils binutils grep \
make ninja cmake build-base gcc g++ abuild git \
python3 python3-dev
pip3 install awscli
# list curdir contents to build log:
ls -la

@@ -1,16 +0,0 @@
#!/usr/bin/env sh
case ${CI_COMMIT_REF_NAME} in
develop)
export COMPONENT="nightly"
;;
release)
export COMPONENT="unstable"
;;
master)
export COMPONENT="stable"
;;
*)
export COMPONENT="_unknown_"
;;
esac
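
The case statement above maps the CI branch name to the repository component that the push and smoketest scripts use later in the pipeline. A minimal usage sketch (illustrative only; CI_COMMIT_REF_NAME is normally set by the GitLab runner):

# illustrative usage of the component mapping, not part of the pipeline itself
export CI_COMMIT_REF_NAME=develop
. ./Builds/containers/gitlab-ci/get_component.sh
echo "packages from this branch are published to the '${COMPONENT}' component"   # -> nightly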

@@ -1,581 +0,0 @@
#########################################################################
## ##
## gitlab CI definition for rippled build containers and distro ##
## packages (rpm and dpkg). ##
## ##
#########################################################################
# NOTE: these are sensible defaults for Ripple pipelines. These
# can be overridden by project or group variables as needed.
variables:
# these containers are built manually using the rippled
# cmake build (container targets) and tagged/pushed so they
# can be used here
RPM_CONTAINER_TAG: "2023-02-13"
RPM_CONTAINER_NAME: "rippleci/rippled-rpm-builder"
RPM_CONTAINER_FULLNAME: "${RPM_CONTAINER_NAME}:${RPM_CONTAINER_TAG}"
DPKG_CONTAINER_TAG: "2023-02-13"
DPKG_CONTAINER_NAME: "rippleci/rippled-dpkg-builder"
DPKG_CONTAINER_FULLNAME: "${DPKG_CONTAINER_NAME}:${DPKG_CONTAINER_TAG}"
ARTIFACTORY_HOST: "artifactory.ops.ripple.com"
ARTIFACTORY_HUB: "${ARTIFACTORY_HOST}:6555"
GIT_SIGN_PUBKEYS_URL: "https://gitlab.ops.ripple.com/xrpledger/rippled-packages/snippets/49/raw"
PUBLIC_REPO_ROOT: "https://repos.ripple.com/repos"
# also need to define this variable ONLY for the primary
# build/publish pipeline on the mainline repo:
# IS_PRIMARY_REPO = "true"
stages:
- build_packages
- sign_packages
- smoketest
- verify_sig
- tag_images
- push_to_test
- verify_from_test
- wait_approval_prod
- push_to_prod
- verify_from_prod
- get_final_hashes
- build_containers
.dind_template: &dind_param
before_script:
- . ./Builds/containers/gitlab-ci/docker_alpine_setup.sh
variables:
docker_driver: overlay2
DOCKER_TLS_CERTDIR: ""
image:
name: artifactory.ops.ripple.com/docker:latest
services:
# workaround for TLS issues - consider going back
# to unversioned `dind` when issues are resolved
- name: artifactory.ops.ripple.com/docker:stable-dind
alias: docker
tags:
- 4xlarge
.only_primary_template: &only_primary
only:
refs:
- /^(master|release|develop)$/
variables:
- $IS_PRIMARY_REPO == "true"
.smoketest_local_template: &run_local_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh local
.smoketest_repo_template: &run_repo_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh repo
#########################################################################
## ##
## stage: build_packages ##
## ##
## build packages using containers from previous stage. ##
## ##
#########################################################################
rpm_build:
timeout: "1h 30m"
stage: build_packages
<<: *dind_param
artifacts:
paths:
- build/rpm/packages/
script:
- . ./Builds/containers/gitlab-ci/build_package.sh rpm
dpkg_build:
timeout: "1h 30m"
stage: build_packages
<<: *dind_param
artifacts:
paths:
- build/dpkg/packages/
script:
- . ./Builds/containers/gitlab-ci/build_package.sh dpkg
#########################################################################
## ##
## stage: sign_packages ##
## ##
## sign the packages produced in the previous stage. ##
## ##
#########################################################################
rpm_sign:
stage: sign_packages
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/centos:7
<<: *only_primary
before_script:
- |
# Make sure GnuPG is installed
yum -y install gnupg rpm-sign
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/rpm/packages/
script:
- ls -alh build/rpm/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh rpm
dpkg_sign:
stage: sign_packages
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
<<: *only_primary
before_script:
- |
# make sure we have GnuPG
apt update
apt install -y gpg dpkg-sig
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/dpkg/packages/
script:
- ls -alh build/dpkg/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh dpkg
#########################################################################
## ##
## stage: smoketest ##
## ##
## install unsigned packages from previous step and run unit tests. ##
## ##
#########################################################################
centos_7_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/centos:7
<<: *run_local_smoketest
rocky_8_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: rockylinux/rockylinux:8
<<: *run_local_smoketest
fedora_37_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/fedora:37
<<: *run_local_smoketest
fedora_38_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/fedora:38
<<: *run_local_smoketest
ubuntu_20_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
<<: *run_local_smoketest
ubuntu_22_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
<<: *run_local_smoketest
debian_11_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/debian:11
<<: *run_local_smoketest
#########################################################################
## ##
## stage: verify_sig ##
## ##
## use git/gpg to verify that HEAD is signed by an approved ##
## committer. The whitelist of pubkeys is manually maintained ##
## and fetched from GIT_SIGN_PUBKEYS_URL (currently a snippet ##
## link). ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
verify_head_signed:
stage: verify_sig
image:
name: artifactory.ops.ripple.com/ubuntu:latest
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/verify_head_commit.sh
#########################################################################
## ##
## stage: tag_images ##
## ##
## apply rippled version tag to containers from previous stage. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
tag_bld_images:
stage: tag_images
variables:
docker_driver: overlay2
DOCKER_TLS_CERTDIR: ""
image:
name: artifactory.ops.ripple.com/docker:latest
services:
# workaround for TLS issues - consider going back
# to unversioned `dind` when issues are resolved
- name: artifactory.ops.ripple.com/docker:stable-dind
alias: docker
tags:
- large
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/tag_docker_image.sh
#########################################################################
## ##
## stage: push_to_test ##
## ##
## push packages to artifactory repositories (test) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_test:
stage: push_to_test
variables:
DEB_REPO: "rippled-deb-test-mirror"
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/alpine:latest
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_test ##
## ##
## install/test packages from test repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
rocky_8_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: rockylinux/rockylinux:8
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_37_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/fedora:37
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_38_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/fedora:38
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_20_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "focal"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_22_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "jammy"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_11_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "bullseye"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/debian:11
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: wait_approval_prod ##
## ##
## wait for manual approval before proceeding to next stage ##
## which pushes to prod repo. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
wait_before_push_prod:
stage: wait_approval_prod
image:
name: artifactory.ops.ripple.com/alpine:latest
<<: *only_primary
script:
- echo "proceeding to next stage"
when: manual
allow_failure: false
#########################################################################
## ##
## stage: push_to_prod ##
## ##
## push packages to artifactory repositories (prod) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_prod:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/alpine:latest
stage: push_to_prod
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_prod ##
## ##
## install/test packages from prod repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
rocky_8_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: rockylinux/rockylinux:8
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_37_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/fedora:37
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_38_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/fedora:38
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_20_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "focal"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_22_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "jammy"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_11_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "bullseye"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/debian:11
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: get_final_hashes ##
## ##
## fetch final hashes from artifactory. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
get_prod_hashes:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/alpine:latest
stage: get_final_hashes
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "GET" ".checksums"
#########################################################################
## ##
## stage: build_containers ##
## ##
## build containers from docker definitions. These containers are NOT ##
## used for the package build. This step is only used to ensure that ##
## the package build targets and files are still working properly. ##
## ##
#########################################################################
build_centos_container:
stage: build_containers
<<: *dind_param
script:
- . ./Builds/containers/gitlab-ci/build_container.sh rpm
build_ubuntu_container:
stage: build_containers
<<: *dind_param
script:
- . ./Builds/containers/gitlab-ci/build_container.sh dpkg

@@ -1,92 +0,0 @@
#!/usr/bin/env sh
set -e
action=$1
filter=$2
. ./Builds/containers/gitlab-ci/get_component.sh
apk add curl jq coreutils util-linux
TOPDIR=$(pwd)
# DPKG
cd $TOPDIR
cd build/dpkg/packages
CURLARGS="-sk -X${action} -urippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}"
RIPPLED_PKG=$(ls rippled_*.deb)
RIPPLED_REPORTING_PKG=$(ls rippled-reporting_*.deb)
RIPPLED_DBG_PKG=$(ls rippled-dbgsym_*.*deb)
RIPPLED_REPORTING_DBG_PKG=$(ls rippled-reporting-dbgsym_*.*deb)
# TODO - where to upload src tgz?
RIPPLED_SRC=$(ls rippled_*.orig.tar.gz)
DEB_MATRIX=";deb.component=${COMPONENT};deb.architecture=amd64"
for dist in bullseye focal jammy; do
DEB_MATRIX="${DEB_MATRIX};deb.distribution=${dist}"
done
echo "{ \"debs\": {" > "${TOPDIR}/files.info"
for deb in ${RIPPLED_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG} ${RIPPLED_REPORTING_DBG_PKG}; do
# first item doesn't get a comma separator
if [ $deb != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${deb}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${DEB_REPO}/pool/${COMPONENT}/${deb}${DEB_MATRIX}"
ca="${ca} -T${deb}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${DEB_REPO}/pool/${COMPONENT}/${deb}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}," >> "${TOPDIR}/files.info"
# RPM
cd $TOPDIR
cd build/rpm/packages
RIPPLED_PKG=$(ls rippled-[0-9]*.x86_64.rpm)
RIPPLED_DEV_PKG=$(ls rippled-devel*.rpm)
RIPPLED_DBG_PKG=$(ls rippled-debuginfo*.rpm)
RIPPLED_REPORTING_PKG=$(ls rippled-reporting*.rpm)
# TODO - where to upload src rpm ?
RIPPLED_SRC=$(ls rippled-[0-9]*.src.rpm)
echo "\"rpms\": {" >> "${TOPDIR}/files.info"
for rpm in ${RIPPLED_PKG} ${RIPPLED_DEV_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG}; do
# first item doesn't get a comma separator
if [ $rpm != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${rpm}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${RPM_REPO}/${COMPONENT}/"
ca="${ca} -T${rpm}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${RPM_REPO}/${COMPONENT}/${rpm}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}}" >> "${TOPDIR}/files.info"
jq '.' "${TOPDIR}/files.info" > "${TOPDIR}/files.info.tmp"
mv "${TOPDIR}/files.info.tmp" "${TOPDIR}/files.info"
if [ ! -z "${SLACK_NOTIFY_URL}" ] && [ "${action}" = "GET" ] ; then
# extract files.info content to variable and sanitize so it can
# be interpolated into a slack text field below
finfo=$(cat ${TOPDIR}/files.info | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' | sed -E 's/"/\\"/g')
# try posting file info to slack.
# can add channel field to payload if the
# default channel is incorrect. Get rid of
# newlines in payload json since slack doesn't accept them
CONTENT=$(tr -d '[\n]' <<JSON
payload={
"username": "GitlabCI",
"text": "The package build for branch \`${CI_COMMIT_REF_NAME}\` is complete. File hashes are: \`\`\`${finfo}\`\`\`",
"icon_emoji": ":package:"}
JSON
)
curl ${SLACK_NOTIFY_URL} --data-urlencode "${CONTENT}"
fi
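
For orientation (this note is not part of the script above): the matrix parameters appended to the deb upload URL are how Artifactory's Debian layout is told the component, architecture, and target distributions. A hedged sketch of one expanded PUT request, with a placeholder host and package file name:

# hypothetical expansion of the PUT built above (host and .deb file name are placeholders)
curl -sk -XPUT -urippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED} -Trippled_2.3.0~rc2-1_amd64.deb \
"https://artifactory.example.com/artifactory/rippled-deb-test-mirror/pool/nightly/rippled_2.3.0~rc2-1_amd64.deb;deb.component=nightly;deb.architecture=amd64;deb.distribution=bullseye;deb.distribution=focal;deb.distribution=jammy"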

@@ -1,38 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
sign_dpkg() {
if [ -n "${GPG_KEYID}" ]; then
dpkg-sig \
-g "--no-tty --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --pinentry-mode=loopback" \
-k "${GPG_KEYID}" \
--sign builder \
"build/dpkg/packages/*.deb"
fi
}
sign_rpm() {
if [ -n "${GPG_KEYID}" ] ; then
find build/rpm/packages -name "*.rpm" -exec bash -c '
echo "yes" | setsid rpm \
--define "_gpg_name ${GPG_KEYID}" \
--define "_signature gpg" \
--define "__gpg_check_password_cmd /bin/true" \
--define "__gpg_sign_cmd %{__gpg} gpg --batch --no-armor --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --no-secmem-warning -u '%{_gpg_name}' --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
--addsign '{} \;
fi
}
case "${1}" in
dpkg)
sign_dpkg
;;
rpm)
sign_rpm
;;
*)
echo "Usage: ${0} (dpkg|rpm)"
;;
esac

@@ -1,108 +0,0 @@
#!/usr/bin/env sh
set -e
install_from=$1
use_private=${2:-0} # this option is not currently needed by any CI scripts;
# it is reserved for possible future use
if [ "$use_private" -gt 0 ] ; then
REPO_ROOT="https://rippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}@${ARTIFACTORY_HOST}/artifactory"
else
REPO_ROOT="${PUBLIC_REPO_ROOT}"
fi
. ./Builds/containers/gitlab-ci/get_component.sh
. /etc/os-release
case ${ID} in
ubuntu|debian)
pkgtype="dpkg"
;;
fedora|centos|rhel|scientific|rocky)
pkgtype="rpm"
;;
*)
echo "unrecognized distro!"
exit 1
;;
esac
# this script provides info variables about pkg version
. build/${pkgtype}/packages/build_vars
if [ "${pkgtype}" = "dpkg" ] ; then
# sometimes update fails and requires a cleanup
updateWithRetry()
{
if ! apt-get -y update ; then
rm -rvf /var/lib/apt/lists/*
apt-get -y clean
apt-get -y update
fi
}
if [ "${install_from}" = "repo" ] ; then
apt-get -y upgrade
updateWithRetry
apt-get -y install apt apt-transport-https ca-certificates coreutils util-linux wget gnupg
wget -q -O - "${REPO_ROOT}/api/gpg/key/public" | apt-key add -
echo "deb ${REPO_ROOT}/${DEB_REPO} ${DISTRO} ${COMPONENT}" >> /etc/apt/sources.list
updateWithRetry
# uncomment this next line if you want to see the available package versions
# apt-cache policy rippled
apt-get -y install rippled=${dpkg_full_version}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
updateWithRetry
apt-get -y install libprotobuf-dev libprotoc-dev protobuf-compiler libssl-dev
rm -f build/dpkg/packages/rippled-dbgsym*.*
dpkg --no-debsig -i build/dpkg/packages/*.deb
else
echo "unrecognized pkg source!"
exit 1
fi
else
yum -y update
if [ "${install_from}" = "repo" ] ; then
pkgs=("yum-utils coreutils util-linux")
if [ "$ID" = "rocky" ]; then
pkgs="${pkgs[@]/coreutils}"
fi
yum install -y $pkgs
REPOFILE="/etc/yum.repos.d/artifactory.repo"
echo "[Artifactory]" > ${REPOFILE}
echo "name=Artifactory" >> ${REPOFILE}
echo "baseurl=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/" >> ${REPOFILE}
echo "enabled=1" >> ${REPOFILE}
echo "gpgcheck=0" >> ${REPOFILE}
echo "gpgkey=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/repodata/repomd.xml.key" >> ${REPOFILE}
echo "repo_gpgcheck=1" >> ${REPOFILE}
yum -y update
# uncomment this next line if you want to see the available package versions
# yum --showduplicates list rippled
yum -y install ${rpm_version_release}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
pkgs=("yum-utils openssl-static zlib-static")
if [[ "$ID" =~ rocky|fedora ]]; then
if [[ "$ID" =~ "rocky" ]]; then
sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/Rocky-PowerTools.repo
fi
pkgs="${pkgs[@]/openssl-static}"
fi
yum install -y $pkgs
rm -f build/rpm/packages/rippled-debug*.rpm
rm -f build/rpm/packages/*.src.rpm
rpm -i build/rpm/packages/*.rpm
else
echo "unrecognized pkg source!"
exit 1
fi
fi
# verify installed version
INSTALLED=$(/opt/ripple/bin/rippled --version | awk '{print $NF}')
if [ "${rippled_version}" != "${INSTALLED}" ] ; then
echo "INSTALLED version ${INSTALLED} does not match ${rippled_version}"
exit 1
fi
# run unit tests
/opt/ripple/bin/rippled --unittest --unittest-jobs $(nproc)
/opt/ripple/bin/validator-keys --unittest

@@ -1,21 +0,0 @@
#!/usr/bin/env sh
set -e
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} "${ARTIFACTORY_HUB}"
# this gives us rippled_version :
source build/rpm/packages/build_vars
docker pull "${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}"
docker pull "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}"
# tag/push two labels...one using the current rippled version and one just using "latest"
for label in ${rippled_version} latest ; do
docker tag \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}" \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker push \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker tag \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}" \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker push \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
done
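
A worked example of the tag/push loop above, assuming rippled_version=2.3.0 and CI_COMMIT_REF_SLUG=develop (both values are illustrative): each builder image is pushed twice, once under the version label and once under "latest".

# resulting tags, each prefixed with ${ARTIFACTORY_HUB}/ when pushed:
#   rippleci/rippled-rpm-builder:2.3.0_develop     rippleci/rippled-rpm-builder:latest_develop
#   rippleci/rippled-dpkg-builder:2.3.0_develop    rippleci/rippled-dpkg-builder:latest_develop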

@@ -1,17 +0,0 @@
#!/usr/bin/env sh
set -ex
apt -y update
DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
apt -y install software-properties-common curl git gnupg
curl -sk -o rippled-pubkeys.txt "${GIT_SIGN_PUBKEYS_URL}"
gpg --import rippled-pubkeys.txt
if git verify-commit HEAD; then
echo "git commit signature check passed"
else
echo "git commit signature check failed"
git log -n 5 --color \
--pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an> [%G?]%Creset' \
--abbrev-commit
exit 1
fi

@@ -1,97 +0,0 @@
#!/usr/bin/env bash
set -ex
# make sure pkg source files are up to date with repo
cd /opt/rippled_bld/pkg
cp -fpru rippled/Builds/containers/packaging/dpkg/debian/. debian/
cp -fpu rippled/Builds/containers/shared/rippled*.service debian/
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh
# Build the dpkg
# dpkg uses - as a separator, so we need to change our -bN versions to a tilde
RIPPLED_DPKG_VERSION=$(echo "${RIPPLED_VERSION}" | sed 's!-!~!g')
# TODO - decide how to handle the trailing/release
# version here (hardcoded to 1). Does it ever need to change?
RIPPLED_DPKG_FULL_VERSION="${RIPPLED_DPKG_VERSION}-1"
git config --global --add safe.directory /opt/rippled_bld/pkg/rippled
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled-${RIPPLED_DPKG_VERSION}/ -o ../rippled-${RIPPLED_DPKG_VERSION}.tar.gz HEAD
cd ..
# dpkg debmake would normally create this link, but we do it manually
ln -s ./rippled-${RIPPLED_DPKG_VERSION}.tar.gz rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz
tar xvf rippled-${RIPPLED_DPKG_VERSION}.tar.gz
cd rippled-${RIPPLED_DPKG_VERSION}
cp -pr ../debian .
# dpkg requires a changelog. We don't currently maintain
# a usable one, so let's just fake it with our current version
# TODO : not sure if the "unstable" will need to change for
# release packages (?)
NOWSTR=$(TZ=UTC date -R)
cat << CHANGELOG > ./debian/changelog
rippled (${RIPPLED_DPKG_FULL_VERSION}) unstable; urgency=low
* see RELEASENOTES
-- Ripple Labs Inc. <support@ripple.com> ${NOWSTR}
CHANGELOG
# PATH must be preserved for our more modern cmake in /opt/local
# TODO : consider allowing lintian to run in future ?
export DH_BUILD_DDEBS=1
export CC=gcc-11
export CXX=g++-11
debuild --no-lintian --preserve-envvar PATH --preserve-env -us -uc
rc=$?; if [[ $rc != 0 ]]; then
error "error building dpkg"
fi
cd ..
# copy artifacts
cp rippled-reporting_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.dsc ${PKG_OUTDIR}
# dbgsym suffix is ddeb under newer debuild, but just deb under earlier
cp rippled-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled-reporting-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.build ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.debian.tar.xz ${PKG_OUTDIR}
# buildinfo is only generated by later version of debuild
if [ -e rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ] ; then
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ${PKG_OUTDIR}
fi
cat rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes
# extract the text in the .changes file that appears between
# Checksums-Sha256: ...
# and
# Files: ...
awk '/Checksums-Sha256:/{hit=1;next}/Files:/{hit=0}hit' \
rippled_${RIPPLED_DPKG_VERSION}-1_amd64.changes | \
sed -E 's!^[[:space:]]+!!' > shasums
DEB_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
DBG_SHA256=$(cat shasums | \
grep "rippled-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_DBG_SHA256=$(cat shasums | \
grep "rippled-reporting-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_SHA256=$(cat shasums | \
grep "rippled-reporting_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
SRC_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz" | cut -d " " -f 1)
echo "deb_sha256=${DEB_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=${DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_sha256=${REPORTING_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_dbg_sha256=${REPORTING_DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=${SRC_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=${RIPPLED_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_version=${RIPPLED_DPKG_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_full_version=${RIPPLED_DPKG_FULL_VERSION}" >> ${PKG_OUTDIR}/build_vars

@@ -1,3 +0,0 @@
rippled daemon
-- Mike Ellery <mellery451@gmail.com> Tue, 04 Dec 2018 18:19:03 +0000

@@ -1 +0,0 @@
10

@@ -1,19 +0,0 @@
Source: rippled
Section: misc
Priority: extra
Maintainer: Ripple Labs Inc. <support@ripple.com>
Build-Depends: cmake, debhelper (>=9), zlib1g-dev, dh-systemd, ninja-build
Standards-Version: 3.9.7
Homepage: http://ripple.com/
Package: rippled
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled daemon
Package: rippled-reporting
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled reporting daemon

@@ -1,86 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rippled
Source: https://github.com/ripple/rippled
Files: *
Copyright: 2012-2019 Ripple Labs Inc.
License: __UNKNOWN__
The accompanying files under various copyrights.
Copyright (c) 2012, 2013, 2014 Ripple Labs Inc.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The accompanying files incorporate work covered by the following copyright
and previous license notice:
Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant
Some code from Raw Material Software, Ltd., provided under the terms of the
ISC License. See the corresponding source files for more details.
Copyright (c) 2013 - Raw Material Software Ltd.
Please visit http://www.juce.com
Some code from ASIO examples:
// Copyright (c) 2003-2011 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
Some code from Bitcoin:
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2011 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file license.txt or http://www.opensource.org/licenses/mit-license.php.
Some code from Tom Wu:
This software is covered under the following copyright:
/*
* Copyright (c) 2003-2005 Tom Wu
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
* EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
*
* IN NO EVENT SHALL TOM WU BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
* INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER
* RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF
* THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
* OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*
* In addition, the following condition applies:
*
* All redistributions must retain an intact copy of this copyright notice
* and disclaimer.
*/
Address all questions regarding this license to:
Tom Wu
tjw@cs.Stanford.EDU

@@ -1,3 +0,0 @@
/var/log/rippled/
/var/lib/rippled/
/etc/systemd/system/rippled.service.d/

@@ -1,3 +0,0 @@
README.md
LICENSE.md
RELEASENOTES.md

@@ -1,3 +0,0 @@
opt/ripple/include
opt/ripple/lib/*.a
opt/ripple/lib/cmake/ripple

@@ -1,3 +0,0 @@
/var/log/rippled-reporting/
/var/lib/rippled-reporting/
/etc/systemd/system/rippled-reporting.service.d/

@@ -1,8 +0,0 @@
bld/rippled-reporting/rippled-reporting opt/rippled-reporting/bin
cfg/rippled-reporting.cfg opt/rippled-reporting/etc
debian/tmp/opt/rippled-reporting/etc/validators.txt opt/rippled-reporting/etc
opt/rippled-reporting/bin/update-rippled-reporting.sh
opt/rippled-reporting/bin/getRippledReportingInfo
opt/rippled-reporting/etc/update-rippled-reporting-cron
etc/logrotate.d/rippled-reporting

@@ -1,3 +0,0 @@
opt/rippled-reporting/etc/rippled-reporting.cfg etc/opt/rippled-reporting/rippled-reporting.cfg
opt/rippled-reporting/etc/validators.txt etc/opt/rippled-reporting/validators.txt
opt/rippled-reporting/bin/rippled-reporting usr/local/bin/rippled-reporting

@@ -1,33 +0,0 @@
#!/bin/sh
set -e
USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting
case "$1" in
configure)
id -u $USER_NAME >/dev/null 2>&1 || \
adduser --system --quiet \
--home /nonexistent --no-create-home \
--disabled-password \
--group "$GROUP_NAME"
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
chmod 755 /var/log/rippled-reporting/
chmod 755 /var/lib/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /opt/rippled-reporting
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,2 +0,0 @@
/opt/ripple/etc/rippled.cfg
/opt/ripple/etc/validators.txt

@@ -1,8 +0,0 @@
opt/ripple/bin/rippled
opt/ripple/bin/validator-keys
opt/ripple/bin/update-rippled.sh
opt/ripple/bin/getRippledInfo
opt/ripple/etc/rippled.cfg
opt/ripple/etc/validators.txt
opt/ripple/etc/update-rippled-cron
etc/logrotate.d/rippled

@@ -1,3 +0,0 @@
opt/ripple/etc/rippled.cfg etc/opt/ripple/rippled.cfg
opt/ripple/etc/validators.txt etc/opt/ripple/validators.txt
opt/ripple/bin/rippled usr/local/bin/rippled

@@ -1,35 +0,0 @@
#!/bin/sh
set -e
USER_NAME=rippled
GROUP_NAME=rippled
case "$1" in
configure)
id -u $USER_NAME >/dev/null 2>&1 || \
adduser --system --quiet \
--home /nonexistent --no-create-home \
--disabled-password \
--group "$GROUP_NAME"
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME /opt/ripple
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 /opt/ripple/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME /opt/ripple/etc/update-rippled-cron
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,17 +0,0 @@
#!/bin/sh
set -e
case "$1" in
purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
;;
*)
echo "postrm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,20 +0,0 @@
#!/bin/sh
set -e
case "$1" in
install|upgrade)
;;
abort-upgrade)
;;
*)
echo "preinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,20 +0,0 @@
#!/bin/sh
set -e
case "$1" in
remove|upgrade|deconfigure)
;;
failed-upgrade)
;;
*)
echo "prerm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

@@ -1,76 +0,0 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export DH_OPTIONS = -v
# debuild sets some warnings that don't work well
# for our current build, so try to remove those flags here:
export CFLAGS:=$(subst -Wformat,,$(CFLAGS))
export CFLAGS:=$(subst -Werror=format-security,,$(CFLAGS))
export CXXFLAGS:=$(subst -Wformat,,$(CXXFLAGS))
export CXXFLAGS:=$(subst -Werror=format-security,,$(CXXFLAGS))
%:
dh $@ --with systemd
override_dh_systemd_start:
dh_systemd_start --no-restart-on-upgrade
override_dh_auto_configure:
env
rm -rf bld
conan export external/snappy snappy/1.1.9@
conan install . \
--install-folder bld/rippled \
--build missing \
--settings build_type=Release
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/opt/ripple \
-Dstatic=ON \
-Dunity=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dvalidator_keys=ON \
-B bld/rippled
conan install . \
--install-folder bld/rippled-reporting \
--build missing \
--settings build_type=Release \
--settings compiler.cppstd=17 \
--options reporting=True
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/opt/rippled-reporting \
-Dstatic=ON \
-Dunity=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dreporting=ON \
-B bld/rippled-reporting
override_dh_auto_build:
cmake --build bld/rippled --target rippled --target validator-keys -j${nproc}
cmake --build bld/rippled-reporting --target rippled -j${nproc}
override_dh_auto_install:
cmake --install bld/rippled --prefix debian/tmp/opt/ripple
install -D bld/rippled/validator-keys/validator-keys debian/tmp/opt/ripple/bin/validator-keys
install -D Builds/containers/shared/update-rippled.sh debian/tmp/opt/ripple/bin/update-rippled.sh
install -D bin/getRippledInfo debian/tmp/opt/ripple/bin/getRippledInfo
install -D Builds/containers/shared/update-rippled-cron debian/tmp/opt/ripple/etc/update-rippled-cron
install -D Builds/containers/shared/rippled-logrotate debian/tmp/etc/logrotate.d/rippled
rm -rf debian/tmp/opt/ripple/lib64/cmake/date
mkdir -p debian/tmp/opt/rippled-reporting/etc
mkdir -p debian/tmp/opt/rippled-reporting/bin
cp cfg/validators-example.txt debian/tmp/opt/rippled-reporting/etc/validators.txt
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled.sh > debian/tmp/opt/rippled-reporting/bin/update-rippled-reporting.sh
sed -E 's/rippled?/rippled-reporting/g' bin/getRippledInfo > debian/tmp/opt/rippled-reporting/bin/getRippledReportingInfo
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled-cron > debian/tmp/opt/rippled-reporting/etc/update-rippled-reporting-cron
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/rippled-logrotate > debian/tmp/etc/logrotate.d/rippled-reporting

@@ -1 +0,0 @@
3.0 (quilt)

@@ -1,2 +0,0 @@
#abort-on-upstream-changes
#unapply-patches

@@ -1 +0,0 @@
enable rippled-reporting.service

@@ -1 +0,0 @@
enable rippled.service

@@ -1,82 +0,0 @@
#!/usr/bin/env bash
set -ex
cd /opt/rippled_bld/pkg
cp -fpu rippled/Builds/containers/packaging/rpm/rippled.spec .
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh
# Build the rpm
IFS='-' read -r RIPPLED_RPM_VERSION RELEASE <<< "$RIPPLED_VERSION"
export RIPPLED_RPM_VERSION
RPM_RELEASE=${RPM_RELEASE-1}
# post-release version
if [ "hf" = "$(echo "$RELEASE" | cut -c -2)" ]; then
RPM_RELEASE="${RPM_RELEASE}.${RELEASE}"
# pre-release version (-b or -rc)
elif [[ $RELEASE ]]; then
RPM_RELEASE="0.${RPM_RELEASE}.${RELEASE}"
fi
export RPM_RELEASE
if [[ $RPM_PATCH ]]; then
RPM_PATCH=".${RPM_PATCH}"
export RPM_PATCH
fi
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled/ -o ../rpmbuild/SOURCES/rippled.tar.gz HEAD
cd ..
source /opt/rh/devtoolset-11/enable
rpmbuild --define "_topdir ${PWD}/rpmbuild" -ba rippled.spec
rc=$?; if [[ $rc != 0 ]]; then
error "error building rpm"
fi
# Make a tar of the rpm and source rpm
RPM_VERSION_RELEASE=$(rpm -qp --qf='%{NAME}-%{VERSION}-%{RELEASE}' ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm)
tar_file=$RPM_VERSION_RELEASE.tar.gz
cp ./rpmbuild/RPMS/x86_64/* ${PKG_OUTDIR}
cp ./rpmbuild/SRPMS/* ${PKG_OUTDIR}
RPM_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm 2>/dev/null)
DBG_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm 2>/dev/null)
DEV_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm 2>/dev/null)
REP_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm 2>/dev/null)
SRC_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/SRPMS/*.rpm 2>/dev/null)
RPM_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm | awk '{ print $1}')"
DBG_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm | awk '{ print $1}')"
REP_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm | awk '{ print $1}')"
DEV_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm | awk '{ print $1}')"
SRC_SHA256="$(sha256sum ./rpmbuild/SRPMS/*.rpm | awk '{ print $1}')"
echo "rpm_md5sum=$RPM_MD5SUM" > ${PKG_OUTDIR}/build_vars
echo "rep_md5sum=$REP_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dbg_md5sum=$DBG_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dev_md5sum=$DEV_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "src_md5sum=$SRC_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "rpm_sha256=$RPM_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rep_sha256=$REP_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=$DBG_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dev_sha256=$DEV_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=$SRC_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=$RIPPLED_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version=$RIPPLED_RPM_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_file_name=$tar_file" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version_release=$RPM_VERSION_RELEASE" >> ${PKG_OUTDIR}/build_vars

@@ -1,236 +0,0 @@
%define rippled_version %(echo $RIPPLED_RPM_VERSION)
%define rpm_release %(echo $RPM_RELEASE)
%define rpm_patch %(echo $RPM_PATCH)
%define _prefix /opt/ripple
Name: rippled
# Dashes in Version extensions must be converted to underscores
Version: %{rippled_version}
Release: %{rpm_release}%{?dist}%{rpm_patch}
Summary: rippled daemon
License: MIT
URL: http://ripple.com/
Source0: rippled.tar.gz
BuildRequires: cmake zlib-static ninja-build
%description
rippled
%package devel
Summary: Files for development of applications using xrpl core library
Group: Development/Libraries
Requires: zlib-static
%description devel
core library for development of standalone applications that sign transactions.
%package reporting
Summary: Reporting Server for rippled
%description reporting
History server for XRP Ledger
%prep
%setup -c -n rippled
%build
rm -rf ~/.conan/profiles/default
cp /opt/libcstd/libstdc++.so.6.0.22 /usr/lib64
cp /opt/libcstd/libstdc++.so.6.0.22 /lib64
ln -sf /usr/lib64/libstdc++.so.6.0.22 /usr/lib64/libstdc++.so.6
ln -sf /lib64/libstdc++.so.6.0.22 /usr/lib64/libstdc++.so.6
source /opt/rh/rh-python38/enable
pip install "conan<2"
conan profile new default --detect
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update settings.compiler.cppstd=20 default
cd rippled
mkdir -p bld.rippled
conan export external/snappy snappy/1.1.9@
pushd bld.rippled
conan install .. \
--settings build_type=Release \
--output-folder . \
--build missing
cmake -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_INSTALL_PREFIX=%{_prefix} \
-DCMAKE_BUILD_TYPE=Release \
-Dunity=OFF \
-Dstatic=ON \
-Dvalidator_keys=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
..
cmake --build . --parallel $(nproc) --target rippled --target validator-keys
popd
mkdir -p bld.rippled-reporting
pushd bld.rippled-reporting
conan install .. \
--settings build_type=Release \
--output-folder . \
--build missing \
--settings compiler.cppstd=17 \
--options reporting=True
cmake -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_INSTALL_PREFIX=%{_prefix} \
-DCMAKE_BUILD_TYPE=Release \
-Dunity=OFF \
-Dstatic=ON \
-Dvalidator_keys=ON \
-Dreporting=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
..
cmake --build . --parallel $(nproc) --target rippled
%pre
test -e /etc/pki/tls || { mkdir -p /etc/pki; ln -s /usr/lib/ssl /etc/pki/tls; }
%install
rm -rf $RPM_BUILD_ROOT
DESTDIR=$RPM_BUILD_ROOT cmake --build rippled/bld.rippled --target install #-- -v
mkdir -p $RPM_BUILD_ROOT
rm -rf ${RPM_BUILD_ROOT}/%{_prefix}/lib64/
install -d ${RPM_BUILD_ROOT}/etc/opt/ripple
install -d ${RPM_BUILD_ROOT}/usr/local/bin
install -D ./rippled/cfg/rippled-example.cfg ${RPM_BUILD_ROOT}/%{_prefix}/etc/rippled.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}/%{_prefix}/etc/validators.txt
ln -sf %{_prefix}/etc/rippled.cfg ${RPM_BUILD_ROOT}/etc/opt/ripple/rippled.cfg
ln -sf %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/ripple/validators.txt
ln -sf %{_prefix}/bin/rippled ${RPM_BUILD_ROOT}/usr/local/bin/rippled
install -D rippled/bld.rippled/validator-keys/validator-keys ${RPM_BUILD_ROOT}%{_bindir}/validator-keys
install -D ./rippled/Builds/containers/shared/rippled.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled.service
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled.preset
install -D ./rippled/Builds/containers/shared/update-rippled.sh ${RPM_BUILD_ROOT}%{_bindir}/update-rippled.sh
install -D ./rippled/bin/getRippledInfo ${RPM_BUILD_ROOT}%{_bindir}/getRippledInfo
install -D ./rippled/Builds/containers/shared/update-rippled-cron ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-cron
install -D ./rippled/Builds/containers/shared/rippled-logrotate ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled
install -d $RPM_BUILD_ROOT/var/log/rippled
install -d $RPM_BUILD_ROOT/var/lib/rippled
# reporting mode
%define _prefix /opt/rippled-reporting
mkdir -p ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/
install -D rippled/bld.rippled-reporting/rippled-reporting ${RPM_BUILD_ROOT}%{_bindir}/rippled-reporting
install -D ./rippled/cfg/rippled-reporting.cfg ${RPM_BUILD_ROOT}%{_prefix}/etc/rippled-reporting.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}%{_prefix}/etc/validators.txt
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled-reporting.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled-reporting.preset
ln -s %{_prefix}/bin/rippled-reporting ${RPM_BUILD_ROOT}/usr/local/bin/rippled-reporting
ln -s %{_prefix}/etc/rippled-reporting.cfg ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/rippled-reporting.cfg
ln -s %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/validators.txt
install -d $RPM_BUILD_ROOT/var/log/rippled-reporting
install -d $RPM_BUILD_ROOT/var/lib/rippled-reporting
install -D ./rippled/Builds/containers/shared/rippled-reporting.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled-reporting.service
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled.sh > ${RPM_BUILD_ROOT}%{_bindir}/update-rippled-reporting.sh
sed -E 's/rippled?/rippled-reporting/g' ./rippled/bin/getRippledInfo > ${RPM_BUILD_ROOT}%{_bindir}/getRippledReportingInfo
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled-cron > ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-reporting-cron
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/rippled-logrotate > ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled-reporting
%post
%define _prefix /opt/ripple
USER_NAME=rippled
GROUP_NAME=rippled
getent passwd $USER_NAME &>/dev/null || useradd $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 %{_prefix}/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME %{_prefix}/etc/update-rippled-cron
%post reporting
%define _prefix /opt/rippled-reporting
USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting
getent passwd $USER_NAME &>/dev/null || useradd -r $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/
chmod 755 /var/log/rippled-reporting/
chmod 755 /var/lib/rippled-reporting/
chmod -x /usr/lib/systemd/system/rippled-reporting.service
%files
%define _prefix /opt/ripple
%doc rippled/README.md rippled/LICENSE.md
%{_bindir}/rippled
/usr/local/bin/rippled
%{_bindir}/update-rippled.sh
%{_bindir}/getRippledInfo
%{_prefix}/etc/update-rippled-cron
%{_bindir}/validator-keys
%config(noreplace) %{_prefix}/etc/rippled.cfg
%config(noreplace) /etc/opt/ripple/rippled.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/ripple/validators.txt
%config(noreplace) /etc/logrotate.d/rippled
%config(noreplace) /usr/lib/systemd/system/rippled.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled.preset
%dir /var/log/rippled/
%dir /var/lib/rippled/
%files devel
%{_prefix}/include
%{_prefix}/lib/*.a
%{_prefix}/lib/cmake/ripple
%files reporting
%define _prefix /opt/rippled-reporting
%doc rippled/README.md rippled/LICENSE.md
%{_bindir}/rippled-reporting
/usr/local/bin/rippled-reporting
%config(noreplace) /etc/opt/rippled-reporting/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/rippled-reporting/validators.txt
%config(noreplace) /usr/lib/systemd/system/rippled-reporting.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled-reporting.preset
%dir /var/log/rippled-reporting/
%dir /var/lib/rippled-reporting/
%{_bindir}/update-rippled-reporting.sh
%{_bindir}/getRippledReportingInfo
%{_prefix}/etc/update-rippled-reporting-cron
%config(noreplace) /etc/logrotate.d/rippled-reporting
%changelog
* Wed Aug 28 2019 Mike Ellery <mellery451@gmail.com>
- Switch to subproject build for validator-keys
* Wed May 15 2019 Mike Ellery <mellery451@gmail.com>
- Make validator-keys use local rippled build for core lib
* Wed Aug 01 2018 Mike Ellery <mellery451@gmail.com>
- add devel package for signing library
* Thu Jun 02 2016 Brandon Wilson <bwilson@ripple.com>
- Install validators.txt

@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -e
IFS=. read cm_maj cm_min cm_rel <<<"$1"
: ${cm_rel:=0} # default the patch component to 0 when it is not supplied
CMAKE_ROOT=${2:-"${HOME}/cmake"}
function cmake_version ()
{
if [[ -d ${CMAKE_ROOT} ]] ; then
local perms=$(test $(uname) = "Linux" && echo "/111" || echo "+111")
local installed=$(find ${CMAKE_ROOT} -perm ${perms} -type f -name cmake)
if [[ "${installed}" != "" ]] ; then
echo "$(${installed} --version | head -1)"
fi
fi
}
installed=$(cmake_version)
if [[ "${installed}" != "" && ${installed} =~ ${cm_maj}.${cm_min}.${cm_rel} ]] ; then
echo "cmake already installed: ${installed}"
exit
fi
# From CMake 20+ "Linux" is lowercase so using `uname` won't create be the correct path
if [ ${cm_min} -gt 19 ]; then
linux="linux"
else
linux=$(uname)
fi
pkgname="cmake-${cm_maj}.${cm_min}.${cm_rel}-${linux}-x86_64.tar.gz"
tmppkg="/tmp/cmake.tar.gz"
wget --quiet https://cmake.org/files/v${cm_maj}.${cm_min}/${pkgname} -O ${tmppkg}
mkdir -p ${CMAKE_ROOT}
cd ${CMAKE_ROOT}
tar --strip-components 1 -xf ${tmppkg}
rm -f ${tmppkg}
echo "installed: $(cmake_version)"

@@ -1,15 +0,0 @@
/var/log/rippled/*.log {
daily
minsize 200M
rotate 7
nocreate
missingok
notifempty
compress
compresscmd /usr/bin/nice
compressoptions -n19 ionice -c3 gzip
compressext .gz
postrotate
/opt/ripple/bin/rippled --conf /opt/ripple/etc/rippled.cfg logrotate
endscript
}

@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/opt/rippled-reporting/bin/rippled-reporting --silent --conf /etc/opt/rippled-reporting/rippled-reporting.cfg
Restart=on-failure
User=rippled-reporting
Group=rippled-reporting
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/opt/ripple/bin/rippled --net --silent --conf /etc/opt/ripple/rippled.cfg
Restart=on-failure
User=rippled
Group=rippled
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

@@ -1,10 +0,0 @@
# For automatic updates, symlink this file to /etc/cron.d/
# Do not remove the newline at the end of this cron script
# bash required for use of RANDOM below.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
# invoke check/update script with random delay up to 59 mins
0 * * * * root sleep $((RANDOM*3540/32768)) && /opt/ripple/bin/update-rippled.sh
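
The jitter in the cron line above works because bash's RANDOM is an integer in the range 0..32767, so the computed sleep is between 0 and 3539 seconds, i.e. at most about 59 minutes. A quick check (illustrative):

echo $(( 32767 * 3540 / 32768 ))   # prints 3539, the largest possible delay in seconds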

@@ -1,65 +0,0 @@
#!/usr/bin/env bash
# auto-update script for rippled daemon
# Check for sudo/root permissions
if [[ $(id -u) -ne 0 ]] ; then
echo "This update script must be run as root or sudo"
exit 1
fi
LOCKDIR=/tmp/rippleupdate.lock
UPDATELOG=/var/log/rippled/update.log
function cleanup {
# If this directory isn't removed, future updates will fail.
rmdir $LOCKDIR
}
# Use mkdir to check if the process is already running; mkdir is atomic, unlike creating a file.
if ! mkdir $LOCKDIR 2>/dev/null; then
echo $(date -u) "lockdir exists - won't proceed." >> $UPDATELOG
exit 1
fi
trap cleanup EXIT
source /etc/os-release
can_update=false
if [[ "$ID" == "ubuntu" || "$ID" == "debian" ]] ; then
# Silent update
apt-get update -qq
# The next line is an "awk"ward way to check if the package needs to be updated.
RIPPLE=$(apt-get install -s --only-upgrade rippled | awk '/^Inst/ { print $2 }')
test "$RIPPLE" == "rippled" && can_update=true
function apply_update {
apt-get install rippled -qq
}
elif [[ "$ID" == "fedora" || "$ID" == "centos" || "$ID" == "rhel" || "$ID" == "scientific" ]] ; then
RIPPLE_REPO=${RIPPLE_REPO-stable}
yum --disablerepo=* --enablerepo=ripple-$RIPPLE_REPO clean expire-cache
yum check-update -q --enablerepo=ripple-$RIPPLE_REPO rippled || can_update=true
function apply_update {
yum update -y --enablerepo=ripple-$RIPPLE_REPO rippled
}
else
echo "unrecognized distro!"
exit 1
fi
# Do the actual update and restart the service after reloading systemctl daemon.
if [ "$can_update" = true ] ; then
exec 3>&1 1>>${UPDATELOG} 2>&1
set -e
apply_update
systemctl daemon-reload
systemctl restart rippled.service
echo $(date -u) "rippled daemon updated."
else
echo $(date -u) "no updates available" >> $UPDATELOG
fi
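
A minimal, generic sketch of the mkdir-based locking used above (the path and messages are placeholders, not part of the original script):

LOCKDIR=/tmp/example.lock
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "another instance is already running" >&2
    exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT   # the lock directory must be removed or later runs will refuse to start
# ... perform the guarded work here ...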

@@ -1,20 +0,0 @@
#!/usr/bin/env bash
function error {
echo $1
exit 1
}
cd /opt/rippled_bld/pkg/rippled
export RIPPLED_VERSION=$(egrep -i -o "\b(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-[0-9a-z\-]+(\.[0-9a-z\-]+)*)?(\+[0-9a-z\-]+(\.[0-9a-z\-]+)*)?\b" src/ripple/protocol/impl/BuildInfo.cpp)
: ${PKG_OUTDIR:=/opt/rippled_bld/pkg/out}
export PKG_OUTDIR
if [ ! -d ${PKG_OUTDIR} ]; then
error "${PKG_OUTDIR} is not mounted"
fi
if [ -x ${OPENSSL_ROOT}/bin/openssl ]; then
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${OPENSSL_ROOT}/lib ${OPENSSL_ROOT}/bin/openssl version -a
fi
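
The egrep above is simply a semver matcher run over BuildInfo.cpp; for example (the quoted source line is illustrative, not a verbatim copy of that file):

# given a source line such as
#   char const* const versionString = "2.3.0-rc2"
# the grep -i -o above prints only the matched version string:
#   2.3.0-rc2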

@@ -1,24 +0,0 @@
ARG DIST_TAG=20.04
FROM ubuntu:$DIST_TAG
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
# install/setup prerequisites:
COPY ubuntu-builder/ubuntu_setup.sh /tmp/
COPY shared/install_cmake.sh /tmp/
RUN chmod +x /tmp/ubuntu_setup.sh && \
chmod +x /tmp/install_cmake.sh
RUN /tmp/ubuntu_setup.sh
RUN /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16
RUN ln -s /opt/local/cmake-3.16 /opt/local/cmake
ENV PATH="/opt/local/cmake/bin:$PATH"
# prep files for package building
RUN update-alternatives --set gcc /usr/bin/gcc-11
RUN mkdir -m 777 -p /opt/rippled_bld/pkg/
WORKDIR /opt/rippled_bld/pkg
COPY packaging/dpkg/build_dpkg.sh ./
CMD ./build_dpkg.sh

@@ -1,104 +0,0 @@
#!/usr/bin/env bash
set -ex
source /etc/os-release
if [[ ${VERSION_ID} =~ ^20\. || ${VERSION_ID} =~ ^22\. ]] ; then
echo "setup for ${PRETTY_NAME}"
else
echo "${VERSION} not supported"
exit 1
fi
export DEBIAN_FRONTEND="noninteractive"
echo "Acquire::Retries 3;" > /etc/apt/apt.conf.d/80-retries
echo "Acquire::http::Pipeline-Depth 0;" >> /etc/apt/apt.conf.d/80-retries
echo "Acquire::http::No-Cache true;" >> /etc/apt/apt.conf.d/80-retries
echo "Acquire::BrokenProxy true;" >> /etc/apt/apt.conf.d/80-retries
apt-get update -o Acquire::CompressionTypes::Order::=gz
apt-get -y update
apt-get -y install apt-utils
apt-get -y install software-properties-common wget curl ca-certificates
apt-get -y install python3-pip
apt-get -y upgrade
add-apt-repository -y ppa:ubuntu-toolchain-r/test
apt-get -y clean
apt-get -y update
apt-get -y --fix-missing install \
make cmake ninja-build autoconf automake libtool pkg-config libtool \
openssl libssl-dev \
liblzma-dev libbz2-dev zlib1g-dev \
libjemalloc-dev \
gdb gdbserver \
libstdc++6 \
flex bison parallel \
libicu-dev texinfo \
java-common javacc \
dpkg-dev debhelper devscripts fakeroot \
debmake git-buildpackage dh-make gitpkg debsums gnupg \
dh-buildinfo dh-make \
apt-transport-https
if [[ ${VERSION_ID} =~ ^20\. ]] ; then
apt-get install -y \
dh-systemd
fi
apt-get -y install gcc-11 g++-11
update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-11 20 \
--slave /usr/bin/g++ g++ /usr/bin/g++-11 \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-11 \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-11 \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-11 \
--slave /usr/bin/gcov gcov /usr/bin/gcov-11 \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-dump-11 \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-tool-11
update-alternatives --auto gcc
update-alternatives --install /usr/bin/cpp cpp /usr/bin/cpp-11 20
update-alternatives --auto cpp
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
if [[ ${VERSION_ID} =~ ^20\. ]] ; then
cat << EOF > /etc/apt/sources.list.d/llvm.list
deb http://apt.llvm.org/focal/ llvm-toolchain-focal main
deb-src http://apt.llvm.org/focal/ llvm-toolchain-focal main
deb http://apt.llvm.org/focal/ llvm-toolchain-focal-13 main
deb-src http://apt.llvm.org/focal/ llvm-toolchain-focal-13 main
deb http://apt.llvm.org/focal/ llvm-toolchain-focal-14 main
deb-src http://apt.llvm.org/focal/ llvm-toolchain-focal-14 main
EOF
apt-get -y install binutils clang-12
fi
apt-get -y update
if [[ ${VERSION_ID} =~ ^20\. ]] ; then
for v in 12 14; do
apt-get -y install \
clang-$v libclang-common-$v-dev libclang-$v-dev libllvm$v llvm-$v \
llvm-$v-dev llvm-$v-runtime clang-format-$v python3-clang-$v \
lld-$v libfuzzer-$v-dev libc++-$v-dev python-is-python3
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-$v 40 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-$v \
--slave /usr/bin/llvm-profdata llvm-profdata /usr/bin/llvm-profdata-$v \
--slave /usr/bin/asan-symbolize asan-symbolize /usr/bin/asan_symbolize-$v \
--slave /usr/bin/llvm-symbolizer llvm-symbolizer /usr/bin/llvm-symbolizer-$v \
--slave /usr/bin/clang-format clang-format /usr/bin/clang-format-$v \
--slave /usr/bin/llvm-ar llvm-ar /usr/bin/llvm-ar-$v \
--slave /usr/bin/llvm-cov llvm-cov /usr/bin/llvm-cov-$v \
--slave /usr/bin/llvm-nm llvm-nm /usr/bin/llvm-nm-$v
done
fi
pip install "conan<2" && \
conan profile new default --detect && \
conan profile update settings.compiler.cppstd=20 default && \
conan profile update settings.compiler.libcxx=libstdc++11 default
apt-get -y autoremove


@@ -13,12 +13,15 @@ then
git clean -ix
fi
# Ensure all sorting is ASCII-order consistently across platforms.
export LANG=C
rm -rfv results
mkdir results
includes="$( pwd )/results/rawincludes.txt"
pushd ../..
echo Raw includes:
grep -r '#include.*/.*\.h' src/ripple/ src/test/ | \
grep -r '#include.*/.*\.h' include src | \
grep -v boost | tee ${includes}
popd
pushd results


@@ -1,51 +1,45 @@
Loop: ripple.app ripple.core
ripple.app > ripple.core
Loop: ripple.app ripple.ledger
ripple.app > ripple.ledger
Loop: ripple.app ripple.net
ripple.app > ripple.net
Loop: ripple.app ripple.nodestore
ripple.app > ripple.nodestore
Loop: ripple.app ripple.overlay
ripple.overlay ~= ripple.app
Loop: ripple.app ripple.peerfinder
ripple.app > ripple.peerfinder
Loop: ripple.app ripple.rpc
ripple.rpc > ripple.app
Loop: ripple.app ripple.shamap
ripple.app > ripple.shamap
Loop: ripple.basics ripple.core
ripple.core > ripple.basics
Loop: ripple.basics ripple.json
ripple.json ~= ripple.basics
Loop: ripple.basics ripple.protocol
ripple.protocol > ripple.basics
Loop: ripple.core ripple.net
ripple.net > ripple.core
Loop: ripple.net ripple.rpc
ripple.rpc > ripple.net
Loop: ripple.nodestore ripple.overlay
ripple.overlay ~= ripple.nodestore
Loop: ripple.overlay ripple.rpc
ripple.rpc ~= ripple.overlay
Loop: test.jtx test.toplevel
test.toplevel > test.jtx
Loop: test.jtx test.unit_test
test.unit_test == test.jtx
Loop: xrpl.basics xrpl.json
xrpl.json == xrpl.basics
Loop: xrpld.app xrpld.core
xrpld.app > xrpld.core
Loop: xrpld.app xrpld.ledger
xrpld.app > xrpld.ledger
Loop: xrpld.app xrpld.net
xrpld.app > xrpld.net
Loop: xrpld.app xrpld.overlay
xrpld.overlay == xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.app > xrpld.peerfinder
Loop: xrpld.app xrpld.rpc
xrpld.rpc > xrpld.app
Loop: xrpld.app xrpld.shamap
xrpld.app > xrpld.shamap
Loop: xrpld.core xrpld.net
xrpld.net > xrpld.core
Loop: xrpld.core xrpld.perflog
xrpld.perflog == xrpld.core
Loop: xrpld.net xrpld.rpc
xrpld.rpc ~= xrpld.net
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay
Loop: xrpld.perflog xrpld.rpc
xrpld.rpc ~= xrpld.perflog


@@ -1,228 +1,195 @@
ripple.app > ripple.basics
ripple.app > ripple.beast
ripple.app > ripple.conditions
ripple.app > ripple.consensus
ripple.app > ripple.crypto
ripple.app > ripple.json
ripple.app > ripple.protocol
ripple.app > ripple.resource
ripple.app > test.unit_test
ripple.basics > ripple.beast
ripple.conditions > ripple.basics
ripple.conditions > ripple.protocol
ripple.consensus > ripple.basics
ripple.consensus > ripple.beast
ripple.consensus > ripple.json
ripple.consensus > ripple.protocol
ripple.core > ripple.beast
ripple.core > ripple.json
ripple.core > ripple.protocol
ripple.crypto > ripple.basics
ripple.json > ripple.beast
ripple.ledger > ripple.basics
ripple.ledger > ripple.beast
ripple.ledger > ripple.core
ripple.ledger > ripple.json
ripple.ledger > ripple.protocol
ripple.net > ripple.basics
ripple.net > ripple.beast
ripple.net > ripple.json
ripple.net > ripple.protocol
ripple.net > ripple.resource
ripple.nodestore > ripple.basics
ripple.nodestore > ripple.beast
ripple.nodestore > ripple.core
ripple.nodestore > ripple.json
ripple.nodestore > ripple.protocol
ripple.nodestore > ripple.unity
ripple.overlay > ripple.basics
ripple.overlay > ripple.beast
ripple.overlay > ripple.core
ripple.overlay > ripple.json
ripple.overlay > ripple.peerfinder
ripple.overlay > ripple.protocol
ripple.overlay > ripple.resource
ripple.overlay > ripple.server
ripple.peerfinder > ripple.basics
ripple.peerfinder > ripple.beast
ripple.peerfinder > ripple.core
ripple.peerfinder > ripple.protocol
ripple.perflog > ripple.basics
ripple.perflog > ripple.beast
ripple.perflog > ripple.core
ripple.perflog > ripple.json
ripple.perflog > ripple.nodestore
ripple.perflog > ripple.protocol
ripple.perflog > ripple.rpc
ripple.protocol > ripple.beast
ripple.protocol > ripple.crypto
ripple.protocol > ripple.json
ripple.resource > ripple.basics
ripple.resource > ripple.beast
ripple.resource > ripple.json
ripple.resource > ripple.protocol
ripple.rpc > ripple.basics
ripple.rpc > ripple.beast
ripple.rpc > ripple.core
ripple.rpc > ripple.crypto
ripple.rpc > ripple.json
ripple.rpc > ripple.ledger
ripple.rpc > ripple.nodestore
ripple.rpc > ripple.protocol
ripple.rpc > ripple.resource
ripple.rpc > ripple.server
ripple.rpc > ripple.shamap
ripple.server > ripple.basics
ripple.server > ripple.beast
ripple.server > ripple.crypto
ripple.server > ripple.json
ripple.server > ripple.protocol
ripple.shamap > ripple.basics
ripple.shamap > ripple.beast
ripple.shamap > ripple.crypto
ripple.shamap > ripple.nodestore
ripple.shamap > ripple.protocol
test.app > ripple.app
test.app > ripple.basics
test.app > ripple.beast
test.app > ripple.core
test.app > ripple.json
test.app > ripple.ledger
test.app > ripple.overlay
test.app > ripple.protocol
test.app > ripple.resource
test.app > ripple.rpc
libxrpl.basics > xrpl.basics
libxrpl.basics > xrpl.protocol
libxrpl.crypto > xrpl.basics
libxrpl.json > xrpl.basics
libxrpl.json > xrpl.json
libxrpl.protocol > xrpl.basics
libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json
libxrpl.server > xrpl.protocol
libxrpl.server > xrpl.server
test.app > test.jtx
test.app > test.rpc
test.app > test.toplevel
test.app > test.unit_test
test.basics > ripple.basics
test.basics > ripple.beast
test.basics > ripple.json
test.basics > ripple.protocol
test.basics > ripple.rpc
test.app > xrpl.basics
test.app > xrpld.app
test.app > xrpld.core
test.app > xrpld.ledger
test.app > xrpld.overlay
test.app > xrpld.rpc
test.app > xrpl.json
test.app > xrpl.protocol
test.app > xrpl.resource
test.basics > test.jtx
test.basics > test.unit_test
test.beast > ripple.basics
test.beast > ripple.beast
test.conditions > ripple.basics
test.conditions > ripple.beast
test.conditions > ripple.conditions
test.consensus > ripple.app
test.consensus > ripple.basics
test.consensus > ripple.beast
test.consensus > ripple.consensus
test.consensus > ripple.ledger
test.basics > xrpl.basics
test.basics > xrpld.perflog
test.basics > xrpld.rpc
test.basics > xrpl.json
test.basics > xrpl.protocol
test.beast > xrpl.basics
test.conditions > xrpl.basics
test.conditions > xrpld.conditions
test.consensus > test.csf
test.consensus > test.toplevel
test.consensus > test.unit_test
test.core > ripple.basics
test.core > ripple.beast
test.core > ripple.core
test.core > ripple.crypto
test.core > ripple.json
test.core > ripple.server
test.consensus > xrpl.basics
test.consensus > xrpld.app
test.consensus > xrpld.consensus
test.consensus > xrpld.ledger
test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
test.csf > ripple.basics
test.csf > ripple.beast
test.csf > ripple.consensus
test.csf > ripple.json
test.csf > ripple.protocol
test.json > ripple.beast
test.json > ripple.json
test.core > xrpl.basics
test.core > xrpld.core
test.core > xrpld.perflog
test.core > xrpl.json
test.core > xrpl.server
test.csf > xrpl.basics
test.csf > xrpld.consensus
test.csf > xrpl.json
test.csf > xrpl.protocol
test.json > test.jtx
test.jtx > ripple.app
test.jtx > ripple.basics
test.jtx > ripple.beast
test.jtx > ripple.consensus
test.jtx > ripple.core
test.jtx > ripple.json
test.jtx > ripple.ledger
test.jtx > ripple.net
test.jtx > ripple.protocol
test.jtx > ripple.server
test.ledger > ripple.app
test.ledger > ripple.basics
test.ledger > ripple.beast
test.ledger > ripple.core
test.ledger > ripple.ledger
test.ledger > ripple.protocol
test.json > xrpl.json
test.jtx > xrpl.basics
test.jtx > xrpld.app
test.jtx > xrpld.consensus
test.jtx > xrpld.core
test.jtx > xrpld.ledger
test.jtx > xrpld.net
test.jtx > xrpld.rpc
test.jtx > xrpl.json
test.jtx > xrpl.protocol
test.jtx > xrpl.resource
test.jtx > xrpl.server
test.ledger > test.jtx
test.ledger > test.toplevel
test.net > ripple.net
test.net > test.jtx
test.net > test.toplevel
test.net > test.unit_test
test.nodestore > ripple.app
test.nodestore > ripple.basics
test.nodestore > ripple.beast
test.nodestore > ripple.core
test.nodestore > ripple.nodestore
test.nodestore > ripple.protocol
test.nodestore > ripple.unity
test.ledger > xrpl.basics
test.ledger > xrpld.app
test.ledger > xrpld.core
test.ledger > xrpld.ledger
test.ledger > xrpl.protocol
test.nodestore > test.jtx
test.nodestore > test.toplevel
test.nodestore > test.unit_test
test.overlay > ripple.app
test.overlay > ripple.basics
test.overlay > ripple.beast
test.overlay > ripple.core
test.overlay > ripple.overlay
test.overlay > ripple.peerfinder
test.overlay > ripple.protocol
test.overlay > ripple.shamap
test.nodestore > xrpl.basics
test.nodestore > xrpld.core
test.nodestore > xrpld.nodestore
test.nodestore > xrpld.unity
test.overlay > test.jtx
test.overlay > test.unit_test
test.peerfinder > ripple.basics
test.peerfinder > ripple.beast
test.peerfinder > ripple.core
test.peerfinder > ripple.peerfinder
test.peerfinder > ripple.protocol
test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpld.shamap
test.overlay > xrpl.protocol
test.peerfinder > test.beast
test.peerfinder > test.unit_test
test.protocol > ripple.basics
test.protocol > ripple.beast
test.protocol > ripple.crypto
test.protocol > ripple.json
test.protocol > ripple.protocol
test.peerfinder > xrpl.basics
test.peerfinder > xrpld.core
test.peerfinder > xrpld.peerfinder
test.peerfinder > xrpl.protocol
test.protocol > test.toplevel
test.resource > ripple.basics
test.resource > ripple.beast
test.resource > ripple.resource
test.protocol > xrpl.basics
test.protocol > xrpl.json
test.protocol > xrpl.protocol
test.resource > test.unit_test
test.rpc > ripple.app
test.rpc > ripple.basics
test.rpc > ripple.beast
test.rpc > ripple.core
test.rpc > ripple.json
test.rpc > ripple.net
test.rpc > ripple.nodestore
test.rpc > ripple.overlay
test.rpc > ripple.protocol
test.rpc > ripple.resource
test.rpc > ripple.rpc
test.resource > xrpl.basics
test.resource > xrpl.resource
test.rpc > test.jtx
test.rpc > test.nodestore
test.rpc > test.toplevel
test.server > ripple.app
test.server > ripple.basics
test.server > ripple.beast
test.server > ripple.core
test.server > ripple.json
test.server > ripple.rpc
test.server > ripple.server
test.rpc > xrpl.basics
test.rpc > xrpld.app
test.rpc > xrpld.core
test.rpc > xrpld.net
test.rpc > xrpld.overlay
test.rpc > xrpld.rpc
test.rpc > xrpl.json
test.rpc > xrpl.protocol
test.rpc > xrpl.resource
test.server > test.jtx
test.server > test.toplevel
test.server > test.unit_test
test.shamap > ripple.basics
test.shamap > ripple.beast
test.shamap > ripple.nodestore
test.shamap > ripple.protocol
test.shamap > ripple.shamap
test.server > xrpl.basics
test.server > xrpld.app
test.server > xrpld.core
test.server > xrpld.rpc
test.server > xrpl.json
test.server > xrpl.server
test.shamap > test.unit_test
test.toplevel > ripple.json
test.shamap > xrpl.basics
test.shamap > xrpld.nodestore
test.shamap > xrpld.shamap
test.shamap > xrpl.protocol
test.toplevel > test.csf
test.unit_test > ripple.basics
test.unit_test > ripple.beast
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json
xrpl.resource > xrpl.basics
xrpl.resource > xrpl.json
xrpl.resource > xrpl.protocol
xrpl.server > xrpl.basics
xrpl.server > xrpl.json
xrpl.server > xrpl.protocol
xrpld.app > test.unit_test
xrpld.app > xrpl.basics
xrpld.app > xrpld.conditions
xrpld.app > xrpld.consensus
xrpld.app > xrpld.nodestore
xrpld.app > xrpld.perflog
xrpld.app > xrpl.json
xrpld.app > xrpl.protocol
xrpld.app > xrpl.resource
xrpld.conditions > xrpl.basics
xrpld.conditions > xrpl.protocol
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.protocol
xrpld.core > xrpl.basics
xrpld.core > xrpl.json
xrpld.core > xrpl.protocol
xrpld.ledger > xrpl.basics
xrpld.ledger > xrpld.core
xrpld.ledger > xrpl.json
xrpld.ledger > xrpl.protocol
xrpld.net > xrpl.basics
xrpld.net > xrpl.json
xrpld.net > xrpl.protocol
xrpld.net > xrpl.resource
xrpld.nodestore > xrpl.basics
xrpld.nodestore > xrpld.core
xrpld.nodestore > xrpld.unity
xrpld.nodestore > xrpl.json
xrpld.nodestore > xrpl.protocol
xrpld.overlay > xrpl.basics
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpld.perflog
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.resource
xrpld.overlay > xrpl.server
xrpld.peerfinder > xrpl.basics
xrpld.peerfinder > xrpld.core
xrpld.peerfinder > xrpl.protocol
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.json
xrpld.perflog > xrpl.protocol
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.ledger
xrpld.rpc > xrpld.nodestore
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.protocol
xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.shamap > xrpl.basics
xrpld.shamap > xrpld.nodestore
xrpld.shamap > xrpl.protocol


@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)


@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)


@@ -9,9 +9,9 @@ endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(rippled)
project(xrpl)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
@@ -19,7 +19,7 @@ set(CMAKE_CXX_STANDARD_REQUIRED ON)
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} describe --always --abbrev=40
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git describe --always --abbrev=40
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if(gch)
set(GIT_COMMIT_HASH "${gch}")
@@ -38,7 +38,6 @@ include (CheckCXXCompilerFlag)
include (FetchContent)
include (ExternalProject)
include (CMakeFuncs) # must come *after* ExternalProject b/c it overrides one function in EP
include (ProcessorCount)
if (target)
message (FATAL_ERROR "The target option has been removed - use native cmake options to control build")
endif ()
@@ -46,7 +45,6 @@ endif ()
include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
include(RippledRelease)
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)
@@ -71,8 +69,11 @@ find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
add_subdirectory(src/secp256k1)
add_subdirectory(src/ed25519-donna)
set(SECP256K1_INSTALL TRUE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
find_package(gRPC REQUIRED)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
@@ -80,7 +81,6 @@ find_package(lz4 REQUIRED)
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(Snappy REQUIRED)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
@@ -93,36 +93,33 @@ endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
include(deps/Protobuf)
include(deps/gRPC)
find_package(xxHash REQUIRED)
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
LibArchive::LibArchive
lz4::lz4
nudb::core
OpenSSL::Crypto
OpenSSL::SSL
Ripple::grpc_pbufs
Ripple::pbufs
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
if(reporting)
find_package(cassandra-cpp-driver REQUIRED)
find_package(PostgreSQL REQUIRED)
target_link_libraries(ripple_libs INTERFACE
cassandra-cpp-driver::cassandra-cpp-driver
PostgreSQL::PostgreSQL
)
# Work around changes to Conan recipe for now.
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
if(coverage)
include(RippledCov)
endif()
###
set(PROJECT_EXPORT_SET RippleExports)
include(RippledCore)
include(RippledInstall)
include(RippledCov)
include(RippledMultiConfig)
include(RippledValidatorKeys)


@@ -1,54 +1,51 @@
The XRP Ledger has many and diverse stakeholders, and everyone deserves
a chance to contribute meaningful changes to the code that runs the XRPL.
a chance to contribute meaningful changes to the code that runs the
XRPL.
# Contributing
We assume you are familiar with the general practice of [making contributions
on GitHub][1].
This file includes only special instructions specific to this project.
We assume you are familiar with the general practice of [making
contributions on GitHub][1]. This file includes only special
instructions specific to this project.
## Before you start
All of your contributions must be developed in your personal
In general, contributions should be developed in your personal
[fork](https://github.com/XRPLF/rippled/fork).
No personal branches may ever be pushed to the [main project][rippled].
These are the only branches that may ever exist in the main project:
The following branches exist in the main project repository:
- `develop`: The latest set of unreleased features, and the most common
starting point for contributions.
- `release`: The latest release candidate.
- `release`: The latest beta release or release candidate.
- `master`: The latest stable release.
- `gh-pages`: The documentation for this project, built by Doxygen.
The tip of each branch must be signed.
In order for GitHub to sign a squashed commit that it builds from your pull
request,
all of your commits must be signed,
and GitHub must know your verifying key.
Please walk through the excellent documentation from GitHub to set
up [signature verification][signing].
The tip of each branch must be signed. In order for GitHub to sign a
squashed commit that it builds from your pull request, GitHub must know
your verifying key. Please set up [signature verification][signing].
[rippled]: https://github.com/XRPLF/rippled
[signing]: https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification
[signing]:
https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification
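As a quick illustration, commit signing can be enabled locally along these lines; the key ID below is a placeholder, and the full setup is covered by the GitHub documentation linked above.
```
# Illustrative sketch only; replace the key ID with your own signing key.
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true    # sign every commit by default
git commit -S -m "fix: Example signed commit"
git log --show-signature -1                # check the signature locally
```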
## Major contributions
If your contribution is a major feature or breaking change,
then you must first write an XRP Ledger Standard (XLS) describing it.
Go to [XRPL-Standards](https://github.com/XRPLF/XRPL-Standards/discussions),
choose the next available standard number, and
open a discussion with an appropriate title to propose your draft standard.
If your contribution is a major feature or breaking change, then you
must first write an XRP Ledger Standard (XLS) describing it. Go to
[XRPL-Standards](https://github.com/XRPLF/XRPL-Standards/discussions),
choose the next available standard number, and open a discussion with an
appropriate title to propose your draft standard.
When you submit a pull request, please link the corresponding XLS in the
description.
An XLS still in draft status is considered a work-in-progress and open for
discussion.
Please do not submit a pull request before allowing due time for questions,
suggestions, and changes to the XLS draft.
It is the responsibility of the XLS author to update the draft to match the
final implementation when its corresponding pull request is merged.
description. An XLS still in draft status is considered a
work-in-progress and open for discussion. Please allow time for
questions, suggestions, and changes to the XLS draft. It is the
responsibility of the XLS author to update the draft to match the final
implementation when its corresponding pull request is merged, unless the
author delegates that responsibility to others.
## Before making a pull request
@@ -76,32 +73,241 @@ The source must be formatted according to the style guide below.
Header includes must be [levelized](./Builds/levelization).
Changes should be usually squashed down into a single commit.
Some larger or more complicated change sets make more sense,
and are easier to review if organized into multiple logical commits.
Either way, all commits should fit the following criteria:
* Changes should be presented in a single commit or a logical
sequence of commits.
Specifically, chronological commits that simply
reflect the history of how the author implemented
the change, "warts and all", are not useful to
reviewers.
* Every commit should have a [good message](#good-commit-messages)
to explain a specific aspect of the change.
* Every commit should be signed.
* Every commit should be well-formed (builds successfully,
unit tests passing), as this helps to resolve merge
conflicts, and makes it easier to use `git bisect`
to find bugs.
### Good commit messages
Refer to
["How to Write a Git Commit Message"](https://cbea.ms/git-commit/)
for general rules on writing a good commit message.
In addition to those guidelines, please add one of the following
prefixes to the subject line if appropriate.
* `fix:` - The primary purpose is to fix an existing bug.
* `perf:` - The primary purpose is performance improvements.
* `refactor:` - The changes refactor code without affecting
functionality.
* `test:` - The changes _only_ affect unit tests.
* `docs:` - The changes _only_ affect documentation. This can
include code comments in addition to `.md` files like this one.
* `build:` - The changes _only_ affect the build process,
including CMake and/or Conan settings.
* `chore:` - Other tasks that don't affect the binary, but don't fit
any of the other cases. e.g. formatting, git settings, updating
Github Actions jobs.
Whenever possible, when updating commits after the PR is open, please
add the PR number to the end of the subject line. e.g. `test: Add
unit tests for Feature X (#1234)`.
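Putting these rules together, a complete commit message might look like the following (a purely hypothetical example):
```
fix: Reject malformed amount fields in Payment (#1234)

(Hypothetical example.) The subject line uses one of the prefixes above
and, once the PR is open, ends with the PR number. The body explains
why the change is needed and anything reviewers should know.
```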
## Pull requests
Pull requests must target the `develop` branch.[^1]
In general, pull requests use `develop` as the base branch.
(Hotfixes are an exception.)
[^1]: There are exceptions to this policy for hotfixes, but no one consulting
this document will be in that situation.
If your changes are not quite ready, but you want to make them easily available
for preliminary examination or review, you can create a "Draft" pull request.
While a pull request is marked as a "Draft", you can rebase or reorganize the
commits in the pull request as desired.
Changes to pull requests must be added as new commits.
You may **never force push a branch in a pull request** (e.g. after a rebase).
Github pull requests are created as "Ready" by default, or you can mark
a "Draft" pull request as "Ready".
Once a pull request is marked as "Ready",
any changes must be added as new commits. Do not
force-push to a branch in a pull request under review.
(This includes rebasing your branch onto the updated base branch.
Use a merge operation instead, or hit the "Update branch" button
at the bottom of the Github PR page.)
This preserves the ability for reviewers to filter changes since their last
review.
A pull request must obtain **approvals from at least two reviewers** before it
can be considered for merge by a Maintainer.
A pull request must obtain **approvals from at least two reviewers**
before it can be considered for merge by a Maintainer.
Maintainers retain discretion to require more approvals if they feel the
credibility of the existing approvals is insufficient.
Pull requests must be merged by [squash-and-merge][2]
to preserve a linear history for the `develop` branch.
### When and how to merge pull requests
#### "Passed"
A pull request should only have the "Passed" label added when it
meets a few criteria:
1. It must have two approving reviews [as described
above](#pull-requests). (Exception: PRs that are deemed "trivial"
only need one approval.)
2. All CI checks must be complete and passed. (One-off failures may
be acceptable if they are related to a known issue.)
3. The PR must have a [good commit message](#good-commit-messages).
* If the PR started with a good commit message, and it doesn't
need to be updated, the author can indicate that in a comment.
* Any contributor, preferably the author, can leave a comment
suggesting a commit message.
* If the author squashes and rebases the code in preparation for
merge, they should ensure the commit message(s) are updated as
well.
4. The PR branch must be up to date with the base branch (usually
`develop`). This is usually accomplished by merging the base branch
into the feature branch, but if the other criteria are met, the
changes can be squashed and rebased on top of the base branch.
5. Finally, and most importantly, the author of the PR must
positively indicate that the PR is ready to merge. That can be
accomplished by adding the "Passed" label if their role allows,
or by leaving a comment to the effect that the PR is ready to
merge.
Once the "Passed" label is added, a maintainer may merge the PR at
any time, so don't use it lightly.
#### Instructions for maintainers
The maintainer should double-check that the PR has met all the
necessary criteria, and can request additional information from the
owner, or additional reviews, and can always feel free to remove the
"Passed" label if appropriate. The maintainer has final say on
whether a PR gets merged, and is encouraged to communicate any
issues or concerns to other maintainers.
##### Most pull requests: "Squash and merge"
Most pull requests don't need special handling, and can simply be
merged using the "Squash and merge" button on the Github UI. Update
the suggested commit message if necessary.
##### Slightly more complicated pull requests
Some pull requests need to be pushed to `develop` as more than one
commit. There are multiple ways to accomplish this. If the author
describes a process, and it is reasonable, follow it. Otherwise, do
a fast-forward-only merge (`--ff-only`) on the command line and push.
Either way, check that:
* The commits are based on the current tip of `develop`.
* The commits are clean: No merge commits (except when reverse
merging), no "[FOLD]" or "fixup!" messages.
* All commits are signed. If the commits are not signed by the author, use
`git commit --amend -S` to sign them yourself.
* At least one (but preferably all) of the commits has the PR number
in the commit message.
**Never use the "Create a merge commit" or "Rebase and merge"
functions!**
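For reference, a minimal command-line sketch of such a fast-forward merge is shown below; the remote name `upstream` and the branch name `pr-branch` are placeholders, not project conventions.
```
# Sketch only; "upstream" and "pr-branch" are placeholder names.
git fetch upstream
git checkout -b upstream--develop -t upstream/develop || git checkout upstream--develop
git reset --hard upstream/develop
# Confirm the PR commits sit directly on top of develop and are signed.
git merge-base --is-ancestor upstream/develop pr-branch && echo "based on develop"
git log --show-signature upstream/develop..pr-branch
# Fast-forward only; this fails instead of creating a merge commit.
git merge --ff-only pr-branch
git push upstream HEAD:develop
```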
##### Releases, release candidates, and betas
All releases, including release candidates and betas, are handled
differently from typical PRs. Most importantly, never use
the Github UI to merge a release.
1. There are two possible conditions that the `develop` branch will
be in when preparing a release.
1. Ready or almost ready to go: There may be one or two PRs that
need to be merged, but otherwise, the only change needed is to
update the version number in `BuildInfo.cpp`. In this case,
merge those PRs as appropriate, updating the second one, and
waiting for CI to finish in between. Then update
`BuildInfo.cpp`.
2. Several pending PRs: In this case, do not use the Github UI,
because the delays waiting for CI in between each merge will be
unnecessarily onerous. Instead, create a working branch (e.g.
`develop-next`) based off of `develop`. Squash the changes
from each PR onto the branch, one commit each (unless
more are needed), being sure to sign each commit and update
the commit message to include the PR number. You may be able
to use a fast-forward merge for the first PR. The workflow may
look something like:
```
git fetch upstream
git checkout upstream/develop
git checkout -b develop-next
# Use -S on the ff-only merge if prbranch1 isn't signed.
# Or do another branch first.
git merge --ff-only user1/prbranch1
git merge --squash user2/prbranch2
git commit -S
git merge --squash user3/prbranch3
git commit -S
[...]
git push --set-upstream origin develop-next
```
2. Create the Pull Request with `release` as the base branch. If any
of the included PRs are still open,
[use closing keywords](https://docs.github.com/articles/closing-issues-using-keywords)
in the description to ensure they are closed when the code is
released. e.g. "Closes #1234"
3. Instead of the default template, reuse and update the message from
the previous release. Include the following verbiage somewhere in
the description:
```
The base branch is release. All releases (including betas) go in
release. This PR will be merged with --ff-only (not squashed or
rebased, and not using the GitHub UI) to both release and develop.
```
4. Sign-offs for the three platforms usually occur offline, but at
least one approval will be needed on the PR.
5. Once everything is ready to go, open a terminal, and do the
fast-forward merges manually. Do not push any branches until you
verify that all of them update correctly.
```
git fetch upstream
git checkout -b upstream--develop -t upstream/develop || git checkout upstream--develop
git reset --hard upstream/develop
# develop-next must be signed already!
git merge --ff-only origin/develop-next
git checkout -b upstream--release -t upstream/release || git checkout upstream--release
git reset --hard upstream/release
git merge --ff-only origin/develop-next
# Only do these 3 steps if pushing a release. No betas or RCs
git checkout -b upstream--master -t upstream/master || git checkout upstream--master
git reset --hard upstream/master
git merge --ff-only origin/develop-next
# Check that all of the branches are updated
git log -1 --oneline
# The output should look like:
# 02ec8b7962 (HEAD -> upstream--master, origin/develop-next, upstream--release, upstream--develop, develop-next) Set version to 2.2.0-rc1
# Note that all of the upstream--develop/release/master are on this commit.
# (Master will be missing for betas, etc.)
# Just to be safe, do a dry run first:
git push --dry-run upstream-push HEAD:develop
git push --dry-run upstream-push HEAD:release
# git push --dry-run upstream-push HEAD:master
# Now push
git push upstream-push HEAD:develop
git push upstream-push HEAD:release
# git push upstream-push HEAD:master
# Don't forget to tag the release, too.
git tag <version number>
git push upstream-push <version number>
```
6. Finally
[create a new release on Github](https://github.com/XRPLF/rippled/releases).
# Style guide
This is a non-exhaustive list of recommended style guidelines.
These are not always strictly enforced and serve as a way to keep the codebase coherent rather than a set of _thou shalt not_ commandments.
This is a non-exhaustive list of recommended style guidelines. These are
not always strictly enforced and serve as a way to keep the codebase
coherent rather than a set of _thou shalt not_ commandments.
## Formatting
@@ -121,6 +327,40 @@ this:
You can format individual files in place by running `clang-format -i <file>...`
from any directory within this project.
There is a Continuous Integration job that runs clang-format on pull requests. If the code doesn't comply, a patch file that corrects auto-fixable formatting issues is generated.
To download the patch file:
1. Next to `clang-format / check (pull_request) Failing after #s` -> click **Details** to open the details page.
2. Left menu -> click **Summary**
3. Scroll down to near the bottom-right under `Artifacts` -> click **clang-format.patch**
4. Download the zip file and extract it to your local git repository. Run `git apply [patch-file-name]`.
5. Commit and push.
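Concretely, steps 4 and 5 might look like this locally; the artifact and patch file names here are assumptions and may differ.
```
# Assumed file names; use whatever the downloaded artifact is actually called.
unzip clang-format.patch.zip
git apply clang-format.patch
git commit -am "chore: Apply clang-format patch"
git push
```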
You can install a pre-commit hook to automatically run `clang-format` before every commit:
```
pip3 install pre-commit
pre-commit install
```
## Unit Tests
To execute all unit tests:
```
rippled --unittest --unittest-jobs=<number of cores>
```
(Note: Using multiple cores on a Mac M1 can cause spurious test failures. The
cause is still under investigation. If you observe this problem, try specifying fewer jobs.)
To run a specific set of test suites:
```
rippled --unittest TestSuiteName
```
Note: In this example, all tests whose names begin with `TestSuiteName` will
run, so if both `TestSuiteName1` and `TestSuiteName2` exist, both will run.
However, if a unit test titled exactly `TestSuiteName` exists, the exact match
takes precedence and only that test will be executed.
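For example, to run only the suites matching that prefix while still using several cores (the suite name and job count are illustrative):
```
rippled --unittest TestSuiteName --unittest-jobs=4
```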
## Avoid
@@ -128,20 +368,27 @@ from any directory within this project.
2. Proliferation of new files and classes.
3. Complex inheritance and complex OOP patterns.
4. Unmanaged memory allocation and raw pointers.
5. Macros and non-trivial templates (unless they add significant value.)
6. Lambda patterns (unless these add significant value.)
7. CPU or architecture-specific code unless there is a good reason to include it, and where it is used guard it with macros and provide explanatory comments.
5. Macros and non-trivial templates (unless they add significant value).
6. Lambda patterns (unless these add significant value).
7. CPU or architecture-specific code unless there is a good reason to
include it, and where it is used, guard it with macros and provide
explanatory comments.
8. Importing new libraries unless there is a very good reason to do so.
## Seek to
9. Extend functionality of existing code rather than creating new code.
10. Prefer readability over terseness where important logic is concerned.
11. Inline functions that are not used or are not likely to be used elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables, structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer would need to understand what your code does.
10. Prefer readability over terseness where important logic is
concerned.
11. Inline functions that are not used or are not likely to be used
elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables,
structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for
function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer
would need to understand what your code does.
# Maintainers
@@ -168,16 +415,39 @@ existing maintainer without a vote.
## Current Maintainers
Maintainers are users with admin access to the repo. Maintainers do not typically approve or deny pull requests.
* [intelliot](https://github.com/intelliot) (Ripple)
* [JoelKatz](https://github.com/JoelKatz) (Ripple)
* [manojsdoshi](https://github.com/manojsdoshi) (Ripple)
* [n3tc4t](https://github.com/n3tc4t) (XRPL Labs)
* [Nik Bougalis](https://github.com/nbougalis)
* [nixer89](https://github.com/nixer89) (XRP Ledger Foundation)
* [RichardAH](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [seelabs](https://github.com/seelabs) (Ripple)
* [Silkjaer](https://github.com/Silkjaer) (XRP Ledger Foundation)
* [WietseWind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
## Current Code Reviewers
Code Reviewers are developers who have the ability to review and approve source code changes.
* [HowardHinnant](https://github.com/HowardHinnant) (Ripple)
* [scottschurr](https://github.com/scottschurr) (Ripple)
* [seelabs](https://github.com/seelabs) (Ripple)
* [Ed Hennis](https://github.com/ximinez) (Ripple)
* [mvadari](https://github.com/mvadari) (Ripple)
* [thejohnfreeman](https://github.com/thejohnfreeman) (Ripple)
* [Bronek](https://github.com/Bronek) (Ripple)
* [manojsdoshi](https://github.com/manojsdoshi) (Ripple)
* [godexsoft](https://github.com/godexsoft) (Ripple)
* [mDuo13](https://github.com/mDuo13) (Ripple)
* [ckniffen](https://github.com/ckniffen) (Ripple)
* [arihantkothari](https://github.com/arihantkothari) (Ripple)
* [pwang200](https://github.com/pwang200) (Ripple)
* [sophiax851](https://github.com/sophiax851) (Ripple)
* [shawnxie999](https://github.com/shawnxie999) (Ripple)
* [gregtatcam](https://github.com/gregtatcam) (Ripple)
* [mtrippled](https://github.com/mtrippled) (Ripple)
* [ckeshava](https://github.com/ckeshava) (Ripple)
* [nbougalis](https://github.com/nbougalis) None
* [RichardAH](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [dangell7](https://github.com/dangell7) (XRPL Labs)
[1]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects


@@ -3,14 +3,17 @@
The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.
## XRP
[XRP](https://xrpl.org/xrp.html) is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP. Its creators gifted 80 billion XRP to a company, now called [Ripple](https://ripple.com/), to develop the XRP Ledger and its ecosystem. Ripple uses XRP to help build the Internet of Value, ushering in a world in which money moves as fast and efficiently as information does today.
[XRP](https://xrpl.org/xrp.html) is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP.
## rippled
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `rippled` server software is written primarily in C++ and runs on a variety of platforms. The `rippled` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `rippled` server software is written primarily in C++ and runs on a variety of platforms. The `rippled` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).
If you are interested in running an **API Server** (including a **Full History Server**), take a look at [Clio](https://github.com/XRPLF/clio). (rippled Reporting Mode has been replaced by Clio.)
### Build from Source
* [Read the build instructions in `BUILD.md`](BUILD.md)
* If you encounter any issues, please [open an issue](https://github.com/XRPLF/rippled/issues)
## Key Features of the XRP Ledger
@@ -53,10 +56,14 @@ Some of the directories under `src` are external repositories included using
git-subtree. See those directories' README files for more details.
## See Also
## Additional Documentation
* [XRP Ledger Dev Portal](https://xrpl.org/)
* [Setup and Installation](https://xrpl.org/install-rippled.html)
* [Source Documentation (Doxygen)](https://xrplf.github.io/rippled/)
## See Also
* [Clio API Server for the XRP Ledger](https://github.com/XRPLF/clio)
* [Mailing List for Release Announcements](https://groups.google.com/g/ripple-server)
* [Learn more about the XRP Ledger (YouTube)](https://www.youtube.com/playlist?list=PLJQ55Tj1hIVZtJ_JdTvSum2qMTsedWkNi)

File diff suppressed because it is too large


@@ -37,7 +37,7 @@ Your report should include the following:
- The steps to reproduce the vulnerability;
- Any other relevant details or artifacts, including code, scripts or patches.
In your mail, please describe of the issue or the potential threat; if possible, please include a "repro" (code that can reproduce the issue) or describe the best way to reproduce and replicate the issue. Please make your report as extensive as possible.
In your email, please describe the issue or potential threat. If possible, include a "repro" (code that can reproduce the issue) or describe the best way to reproduce and replicate the issue. Please make your report as detailed and comprehensive as possible.
For more information on responsible disclosure, please read this [Wikipedia article](https://en.wikipedia.org/wiki/Responsible_disclosure).
@@ -60,15 +60,15 @@ While we commit to responding with 24 hours of your initial report with our tria
## Bug Bounty Program
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/ripple/rippled) (and other related projects, like [`ripple-lib`](https://github.com/ripple/ripple-lib)).
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/XRPLF/rippled) (and other related projects, like [`xrpl.js`](https://github.com/XRPLF/xrpl.js), [`xrpl-py`](https://github.com/XRPLF/xrpl-py), [`xrpl4j`](https://github.com/XRPLF/xrpl4j)).
This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, order to qualify for a bounty, the bug must be:
This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled` and `ripple-lib`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy or the operation of the XRP Ledger.
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled`, `xrpl.js`, `xrpl-py`, `xrpl4j`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy, or the operation of the XRP Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other peoples software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromises the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other peoples software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please don't hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for the full bounty, you must have been the first to report each of the partial exploits.


@@ -1,24 +0,0 @@
In this directory are two scripts, `build.sh` and `test.sh` used for building
and testing rippled.
(For now, they assume Bash and Linux. Once I get Windows containers for
testing, I'll try them there, but if Bash is not available, then they will
soon be joined by PowerShell scripts `build.ps` and `test.ps`.)
We don't want these scripts to require arcane invocations that can only be
pieced together from within a CI configuration. We want something that humans
can easily invoke, read, and understand, for when we eventually have to test
and debug them interactively. That means:
(1) They should work with no arguments.
(2) They should document their arguments.
(3) They should expand short arguments into long arguments.
While we want to provide options for common use cases, we don't need to offer
the kitchen sink. We can rightfully expect users with esoteric, complicated
needs to write their own scripts.
To make argument-handling easy for us, the implementers, we can just take all
arguments from environment variables. They have the nice advantage that every
command-line uses named arguments. For the benefit of us and our users, we
document those variables at the top of each script.
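In practice an invocation might look like the following, run from the directory containing the scripts; the variable names come from the scripts below, and the values are only examples.
```
# Examples only; each variable is documented at the top of its script.
GENERATOR=Ninja COMPILER=clang BUILD_TYPE=Release ./build.sh
MANUAL_TESTS=false CONCURRENT_TESTS=4 RIPPLED=build/rippled ./test.sh
```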


@@ -1,31 +0,0 @@
#!/usr/bin/env bash
set -o xtrace
set -o errexit
# The build system. Either 'Unix Makefiles' or 'Ninja'.
GENERATOR=${GENERATOR:-Unix Makefiles}
# The compiler. Either 'gcc' or 'clang'.
COMPILER=${COMPILER:-gcc}
# The build type. Either 'Debug' or 'Release'.
BUILD_TYPE=${BUILD_TYPE:-Debug}
# Additional arguments to CMake.
# We use the `-` substitution here instead of `:-` so that callers can erase
# the default by setting `$CMAKE_ARGS` to the empty string.
CMAKE_ARGS=${CMAKE_ARGS-'-Dwerr=ON'}
# https://gitlab.kitware.com/cmake/cmake/issues/18865
CMAKE_ARGS="-DBoost_NO_BOOST_CMAKE=ON ${CMAKE_ARGS}"
if [[ ${COMPILER} == 'gcc' ]]; then
export CC='gcc'
export CXX='g++'
elif [[ ${COMPILER} == 'clang' ]]; then
export CC='clang'
export CXX='clang++'
fi
mkdir build
cd build
cmake -G "${GENERATOR}" -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_ARGS} ..
cmake --build . -- -j $(nproc)


@@ -1,41 +0,0 @@
#!/usr/bin/env bash
set -o xtrace
set -o errexit
# Set to 'true' to run the known "manual" tests in rippled.
MANUAL_TESTS=${MANUAL_TESTS:-false}
# The maximum number of concurrent tests.
CONCURRENT_TESTS=${CONCURRENT_TESTS:-$(nproc)}
# The path to rippled.
RIPPLED=${RIPPLED:-build/rippled}
# Additional arguments to rippled.
RIPPLED_ARGS=${RIPPLED_ARGS:-}
function join_by { local IFS="$1"; shift; echo "$*"; }
declare -a manual_tests=(
'beast.chrono.abstract_clock'
'beast.unit_test.print'
'ripple.NodeStore.Timing'
'ripple.app.Flow_manual'
'ripple.app.NoRippleCheckLimits'
'ripple.app.PayStrandAllPairs'
'ripple.consensus.ByzantineFailureSim'
'ripple.consensus.DistributedValidators'
'ripple.consensus.ScaleFreeSim'
'ripple.tx.CrossingLimits'
'ripple.tx.FindOversizeCross'
'ripple.tx.Offer_manual'
'ripple.tx.OversizeMeta'
'ripple.tx.PlumpBook'
)
if [[ ${MANUAL_TESTS} == 'true' ]]; then
RIPPLED_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
RIPPLED_ARGS+=" --unittest --quiet --unittest-log"
fi
RIPPLED_ARGS+=" --unittest-jobs ${CONCURRENT_TESTS}"
${RIPPLED} ${RIPPLED_ARGS}


@@ -1,274 +0,0 @@
#!/usr/bin/env bash
set -ex
function version_ge() { test "$(echo "$@" | tr " " "\n" | sort -rV | head -n 1)" == "$1"; }
__dirname=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
echo "using CC: ${CC}"
"${CC}" --version
export CC
COMPNAME=$(basename $CC)
echo "using CXX: ${CXX:-notset}"
if [[ $CXX ]]; then
"${CXX}" --version
export CXX
fi
: ${BUILD_TYPE:=Debug}
echo "BUILD TYPE: ${BUILD_TYPE}"
: ${TARGET:=install}
echo "BUILD TARGET: ${TARGET}"
JOBS=${NUM_PROCESSORS:-2}
if [[ ${TRAVIS:-false} != "true" ]]; then
JOBS=$((JOBS+1))
fi
if [[ ! -z "${CMAKE_EXE:-}" ]] ; then
export PATH="$(dirname ${CMAKE_EXE}):$PATH"
fi
if [ -x /usr/bin/time ] ; then
: ${TIME:="Duration: %E"}
export TIME
time=/usr/bin/time
else
time=
fi
echo "Building rippled"
: ${CMAKE_EXTRA_ARGS:=""}
if [[ ${NINJA_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -G Ninja"
fi
coverage=false
if [[ "${TARGET}" == "coverage_report" ]] ; then
echo "coverage option detected."
coverage=true
fi
cmake --version
CMAKE_VER=$(cmake --version | cut -d " " -f 3 | head -1)
#
# allow explicit setting of the name of the build
# dir, otherwise default to the compiler.build_type
#
: "${BUILD_DIR:=${COMPNAME}.${BUILD_TYPE}}"
BUILDARGS="--target ${TARGET}"
BUILDTOOLARGS=""
if version_ge $CMAKE_VER "3.12.0" ; then
BUILDARGS+=" --parallel"
fi
if [[ ${NINJA_BUILD:-} == false ]]; then
if version_ge $CMAKE_VER "3.12.0" ; then
BUILDARGS+=" ${JOBS}"
else
BUILDTOOLARGS+=" -j ${JOBS}"
fi
fi
if [[ ${VERBOSE_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -DCMAKE_VERBOSE_MAKEFILE=ON"
if version_ge $CMAKE_VER "3.14.0" ; then
BUILDARGS+=" --verbose"
else
if [[ ${NINJA_BUILD:-} == false ]]; then
BUILDTOOLARGS+=" verbose=1"
else
BUILDTOOLARGS+=" -v"
fi
fi
fi
if [[ ${USE_CCACHE:-} == true ]]; then
echo "using ccache with basedir [${CCACHE_BASEDIR:-}]"
CMAKE_EXTRA_ARGS+=" -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
if [ -d "build/${BUILD_DIR}" ]; then
rm -rf "build/${BUILD_DIR}"
fi
mkdir -p "build/${BUILD_DIR}"
pushd "build/${BUILD_DIR}"
# cleanup possible artifacts
rm -fv CMakeFiles/CMakeOutput.log CMakeFiles/CMakeError.log
# Clean up NIH directories which should be git repos, but aren't
for nih_path in ${NIH_CACHE_ROOT}/*/*/*/src ${NIH_CACHE_ROOT}/*/*/src
do
for dir in lz4 snappy rocksdb
do
if [ -e ${nih_path}/${dir} -a \! -e ${nih_path}/${dir}/.git ]
then
ls -la ${nih_path}/${dir}*
rm -rfv ${nih_path}/${dir}*
fi
done
done
# generate
${time} cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
# Display the cmake output, to help with debugging if something fails
for file in CMakeOutput.log CMakeError.log
do
if [ -f CMakeFiles/${file} ]
then
ls -l CMakeFiles/${file}
cat CMakeFiles/${file}
fi
done
# build
export DESTDIR=$(pwd)/_INSTALLED_
${time} eval cmake --build . ${BUILDARGS} -- ${BUILDTOOLARGS}
if [[ ${TARGET} == "docs" ]]; then
## mimic the standard test output for docs build
## to make controlling processes like jenkins happy
if [ -f docs/html/index.html ]; then
echo "1 case, 1 test total, 0 failures"
else
echo "1 case, 1 test total, 1 failures"
fi
exit
fi
popd
if [[ "${TARGET}" == "validator-keys" ]] ; then
export APP_PATH="$PWD/build/${BUILD_DIR}/validator-keys/validator-keys"
else
export APP_PATH="$PWD/build/${BUILD_DIR}/rippled"
fi
echo "using APP_PATH: ${APP_PATH}"
# See what we've actually built
ldd ${APP_PATH}
: ${APP_ARGS:=}
if [[ "${TARGET}" == "validator-keys" ]] ; then
APP_ARGS="--unittest"
else
function join_by { local IFS="$1"; shift; echo "$*"; }
# This is a list of manual tests
# in rippled that we want to run
# ORDER matters here...sorted in approximately
# descending execution time (longest running tests at top)
declare -a manual_tests=(
'ripple.ripple_data.reduce_relay_simulate'
'ripple.tx.Offer_manual'
'ripple.tx.CrossingLimits'
'ripple.tx.PlumpBook'
'ripple.app.Flow_manual'
'ripple.tx.OversizeMeta'
'ripple.consensus.DistributedValidators'
'ripple.app.NoRippleCheckLimits'
'ripple.ripple_data.compression'
'ripple.NodeStore.Timing'
'ripple.consensus.ByzantineFailureSim'
'beast.chrono.abstract_clock'
'beast.unit_test.print'
)
if [[ ${TRAVIS:-false} != "true" ]]; then
# these two tests cause travis CI to run out of memory.
# TODO: investigate possible workarounds.
manual_tests=(
'ripple.consensus.ScaleFreeSim'
'ripple.tx.FindOversizeCross'
"${manual_tests[@]}"
)
fi
if [[ ${MANUAL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
APP_ARGS+=" --unittest --quiet --unittest-log"
fi
if [[ ${coverage} == false && ${PARALLEL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-jobs ${JOBS}"
fi
if [[ ${IPV6_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-ipv6"
fi
fi
if [[ ${coverage} == true && $CC =~ ^gcc ]]; then
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
fi
if [[ ${SKIP_TESTS:-} == true ]]; then
echo "skipping tests."
exit
fi
ulimit -a
corepat=$(cat /proc/sys/kernel/core_pattern)
if [[ ${corepat} =~ ^[:space:]*\| ]] ; then
echo "WARNING: core pattern is piping - can't search for core files"
look_core=false
else
look_core=true
coredir=$(dirname ${corepat})
fi
if [[ ${look_core} == true ]]; then
before=$(ls -A1 ${coredir})
fi
set +e
echo "Running tests for ${APP_PATH}"
if [[ ${MANUAL_TESTS:-} == true && ${PARALLEL_TESTS:-} != true ]]; then
for t in "${manual_tests[@]}" ; do
${APP_PATH} --unittest=${t}
TEST_STAT=$?
if [[ $TEST_STAT -ne 0 ]] ; then
break
fi
done
else
${APP_PATH} ${APP_ARGS}
TEST_STAT=$?
fi
set -e
if [[ ${look_core} == true ]]; then
after=$(ls -A1 ${coredir})
oIFS="${IFS}"
IFS=$'\n\r'
found_core=false
for l in $(diff -w --suppress-common-lines <(echo "$before") <(echo "$after")) ; do
if [[ "$l" =~ ^[[:space:]]*\>[[:space:]]*(.+)$ ]] ; then
corefile="${BASH_REMATCH[1]}"
echo "FOUND core dump file at '${coredir}/${corefile}'"
gdb_output=$(/bin/mktemp /tmp/gdb_output_XXXXXXXXXX.txt)
found_core=true
gdb \
-ex "set height 0" \
-ex "set logging file ${gdb_output}" \
-ex "set logging on" \
-ex "print 'ripple::BuildInfo::versionString'" \
-ex "thread apply all backtrace full" \
-ex "info inferiors" \
-ex quit \
"$APP_PATH" \
"${coredir}/${corefile}" &> /dev/null
echo -e "CORE INFO: \n\n $(cat ${gdb_output}) \n\n)"
fi
done
IFS="${oIFS}"
fi
if [[ ${found_core} == true ]]; then
exit -1
else
exit $TEST_STAT
fi


@@ -1,36 +0,0 @@
#!/usr/bin/env bash
# run our build script in a docker container
# using travis-ci hosts
set -eux
function join_by { local IFS="$1"; shift; echo "$*"; }
set +x
echo "VERBOSE_BUILD=true" > /tmp/co.env
matchers=(
'TRAVIS.*' 'CI' 'CC' 'CXX'
'BUILD_TYPE' 'TARGET' 'MAX_TIME'
'CODECOV.+' 'CMAKE.*' '.+_TESTS'
'.+_OPTIONS' 'NINJA.*' 'NUM_.+'
'NIH_.+' 'BOOST.*' '.*CCACHE.*')
matchstring=$(join_by '|' "${matchers[@]}")
echo "MATCHSTRING IS:: $matchstring"
env | grep -E "^(${matchstring})=" >> /tmp/co.env
set -x
# need to eliminate TRAVIS_CMD...don't want to pass it to the container
cat /tmp/co.env | grep -v TRAVIS_CMD > /tmp/co.env.2
mv /tmp/co.env.2 /tmp/co.env
cat /tmp/co.env
mkdir -p -m 0777 ${TRAVIS_BUILD_DIR}/cores
echo "${TRAVIS_BUILD_DIR}/cores/%e.%p" | sudo tee /proc/sys/kernel/core_pattern
docker run \
-t --env-file /tmp/co.env \
-v ${TRAVIS_HOME}:${TRAVIS_HOME} \
-w ${TRAVIS_BUILD_DIR} \
--cap-add SYS_PTRACE \
--ulimit "core=-1" \
$DOCKER_IMAGE \
/bin/bash -c 'if [[ $CC =~ ([[:alpha:]]+)-([[:digit:].]+) ]] ; then sudo update-alternatives --set ${BASH_REMATCH[1]} /usr/bin/$CC; fi; bin/ci/ubuntu/build-and-test.sh'

@@ -1,44 +0,0 @@
#!/usr/bin/env bash
# some cached files create churn, so save them here for
# later restoration before packing the cache
set -eux
clean_cache="travis_clean_cache"
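# Wipe ${TRAVIS_HOME}/_cache when the commit message (or the PR head commit's
# message) contains the travis_clean_cache marker, except on windows and
# prereq-keep jobs.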
if [[ ! ( "${TRAVIS_JOB_NAME}" =~ "windows" || \
"${TRAVIS_JOB_NAME}" =~ "prereq-keep" ) ]] && \
( [[ "${TRAVIS_COMMIT_MESSAGE}" =~ "${clean_cache}" ]] || \
( [[ -v TRAVIS_PULL_REQUEST_SHA && \
"${TRAVIS_PULL_REQUEST_SHA}" != "" ]] && \
git log -1 "${TRAVIS_PULL_REQUEST_SHA}" | grep -cq "${clean_cache}" -
)
)
then
find ${TRAVIS_HOME}/_cache -maxdepth 2 -type d
rm -rf ${TRAVIS_HOME}/_cache
mkdir -p ${TRAVIS_HOME}/_cache
fi
pushd ${TRAVIS_HOME}
if [ -f cache_ignore.tar ] ; then
rm -f cache_ignore.tar
fi
if [ -d _cache/nih_c ] ; then
find _cache/nih_c -name "build.ninja" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name ".ninja_deps" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name ".ninja_log" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name "*.log" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name "*.tlog" | tar rf cache_ignore.tar --files-from -
# show .a files in the cache, for sanity checking
find _cache/nih_c -name "*.a" -ls
fi
if [ -d _cache/ccache ] ; then
find _cache/ccache -name "stats" | tar rf cache_ignore.tar --files-from -
fi
if [ -f cache_ignore.tar ] ; then
tar -tf cache_ignore.tar
fi
popd

bin/physical.sh Executable file

@@ -0,0 +1,218 @@
#!/bin/bash
set -o errexit
marker_base=985c80fbc6131f3a8cedd0da7e8af98dfceb13c7
marker_commit=${1:-${marker_base}}
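# The marker commit must be a descendant of the base marker and an ancestor of HEAD.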
if [ $(git merge-base ${marker_commit} ${marker_base}) != ${marker_base} ]; then
echo "first marker commit not an ancestor: ${marker_commit}"
exit 1
fi
if [ $(git merge-base ${marker_commit} HEAD) != $(git rev-parse --verify ${marker_commit}) ]; then
echo "given marker commit not an ancestor: ${marker_commit}"
exit 1
fi
if [ -e Builds/CMake ]; then
echo move CMake
git mv Builds/CMake cmake
git add --update .
git commit -m 'Move CMake directory' --author 'Pretty Printer <cpp@ripple.com>'
fi
if [ -e src/ripple ]; then
echo move protocol buffers
mkdir -p include/xrpl
if [ -e src/ripple/proto ]; then
git mv src/ripple/proto include/xrpl
fi
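# Pull the file list between the "BEGIN <name>" and "END <name>" markers out of
# RippledCore.cmake (as of the marker commit) and strip the src/ripple/ prefix.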
extract_list() {
git show ${marker_commit}:Builds/CMake/RippledCore.cmake | \
awk "/END ${1}/ { p = 0 } p && /src\/ripple/; /BEGIN ${1}/ { p = 1 }" | \
sed -e 's#src/ripple/##' -e 's#[^a-z]\+$##'
}
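# Move each listed file from oldroot to newroot, keeping its directory layout;
# 'details' directories are collapsed into their parent and 'impl' directories
# are renamed to the given detail name.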
move_files() {
oldroot="$1"; shift
newroot="$1"; shift
detail="$1"; shift
files=("$@")
for file in ${files[@]}; do
if [ ! -e ${oldroot}/${file} ]; then
continue
fi
dir=$(dirname ${file})
if [ $(basename ${dir}) == 'details' ]; then
dir=$(dirname ${dir})
fi
if [ $(basename ${dir}) == 'impl' ]; then
dir="$(dirname ${dir})/${detail}"
fi
mkdir -p ${newroot}/${dir}
git mv ${oldroot}/${file} ${newroot}/${dir}
done
}
echo move libxrpl headers
files=$(extract_list 'LIBXRPL HEADERS')
files+=(
basics/SlabAllocator.h
beast/asio/io_latency_probe.h
beast/container/aged_container.h
beast/container/aged_container_utility.h
beast/container/aged_map.h
beast/container/aged_multimap.h
beast/container/aged_multiset.h
beast/container/aged_set.h
beast/container/aged_unordered_map.h
beast/container/aged_unordered_multimap.h
beast/container/aged_unordered_multiset.h
beast/container/aged_unordered_set.h
beast/container/detail/aged_associative_container.h
beast/container/detail/aged_container_iterator.h
beast/container/detail/aged_ordered_container.h
beast/container/detail/aged_unordered_container.h
beast/container/detail/empty_base_optimization.h
beast/core/LockFreeStack.h
beast/insight/Collector.h
beast/insight/Counter.h
beast/insight/CounterImpl.h
beast/insight/Event.h
beast/insight/EventImpl.h
beast/insight/Gauge.h
beast/insight/GaugeImpl.h
beast/insight/Group.h
beast/insight/Groups.h
beast/insight/Hook.h
beast/insight/HookImpl.h
beast/insight/Insight.h
beast/insight/Meter.h
beast/insight/MeterImpl.h
beast/insight/NullCollector.h
beast/insight/StatsDCollector.h
beast/test/fail_counter.h
beast/test/fail_stream.h
beast/test/pipe_stream.h
beast/test/sig_wait.h
beast/test/string_iostream.h
beast/test/string_istream.h
beast/test/string_ostream.h
beast/test/test_allocator.h
beast/test/yield_to.h
beast/utility/hash_pair.h
beast/utility/maybe_const.h
beast/utility/temp_dir.h
# included by only json/impl/json_assert.h
json/json_errors.h
protocol/PayChan.h
protocol/RippleLedgerHash.h
protocol/messages.h
protocol/st.h
)
files+=(
basics/README.md
crypto/README.md
json/README.md
protocol/README.md
resource/README.md
)
move_files src/ripple include/xrpl detail ${files[@]}
echo move libxrpl sources
files=$(extract_list 'LIBXRPL SOURCES')
move_files src/ripple src/libxrpl "" ${files[@]}
echo check leftovers
dirs=$(cd include/xrpl; ls -d */)
dirs=$(cd src/ripple; ls -d ${dirs} 2>/dev/null || true)
files="$(cd src/ripple; find ${dirs} -type f)"
if [ -n "${files}" ]; then
echo "leftover files:"
echo ${files}
exit
fi
echo remove empty directories
empty_dirs="$(cd src/ripple; find ${dirs} -depth -type d)"
for dir in ${empty_dirs[@]}; do
if [ -e ${dir} ]; then
rmdir ${dir}
fi
done
echo move xrpld sources
files=$(
extract_list 'XRPLD SOURCES'
cd src/ripple
find * -regex '.*\.\(h\|ipp\|md\|pu\|uml\|png\)'
)
move_files src/ripple src/xrpld detail ${files[@]}
files="$(cd src/ripple; find . -type f)"
if [ -n "${files}" ]; then
echo "leftover files:"
echo ${files}
exit
fi
fi
rm -rf src/ripple
echo rename .hpp to .h
find include src -name '*.hpp' -exec bash -c 'f="{}"; git mv "${f}" "${f%hpp}h"' \;
echo move PerfLog.h
if [ -e include/xrpl/basics/PerfLog.h ]; then
git mv include/xrpl/basics/PerfLog.h src/xrpld/perflog
fi
# Make sure all protobuf includes have the correct prefix.
protobuf_replace='s:^#include\s*["<].*org/xrpl\([^">]\+\)[">]:#include <xrpl/proto/org/xrpl\1>:'
# Make sure first-party includes use angle brackets and .h extension.
ripple_replace='s:include\s*["<]ripple/\(.*\)\.h\(pp\)\?[">]:include <ripple/\1.h>:'
beast_replace='s:include\s*<beast/:include <xrpl/beast/:'
# Rename impl directories to detail.
impl_rename='s:\(<xrpl.*\)/impl\(/details\)\?/:\1/detail/:'
echo rewrite includes in libxrpl
find include/xrpl src/libxrpl -type f -exec sed -i \
-e "${protobuf_replace}" \
-e "${ripple_replace}" \
-e "${beast_replace}" \
-e 's:^#include <ripple/:#include <xrpl/:' \
-e "${impl_rename}" \
{} +
echo rewrite includes in xrpld
# https://www.baeldung.com/linux/join-multiple-lines
libxrpl_dirs="$(cd include/xrpl; ls -d1 */ | sed 's:/$::')"
# libxrpl_dirs='a\nb\nc\n'
readarray -t libxrpl_dirs <<< "${libxrpl_dirs}"
# libxrpl_dirs=(a b c)
libxrpl_dirs=$(printf -v txt '%s\\|' "${libxrpl_dirs[@]}"; echo "${txt%\\|}")
# libxrpl_dirs='a\|b\|c'
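# Join the libxrpl directory names into a sed alternation (dir1\|dir2\|...):
# includes under those directories become <xrpl/...>, and any other
# <ripple/...> include falls through to <xrpld/...> below.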
find src/xrpld src/test -type f -exec sed -i \
-e "${protobuf_replace}" \
-e "${ripple_replace}" \
-e "${beast_replace}" \
-e "s:^#include <ripple/basics/PerfLog.h>:#include <xrpld/perflog/PerfLog.h>:" \
-e "s:^#include <ripple/\(${libxrpl_dirs}\)/:#include <xrpl/\1/:" \
-e 's:^#include <ripple/:#include <xrpld/:' \
-e "${impl_rename}" \
{} +
git commit -m 'Rearrange sources' --author 'Pretty Printer <cpp@ripple.com>'
find include src -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-10 -i {} +
git add --update .
git commit -m 'Rewrite includes' --author 'Pretty Printer <cpp@ripple.com>'
./Builds/levelization/levelization.sh
git add --update .
git commit -m 'Recompute loops' --author 'Pretty Printer <cpp@ripple.com>'

@@ -1,10 +0,0 @@
#!/bin/sh
# Execute this script with a running Postgres server on the current host.
# It should work with the most generic installation of Postgres,
# and is necessary for rippled to store data in Postgres.
# usage: sudo -u postgres ./initdb.sh
psql -c "CREATE USER rippled"
psql -c "CREATE DATABASE rippled WITH OWNER = rippled"

Some files were not shown because too many files have changed in this diff.